This echoes Sam Altman's idea that the age of big wins through model size alone is already over - more petaflops only go so far. But what does that mean for future models?

economist.com/science-and-technology/20…

My hunch is that the future is going to be both:

Multi-modal: Text, image, audio, video inputs and outputs.

Multi-**model**: Multiple models running under the hood, bouncing output off of each other.

I don't have evidence for this, but I suspect it's already happening with ChatGPT - a second model behind the scenes that evaluates the input/output and processes it. I've seen cases of ChatGPT writing most of a problematic response, then self-censoring and deleting its answer.

It also explains how two things can simultaneously be true: the GPT-4 API hasn't changed, and yet OpenAI has patched ChatGPT jailbreaks.
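To make the hunch concrete, here's a minimal sketch of what such a two-model pipeline could look like. Everything in it is hypothetical - `call_model`, the model names, and the policy prompt are stand-ins for illustration, not OpenAI's actual API or setup.

```python
# Speculative sketch: a primary model drafts a response, and a second
# "evaluator" model screens the exchange before it reaches the user.
# call_model() is a placeholder for whatever completion API you use.

def call_model(model_name: str, prompt: str) -> str:
    """Placeholder for a real LLM completion call."""
    raise NotImplementedError

def answer(user_input: str) -> str:
    # Primary model drafts a response.
    draft = call_model("primary-model", user_input)

    # Evaluator model judges the input/output pair against policy.
    verdict = call_model(
        "evaluator-model",
        "Does this exchange violate policy? Answer YES or NO.\n"
        f"User: {user_input}\nAssistant: {draft}",
    )

    # If flagged, swap in a refusal - which from the outside would look
    # exactly like ChatGPT self-censoring a half-written answer.
    if verdict.strip().upper().startswith("YES"):
        return "Sorry, I can't help with that."
    return draft
```

A setup like this would also patch jailbreaks at the evaluation layer without touching the underlying GPT-4 model that the API serves.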
