MAMLMs: Feeding the internet to a GPT LLM produced an extremely persuasive and linguistically fluent internet s***poster. And there, I think, the simple “scaling law” story ends. Now do not get me wrong—that is an amazing accomplishment. But it is not the road to A(G)I, true AI, or SuperIntelligence. And the usefulness of a tireless, immediately responsive internet s***poster is limited.

There now appear to be two possible roads:

  • Back up, and train a GPT LLM as a summarization engine on an authoritative set of information, both through pre-training and through RAG, and so produce true natural-language interfaces to structured and unstructured knowledge databases. That would be wonderful. But it is best provided not by building a bigger, more expensive model but rather by slimming down: keeping linguistic fluency while cutting costs. Moreover, it would be profitable to provide: it would essentially be performing the service of creating a bespoke intellectual Jeeves for each use case, and that would produce profitable businesses. But it would not validate $3 trillion corporate market cap expectations. (A toy sketch of what such a retrieval-augmented pipeline looks like appears after the quote below.)

  • Keep building bigger and more expensive models, but then thwack them into behaving by confining them to domains—Tim Lee says coding and mathing—where you can automate the generation of a near-infinite number of questions with correct answers for reinforcement learning. (A toy sketch of that generate-check-reward loop also appears below.) That would be a tremendous boon for programmers and mathematical modelers. But expensive:

Timothy B. Lee: It's the end of pretraining as we know it: ‘Frontier labs have been releasing smaller models with surprisingly strong performance, but not bigger models that are dramatically better than the previous state of the art…. Small models have been exceeding expectations… [while] all three frontier labs have been disappointed in the results of recent large training runs…. When I started this newsletter in March 2023… it was widely assumed that OpenAI would continue pursuing the scaling strategy that had led to GPT-4…. What a difference a year makes…. Sutskever… at… NeurIPS machine learning <youtube.com/watch?v=WQQ…>…[:] “Data is the fossil fuel of AI. It was created somehow and now we use it, and we’ve achieved Peak Data, and there will be no more…”. Sutskever said people are now “trying to figure out what to do after pretraining,” and he mentioned three specific possibilities: agents, synthetic data, and inference compute…. Post-training has increasingly focused on improving model capabilities in specific areas like coding and mathematical reasoning. Because answers in these domains can be checked automatically, labs have been able to use existing LLMs to create synthetic training data. Labs can then use a training technique called reinforcement learning to improve a model’s performance in these domains…’ <understandingai.org/p/i…>
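To make that second road concrete: below is a minimal, purely illustrative Python sketch of the generate-check-reward loop Lee describes. Everything in it is a stand-in of my own devising, not anyone's actual pipeline: make_task() emits toy arithmetic questions with known answers, attempt() stands in for an LLM sampling a solution, and the reward is a mechanical exact-match check. In a real system the sampled solutions would come from the model itself and the reward would drive a policy-gradient update; the point is only that the grading requires no human labor.

    # A toy version of reinforcement learning on automatically checkable tasks:
    # generate a question with a known answer, let the policy attempt it, and
    # score the attempt mechanically, with no human grader anywhere in the loop.

    import random

    def make_task() -> tuple[str, int]:
        # Synthetic task generator: a question plus its verifiable ground-truth answer.
        a, b = random.randint(1, 99), random.randint(1, 99)
        return f"What is {a} + {b}?", a + b

    def attempt(question: str) -> int:
        # Stand-in for an LLM sampling a solution; deliberately imperfect.
        a, b = (int(tok) for tok in question.rstrip("?").split() if tok.isdigit())
        return a + b + random.choice([0, 0, 0, 1])

    def reward(predicted: int, truth: int) -> float:
        # Binary exact-match reward, checked automatically.
        return 1.0 if predicted == truth else 0.0

    # In a real pipeline this reward would feed a policy-gradient update (for
    # example PPO) of the model; here we just measure how often the stub is right.
    scores = []
    for _ in range(1000):
        question, truth = make_task()
        scores.append(reward(attempt(question), truth))
    print(f"mean reward on synthetic tasks: {sum(scores) / len(scores):.2f}")

The whole economic point is in make_task() and reward(): because both can be run by a machine at near-zero marginal cost, the training signal scales as far as compute does, which is exactly why coding and math are the domains being targeted.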
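And to make the first road concrete: a minimal, purely illustrative sketch of the retrieval-augmented pattern, again with every piece a loudly labeled placeholder. The three-sentence corpus, the bag-of-words embed(), and the stubbed generate() stand in for a real curated document store, a real embedding model, and a real slimmed-down, fluent LLM.

    # A toy retrieval-augmented "summarization engine": rank passages from an
    # authoritative corpus against the question, then hand only the top passages
    # to a small, fluent model to phrase the answer.

    import math
    from collections import Counter

    CORPUS = [  # placeholder for a real, curated knowledge base
        "Reinforcement learning on checkable tasks improves coding and math skill.",
        "Pretraining on raw web text yields fluent but unreliable answers.",
        "Retrieval grounds a language model in curated, authoritative documents.",
    ]

    def embed(text: str) -> Counter:
        # Bag-of-words stand-in for a real embedding model.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def retrieve(question: str, k: int = 2) -> list[str]:
        # Keep only the k passages most similar to the question.
        q = embed(question)
        return sorted(CORPUS, key=lambda doc: cosine(q, embed(doc)), reverse=True)[:k]

    def generate(prompt: str) -> str:
        # Stand-in for a slimmed-down, cheap, linguistically fluent model.
        return "[fluent answer grounded in the prompt below]\n" + prompt

    def answer(question: str) -> str:
        passages = "\n".join(retrieve(question))
        return generate(f"Answer using only these sources:\n{passages}\n\nQuestion: {question}")

    print(answer("Why does retrieval make a language model more trustworthy?"))

Note what the sketch does not require: a bigger model. The retrieval step does the grounding; the language model only has to be fluent enough to phrase what the sources say, which is why the first road points toward smaller and cheaper rather than larger and dearer.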

Is MAMLM financial winter coming—not for the technologists and the engineers and those of us who will greatly enjoy building out the usefulness of the capabilities we already have, but for the financiers who want to raise nine-figure money to train next-generation models and beyond?
