One of Jack's best essays. I think he's right in the specific sense: AI will make AI researchers more productive, probably 10x more than it's doing today. But he's fairly likely to be wrong in the more important sense, where "Attention Is All You Need"-level discoveries happen automatically, say every week.
The more important point, though, is that the former is still incredibly valuable: it will cause quite a bit of social and economic tumult and change how we think about work and life.
One way to think about it: the former could conceivably have gotten us from Opus 4.6 to 4.7, I'm sure, or from GPT 5.4 to 5.5. Could it have done GPT-2 to GPT-3? Or GPT-4 to o1?
That said, it's really hard to figure out where the line is between "the model can see just over the horizon and discover the new thing" and "a conceptual leap the models can't make", and whether that line is downstream of not enough training data or something else.
Like, could it come up with Muon? I think yes. Given an evolutionary process runner like github.com/strangeloopc… and smarter ways to slice the results and suggest new approaches, it's definitely possible. Even at this level, TEVO above does pretty well with that.
Part of the slipperiness is that we don't have a good understanding of what constitutes a breakthrough, often even after we've made one (see the mRNA work as an example).
The models, however, do have the distilled intuitions of their creators, which will stand them in good stead.