
Wrong about AI? 

“Having failed to foresee the generative AI revolution a decade ago, how should I fix myself?” asked Scott Aaronson on his blog, scottaaronson.blog. (Scott’s prediction, made in 2009, was that human-level AI could possibly take thousands of years.)

After replacing the singular “I” with the plural “we” (the vast majority of ML and AI researchers likewise failed to predict modern AI), it becomes a rather thought-provoking question. My initial thought was that our failure to predict the AI revolution (interpreting “thousands of years” simply as a long period of time) was analogous to a 19th century scientist’s failure to foresee the transmutation of base metals into gold using nuclear fusion. The best available evidence simply did not indicate that as a possibility. As such, there would be nothing particularly notable or surprising about not seeing what could not be seen.

And yet, on further reflection, the analogy is deeply flawed. Nuclear reactions were truly beyond the scope of 19th century chemistry. A 19th century researcher had no path to that knowledge short of anticipating the major discoveries of 20th century physics.

In contrast, modern Deep Learning methods do not drastically differ from those available in 2009. Indeed, the path to human-level AI turned out to consist primarily of feeding more data and more compute to progressively larger models.

As such, there was no fundamental barrier to that knowledge in 2009, even if a full-scale LLM could not yet be built due to hardware limitations. Instead, the barrier was not missing knowledge but incorrect knowledge. At the time we thought we had a good understanding of statistical learning theory and believed that using those models to reach AI was “like climbing a tree to reach the Moon”. As it turned out, even classical statistical notions, such as over-fitting, had not been properly understood. Strikingly, we needed the empirical successes of Deep Learning to see the yawning gaps in our understanding.

I think the real lesson is not that we should be “radically uncertain” (to use Scott’s term) about the things we do not know and cannot predict, but perhaps that we should be radically humble about the ones we think we do.
