This neuroscientist argues that AI models are doomed by their own simplicity.
The fact that AI people handwave away the stark difference - not merely in kind but in complexity - between the human brain (down to individual neurons, and even their parts and types) and AI models will always surprise me.
Like, do they really believe they’ll get to human-level intelligence and other human-level capabilities across the board with a theoretical edifice built on overly simplified assumptions? I just don't see it.
AI needn't be like us, but come on. Down to their most basic elements, AI models are to the brain what a kid's drawing is to reality: an honest but ultimately vain attempt at replication.
Here's Anil Seth on Noema last year (he focuses on consciousness but his points apply more broadly):
Apr 10 at 8:05 AM