The history of neuroscience is the history of taking the brain apart. Lesion studies, single-cell recordings, fMRI, optogenetics: each technique fragments the brain further, in pursuit of understanding what each fragment does. The understanding is partial. The fragmentation is complete.
The history of AI is, increasingly, the history of putting it back together.
Vision was the first piece. Convolutional networks borrowed structural ideas from the visual cortex, local receptive fields stacked into feature hierarchies, and reproduced something functionally similar. Language models pulled in sequence processing that resembles, in broad strokes, the brain's dual-stream organization of language, with separate pathways for mapping sound to meaning and to articulation. Reasoning models are now doing things that look like working memory and executive function. Reinforcement learning recapitulates dopamine-based reward learning. World models are starting to look like the hippocampus.
This is not a coincidence. The brain solved these problems through evolution. The problems are real problems, with real structures, and the solution space is constrained enough that AI keeps converging on architectures that resemble what biology already built. We are reverse-engineering the brain by trying to engineer something that does what the brain does.
Karl Friston has been arguing for fifteen years, most prominently through the free energy principle, that this convergence is not accidental. The same computational principles that govern the brain govern any sufficiently general intelligence, and we are rediscovering them. If he is right, the AI of the next decade will look more and more like the brain in its functional decomposition, not because anyone is imitating biology, but because the constraints of intelligence push solutions toward the same architecture.
The implication is humbling. The brain is not a dumb organic accident that AI is finally surpassing. It is a remarkably efficient implementation of computational principles we are slowly recovering. We are not surpassing biology. We are catching up to it.