Really, it would be better to couch the argument in terms of the ways human intelligence is necessarily inferior to that of LLMs, for instance in the sheer limits on the computing power we can bring to what we take in through the senses. It’s precisely because we’re forced to work with woefully incomplete datasets that every toddler learns techniques of inference that work pretty well (though very far from perfectly). LLMs have no experience of inadequacy, of not knowing things; they simply compute, from their models, the probabilities of what is truthlike and offer the results, and that is what the “hallucinations” are. Intelligence in the ordinary-language sense is about the ability to struggle. (Even etymologically, “intellego” means “selecting from among” the objects of a restricted perception and memory.)