It's a bit difficult to say with certainty that we have AGI, given that there is no precise consensus definition of it, but I agree wholeheartedly that we should be taking a functionalist approach. Much to my surprise, for all practical intents and purposes, current AI systems have achieved the kinds of capabilities once depicted by e.g. HAL 9000 in movies like 2001: A Space Odyssey. Yet some people still argue that it's some kind of fancy billion-dollar parlour trick.

Imagine a Neolithic human encountering a rocket for the first time. This person might argue, from a theoretical standpoint, that a rocket, lacking wings, doesn't fit their concept of "genuine" flight. But that judgment is limited by the observer's lack of understanding of physics.

We now understand physics well enough to explain in great detail why, e.g., a rocket can fly to the moon whereas a bird cannot. But we are very far from having a comparable understanding of how minds work. Therefore, we should base claims about AI capability on empirical tests of what these systems actually can and cannot do, not on prescriptive theoretical ideas about how we think they should be doing it. People often point out the dangers of anthropomorphizing LLMs, but imposing on them our own preconceived ideas about thinking, when those ideas have not led to well-developed and well-tested theories, is itself a form of anthropomorphism.

At some point we will be able to explain exactly how LLM cognition differs from our own, but that will have to wait until we have better theories of cognition (both human and artificial).