A lot of people say LLMs are not AIs because they fail in various ways, and they think this is the properly cynical position.

I think it’s not cynical enough: the concept of AGI is not that well specified. When people say “hallucinating means it’s not an AGI”, they seem to assume there’s a clear and demanding concept of “artificial general intelligence” they are appealing to, but I don’t know what that concept is, and it is very rarely spelt out. When it is spelt out, it sounds like a much higher bar than “artificial general intelligence”.

I don’t see anything in the nature of LLMs that precludes them from being AGIs in the most straightforward sense. They are artificial, they are domain general, and they are capable of intelligent behaviour (AI is typically defined in terms of intelligent capabilities rather than the possession of some essence of intelligence). For almost any task that can be done with written inputs and outputs, there are countless people less capable than them. I don’t think vision is a requirement for general intelligence (blind people have general intelligence), and even if I did, plenty of LLMs have vision now. To the extent AGI has any meaning at all, LLMs are AGIs.
