Wait, what's LM? Do you mean ML? I'm going to assume you do. So is what you're taking issue with the idea that AI cannot ever become sentient? If so, I think the things I described really don't weigh in either direction regarding sentience. My point is that human behavior is clearly guided by models that are far more complex and adaptive than just an enormous storehouse of examples, plus analysis of the examples yielding info about word-pair frequencies, etc. But I wasn't necessarily claiming that these models take the form of a conscious experience of having a model. In fact, a lot of the regularities and rules captured in people's models are not accessible to introspection. Native speakers of a language who have no formal education about the language cannot explain nouns, verbs, adjectives, tenses, etc. to someone else. Clearly they are guided by the rules, but they don't know consciously what the rules are -- they just know the normal way to say something.
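Just to make concrete what I mean by "a storehouse of examples plus word-pair frequencies," here's a toy sketch in Python. It's purely illustrative -- my own invented mini-corpus, not how any real system works:

```python
from collections import Counter, defaultdict
import random

# A tiny "storehouse of examples": every word pair ever seen, counted.
corpus = "the movie is sold out . the movie is good . the lamp is not conscious ."
words = corpus.split()

pair_counts = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    pair_counts[prev][nxt] += 1  # how often nxt followed prev in the examples

def babble(start, length=8):
    """Emit text by sampling each next word from observed pair frequencies.
    No grammar rules, no concepts, no self-model -- just lookups."""
    out = [start]
    for _ in range(length):
        followers = pair_counts[out[-1]]
        if not followers:
            break
        out.append(random.choices(list(followers), weights=list(followers.values()))[0])
    return " ".join(out)

print(babble("the"))  # e.g. "the movie is not conscious . the lamp is"
```

Everything this thing "knows" is sitting right there in the counts; there's nothing in it that could even count as a model of grammar, of concepts, or of itself. That's the giant-parrot picture, and my claim is that whatever humans are doing, it's more than this.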
About sentience, I guess what I think is that some kind of internal structure is a requirement. The AI can't be a giant parrot. It has to have some internal model that captures some of the rules for what it knows, both grammatical rules and conceptual rules (like that it doesn't make sense to talk about what blue tastes like), and rules having to do with what is likely to happen if it gives various kinds of output. It has to have some model of self, so that if you ask it questions about itself, it's referring to something other than a library of possible answers when it responds to you.
Whether that's all that's required for "sentience," I dunno -- the above seems like the bare minimum to me. If an AI can reach that bare minimum, then at the very least it stops seeming absurd to me to talk about it being sentient.

I don't really think the way you do about sentience. I don't think of it as an ineffable thing that we clearly "have." Sitting here at my desk, I have the powerful sense that I am conscious, dammit, and the lamp is not, and that this makes me a whole different kind of entity. Even so, I don't really buy the idea that a lamp is just a lamp, whereas I'm a lamp plus an ineffable thing called consciousness that nobody understands at all -- even though the difference between being a conscious being and being an object is huge, and hugely important. What I think is that animals with sense organs register information about their environment, and that animals with sense organs and *also* big brains and language register information, analyze its implications, and can transmit that info to other, similar animals. When this process goes on between members of our species, we call what one individual transmits to the other a description of conscious experience. If our friend says "the movie's all sold out," we take him to be telling us about his recent conscious experience of going to the ticket booth, etc. But that's just one way of thinking about it, a frame we impose on what is basically a transmission of info about analyzed sensory data. We think of it as an ineffable process -- one conscious being describing the results of its recent subjective experience to another conscious being. But we don't really need the concept of consciousness to understand what happened.
So what I think about sentience is that it makes some sense to talk about it for any animal or machine that can make sense of the data it has, and act on it. I can see thinking of plants and doodle bugs as sentient, in a very limited way. Tiny sentience.