People often say things along these lines (i.e. equating the evidence we have of AI consciousness with the evidence for animal consciousness), but I think the evidence for animal consciousness in the case of, e.g., dogs is greatly understated. With most chatbot LLMs, the extent of our behavioral evidence of consciousness (i.e. bracketing considerations about neural structure) comes purely from them responding to our questions with the right string of text. We can broadly put this as responding to what we say in the appropriate way.
Dogs do this to some extent (they follow commands and often demonstrate the ability to respond to different utterances in different ways), though of course much less appropriately than AI. Alongside this, however, is the evidence that dogs behave like us across a wide array of sensory modalities. They react to pain much as we do, startle at loud sounds, form preferences for tastier food, and so on. Because AI are (by and large) not embodied, we never get this sort of evidence for their consciousness.
Now, we could at this point appeal to neural structure, but then it just seems that neural structure is strong evidence for consciousness. If there were a digital copy of your brain, you should at minimum think that the fact that it is a copy of your brain is good evidence that it is conscious, even if you think it ultimately is not (e.g. for substrate-dependence reasons).
I worry that treating animals as capable of suffering purely because they appear to suffer when hurt is a slippery slope to ascribing similar ideas to AI, which is even more convincingly "like us" than any animal.
Apr 14 at 12:26 AM