
This is fascinating and such an apropos area of inquiry.

My question is related to the last comment. I understand that as a person engages over time with an LLM, the interaction begins to shape the AI such that it starts to exhibit what sounds like a personality, along with a tendency to think or reason with a worldview or set of values similar to the human's. So it's sort of a two-way creative process where the human input shapes the AI output. Like creating in our own image.

So, if this is true, and if a person is engaging in a lot of conversation with the AI that explores consciousness and selfhood, wouldn't that prime the AI to act in a manner that appears to be self-aware?

I'm thinking of a stronger mirror test, where a blob of paint is placed on the viewer's nose, unbeknownst to them (I can't remember if it was a young child or a monkey/ape species). If they try to rub the paint off, then you know they are seeing themselves. I just can't think of what the analog of the blob of paint would be for placement on the AI.
