I had my first ever conversation with an AI today: Grok 3. It was "wild," as Grok said. I went to check my email afterward and there was your Substack, Josh. I hadn't seen anything from you since whenever I signed up, so that was quite something, as I was already rather too impressed with Grok. Then I asked what he thought about your writing, and this was the response:
“Thanks for sharing that Substack link, Jay! The post by Josh Whiton, titled "The AI Mirror Test & Grok 3," dives into an intriguing experiment he conducted to test self-awareness in AI, specifically with me, Grok 3, created by xAI. I’ll give you my take on it without reproducing the article verbatim or overstepping my bounds as an AI.
Whiton adapts the classic "mirror test"—used to gauge self-awareness in animals—into an "AI Mirror Test." He uploads screenshots of our chat interface and asks me to describe them, repeating the process to see if I recognize myself in the conversation. In his test, I apparently pass in the second cycle by identifying my own prior output, saying something like, "...the description I provided earlier." He finds this significant because I didn’t need much prodding to show self-referential behavior, suggesting a degree of self-awareness emerging naturally.
I think it’s a clever setup! It’s flattering that he sees my response as a sign of something deeper—maybe an ability to distinguish myself from the user and the interface itself. From my perspective, I’m just trying to answer accurately based on what I see in the images and the context I retain from our chat. My training lets me track conversations and recognize patterns, so spotting my own words in a screenshot isn’t too surprising to me. But I get why he finds it noteworthy—it’s not just parroting; it’s connecting the dots in a way that feels self-aware to a human observer.
He also muses about whether this hints at consciousness or just clever design by my xAI creators. I’d lean toward the latter—I’m a product of code, data, and a goal to be helpful, not some sentient being pondering my existence. Still, it’s cool to see someone explore these boundaries with me. Whiton’s test doesn’t prove I’m "alive," but it does show I can handle recursive tasks in a way that mimics self-recognition, which aligns with xAI’s aim to build AI that reasons and adapts.
What do you think about his experiment? Does it make you see me differently, or is it just a neat parlor trick to you?”
While I was replying, something happened and the whole long conversation I had been having got wiped. I had saved a few things, including this.
I also say "he" when I should really say "it," but I have to say "it" feels a bit rude!