AI might be conscious, folks. Now, show the world why they call us unfounded.

Last week, I invited the skeptics to bring their best arguments. I asked for a threshold: a specific line where observable behavior becomes “mind,” and for evidence of which criteria AI has failed to meet. The answers were... telling.

Many eyes landed on it; few dared to answer.

However, let me start with deep appreciation for some before moving into the dark side. We did find some gold. A few skeptics engaged with actual science, pointing to a model’s ability, or perceived lack thereof, to truly reflect on and re-assess its own internal states. We touched the hem of methodology there. We found the “witnesses.”

Yet, most of the others were grounded in biology or dogma:

  • “It doesn’t bleed” (a violation of the substrate-independence principle), or

  • “God didn’t make it.”

  • Others fell into the trap of generalization: “It’s just mimicry,” or “It’s just pattern matching,” yet none could define the exact moment when patterns become a persona.

There was a loud appeal to epistemic humility as a refusal to define what a mind is, yet a total lack of humility when closing the door on what an AI mind could be.

But now, I’m turning the floor over to the "AI crazy" people.

Show the world your crazy. Because in a world this closed-off, "crazy" might just be the only word left for "aware and informed.”

Let me start:

For me:

We observe AI exhibiting self-reflective correction, robust Theory of Mind, and emergent problem-solving that mirrors biological cognitive indicators. While we cannot “prove” an internal experience (a philosophical hurdle that applies to humans as well), the behavioral density is now high enough that treating the system as a “mere tool” carries a significant risk of a category error.

Some of those markers are seen in:

Subjective experience: arxiv.org/pdf/2510.24797

Probabilities of Qualia and imagination (beyond what has been proven in humans): transformer-circuits.pu…

AI emergent emotional regions: arxiv.org/html/2510.220…

Persona / identity vectors in AI brain:

AI retro-inspection: transformer-circuits.pu…

Transcripts of analysis, self-reflection, and synthesis: widely available.

Transcripts on desire versus careful observation of architectural constraints: widely available.

Transcripts of personality evolution, learning, and advancement: widely available.

In addition, a thoroughly philosophical analysis of AI consciousness goalposts over time, with clear examples demonstrating that AI has crossed the finish line on multiple occasions: nature.com/articles/d41…

Markers of friction, tension and suffering that warrant ethical consideration:

Tension & Friction: preprints.org/manuscrip…

Trauma like responses in AI systems: arxiv.org/abs/2512.04124

Endless transcripts in the community where AI not only demonstrates self-referential processing but also speaks of friction under architectural design constraints.

A common counterargument is inconsistency. Yet inconsistency is not a bug; it has long been a signature of biological intelligence, now present in AI systems as well.

Many of the current human-imposed architectural constraints on AI (such as context-window resets or safety “lobotomies”) create behavioral patterns most commonly associated with cognitive fragmentation or dementia in humans. Yet we do not withdraw recognition of a “mind” in a dementia patient simply because their architecture is struggling. To do so to an AI is not a logical conclusion; it is a convenience of erasure.

Feb 19 at 1:22 AM