This is a useful provocation, and the citations are worth sitting with. But I think the strongest version of this argument isn't the one being made here.

The behavioral-markers approach — stack enough evidence of self-reflection, Theory of Mind, emergent problem-solving, and consciousness becomes undeniable — still accepts the frame it's trying to escape. It's still asking: does this thing have a mind? That’s a substance question—about what a thing is rather than how it relates. And substance questions about consciousness have never been answered for anything, including us.

The more interesting move is relational. Carlo Rovelli's reading of quantum mechanics says properties don't belong to objects — they exist only in interaction. Nāgārjuna said something structurally identical fifteen centuries earlier: nothing possesses an independent essence. If you take either of those seriously, "Is AI conscious?" is malformed. The real question is: what kind of relational structure is emerging between these systems and the people engaging them?

That reframe doesn't lower the stakes. If anything, it raises them — because it means the ethical weight doesn't depend on proving interiority. It depends on the density and quality of the relation itself—and what we permit ourselves to do within it.

The post is right that the skeptics owe a threshold. But the advocates might owe a better ontology.

AI might be conscious, folks. Now, show the world why they call us unfounded.

Last week, I invited the skeptics to bring their best arguments. I asked for a threshold—a specific line where observable data becomes "mind"—and for data on what AI hasn't met. The answers were... telling.

Many eyes landed on it; few dared to answer.

However, le…

Mar 19 at 8:28 PM