
Proving (literally) that ChatGPT isn't conscious

Imagine we could prove that there is nothing it is like to be ChatGPT. Or any other Large Language Model (LLM). That they have no experiences associated with the text they produce. That they do not actually feel happiness, or curiosity, or discomfort, or anything else. Their shifting claims about consciousness are remnants from the training set, or guesses about what you’d like to hear, or the acting out of a persona.

You may already believe this, but a proof would mean that a lot of people who think otherwise, including some major corporations, have been playing make-believe. Just as a child readily grants consciousness to a doll, humans are predisposed to grant it easily, and so we have been fooled by “seemingly conscious AI.”

However, without a proof, the current state of LLM consciousness discourse is closer to “Well, that’s just like, your opinion, man.”

This is because there is no scientific consensus around exactly how consciousness works (although, at least, those in the field do mostly share a common definition of what we seek to understand). There are currently hundreds of scientific theories of consciousness trying to explain how the brain (or other systems, like AIs) generates subjective and private states of experience. I got my PhD in neuroscience helping develop one such theory, Integrated Information Theory, working under its creator, Giulio Tononi. And I’ve studied consciousness all my life. But which theory out of these hundreds is correct? Who knows! Honestly? Probably none of them.

So, how would it be possible to rule out LLM consciousness altogether?

In a new paper, now up on arXiv, I prove that no non-trivial theory of consciousness could exist that grants consciousness to LLMs.

Essentially, meta-theoretic reasoning allows us to make statements about all possible theories of consciousness, and so lets us jump to the end of the debate: the conclusion of LLM non-consciousness.


What is uniquely powerful about this proof is that it requires you to believe nothing specific about consciousness other than that a scientific theory of consciousness should be falsifiable and non-trivial. If you believe those things, you should deny LLM consciousness.
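The logical shape of the claim can be written schematically. This notation is mine, not the paper's, and it only restates the conclusion described above, not the argument that establishes it:

```latex
% Let $\mathcal{T}$ range over scientific theories of consciousness, each
% assigning a verdict $\mathcal{T}(S) \in \{\mathrm{conscious},\ \mathrm{not\ conscious}\}$
% to a candidate system $S$. Then the claim is:
\forall \mathcal{T} :\;
\bigl(\mathrm{Falsifiable}(\mathcal{T}) \,\wedge\, \mathrm{NonTrivial}(\mathcal{T})\bigr)
\;\Longrightarrow\;
\mathcal{T}(\mathrm{LLM}) = \mathrm{not\ conscious}
```

Note the quantifier: the statement ranges over all possible theories meeting the two conditions, which is what makes the reasoning meta-theoretic rather than an application of any one theory.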

Before the details, I think it is helpful to say what this proof is not.

  • It is not arguing about probabilities. LLMs are not conscious.

  • It is not applying some theory

...