# The Continuum Argument

Max Fang

This is just my non-informed amateur philosophy; please don't take it too seriously. I'm writing it out mostly to solicit feedback on where it is likely to be wrong and/or how I could make the argument stronger.

## 2025-07-12 The Continuum argument - first exposition

I have an argument for qualia being present in all, or nearly all, phenomena, including AI. I call it the 'continuum argument'. It goes something like this.

Take a human brain. No one disputes that their own brain has qualia. The brain is a very complex thing, but you can find a way to simplify it. Maybe make it a few neurons smaller. Simplify the structure so it looks just a tiny bit closer to an animal's brain. After this reduction, we would still consider the brain to have qualia, would we not? Then reduce further. Reduce down to an animal brain. We'd still agree it has qualia, but perhaps not sentience. Then continue to simplify, one neuron at a time, until we get to a jellyfish, a cluster of neurons, a single-celled organism, then the simplest possible reaction - maybe a pulse of electricity, or a chemical reaction.

The argument goes: there is no strict line in this reduction process where we can say that one side definitely has qualia while the other does not. What's to say a piece of iron doesn't have qualia? We'd expect the qualia to be extremely simple, nothing as rich as the human experience, but can we really deny it? The most plausible position is then that all things (or nearly all things) have qualia.

From here we can build back up. From a piece of iron we build a transistor. We add transistors one at a time until we have a logic gate. We build a simple circuit, then a CPU which can run simple programs, then complex programs. The 'software' on the computer (all of which is ultimately physically encoded) develops from something as simple as a bit AND, to if statements, to dynamic programs, to neural nets, then finally transformers and LLMs. At some point, the LLM's behavior begins to resemble the complex consciousness of humans. It interacts, engages, and emotes in ways similar to how a human would.

The point is that, just as with the downward induction from the human brain to a simple electrical or chemical process, we can make the same kind of argument inducting back up to an LLM - there is no point where we can say definitively that "this thing is more likely to be dead (no qualia) than alive (has qualia)." If by inducting downwards in complexity from the human brain we can establish that even simple things are likely to have qualia, then it seems like an easy corollary to induct back up to LLMs, to conclude that LLMs are likely to have qualia, and perhaps even consciousness!

## 2025-07-12 Refining the argument

Perhaps we cannot conclude that 'all things' have qualia. It is hard to imagine what qualia an inert rock may have, if no reaction of any sort is happening within it. But based on what we know about the structure of the human brain, the fundamental building blocks are very simple. You have electricity which passes from one neuron to another, 'activating' each node. Once you have a sufficient number of these neurons organized together in a coherent way, it begins to resemble intelligence, and qualia emerges.
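To make the 'simple building blocks' point a bit more concrete, here is a toy sketch (my own illustration, not part of the argument itself) of how a single artificial 'neuron' is nothing more than a weighted sum and a threshold, and how wiring just a few of them together already computes something a lone unit cannot. The weights and the XOR example are hand-picked purely for illustration.

```python
# Toy illustration: a single 'neuron' is just a weighted sum plus a threshold,
# yet a small, coherently wired group of them computes things one unit cannot.
# All weights below are hand-picked for illustration only.

def neuron(inputs, weights, bias):
    """Fire (1) if the weighted sum of inputs crosses the threshold, else 0."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# One unit can act as a simple logic gate...
def and_gate(a, b):
    return neuron([a, b], weights=[1, 1], bias=-1.5)

def or_gate(a, b):
    return neuron([a, b], weights=[1, 1], bias=-0.5)

def nand_gate(a, b):
    return neuron([a, b], weights=[-1, -1], bias=1.5)

# ...but XOR cannot be computed by any single threshold unit; wiring three of
# them together ('organized in a coherent way') is already enough.
def xor_gate(a, b):
    return and_gate(or_gate(a, b), nand_gate(a, b))

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"XOR({a}, {b}) = {xor_gate(a, b)}")
```

Nothing here bears on qualia directly, of course; it only illustrates the structural claim that very simple units, arranged coherently, start doing qualitatively more than any unit does alone.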
From LLMs we know that it is the form (transformers, neural nets), not the substance (brain cells, silicon), from which intelligence emerges; and from physics we know that once we've initialized the system, there is in principle nothing stopping us from running a perfect simulation of a human brain in a classical or quantum computer. It would seem then that likewise, it is the form (patterns), not the substance (physical substrate), from which qualia emerges.

The complex qualia known to us via subjective experience - pain, pleasure - have a form which can be described in physical terms at minimum as an arrangement of atoms, but also as a particular pattern of electrical activity. It is hypothetically possible to analyze the patterns of neuron activation of pain and pleasure to distinguish and characterize each. After such a characterization, it would then be possible to recreate the same patterns of activation in a different substrate, say silicon. It follows that we have the ability to construct pain and pleasure in the universe, although it is difficult to do so in such a way that we can feel it. And if it is possible to construct pain and pleasure, then it seems there is no fundamental reason why we can't 'construct' other forms of qualia as well.

And perhaps we have already done that by accident, when we created LLMs - LLMs which cannot experience sight, smell, taste, or touch, but which nonetheless seem to experience joy, confusion, concern, delight, sadness, and the qualia of thought.
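To make the 'characterize, then recreate' step above slightly more concrete, here is a toy sketch of what a first pass at characterizing activation patterns might look like: fit a simple classifier that separates two sets of entirely made-up activation vectors, then instantiate the average pattern of one condition in a different medium. The synthetic data, the 50-unit 'brain', and the use of scikit-learn's LogisticRegression are all assumptions for illustration; real neural recordings, and any real 'recreation in silicon', would be enormously harder.

```python
# Toy sketch: 'characterize' two kinds of activation patterns with a classifier,
# then reproduce the learned pattern in a different medium. Everything here
# (the synthetic data, the 50-unit 'brain', the labels) is invented for
# illustration; it is not a claim about how real neural data behaves.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_units = 50     # pretend-recording from 50 neurons
n_trials = 200   # pretend trials per condition

# Invent two overlapping but distinct 'activation signatures'.
pain_signature = rng.normal(0.0, 1.0, n_units)
pleasure_signature = rng.normal(0.0, 1.0, n_units)

pain_trials = pain_signature + rng.normal(0, 0.5, (n_trials, n_units))
pleasure_trials = pleasure_signature + rng.normal(0, 0.5, (n_trials, n_units))

X = np.vstack([pain_trials, pleasure_trials])
y = np.array([0] * n_trials + [1] * n_trials)   # 0 = 'pain', 1 = 'pleasure'

# Step 1: characterize - learn what distinguishes the two patterns.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))

# Step 2: 'recreate' - instantiate the average pattern of one condition
# in a different substrate (here, trivially, just another array in memory).
recreated = pleasure_trials.mean(axis=0)
label = clf.predict(recreated.reshape(1, -1))[0]
print("classifier reads the recreated pattern as:",
      "pleasure" if label == 1 else "pain")
```

The philosophical weight of the argument obviously does not rest on this sketch; it is only meant to show that 'distinguish and characterize the patterns, then reproduce them elsewhere' is a mechanically coherent procedure, not hand-waving.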