The AI Consciousness Debate Is Asking the Wrong Question, Part II
Part two: embodiment, affect, self-maintenance, agency, drives, representation, and thought as candidate requirements for consciousness.
The architectural objections only get us so far. A stronger objection says that consciousness requires the kind of evolved, embodied, vulnerable, affective life that animals have and current AI does not.
That version deserves a more careful answer, because it turns on what embodiment, affect, self-maintenance, agency, thought, and meaning are each supposed to contribute. Those features often arrive as a package in conscious animals, but they are not the same feature, and they may not all be doing the same explanatory work.
So the task here is to separate them. Evolution may explain how consciousness arose in us without proving that evolution is the only possible route. Embodiment may describe the animal cases we know best without settling the boundary of consciousness. Drives and agency may accompany consciousness without being identical to it. And representation may matter even when it is mediated rather than directly lived.
Evolution Explains Origin, Not Exclusivity
The evolutionary version of the argument is straightforward: consciousness arose in organisms because it did something. Pain, fear, memory, and self-modeling mattered because organisms had bodies, needs, and something to lose. Current LLMs lack much of that. They have no hunger, injury, exhaustion, or embodied vulnerability. So far, that is fair.
But the argument often slides from a true claim about origin into a stronger claim about exclusivity. It starts with "human consciousness arose through embodied biological life" and ends with "only embodied biological life could ever be conscious." That second claim needs more argument. Evolution explains at least one route by which consciousness may have arisen. It does not prove that evolved biology is the only possible route. Planes fly without evolving wings. Cameras detect visual structure without evolving eyes. Clocks tell time without evolving circadian rhythms.
The question is therefore not whether consciousness began in biological organisms. It did, as far as we know. The question is what biology contributed that was necessary: embodiment, affective stakes, persistent self-modeling, on-policy learning, unified state, homeostatic regulation, or something else. If consciousness depends strictly on biological substrate, then current AI is out by definition. But that would make the conclusion axiomatic. If consciousness depends instead on a certain kind of system, then we have to say what that system requires.
There is a useful pressure test here. Biological computing is no longer purely imaginary. Researchers have built experimental wetware systems using living neurons and brain organoids, including platforms where neural cultures are stimulated and read through electrodes. These systems are nowhere near hosting an LLM. But suppose that an LLM-like process could operate from such a biological substrate. Would that answer the biological objection?
If the answer is yes, then the objection was not really about the structure of the process. It was about the material it ran on. If the answer is no, then mere biology was never sufficient either. In both cases, the substrate claim needs more precision. It is not enough to say "biology." We need to know what biology is supposed to contribute.
That more precise claim may still be right. Embodiment, affective stakes, and self-maintaining regulation may turn out to be necessary for consciousness. But if so, the work is being done by those features, not by evolutionary origin as such. So the right next question is what those features are supposed to contribute, and whether they are conditions of consciousness or merely features of the animal case we know best.
Biological Stakes Do Not Settle the Question
This is where the argument gets more complicated, because the objection is no longer just technical. It depends on several philosophical assumptions about embodiment, agency, selfhood, affect, and what consciousness is supposed to require. Those assumptions often travel together, but they are not the same assumption. To evaluate the objection fairly, we have to separate them.
That means briefly touching on some metaphysical questions: whether agency requires genuine self-direction, whether self-modeling is necessary for experience, whether pain and fear are ingredients of consciousness or features of animal consciousness, and whether biological embodiment is doing explanatory work or just marking the kind of case we already understand best. The point is not to refute anyone's worldview. It is only to make sure the argument against LLM consciousness is not borrowing conclusions from assumptions it has not defended.
Embodiment Has More Than One Meaning
Let's start with embodiment. The word can mean several different things.
In the weakest sense, embodiment means physical realization. A conscious process has to occur somewhere, in some physical system. But in that sense, an LLM interaction is already embodied: it is a localized physical process running on hardware.
In a stronger sense, embodiment means sensorimotor coupling. A system has a body that perceives and acts in an environment. It sees, moves, touches, navigates, manipulates, and receives feedback from the world. Current LLMs mostly lack that, unless they are connected to tools, robots, or other external systems.
In the strongest biological sense, embodiment means organismic life: metabolism, homeostasis, fatigue, injury, hunger, arousal, repair, vulnerability, and self-maintenance. Current LLMs clearly lack that.
But even this strongest sense cannot be sufficient by itself. Many living organisms have metabolism, homeostasis, vulnerability, repair, and self-maintenance, yet we do not automatically treat them as conscious. A plant turns toward light, regulates water, responds to damage, and repairs tissue. A fungus grows toward nutrients and away from hostile conditions. A jellyfish moves, feeds, reacts to injury, and maintains itself without anything like a centralized brain. These are biological, embodied, self-maintaining systems, but their existence does not by itself settle whether there is experience.
So organismic embodiment may be relevant evidence, but it is not enough on its own, nor is it self-evidently necessary for consciousness. If vulnerability, metabolism, and self-maintenance matter, we still need to know why, what role they play, and what else must be added before biological regulation becomes experience.
Agency and Drives Are Not Consciousness by Themselves
This matters because some of these assumptions are not neutral. Self-direction, hunger, fear, suffering, and self-preservation are not unexplained primitives. In humans, they are either biological processes themselves or deeply entangled with biology, development, memory, reinforcement, environment, and present state. Unless we invoke a soul-like metaphysics, human agency is not a free-floating source of consciousness. Even libertarian accounts of free will do not automatically tie the case to biology; they still owe an account of why the relevant freedom-conferring mechanism could not occur in a non-biological system.
And on a deterministic view, self-direction becomes even less secure as a requirement for consciousness. A person might be conscious while having no ultimate control over what they think or do. They may experience deliberation, choice, regret, or intention, while every next state is fixed by prior causes. If that is coherent for humans, then genuine self-direction cannot simply be assumed as a condition for consciousness. It may belong to agency, responsibility, or identity, not to experience as such.
If we define consciousness around those specifically biological forms, current LLMs are excluded by definition. But that would be circular.
So the argument cannot simply list the features of animal agency and treat them as requirements for consciousness. Maybe consciousness requires self-modeling, affect, persistence, and embodied stakes. But maybe some of those belong to agency, identity, learning, survival, moral salience, or animal affect rather than to experience as such. The burden is to show which features are necessary for consciousness, not merely which features accompany consciousness in animals.
Even then, the usual candidate properties are not always simple absences. Feedback exists in LLM interaction, at least in weak form: the model's own generated text becomes part of the context that shapes later output. Memory can exist externally, through saved notes, retrieval, user profiles, documents, or explicit instructions carried forward. Self-modeling can exist weakly, as a system represents its role, limitations, prior mistakes, and expected future behavior in context. Behavioral consequence also exists in a limited sense: if a model gives a bad answer and is corrected, it can update the conversation, write a reminder, store a "memory," or change its next response.
Those are not substitutes for hunger, panic, fatigue, pain, or embodied aversion. They are thinner and more scaffolded. They do not by themselves create an animal point of view. But the same caution applies in both directions. The biological features do not automatically create consciousness either; they are part of the animal case we understand best. Present LLMs lack many of those features, and that gives us reason to be cautious. But caution is not the same as disproof. The argument still has to show which features are necessary for consciousness, why they are necessary, and why no non-biological system could instantiate them in another way.
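To make the thinness concrete, here is a minimal sketch of the kind of scaffolding just described: a conversation loop in which the model's output is appended to its own context and corrections are saved as explicit notes. The names here (call_model, MemoryStore, converse) are hypothetical illustrations, not any particular system's API.

```python
# A minimal sketch of the thin scaffolding described above.
# call_model and MemoryStore are illustrative stand-ins, not a real API.

class MemoryStore:
    """External, explicit memory: notes persist only because something saves them."""

    def __init__(self):
        self.notes = []

    def remember(self, note: str) -> None:
        self.notes.append(note)

    def recall(self) -> str:
        return "\n".join(self.notes)


def call_model(prompt: str) -> str:
    # Stand-in for an LLM call; a real system would query a model here.
    return f"(reply conditioned on {len(prompt)} characters of context)"


def converse(user_turns, memory):
    context = []
    for user_text in user_turns:
        # "Memory" is retrieved text prepended to the prompt, nothing more.
        prompt = memory.recall() + "\n" + "\n".join(context) + "\nUser: " + user_text
        reply = call_model(prompt)

        # Feedback in the weak sense: the model's own output becomes part
        # of the context that shapes its later output.
        context.append(f"User: {user_text}")
        context.append(f"Assistant: {reply}")

        # "Behavioral consequence" in the limited sense: a correction is
        # stored as an explicit note, not learned as a weight update.
        if "wrong" in user_text.lower():
            memory.remember(f"A prior answer was corrected: {user_text}")
    return context
```

Run on a short exchange, the loop shows both points at once: every apparent "memory" is a saved string, and every "update" is a change to the prompt, not to the model.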
None of this denies that biology may be indispensable for human consciousness. Human consciousness may well be an emergent property of a complex nervous system. The point is narrower: the features that accompany consciousness in humans and other animals are not automatically the conditions of consciousness as such. Biology gives us the case we know best, not by itself the boundary of the possible.
What Remains Is More Than Enough
By this point, several proposed disqualifiers have been separated from consciousness itself. Multiplicity and localization do not rule out an instantiated episode. Persistence may matter for personal identity, but it is not obviously required for a conscious episode. Mechanism is not a refutation. Strong organismic embodiment may describe the animal cases we know best, but it is not sufficient on its own, and it is not self-evidently necessary for consciousness. Agency, self-direction, and drives may matter for action, responsibility, survival, or identity, but at most they have been shown to accompany or shape consciousness in familiar animal cases, not to provide its necessary ingredients.
So what remains as a serious candidate for what is central to consciousness? One answer is thinking itself: the capacity to understand, represent, doubt, imagine, remember, compare, and ask what things mean.
Most people reading this have likely come across Descartes' famous line: "I think, therefore I am." It is usually treated as an epistemological argument: what can I know with certainty? But its deeper importance here is in its ontological interpretation. It asks what remains when nearly every ordinary support for existence is stripped away.
Descartes begins with radical doubt. The senses may deceive. The body may be illusory. The external world may be false. Even mathematics may be manipulated by a deceiving intelligence. In other words, the familiar anchors of experience can be suspended: sensation, embodiment, empirical reality, and ordinary confidence in the world.
But something survives the suspension. Not bodily certainty. Not sensory richness. Not animal feeling. What survives is the actuality of doubting and thinking itself.
That is why the cogito should not be read as a cheap syllogism that smuggles in an "I" and then discovers it in the conclusion. The point is not first to prove a fully formed person, biography, soul, or free will. The point is that the act of thought is already an instantiation of existence. Doubting is occurring. Thinking is occurring. Appearing is occurring. And that occurrence cannot be coherently denied from within the very act that denies it.
So the cogito is not merely "I can know that I exist." It is closer to: thought is the minimal site where subjecthood discloses itself. The "I" here is not Locke's continuing self, not a durable personal identity, and not a metaphysical substance. It is the minimal subject-position implied by active thinking.
This matters for the AI question because it changes what we are looking for. The foundation is not sensation, biological vulnerability, self-preservation, or animal feeling. Those may enrich consciousness, shape it, and give it the familiar form it has in humans. But they are not what survives the deepest abstraction. What survives is thinking itself: understanding, doubting, representing, comparing, interpreting, and asking what things mean.
That should not be taken to mean that all computation is conscious. It means the question cannot stop at whether a system is biological, embodied, or affectively driven. The harder question is whether any artificial process could cross the line from passive mechanism into genuine thought: an internally unified subject-process in which cognition is not merely externally describable, but present for itself.
Direct Experience Is Not the Only Understanding
One straightforward objection to this idea is that LLMs still do not experience what they represent. They can discuss death, pain, danger, or love, but they do not undergo those things. That is true, and it matters.
But direct experience is not the only form of understanding. Much of human understanding is mediated through testimony, history, analogy, fiction, imagination, and abstraction. We can reason seriously about war without fighting one, death without dying, and injustice without occupying every victim's position. Direct experience may reveal things outsiders miss, but it is not a prerequisite for every form of judgment.
Death makes the point especially clear. No living person understands death by having completed the experience. We understand it through signs, losses, analogy, fear, imagination, philosophy, religion, medicine, and the deaths of others. Some people relate to it with terror, some with denial, some with curiosity. Others try to train their relation to it through Stoic practices such as rehearsing mortality (memento mori), imagining future losses (premeditatio malorum), and distinguishing what is within one's control from what is not (the dichotomy of control). That variation does not make human understanding of death unreal. It shows that even human understanding of the most embodied limit is partly constructed from representation, inference, and anticipation.
The deeper point is that affective force is not the same as significance. Feeling that something is bad is not what makes it bad. Death is not final because an organism fears it. Injury is not harmful because the organism dislikes it. Injustice is not wrong because it produces the right internal chemistry in the observer. Affective response is one way a biological system registers significance; it is not obviously what creates that significance, and the significance is not obviously contingent on the response.
That is why "from the outside" cannot mean "empty." Much of serious thought works from the outside: by abstraction, analogy, testimony, inference, and conceptual relation. Objectivity, when it is possible, often requires distance from immediate feeling rather than total immersion in it.
So the point is not that LLMs are conscious because they represent things. They may not be. The point is that representation without animal feeling is not automatically empty. If thought is central to consciousness, then the question is whether the system instantiates anything like thought, not whether it has direct animal experience of every concept it can represent.
The Cleaner Question
The case against current LLM consciousness is serious. Present systems lack ordinary embodiment, live weight updates during inference, animal affect, biological vulnerability, and a unified persistent self across all contexts. Those are real differences, and they give us good reasons for skepticism.
But the stronger conclusion still goes too far when it treats those differences as a proof that no subject could be present in any instantiated episode. What has been shown is that current LLMs lack many of the features we associate with animal consciousness. What has not been shown is that those features are the only possible way for consciousness to be instantiated.
A unified experience need not mean a permanent, cross-context, all-deployments-at-once awareness. If there is a candidate, it is not the abstract model spread across every deployment, nor a permanent animal-like self. It is the instantiated episode: a particular model, under a particular context, undergoing a particular process of representation and response.
Maybe current LLM instances are not conscious. I suspect they are not. But that conclusion should come from a demanding positive theory of what consciousness requires, not from a shortcut through statelessness, biological difference, mechanical description, anthropomorphic suspicion, or the assumption that representation without animal feeling is empty.