
The progression from behaviourism to identity theory to functionalism to eliminativism represents increasingly sophisticated attempts to make the mind fit into a purely physical framework. Each addresses the previous theory's weaknesses while generating new ones. And there's a consistent pattern: each theory handles the functional aspect of consciousness (what it does, how it processes, what role it plays) better than the last, but none of them can account for the qualitative aspect (what it's like, why there's an experience at all). Behaviourism ignores the qualitative aspect entirely. Identity theory identifies it with a specific brain state but can't explain why that brain state feels like anything. Functionalism abstracts away from the brain state but still can't explain why any functional organisation should come with experience. Eliminativism tries to dissolve the question but ends up denying the most obvious fact about consciousness — that it exists.

This impasse has led some philosophers toward panpsychism — the view that consciousness is a fundamental feature of reality, present in some form in all matter, not just in brains. On this view, consciousness isn't something that "emerges" from physical complexity (which nobody can explain) but something that was there all along, in rudimentary form, and gets organised into the rich inner life we experience by the brain's complexity. Panpsychism sounds exotic, but it has been defended by serious philosophers (Philip Goff, Galen Strawson) and has the advantage of avoiding the "hard problem" — if consciousness is fundamental, there's nothing to explain about how it arises from non-conscious matter. The difficulty is the combination problem: even if electrons have micro-experiences, how do billions of micro-experiences combine into the single, unified experience of reading this sentence? Nobody knows.

The pattern suggests that the problem may not be with any particular physicalist theory but with the physicalist programme itself — the assumption that consciousness can be fully accounted for in third-person, structural, physical terms. The IDM's response (Chapter 18) will cut through this impasse from a different direction entirely. Rather than asking "how does consciousness arise from matter?" (which assumes matter is fundamental and consciousness derivative), it asks: what if consciousness is neither fundamental nor derivative but relational? What if experience is not a property of substances (whether physical or mental) but a quality of interactions? Chapter 18 will argue that this is exactly the case — and that the solution lies not in abandoning physicalism but in recognising that it captures one modality of consciousness while systematically ignoring the others.

Discussion Questions

1. Could a sufficiently advanced AI be conscious? Behaviourism says yes (if it behaves right). Functionalism says yes (if it's functionally organised right). Identity theory says no (wrong substrate). What do you think, and which theory does your intuition align with?

2. Searle's Chinese Room argues that rule-following without understanding isn't genuine mentality. Does this apply to current AI language models? They manipulate symbols according to statistical patterns without (apparently) understanding meaning. Are they a Chinese Room?

3. Eliminative materialism says beliefs don't exist. But you just read that sentence and formed a belief about it. Is the theory self-refuting, or is there a way to state it without presupposing what it denies?

4. Each physicalist theory handles the "what it's like" of consciousness differently — behaviourism ignores it, identity theory identifies it with brain states, functionalism identifies it with functional roles, eliminativism denies it. Which approach strikes you as most honest about what it can and can't explain?

Chapter Eighteen

Consciousness as Interaction

The Problem So Far

The last two chapters told a story of escalating frustration. Dualism (Chapter 16) tried to honour the reality of subjective experience by positing a non-physical mind — and couldn't explain how mind and body interact. Physicalism (Chapter 17) tried to honour the explanatory power of science by reducing consciousness to brain states — and couldn't account for the subjective character of experience, what it's like to see red or feel pain. David Chalmers crystallised the impasse with what he called the hard problem of consciousness: even if we could explain every functional, behavioural, and neurological aspect of the brain — how it processes information, controls behaviour, enables speech — there would still be an unexplained remainder.

Why is there something it is like to be conscious?

Why does any of this processing come with an inner experience at all? Four hundred years after Descartes split the world into mind and body, the debate has not moved. Dualists cannot explain interaction. Physicalists cannot explain experience. And both camps agree — implicitly or explicitly — on the assumption that generates the impasse: that consciousness must be either a substance (mental or physical) or a property of a substance (a feature of brains, or of souls). The IDM says: that assumption is the problem.

Consciousness Is Not a Substance

Recall from Chapter 3 the foundational triple of any interaction: subject, object, and interaction itself. These three are distinct, inseparable, and non-interchangeable. And — crucially — Axiom I tells us that the interaction (immanent modality) is more fundamental than either the subject (transcendent) or the object (omniscient).

Now consider the mind-body problem. Dualism says consciousness is on the subject side — it's a mental substance. Physicalism says consciousness is on the object side — it's a physical substance (or a property of physical stuff). Both are looking for consciousness in one of the two poles of the interaction. The IDM says: consciousness is not at either pole.

Consciousness is the interaction itself. Not a substance. Not a property of a substance. A process — the process of a subject being in relation with an object. Consciousness is what happens between self and world. It is the meeting, the exchange, the mutual influence. It doesn't belong to the brain (though the brain participates in it). It doesn't belong to an immaterial soul (though subjective experience is a real aspect of it). It belongs to the relation — the between-space that is neither purely subjective nor purely objective. This is not word-play. It's a fundamental reorientation. If consciousness is an interaction rather than a substance, then:

The interaction problem of dualism falls away. You don't need to explain how mind and body interact, because consciousness is their interaction. Asking "how does consciousness interact with the brain?" is like asking "how does running interact with the legs?" Running isn't a separate thing that needs to be connected to legs — it's what legs do when they're in a certain relation to the ground, the body, and the environment. Consciousness isn't a separate thing that needs to be connected to the brain — it's what happens when a subject is in a certain kind of relation to a world.

The hard problem is reframed — or rather, it is revealed as an artefact of the substance framework. The question "why is there something it is like to be conscious?" presupposes that consciousness is an addition to physical processes — something extra that needs explaining. But if consciousness is interaction, it's not extra. Every real interaction already has a "what it's like" — a qualitative character that belongs to the interaction itself. The question "why does brain activity feel like something?" is analogous to "why does a collision involve an impact?" Because that's what a collision is. The impact isn't added to the collision; it's constitutive of it. Similarly, the experiential quality isn't added to the neural interaction; it's constitutive of it.

Knowing and Understanding: The Incommensuration

The deepest move in the IDM's account of consciousness is the distinction between knowing and understanding. In ordinary language, we use these interchangeably. But in the IDM, they name two categorically different epistemic processes — so different that no amount of one can substitute for any amount of the other.

Knowing is third-person, omniscient-modal. It's the kind of epistemic relation you have to something when you can describe it, model it, represent it from outside. Scientific knowledge is paradigmatically this: objective, structural, impersonal, transferable. A textbook of neuroscience gives you knowledge of the brain. It describes the structures, the pathways, the chemical processes, the functional roles. This knowledge can be arbitrarily detailed and still remain knowledge — it stays on the third-person side.

Understanding is first-person, immanent-modal. It's the kind of epistemic relation you have to something when you are in interaction with it — when you participate in it, experience it, are affected by it from the inside. You understand what it's like to see red not by reading about wavelengths and cone cells but by actually seeing red. You understand what grief feels like not by studying the psychology of bereavement but by grieving.

The IDM's claim — which it calls the Incommensuration Theorem — is that these two are incommensurable: no amount of knowing (however great and extensive) is equivalent to any amount of understanding, and no amount of understanding is equivalent to any amount of knowing. The two are categorically distinct, irreducible to each other, and non-interchangeable.

This is not a claim about human limitations. It's not saying "we happen to be unable to translate between them." It's a structural claim: the two modes of epistemic relation are different in kind, not just in degree. They operate on different sides of the plane of perception, with different temporalities, different scopes, and different relationships to the subject-object distinction. And this is the key that unlocks the hard problem.
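
Stated schematically (an illustrative gloss, not the IDM's own notation): let K be the set of possible states of knowing and U the set of possible states of understanding, each with its own ordering by extent. The theorem's claim is then categorical rather than quantitative:

    for all k in K and for all u in U: k ≢ u.

That is, equivalence never holds between the two sets at any magnitude — just as no number of metres is equivalent to any number of kilograms. The failure of translation is not a shortfall of quantity but a difference of dimension.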

Mary's Room, Revisited

Frank Jackson's thought experiment: Mary is a brilliant neuroscientist who has spent her entire life in a black-and-white room. She knows everything there is to know about the physics and neuroscience of colour vision — every wavelength, every neural pathway, every functional description. Then one day she steps outside and sees a red rose for the first time.

Does Mary learn something new?

Almost everyone's intuition says yes — she learns what it's like to see red. This seems to prove that there are facts about consciousness that physical knowledge alone cannot capture, which is usually taken as an argument for some form of dualism or property dualism. The physicalist response has typically been to deny the intuition — to insist that Mary didn't really learn a new fact, just acquired a new ability (the ability to imagine, recognise, or remember red). But this response always feels like it's dodging the obvious. The IDM's response is different from both.

Mary's pre-release epistemic state is knowing — omniscient-modal. She has comprehensive third-person, structural, descriptive knowledge of colour. This knowledge is genuine and valuable. It's not illusory or incomplete within its own mode. She genuinely knows everything there is to know about colour, in the third-person sense of "know."

What she gains on stepping outside is understanding — immanent-modal. A first-person, participatory, interactive relation with colour. This is genuine and valuable. It's not illusory or reducible to knowledge. She genuinely understands something she didn't understand before.

And the Incommensuration Theorem tells us: these are categorically different. No amount of knowledge was ever going to produce this understanding, because knowing and understanding are not on the same scale. It's not that her knowledge was incomplete — she could have had infinitely more knowledge and it still wouldn't have yielded understanding. They're different kinds of epistemic relation, as different as width and weight.

This means the physicalist is wrong to say she didn't learn anything. And the dualist is wrong to conclude that there must be a non-physical substance she's now in contact with. What happened is that she entered into a new interaction — an interaction with red — and that interaction has a qualitative character (understanding) that is categorically different from the structural description (knowledge) she already had. No new substance is required. No mysterious property is invoked. Just a recognition that the world has two irreducible modes of epistemic access — knowing and understanding — and that the hard problem arises from treating one as if it should be reducible to the other.

Zombies, Dissolved

The zombie argument (Chalmers): can we conceive of a being that is physically identical to a human — every neuron, every chemical, every behaviour — but that has no inner experience? If we can conceive of such a being, then consciousness must be something "over and above" the physical, since removing it leaves all the physical facts unchanged.

The IDM's response: the zombie is inconceivable once you understand consciousness as interaction. If consciousness is a substance or property, then yes, you can imagine removing it while leaving the physical substrate intact — just as you can imagine removing the colour of a ball while leaving its shape and weight. But if consciousness is the interaction between subject and world, you can no more remove it from a functioning brain-in-world than you can remove the collision from two balls that are colliding. A "zombie" that is physically identical to you and interacting with the environment in identical ways is conscious — because consciousness is what that kind of interaction is. To say "it does everything you do but has no inner experience" is like saying "the balls collide but there is no impact." If the collision is happening, the impact is happening. They're not separate things.

What the zombie thought experiment actually shows is that the concept of consciousness-as-substance is coherent enough to seem separable from the physical — but that this coherence is an artefact of treating consciousness as a thing rather than a process. The IDM doesn't need to deny that zombies are "conceivable" in some abstract logical sense. It denies that they're possible — that the conceivability maps onto any real feature of the world. And the reason they're not possible is that in the real world, the kind of interaction that constitutes consciousness cannot be subtracted from the physical processes that participate in it.

The First Person and the Third Person

The deep structure of the hard problem is the relationship between the first-person perspective (subjective, experiential, temporal, particular) and the third-person perspective (objective, descriptive, atemporal, general). Science operates in the third person. It produces models, descriptions, and predictions that are impersonal — they don't depend on who's doing the observing. This is its great strength. But it creates a blind spot: it cannot, by its own method, account for the practice that produces it. Science depends on scientists — first-person agents who design experiments, make observations, and exercise judgment. The third-person knowledge is produced by first-person practice, yet the third-person framework has no way to include that practice within its own descriptions.

This is not a failure of science. It's a structural feature of the relationship between knowing and understanding. The third-person perspective and the first-person perspective are like the two sides of the plane of perception: each is real, each is necessary, and neither can be reduced to the other. The third-person view gives you knowledge — universal, structural, transferable. The first-person view gives you understanding — particular, experiential, participatory. Both are needed. Neither is sufficient alone.

The hard problem of consciousness is what happens when you try to give a third-person account of something that is fundamentally a first-person phenomenon. It's not that science is wrong or limited. It's that asking "how does objective brain activity produce subjective experience?" is asking for a third-person explanation of a first-person process — and the Incommensuration Theorem tells us that this translation cannot be completed, because knowing and understanding are categorically different kinds of relation. The "hardness" of the hard problem is the hardness of trying to convert width into weight. The conversion isn't hard — it's impossible, because the two aren't on the same scale. But once you recognise that they're different kinds of thing rather than different amounts of the same thing, the problem dissolves. Not "solved" — dissolved. The question was malformed. The answer is: consciousness isn't produced by brain activity. Consciousness is the interactive process in which brain activity participates. The first-person and third-person views are two irreducible perspectives on the same process — the process of a self being in relation to a world.

If consciousness is interaction between a subject and a world, then the question "can AI be conscious?" transforms. It's no longer about whether machines can be built from the right stuff (neurons vs. silicon) or whether they can pass the right tests (Turing test, behavioural equivalence). It's about whether an AI system is genuinely in interaction with a world in the relevant sense — whether there is a real meeting between a subject and an object, with the qualitative character that interaction inherently carries. This is a genuine and open question. But it's a different question from the one usually asked.

And it has implications. If the IDM is right that consciousness requires a genuine subject-world interaction — not just the processing of information about a world, but participatory engagement with a world — then an AI system that merely models the world (omniscient mode) without being in genuine first-person relation to it (immanent mode) would not be conscious, regardless of how sophisticated its processing. The substrate matters — not because silicon can't be conscious but because the kind of interaction matters. An artificial system constructed from non-organic components might not have the right kind of relational engagement with the organic world to constitute the interaction that consciousness is.

This is directly relevant to the question of whether we should delegate moral choice-making to AI. If AI cannot be conscious in the relevant sense — cannot be in genuine first-person interaction with the world — then it cannot make ethical choices, because ethical choice requires the integration of knowing and understanding, and AI, however sophisticated its knowledge, may structurally lack understanding.

Implications for How You Understand Yourself

If consciousness is interaction, then you are not a mind trapped in a body. You are not a brain generating an illusion of self. You are a process — an ongoing interaction between a subject and a world. Your consciousness is not something you have; it's something you do. It's the dynamic, moment-by-moment meeting between you and everything that isn't you.

This has practical consequences. If consciousness is interaction, then the quality of your consciousness is the quality of your interactions. Richer, deeper, more present engagement with the world — with people, with nature, with creative work, with your own inner life — produces richer, deeper, more vivid consciousness. Impoverished, mediated, distracted, shallow engagement produces impoverished consciousness. The difference between feeling alive and feeling dead inside is not a chemical event in the brain — it's a relational event in the between-space.

This connects directly to the plane-of-perception model from Chapter 21. The two channels — perception (world to self) and expression (self to world) — are the flows through which consciousness happens. When those channels are open, flowing, and clear, you're conscious in the fullest sense. When they're blocked, garbled, or shut down, consciousness diminishes — not because your brain is broken but because the interaction has been impaired.

Depression, in this view, is not just a brain-chemistry problem (though brain chemistry is certainly involved). It's an interaction problem — the closing of channels, the collapse of the between-space, the withdrawal from genuine engagement with the world. And recovery is not just a matter of restoring chemical balance. It's a matter of reopening the interaction — gradually, carefully, with support — so that the quality of consciousness can return.

Discussion Questions

1. The chapter claims that the hard problem dissolves once we treat consciousness as interaction rather than substance. Is "dissolving" a problem the same as "solving" it? What's the difference? Is dissolution satisfying, or does it feel like avoiding the question?

2. The Incommensuration Theorem says that no amount of knowing can substitute for any amount of understanding. Can you think of a case from your own life where this was true — where all the information in the world couldn't give you the understanding that direct experience provided? Can you also think of a case where understanding without knowledge was inadequate?

3. If consciousness is the interaction between self and world, what does this imply about the consciousness of very simple organisms — a worm, a bacterium, a plant? Do they have "something it's like" to be them? Is there a threshold, or is consciousness a matter of degree?

4. The chapter suggests that AI might process information about the world (omniscient mode) without being in genuine interaction with it (immanent mode). What would "genuine interaction" look like, and how would you test for it? Is the Turing test adequate? Why or why not?

Advocacy Scenario

Someone says: "This 'consciousness is interaction' idea is just a fancy way of dodging the hard problem. You haven't explained why interactions feel like anything. You've just renamed the mystery." How would you respond? Consider: Does any foundational explanation explain "why" at the deepest level, or does it always bottom out in "this is how things are"? Physics doesn't explain why mass attracts mass — it describes how gravity works. The IDM doesn't explain why interactions have qualitative character — it describes the structure of that character (the knowing/understanding distinction, the modalities, the plane of perception). Is the demand for a "why" at the very bottom a legitimate philosophical requirement, or is it the demand for one more step in an infinite regress?