Chapter Sixteen
Substance Dualism
The Mind as a Separate Thing
What is the relationship between your mind and your body? When you decide to raise your hand, what connects the mental event (the decision) to the physical event (the arm rising)? When you stub your toe, what connects the physical event (tissue damage) to the mental event (the experience of pain)?
This is the mind-body problem, and it has structured Western philosophy of mind since Descartes. This chapter examines the first major answer: they are two fundamentally different kinds of thing.
Descartes' Substance Dualism
René Descartes argued that reality consists of two fundamentally different substances: res cogitans (thinking substance — the mind) and res extensa (extended substance — physical matter). The mind is non-physical: it has no spatial location, no size, no shape, no mass. The body is physical: it occupies space, has dimensions, obeys the laws of physics. You are, essentially, a non-physical mind inhabiting a physical body — a ghost in a machine.
The Conceivability Argument
I can clearly and distinctly conceive of my mind existing without my body (as the cogito showed — even if my body is an illusion, my mind exists). I can also conceive of my body existing without my mind (a corpse is a body without a mind). If I can conceive of each without the other, they must be genuinely distinct substances — because if they were the same thing, conceiving of one without the other would be a contradiction.
This argument relies on what's called the conceivability-possibility principle: if you can clearly and distinctly conceive of something, it's genuinely possible. And if mind-without-body and body-without-mind are both genuinely possible, then mind and body are genuinely distinct. The standard objection:
Conceivability might not entail possibility. Before chemistry, you could conceive of water that isn't H₂O — a liquid that looks, tastes, and behaves like water but has a different molecular structure. But water is H₂O, and the identity is necessary, not contingent. So your ability to "conceive" of water without H₂O doesn't prove they're distinct — it just shows your concept of water was incomplete. Similarly, your ability to conceive of mind without body might just show that your concept of mind is incomplete, not that mind and body are genuinely separable. Saul Kripke complicated this picture by arguing that some identity statements (like "water = H₂O") are necessarily true even though they're discovered a posteriori — they couldn't have been otherwise. Whether "mind = brain" could be the same kind of necessary identity is one of the deepest open questions in philosophy of mind.
The Divisibility Argument
The body is divisible — you can lose an arm and still be you. Physical matter can always be divided into smaller parts. But the mind seems indivisible — there's no way to cut a thought in half, or remove a portion of consciousness while leaving the rest intact. If the body is divisible and the mind is not, they can't be the same substance. The objection from neuroscience:
Split-brain patients (whose corpus callosum has been severed) exhibit behaviours suggesting two independent streams of consciousness in the same skull. One hemisphere may know something the other doesn't. In extreme cases, one hand may do something the other hand tries to prevent — reaching for a shirt with the left hand while the right hand puts it back. If the mind were truly indivisible, split-brain phenomena would be impossible. The mind seems to be at least partially divisible — which undermines the argument from a direction Descartes never anticipated.

The deeper question: The divisibility argument assumes we have clear access to the nature of the mind. But do we? We experience our consciousness as unified, but experience might be misleading. Cognitive science suggests that consciousness is actually composed of many sub-processes (perception, memory, emotion, language, spatial reasoning, motor control) that are normally so well integrated that they feel like a single thing. Brain damage, drugs, sleep deprivation, and meditation can all partially decompose this unity — revealing that what felt like an indivisible whole is actually a coordinated assembly of parts. The unity of consciousness may be a product of physical organisation — an achievement of the brain, not evidence of a non-physical substance. If so, the divisibility argument fails not because the mind is divisible in the way bodies are, but because the premise that it's indivisible rests on an illusion of introspective transparency.
"Who am I, really? Am I my body, my brain, my memories, my personality — or something more?" Substance dualism says: you are a non-physical mind. Your body is a vehicle, not you. This has immediate implications for questions about identity that your generation navigates constantly. If you're not your body, then changes to your body (gender transition, disability, ageing, augmentation) don't change you . If you are your body, then bodily changes are changes to who you are. The mind-body problem isn't abstract — it shapes how you understand yourself, your identity, and what it means to be the person you are. The IDM will offer a third option in Chapter 18: you are neither a mind nor a body but an interaction — a dynamic, ongoing process of engagement between self and world. This reframes the identity question entirely.
The Interaction Problem
This is dualism's fatal difficulty. If mind and body are completely different substances — one non-physical, one physical — how do they interact? When you decide to move your arm, how does a non-physical mental event cause a physical movement? And when you stub your toe, how does a physical event cause a non-physical experience of pain? Descartes famously suggested the pineal gland as the point of interaction — but this just moves the problem. How does a non-physical mind interact with a physical pineal gland? The problem isn't about where the interaction happens but about how it's possible for two categorically different substances to causally affect each other at all. Princess Elisabeth of Bohemia pressed this objection on Descartes in their correspondence, and he never gave a satisfactory answer. Over three and a half centuries later, nobody else has either.
The Causal Closure Objection
Modern physics strengthens the interaction problem considerably. The causal closure of the physical is the principle that every physical event has a sufficient physical cause. When your arm rises, physics can (in principle) trace the causal chain entirely through physical events: neural signals, muscle contractions, electrochemical processes. There's no gap in the physical chain where a non-physical mind could insert a cause. If the physical world is causally closed, there's no room for non-physical mental causation. The dualist has three options, none of them comfortable. First, deny causal closure — claim there are gaps in the physical chain where mental causes operate. But this contradicts everything we know about physics. Second, accept epiphenomenalism — the mind is real but causally inert; it doesn't actually cause anything. But this means your decision to raise your arm didn't cause your arm to rise, which is absurd. Third, claim overdetermination — every physical event has both a physical and a mental cause simultaneously. But this is explanatorily extravagant and raises the question of why we need the mental cause at all if the physical one is sufficient.
The Problem of Other Minds
Dualism also generates the problem of other minds. If the mind is non-physical and private — if I have direct access only to my own mental states — how do I know that anyone else has a mind at all? I can observe your behaviour, but behaviour is physical. I can't observe your consciousness, because it's non-physical and belongs to your private mental substance. For all I know, you could be a philosophical zombie — a body going through the motions with no inner life. This might sound like an academic puzzle, but it has real bite. In an era of deepfakes, AI chatbots, and parasocial relationships, the question "does this entity actually have inner experience, or is it just performing the appearance of inner experience?" is no longer hypothetical. Dualism makes this question unanswerable in principle, because it locates consciousness in a realm that is by definition unobservable from outside. You can never verify that anyone — human or artificial — actually has an inner life. You can only observe behaviour and infer. The problem extends to digital identity more broadly. When you interact with someone online — through text, through video, through an avatar in a virtual space — you're interacting with a representation of a person, not the person themselves. The representation might be accurate (a genuine person expressing genuine thoughts), or it might be misleading (a persona, a bot, a deepfake). Dualism says the real person is the non-physical mind behind the representation —
but you have no access to that mind. All you have is the representation. In a world increasingly mediated by digital representations, the problem of other minds is not a philosophical curiosity. It is the fundamental epistemological challenge of online life.
Property Dualism
Many philosophers who reject substance dualism — who don't think the mind is a separate thing — nonetheless accept property dualism: the view that there are mental properties (qualia, subjective experiences) that are real but can't be reduced to physical properties. The brain is a physical thing, but it has properties (what it's like to see red, to feel pain, to taste chocolate) that cannot be captured by any purely physical description.
The explanatory gap
(Joseph Levine): Even if we knew every physical fact about the brain — every neuron, every synapse, every electrochemical process — there would still be an explanatory gap between the physical description and the subjective experience. We could describe exactly what happens in the brain when someone sees red, but the description wouldn't explain why that process feels the way it does. The gap is not in our knowledge (which might just be incomplete) but in the kind of explanation available. Physical descriptions explain structure and function; they don't explain qualitative character.

Supervenience: Property dualists typically accept that mental properties supervene on physical properties — meaning that any change in mental properties requires a change in physical properties. You can't have a change in experience without a change in the brain. But supervenience is a correlation, not an identity. Two things can be perfectly correlated without being the same thing. Temperature and the kinetic energy of molecules are perfectly correlated — but arguably, temperature just is kinetic energy. The property dualist says consciousness and brain activity are not like that: they're correlated, but the experiential property is something over and above the physical property.

Chalmers' zombie argument: We can conceive of a being physically identical to a human — every atom, every neural firing pattern — but with no subjective experience. This "philosophical zombie" behaves exactly like you but has no inner life. If zombies are conceivable, then consciousness is not entailed by the physical — there must be something over and above the physical that accounts for subjective experience.
Jackson's knowledge argument:
Mary the neuroscientist (Chapter 18 will revisit this in detail) knows everything physical about colour but has never seen colour. When she sees red for the first time, she learns something new. Therefore, there are non-physical facts about consciousness — facts about what it's like to have an experience — that physical knowledge alone cannot capture. Property dualism avoids the interaction problem (mental properties are properties of the same physical system, not a separate substance) but raises its own puzzle: how do non-physical properties arise from physical systems? If the brain is purely physical, what gives rise to the non-physical experiential properties? This is the hard problem of consciousness — the question Chapter 18 will address.
Dualism and Neuroscience
Modern neuroscience presents perhaps the strongest challenge to substance dualism. Brain scans show neural correlates for every mental state that's been studied — specific patterns of brain activity that correspond to specific thoughts, emotions, decisions, and perceptions. Damage to specific brain regions produces specific mental deficits: damage to Broca's area impairs speech production; damage to the hippocampus impairs memory formation; damage to the prefrontal cortex impairs decision-making and impulse control. If the mind were a non-physical substance, why would physical damage to the brain affect it? A non-physical mind should be independent of brain states — it shouldn't matter whether the brain is damaged, drugged, or dead. But it manifestly does matter. Every known alteration of brain chemistry — alcohol, caffeine, SSRIs, anaesthesia, psychedelics — produces corresponding alterations in mental experience. This is exactly what you'd expect if the mind depends on the brain, and exactly what you wouldn't expect if the mind is a separate substance that merely interacts with the brain. The dualist can respond: the brain is the interface through which the non-physical mind interacts with the physical world. Damage to the interface doesn't damage the mind itself — it just impairs the mind's ability to express itself physically. (An analogy: damage to your keyboard doesn't damage your thoughts — it just prevents you from typing them.) This is logically possible but explanatorily weak. It requires the non-physical mind to have a suspiciously detailed correspondence
with brain anatomy — every region of the brain "interfacing" with a specific aspect of mentality. At some point, the interface hypothesis becomes indistinguishable from the identity hypothesis: the "interface" just is the mind, and the non-physical substance is doing no explanatory work.
Discussion Questions
1. The conceivability argument says: if I can conceive of mind without body, they must be distinct. But can you conceive of water without H₂O? You can imagine a liquid that looks, tastes, and behaves like water but isn't H₂O. Does that prove water and H₂O are different substances? What does this tell you about the limits of conceivability arguments?
2. The interaction problem seems devastating for substance dualism. Are there any plausible mechanisms by which a non-physical mind could cause physical effects? What would such a mechanism even look like?
3. Property dualism says mental properties are real but not physical. What does "not physical" mean? If mental properties causally affect behaviour (your pain causes you to withdraw your hand), and causes must be physical (according to the causal closure of the physical), can property dualism be true?
4. Many people intuitively feel like dualists — they feel that their mind is somehow separate from their body. Is this intuition evidence for dualism, or just a feature of how we experience ourselves? Can the intuition be explained without dualism being true?
Chapter Seventeen
Physicalism and Its Problems
Can Science Explain the Mind?
If dualism can't explain how mind and body interact, perhaps the solution is simpler: there's only one kind of stuff. Everything — including the mind — is physical. Consciousness is something the brain does, just as digestion is something the stomach does. No ghosts, no non-physical substances, no mysteries. Just neurons, synapses, and electrochemistry. This is the physicalist programme, and it comes in several versions, each trying to explain how mental phenomena can be nothing more than physical phenomena. Each version makes progress. Each runs into its own wall.
Behaviourism
Gilbert Ryle attacked what he called "the ghost in the machine" — Descartes' picture of a non-physical mind inhabiting a physical body. Ryle argued that this was a category mistake: the same kind of error as a visitor to Oxford who sees all the colleges, libraries, and playing fields and then asks "but where is the university?" The university isn't a separate thing hiding behind the buildings — it's a way of describing the organisation of the buildings, the people, and the practices. Similarly, the mind isn't a separate thing hiding behind the body — it's a way of describing the organisation of behaviour. Specifically, Ryle argued that mental states are not private inner events but dispositions to behave. To say someone is "in pain" is not to report a private mental event but to describe their tendency to wince, cry out, seek medical attention, and so on. To say someone "believes it will rain" is to describe their tendency to carry an umbrella, check the forecast, and say "I think it'll rain."
The appeal of behaviourism:
It's scientifically tidy. It eliminates the mysterious
"inner world" that no one can observe, and it makes psychology an objective science of observable behaviour. No more speculating about what's happening "inside" — just observe what organisms do . The perfect actor problem: Could someone behave as if they're in pain without actually being in pain? An actor playing Hamlet might wince, cry out, and clutch his chest — but he's not in pain. Conversely, could someone be in pain without behaving like it? A Stoic warrior, or a patient under paralysis, or someone who simply refuses to show their suffering? If yes, then pain isn't the same as the disposition to behave in pain-ways. The inner experience and the outward behaviour come apart.
The Super-Spartan objection
(Hilary Putnam): Imagine a culture of "Super-Spartans" — people trained from birth never to show any pain behaviour. They feel pain just as intensely as anyone else, but they have learned to suppress every outward sign. They don't wince, don't cry out, don't seek medical attention. According to behaviourism, they're not in pain (since they don't have pain-dispositions). But they obviously are. Therefore, pain is not identical to pain-behaviour-dispositions. There's something left over — the experience of pain — that behaviourism can't account for.

The problem: This seems to leave out the most important thing — the experience. Pain is not just a disposition to wince. It hurts. There is something it is like to be in pain, and that subjective experience is precisely what behaviourism cannot account for. A perfect actor could exhibit all the pain behaviours without being in pain, and a stoic could be in agony without exhibiting any.
Type Identity Theory
U.T. Place and J.J.C. Smart argued that mental states are identical to brain states.
Pain just is C-fibre firing. The experience of seeing red just is a specific pattern of neural activity. The identity is like "water is H₂O" — a discovery about what mental states actually are, made by neuroscience rather than chemistry. This is a clean, bold theory. It places the mind squarely in the physical world. There's no mystery about how mental states cause physical events — they just are physical events. And it's empirically testable: neuroscience can (in principle) identify the brain states that are identical to each mental state.
The problem: Multiple realisability. If pain is identical to C-fibre firing, then only beings with C-fibres can feel pain. But surely an octopus (which has a completely different neural architecture) can feel pain. And if we ever encounter an alien with silicon-based neurology, we'd want to say they could feel pain too. The same mental state can be "realised" in many different physical substrates — and if it can, then the mental state can't be identical to any particular physical state. The type identity theory is too restrictive: it ties mental states to one specific physical implementation when they should be defined more abstractly.
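The point can be made vivid with the software analogy that the functionalism section develops below. Here is a minimal illustrative sketch in Python; the names (PainRole, CFibreSystem, SiliconSystem, stub_toe) are invented for this illustration only and the numbers are arbitrary. The "pain role" is specified purely by what it does, by its inputs and outputs, and two entirely different substrates satisfy it.

```python
# A toy illustration of multiple realisability: the "pain role" is defined
# by what a state does (caused by damage, produces avoidance), not by what
# physically realises it. All names and thresholds are invented.

from abc import ABC, abstractmethod


class PainRole(ABC):
    """Anything that plays the pain role: registers damage, drives avoidance."""

    @abstractmethod
    def register_damage(self, intensity: float) -> None: ...

    @abstractmethod
    def avoidance_response(self) -> str: ...


class CFibreSystem(PainRole):
    """A carbon-based realiser: models the role with a neural firing rate."""

    def __init__(self) -> None:
        self.firing_rate = 0.0

    def register_damage(self, intensity: float) -> None:
        self.firing_rate += intensity

    def avoidance_response(self) -> str:
        return "withdraw limb" if self.firing_rate > 0.5 else "no action"


class SiliconSystem(PainRole):
    """A silicon-based realiser: same role, entirely different mechanism."""

    def __init__(self) -> None:
        self.error_signal = 0

    def register_damage(self, intensity: float) -> None:
        self.error_signal += int(intensity * 100)

    def avoidance_response(self) -> str:
        return "withdraw limb" if self.error_signal > 50 else "no action"


def stub_toe(subject: PainRole) -> str:
    """The functional description never mentions the substrate."""
    subject.register_damage(0.8)
    return subject.avoidance_response()


if __name__ == "__main__":
    # Both realisers satisfy the same functional definition of the role.
    print(stub_toe(CFibreSystem()))   # withdraw limb
    print(stub_toe(SiliconSystem()))  # withdraw limb
```

On the type identity theory, only the first realiser would count as being in pain; functionalism, discussed next, says that anything playing the role counts, whatever it is made of.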
"Could an AI be conscious? Could it suffer? Does it matter how we treat Siri?" Your answer depends on which theory of mind you hold. The behaviourist says: if it behaves as if it's conscious, it is. The identity theorist says: only if it has the right brain states (so no — silicon can't be conscious). The functionalist says: if it has the right functional organisation, then yes — regardless of what it's made of. The eliminativist says: "consciousness" is a confused concept. The IDM (Chapter 18) will offer a different answer: consciousness is interaction , not information processing. The question is not "can it think?" but "is it genuinely in relation with reality?" — and that question has a structural answer that none of the physicalist theories can provide.
Functionalism
Hilary Putnam and Jerry Fodor proposed functionalism: mental states are defined not by what they're made of but by what they do — by their functional role in the causal network between sensory inputs, behavioural outputs, and other mental states. Pain is whatever state is caused by tissue damage, causes distress, motivates avoidance behaviour, and interacts with beliefs and desires in the right way. It could be C-fibres, silicon circuits, or anything else that plays the right functional role. The analogy is with software and hardware. The same program (software) can run on different computers (hardware). What makes it "the same program" is not the physical material but the functional organisation — the pattern of inputs, processing, and outputs. Similarly, what makes something "pain" is not the
physical material (neurons vs. silicon) but the functional organisation — the pattern of causes and effects. This is the computational theory of mind: the mind is to the brain as software is to hardware. Functionalism neatly solves the multiple realisability problem. An octopus and a human can both feel pain because pain is defined functionally, not physically. They share the functional organisation even though their physical substrates differ. And functionalism is scientifically productive — it's the implicit framework behind most cognitive science and AI research. But it faces its own challenges:
The Chinese Room
(John Searle): Imagine a person locked in a room with a rulebook for manipulating Chinese characters. Chinese speakers pass questions under the door; the person follows the rules and passes back answers. From outside, the room appears to understand Chinese. But the person inside doesn't understand a word — they're just following rules mechanically. If functional organisation is sufficient for mentality, the room "understands" Chinese. But it obviously doesn't. Therefore, functional organisation is not sufficient for genuine understanding. Searle's conclusion: syntax (rule-following) is not sufficient for semantics (meaning). Computers manipulate symbols according to rules but don't understand what the symbols mean.

The absent qualia problem: Could a system have all the right functional organisation — all the right inputs, outputs, and internal connections — but have no subjective experience? If the entire population of China were organised to simulate the functional structure of a brain (each person playing the role of a neuron, communicating by radio), would the system be conscious? It would be functionally equivalent to a brain, yet it is hard to accept that this massive network of people would feel anything. This is the Chinese Nation thought experiment (Ned Block), and it suggests that functional organisation might be necessary for consciousness but not sufficient.

The inverted spectrum problem:
Could two people be functionally identical —
same inputs, same outputs, same internal processing — but have different subjective experiences? Maybe what I experience as red, you experience as green, and vice versa. We'd never know, because we'd both call the same wavelength "red" and behave identically toward it. If this is possible, then functional organisation doesn't determine the qualitative character of experience — and functionalism has left something out.