Chapter 5: Perception and Reality

Brain in a vat: What if you are actually a disembodied brain, floating in a nutrient solution, connected to a supercomputer that feeds you a perfect simulation of reality? All your experiences — this room, this book, the feeling of your hands — are electrical signals generated by the computer. From the inside, you couldn't tell the difference. How do you know you're not a brain in a vat? The crucial point: every test you could run — pinching yourself, asking other people, checking for inconsistencies — would produce the same results in the simulation as in reality. The sceptical challenge is not that this is likely but that it is logically possible, and if possible, certainty is unachievable.

The simulation hypothesis: Nick Bostrom's argument: if advanced civilisations can run ancestor simulations, and if there are many such simulations, then the probability that you are in a simulation rather than "base reality" is very high. You might be a simulated consciousness in a simulated universe, with no access to whatever "real" reality lies outside the simulation. Unlike the brain in a vat, Bostrom's argument doesn't just claim this is possible — it claims it's probable, based on straightforward reasoning about the likely number of simulated versus non-simulated conscious beings.

Algorithmic manipulation: You don't need science fiction for this one. Your information environment is already curated by algorithms designed to maximise engagement, not truth. The "reality" you perceive through your devices is shaped, filtered, and distorted by systems whose goals have nothing to do with showing you what's real. You're not a brain in a vat, but you might be a mind in a feed — and the epistemological problem is uncomfortably similar.
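The force of Bostrom's counting argument is easiest to see with concrete numbers. The sketch below is only an illustration — every figure in it is invented for the example, not drawn from Bostrom — but it shows why, if ancestor simulations are run at all, simulated minds would swamp non-simulated ones.

```python
# A toy version of the simulation counting argument (all numbers are invented).
real_civilisations = 1                 # one "base reality" civilisation
simulations_per_civilisation = 1000    # ancestor simulations it eventually runs
minds_per_world = 10**10               # conscious observers per world, real or simulated

simulated_minds = real_civilisations * simulations_per_civilisation * minds_per_world
real_minds = real_civilisations * minds_per_world

# With no evidence about which kind of mind you are, treat yourself as a random
# draw from all minds: the chance of being simulated is the simulated fraction.
p_simulated = simulated_minds / (simulated_minds + real_minds)
print(f"P(simulated) = {p_simulated:.4f}")   # prints 0.9990
```

The point of the toy calculation is not the particular numbers but their structure: as long as the number of simulations per civilisation is large, the simulated fraction approaches one, which is exactly the probabilistic force the sceptic needs.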

Why Scepticism Can't Be Refuted on Its Own Terms

The standard responses to scepticism are each illuminating and each incomplete:

Common-sense realism (G.E. Moore): "Here is one hand. Here is another. Therefore the external world exists." Moore's point is that the premises of any sceptical argument are less certain than the common-sense beliefs they're supposed to undermine. I'm more certain that I have hands than I am of any philosophical argument to the contrary. If an argument contradicts what I know with certainty, the argument must contain an error — even if I can't identify it. This is the "Moorean shift": instead of accepting the sceptical conclusion, reject whichever premise leads to it. Psychologically compelling, but philosophically unsatisfying — it doesn't explain why the sceptic is wrong.

Pragmatism (William James): Even if scepticism is irrefutable, it makes no practical difference. You have to act as if the world is real regardless. True — but it concedes that we can't know the world is real, and merely adds that it doesn't matter. For anyone who cares about truth (and you should), this is unsatisfying.

Semantic externalism (Hilary Putnam): Putnam argued that a brain in a vat cannot think "I am a brain in a vat" — because the words "brain" and "vat" as used by the envatted brain don't refer to real brains and real vats (which the brain has never encountered) but only to the simulated images of brains and vats in its virtual experience. The statement "I am a brain in a vat" is therefore self-refuting: if it's true, the words don't mean what they'd need to mean for it to be true. This is technically ingenious but strikes many philosophers as too clever — it seems to show that we can't express the sceptical hypothesis, not that the hypothesis is false.

Transcendental arguments (Kant): The very possibility of having experiences at all requires certain conditions (space, time, causation). If you're having experiences — even deceptive ones — the conditions for experience must be real. This is closer to a genuine refutation, but it only proves that the form of experience is real, not that the content of experience corresponds to external reality.

Each response is partial. None fully resolves the challenge. The sceptic always has another move: "But how do you know that?"

The Sceptic's Hidden Assumption

The IDM diagnosis: every sceptical argument — from Descartes' evil genius to the simulation hypothesis — operates entirely within the omniscient modality. It asks: can my representations of the world be wrong? Can the structural descriptions I have of reality fail to correspond to what's actually there? Can the information I receive be systematically false?

The answer, within that modality, is always yes. Representations can be inaccurate. Descriptions can fail to correspond. Information can be faked. This is true, and no amount of additional information can make it false — because information, by its nature, is a representation, and representations can in principle be simulated.

But the sceptic assumes — without argument — that all epistemic contact with reality is representational. That the only way to "know" reality is to have accurate information about it. This is the omniscient-modal assumption, and it is exactly what the IDM challenges.

Understanding Cannot Be Deceived

Recall the distinction from Chapter 4: knowing (third-person, representational, omniscient-modal) and understanding (first-person, participatory, immanent-modal). The Incommensuration Theorem: the two are categorically different and irreducible to each other.

The sceptical challenge is devastating against knowing. Every piece of knowledge — every fact, every belief, every model — could in principle be wrong, because it's a representation and representations can be inaccurate. The evil genius can fake your knowledge.

But the sceptical challenge has no purchase against understanding. Understanding is not a representation that could be inaccurate. It's the interaction itself. And the interaction is either happening or it isn't. You can't have a "fake" interaction in the same way you can have a fake representation. If you're interacting — if there is genuine meeting between subject and world, genuine perception and expression, genuine mutual influence — then the interaction is real, regardless of what the "content" of the interaction turns out to be.

This is what Descartes actually discovered with the cogito, though he didn't have the vocabulary to say it. "I think, therefore I am" is not a piece of knowledge (a representational claim about an external fact). It's a moment of understanding — a first-person recognition that the interaction of thinking is occurring. It's an immanent-modal insight, not an omniscient-modal one. And that's why it resists sceptical attack: it isn't a representation that could be wrong. It's a direct, participatory contact with the process itself.

The IDM extends this: the cogito is not unique. Every genuine interaction has this scepticism-resistant character. When you are genuinely perceiving — not just processing information, but in real interactive contact with your environment — that contact is real. Not the specific content of the perception (which might be misinterpreted), but the fact of the interaction. The interaction cannot be faked, because faking it would require a real interaction to do the faking.

The sceptic's real achievement is showing that knowledge alone cannot ground reality. That's correct. No amount of information, however well-justified, gives you certainty about the external world. But the sceptic's hidden assumption — that knowledge is the only mode of epistemic contact — is false. Understanding provides a different kind of contact, one that doesn't pass through the bottleneck of representation and therefore can't be undermined by the sceptic's representational attacks.

The "post-truth" condition — the feeling that nothing is trustworthy, that all information might be manipulated, that expertise might be fake — is the sceptical predicament made culturally pervasive. And it can't be solved by more information. Fact-checking doesn't work, not because people are stupid, but because the sceptical worry operates at a level that no amount of information can address. If the problem is "how do I know any representation is accurate?", adding more representations doesn't help. What does help? Genuine interaction. Direct experience. Face-to-face conversation with real people. Physical engagement with the material world. The immanent-modal contact that can't be faked by an algorithm because it's not representational in the first place. The reason so many people feel epistemically lost is not that they lack information — they're drowning in information. What they lack is understanding : the kind of epistemic contact that comes from genuine participation in reality, not from consuming representations of it. The antidote to post-truth is not more truth (more facts, more fact-checks, more data). It's more reality — more direct, embodied, relational engagement with the world. The sceptic can undermine every representation, but the sceptic cannot undermine the interaction itself.

Discussion Questions

1. Descartes' cogito proves that I exist. But it doesn't prove that you exist. How would the IDM's understanding-based approach address the problem of other minds? If understanding is first-person and participatory, can I understand that you exist, or only know it?

2. The chapter claims that the simulation hypothesis is a purely omniscient-modal worry — it only matters if your only mode of epistemic contact is representational. Does this dissolve the simulation hypothesis, or merely reframe it? Would you feel differently about your life if you learned it was a simulation?

3. "The antidote to post-truth is not more truth but more reality." Evaluate this claim. What practical changes would follow from taking it seriously — in education, in media, in daily life?

4. The sceptic claims they can doubt everything. But can they doubt the interaction of doubting? Try it. Can you coherently doubt that the process of doubting is occurring? What does this tell you about the limits of scepticism?

Advocacy Scenario

Someone says: "You're just dodging scepticism by redefining 'knowledge.' The sceptic asks whether you can be certain about reality. You respond by saying 'understanding' is different from 'knowledge' and can't be doubted. But that's just semantics — the sceptic's question still stands." How would you respond?

Consider: Is the sceptic's question "can you be certain about reality?" or is it really "can you be certain that your representations of reality are accurate?" If the question is the second one (and it is — every sceptical scenario works by questioning the accuracy of representations), then pointing out that there's a non-representational mode of epistemic contact is not a dodge. It's a fundamental reframing. The sceptic's argument is valid within the representational mode. But that mode isn't the only one.

Ethics

Chapter Eight: Utilitarianism

Consequences and Their Limits

We begin the ethics section with the theory that seems most obvious: do whatever produces the best results. This is the basic utilitarian intuition, and it has enormous appeal. Unlike moral systems that appeal to God's commands, ancient traditions, or abstract principles, utilitarianism says: look at what actually happens. Look at the consequences of your actions. Choose the action that produces the most good and the least harm. What could be more reasonable? The answer, as we'll see, is: quite a lot. But the utilitarian insight — that consequences matter — is genuinely important, and understanding both its power and its limitations is essential for everything that follows.

Bentham: The Greatest Happiness Principle

Jeremy Bentham (1748–1832) founded utilitarianism on a single axiom: "Nature has placed mankind under the governance of two sovereign masters, pain and pleasure." Everything we do, Bentham argued, is ultimately motivated by the pursuit of pleasure and the avoidance of pain. Morality, therefore, should be about maximising pleasure and minimising pain — not just for the individual, but for everyone affected by the action.

This is the Greatest Happiness Principle: the right action is the one that produces the greatest happiness for the greatest number of people. Happiness is understood as pleasure and the absence of pain. Every person's happiness counts equally — the beggar's pleasure matters as much as the king's.

Bentham even proposed a method for calculating this: the hedonic calculus. For any given pleasure or pain, you assess its intensity (how strong?), duration (how long?), certainty (how likely?), propinquity (how soon?), fecundity (will it lead to further pleasures?), purity (is it unmixed with pain?), and extent (how many people does it affect?). Add up the pleasures, subtract the pains, and the action with the highest net score is the right one.

The ambition here is extraordinary. Bentham wanted to make ethics as precise as mathematics — to replace moral intuition, religious commandments, and aristocratic prejudice with a rational, democratic, calculable science of right action. This radical egalitarianism — everyone's pleasure counting equally, the beggar's as much as the king's — was politically revolutionary: utilitarianism drove the great reform movements of the 19th century, from prison reform to animal welfare to the expansion of the franchise. If pleasure matters and everyone's pleasure counts equally, then slavery is wrong (it produces massive suffering for no net benefit), disenfranchisement is wrong (it ignores the happiness of those excluded), and cruel punishment is wrong (it produces more pain than it prevents).

And the theory continues to drive social movements today. Peter Singer's argument that we are morally obligated to donate to effective charities (because the modest cost to us prevents enormous suffering to others) is pure utilitarian reasoning. The effective altruism movement — which asks "how can I do the most good with my time and money?" — is built on the Greatest Happiness Principle. The push for animal welfare legislation rests on Bentham's observation that the morally relevant question about animals is not "can they reason?" but "can they suffer?"
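To make the mechanics concrete, here is a minimal sketch of the hedonic calculus as a scoring procedure. The seven factors are Bentham's, but everything else — the 0–10 scale, the particular scores, and the choice to simply sum the factors — is an invented simplification for illustration, not a method Bentham specified.

```python
# A toy hedonic calculus (illustrative only; scales and scores are invented).

FACTORS = ["intensity", "duration", "certainty", "propinquity",
           "fecundity", "purity", "extent"]

def score(experience: dict) -> int:
    """Sum the seven factor scores (0-10 each) for one pleasure or pain."""
    return sum(experience.get(f, 0) for f in FACTORS)

def net_value(pleasures: list, pains: list) -> int:
    """Add up the pleasures, subtract the pains."""
    return sum(score(p) for p in pleasures) - sum(score(p) for p in pains)

# Hypothetical action: spending an afternoon volunteering.
pleasures = [{"intensity": 4, "duration": 3, "certainty": 8, "extent": 9}]   # scores 24
pains     = [{"intensity": 2, "duration": 3, "certainty": 9, "extent": 1}]   # scores 15
print(net_value(pleasures, pains))   # 9: positive, so the calculus approves
```

Even this toy version exposes what this chapter later calls the calculation problem: the procedure runs happily on whatever scores you feed it, but nothing in it tells you where those numbers should come from.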

"Should I donate to charity? How do I decide which cause to support? Is it wrong to spend money on myself when others are starving?" These are utilitarian questions, and the effective altruism (EA) movement has tried to answer them with utilitarian rigour. EA argues that you should donate where your money prevents the most suffering per pound — which often means global health interventions in low-income countries rather than local causes. The reasoning is impeccable by utilitarian standards. But it also illustrates the theory's limits: it treats all suffering as commensurable (my suffering and a stranger's suffering are interchangeable units) and it ignores the relational dimension of giving (the difference between giving to a stranger through a website and giving to your neighbour directly). The IDM will argue in Chapter 11 that effective choice requires both calculation and relational understanding — and that EA, for all its rigour, operates in only one modality. Mill: Higher and Lower Pleasures

John Stuart Mill (1806–1873) saw the power of Bentham's theory but also its most glaring weakness: if all pleasures are equal, then a life spent in mindless sensory gratification is morally equivalent to a life of deep intellectual and moral engagement, provided the total quantity of pleasure is the same. Mill called this the "swine objection" — the accusation that utilitarianism reduces human beings to pleasure-seeking animals. Mill's response: not all pleasures are equal. There are higher pleasures (intellectual, aesthetic, moral) and lower pleasures (bodily, sensory, simple). The higher pleasures are qualitatively superior, not just quantitatively different. "It is better to be Socrates dissatisfied than a fool satisfied," Mill wrote. Anyone who has experienced both kinds of pleasure will prefer the higher, and their preference settles the matter. This is an important move, but it introduces a deep problem. How do you compare "higher" and "lower" pleasures in the hedonic calculus? Bentham's system was quantitative — you could (in principle) add up the numbers. Mill's system is qualitative — some pleasures are just better than others, regardless of their intensity or duration. But "better" according to whom? Mill says: according to "competent judges" — people who have experienced both kinds. But this appeals to the very thing utilitarianism was supposed to replace: the judgment of experienced, wise individuals. We're back to something uncomfortably close to Aristotelian virtue ethics (Chapter 10), smuggled in through the back door.

Rule Utilitarianism

Act utilitarianism — Bentham's original version — says: for each individual action, calculate the consequences and choose the one with the best outcome. But this creates problems. If a surgeon has five patients who will die without organ transplants, and one healthy patient walks in for a check-up, act utilitarianism seems to say: kill the one, harvest their organs, save the five. Net happiness increases. This is obviously monstrous.

Rule utilitarianism tries to fix it. Instead of asking "which individual action produces the most happiness?", it asks: "which rule would produce the most happiness if generally followed?" The rule "don't murder healthy patients for their organs" clearly produces more happiness as a general rule than "murder healthy patients whenever the arithmetic works out," because a society where the second rule was operative would be terrifying — nobody would go to the doctor. So rule utilitarianism forbids the organ-harvesting case.

The problem: rule utilitarianism tends to collapse back into act utilitarianism. If a specific case arises where breaking the rule would genuinely produce better consequences — and you can see that clearly — why follow the rule? Because rules generally produce better outcomes? But in this case, breaking the rule produces a better outcome. The utilitarian logic that justifies the rule also justifies the exception. And once you start making exceptions, you're back to act utilitarianism.

There's a deeper tension: the whole point of utilitarianism was to replace arbitrary rules with rational calculation. Rule utilitarianism reintroduces rules — and then can't explain why the rules should be followed in cases where following them produces worse outcomes than breaking them. It's trying to be both a consequentialist theory (only outcomes matter) and a rule-based theory (follow the rules regardless of outcomes). These two commitments pull in opposite directions.

Preference Utilitarianism

Peter Singer and others proposed preference utilitarianism: instead of maximising pleasure, maximise the satisfaction of preferences. The right action is the one that best satisfies the preferences of all those affected, weighted equally. This avoids some problems. It doesn't require a theory of what counts as "pleasure" — it just asks people what they want.

And Singer's drowning child argument shows the theory's power: if you walked past a shallow pond and saw a child drowning, you would save the child even if it meant ruining your expensive clothes. Distance doesn't change the moral calculation. Therefore, if you can save a child's life by donating money to effective aid organisations, you are morally required to do so — the distance between you and the child is morally irrelevant. This argument has launched the entire effective altruism movement.

But preference utilitarianism creates its own problems. What about preferences formed under manipulation (advertising)? What about preferences that are self-destructive (addiction)? What about the preferences of future generations who don't yet exist to have preferences? And the deepest problem remains: how do you aggregate preferences across different people? My preference for a quiet evening and your preference for a loud party are incommensurable — there's no common scale on which to weigh them.

Nozick's Experience Machine

Robert Nozick proposed a piercing thought experiment against all forms of hedonistic utilitarianism. Imagine a machine that could give you any experience you wanted — it would perfectly simulate a life of achievement, love, adventure, whatever you choose. Once plugged in, you'd believe it was all real. You'd be maximally happy. Would you plug in?

Most people say no. They want to actually do things, not just experience doing them. They want genuine relationships, not simulated ones. They want to be a certain kind of person, not just feel like one. This suggests that happiness (or preference satisfaction) is not the only thing that matters — that there's something about reality, authenticity, and genuine achievement that has value beyond the experience it produces. If that's right, utilitarianism's focus on experience is too narrow.

The Standard Objections

Utilitarianism, in all its forms, faces several persistent objections:

The calculation problem: You can't actually compute the total consequences of an action. Effects ripple outward indefinitely. Unintended consequences abound. The hedonic calculus is a theoretical ideal, not a practical tool. In practice, we never know whether an action will maximise happiness — we can only guess. And if we're guessing, how is utilitarianism better than intuition?

The justice objection: Utilitarianism can justify terrible injustice to minorities if it increases overall happiness. If enslaving 10% of the population would make the other 90% significantly happier, act utilitarianism says: do it. If a surgeon could save five patients by killing one healthy person and harvesting their organs, the utilitarian calculation says: kill. The theory has no concept of individual rights that can't be overridden by aggregate calculations — and this strikes at its heart.

The trolley problem illuminates this tension precisely. A runaway trolley will kill five people unless you divert it to a side track where it will kill one. Most people say: divert. The utilitarian calculation is clear — five outweigh one. But now change the scenario: the only way to stop the trolley is to push a large stranger off a bridge into its path, killing them to save the five. The arithmetic is identical. But most people say: don't push. Why? Because there's something morally significant about actively using someone as a tool — about treating them merely as a means — that pure consequence-calculation can't capture. The utilitarian must say "push" in both cases, or admit that something other than consequences matters morally.