This is the pool-player insight from Chapter 21, now stated as a structural principle. If I sink the ball and position the cue ball well, the next shot is easier. If I make a choice that serves everyone involved and creates conditions favourable for future choices, I'm on a path. The win-win choices form a continuous path through the space of possibilities — a path you can walk, where each step makes the next step more natural. Conversely: compromise, coercion, and zero-sum choices move you away from this path. They create conditions where the next good choice is harder to find, not easier. Bad choices compound too — in the wrong direction.

Third Theorem: The distance from the path is measurable. Wherever you are in the space of possible choices — even if you can't see the path of right action directly — you can sense the direction toward it. The degree to which it seems that no win-win choice is available is a measure of how far from the path you currently are. And the direction that seems "warmest" — the choice that feels most coherent when thinking and feeling are both engaged — is the direction toward the path, even if you can't see the path itself.

Think of it as navigating in fog. You can't see your destination, but you can feel which direction the ground slopes. The third theorem says: reality provides a gradient. Even when you can't compute the optimal choice, even when the situation is too complex for rational analysis, you can sense which direction is better. That sensing is what attunement and discernment provide — the feeling capacities that Chapter 21 described as guides to choice alongside thinking.

Taken together, the three theorems say: the path of right action exists (Theorem 1), it's walkable (Theorem 2), and you can find your way toward it from wherever you are (Theorem 3). This is an extraordinary claim. It means that the ethical life is not a matter of following rules, or calculating consequences, or achieving a state of character. It's a navigation practice — an ongoing, moment-by-moment process of sensing the direction of greatest relational integrity and moving toward it, using all of your capacities together.
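The fog image can be made concrete with a toy model. The sketch below, in Python, shows what it means to follow a gradient you can only sense locally: at each step, compare the adjacent choices and move toward the one that feels closest to the path. The names (navigate, distance_from_path, neighbours) and the numerical example are illustrative assumptions, not part of the IDM's own apparatus.

    # A toy model of Theorem 3: you cannot see the path, but you can
    # compare nearby choices and move toward the one that feels closer.
    def navigate(position, distance_from_path, neighbours, max_steps=100):
        """Greedy gradient-following: at each step, move to whichever
        adjacent choice reduces the sensed distance from the path."""
        for _ in range(max_steps):
            best = min(neighbours(position), key=distance_from_path)
            if distance_from_path(best) >= distance_from_path(position):
                return position  # no warmer direction can be sensed; stop here
            position = best      # each step re-anchors the search (Theorem 2)
        return position

    # A one-dimensional space of choices in which the "path" sits at 7.
    sensed = lambda x: abs(x - 7)        # how far from the path this choice feels
    nearby = lambda x: [x - 1, x + 1]    # the choices adjacent to the current one
    print(navigate(0, sensed, nearby))   # prints 7: reached one step at a time

The point of the toy is structural: the walker never sees the destination, only local comparisons, yet under the theorems' claim that is enough to arrive.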
How This Relates to the Canon
The IDM doesn't reject the classical theories. It subsumes them.
Utilitarianism's insight — consequences matter — is captured by the continuity principle. The outcomes of your choices affect the relational conditions in which future choices are made. Attending to those consequences, especially over time, is part of maintaining continuity.

Kant's insight — the form of the will matters — is captured by the symmetry principle. Universalisability is a specific expression of symmetry: the same maxim should produce the same assessment regardless of who is acting on it.

Virtue ethics' insight — character matters — is captured by the path of right action. The practice of making win-win choices, sustained over time, is the cultivation of character. Courage, temperance, justice, and wisdom are not isolated traits but natural consequences of walking the path consistently. You become courageous by repeatedly choosing to engage with what matters despite risk. You become wise by repeatedly integrating thinking and feeling in the service of effective choice. Virtue, in the IDM, is the trail you leave when you walk the path well.

But the IDM goes beyond the canon in a specific way: it provides the ground that each classical theory lacks. Utilitarianism needs a theory of value (what counts as utility?) — the IDM provides one through the concept of relational integrity. Kantianism needs a basis for selecting maxims — the IDM provides one through the principles of symmetry and continuity. Virtue ethics needs an account of why some traits are virtues — the IDM provides one through the three theorems, which explain why win-win choices are structurally available and compounding.
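Universalisability-as-symmetry can be stated as an invariance test. Here is a minimal sketch in Python, under the assumption that we model an ethical assessment as a function of a maxim and an ordered tuple of agents; the names (is_symmetric, the example verdicts) are hypothetical, not the IDM's terminology.

    # A toy invariance test for the symmetry principle: the same maxim,
    # assessed with the agents' roles permuted, should receive the same
    # verdict. Names here are illustrative, not IDM terminology.
    import itertools

    def is_symmetric(assess, maxim, agents):
        """True if the verdict on `maxim` survives every permutation of
        who occupies which role: sameness of content across contexts."""
        verdicts = {assess(maxim, roles) for roles in itertools.permutations(agents)}
        return len(verdicts) == 1

    # An assessment that peeks at who is acting violates symmetry:
    biased = lambda maxim, roles: "fine" if roles[0] == "me" else "wrong"
    print(is_symmetric(biased, "break a promise", ("me", "you")))     # False

    # One that judges only the content of the maxim passes:
    impartial = lambda maxim, roles: "wrong" if "break" in maxim else "fine"
    print(is_symmetric(impartial, "break a promise", ("me", "you")))  # True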
"Can my individual choices actually change anything?" The three theorems give a structural answer to this question. Theorem 2 says that win-win choices are adjacent — good choices create conditions for more good choices. This means that your individual choice doesn't have to solve the whole problem. It just has to move you one step closer to the path. And from that new position, the next step is easier to see and to take. The paralysis you feel — "the problems are too big for my choices to matter" — is a consequence of thinking about choice in omniscient mode: standing outside the whole system, calculating whether your tiny input will shift the global outcome. From that perspective, of course you're insignificant. But that's the wrong frame. Ethics isn't about computing global outcomes. It's about maintaining the integrity of the interactions you're actually in. And within those interactions — with the people you live with, work with, share a community with — your choices are not tiny. They're the entire relational fabric. The path of right action is walked one step at a time, and each step is as large as the interaction it occurs in.
"Am I complicit in systems I didn't build but participate in?" The symmetry and continuity principles address this directly. You are complicit to the degree that your participation violates symmetry (you benefit from asymmetries you could address but choose not to) or violates continuity (your participation sustains processes that create unjust discontinuities in others' lives). But complicity is not a binary — it's a position relative to the path. The question is not "am I complicit?" (the answer is almost always yes, to some degree, because we are all embedded in imperfect systems). The question is: given where I am, what is the next step toward the path? That step might be small. But Theorem 2 says it makes the next step easier.
Goodness, Truth, and Beauty
One final element. The three classical transcendentals — goodness, truth, and beauty — have appeared throughout the history of philosophy as somehow interconnected but never satisfactorily explained. The IDM gives them a structural home.
Truth is the aspect of choice-making concerned with correspondence: does your understanding of the situation match the way things actually are? Are your models accurate? Is your thinking sound? Truth is the epistemic dimension of choice — the omniscient-modal aspect.
Goodness is the aspect concerned with value: is your choice aligned with what genuinely matters? Are you serving the integrity of the relational process? Is the care at the root of your action genuine? Goodness is the axiological dimension — the immanent-modal aspect.
Beauty is the aspect concerned with fittingness: is your choice appropriate to this specific situation, this specific community, this specific moment? Is there an elegance to it — a rightness that goes beyond what logic can justify? Beauty is the aesthetic dimension — the transcendent-modal aspect. It's the part of a good choice that couldn't have been predicted from the principles alone, the local, specific, creative element that makes the choice not just correct but alive.

A fully effective choice has all three: it's true (it rests on accurate understanding), it's good (it serves genuine values), and it's beautiful (it fits the specific moment with a rightness that transcends general principles). Remove any one and the choice is diminished. A choice that is true and good but ugly — technically correct but tone-deaf, clumsy, ill-timed — fails to honour the specific reality of the situation. A choice that is beautiful and good but untrue — well-intentioned and elegantly executed but based on false information — will produce unintended harm. A choice that is true and beautiful but not good — accurate and elegant but uncaring — is merely clever.

This is why effective choice is a skill, not a formula. No algorithm can produce it, because the beauty component — the transcendent-modal element — is by definition unpredictable from prior logic. It requires the creative, intuitive, feeling-guided engagement that Chapter 21 described. The principles of symmetry and continuity provide the ground. Thinking and feeling provide the navigation. But the specific choice, in the specific moment, is an act of genuine creativity. It cannot be mechanised, outsourced, or replaced by AI. And that is one of the deepest reasons why the mastery of choice is a human responsibility that cannot be delegated.
Discussion Questions
1. Consider a symmetry violation you've experienced — a situation where you or someone else was treated differently based on context in a way that felt unfair. Can you describe it in the precise language of symmetry (sameness of content, difference of context)? What content should have been preserved? Across what contexts?

2. Consider a continuity violation you've experienced — a sudden, disproportionate change in relational conditions. How did the discontinuity damage trust? What would a more continuous process have looked like?

3. The first theorem says there is always a win-win choice. Think of a situation where this seems impossible — where the choices genuinely appear to be trade-offs with no win-win available. Now ask: what assumptions are narrowing the space of possibilities? What desires (not just wants) are involved? Is there a deeper level at which all parties could be served?

4. The chapter claims that ethics is not a formula but a skill — and that the "beauty" component of effective choice cannot be mechanised. What implications does this have for the question of whether AI systems can make ethical decisions?
Advocacy Scenario
Someone says: "Your 'non-relativistic ethics' sounds like moral absolutism. You're just claiming to have found the One True Morality. Every culture has its own values, and no one has the right to impose theirs on anyone else." How would you respond?

Consider: What's the difference between claiming that specific moral rules are universal (which is moral absolutism) and claiming that structural principles like symmetry and continuity are universal (which is what the IDM claims)? Is the claim "fairness matters" the same kind of claim as "eat pork" or "don't eat pork"? The IDM doesn't say which specific rules to follow — it says that whatever rules a culture develops, they should exhibit symmetry and continuity. Is that imposition, or is it a description of what "working rules" structurally require?
Chapter Twelve
Applied Ethics
Testing the Framework on Real Questions
Chapter 11 gave you the IDM's ethical framework: symmetry, continuity, the three theorems of the path of right action. This chapter puts it to work on real questions — the questions your generation is actually asking. Not as a formula that produces automatic answers, but as a framework that clarifies what's at stake and reveals options that other frameworks obscure.
Should you use AI to write your essays?

The utilitarian answer: If AI-written essays produce the same learning outcomes as human-written ones, there's no harm. But they probably don't — the learning happens in the writing process, not in the final product. The student who struggles through an essay develops understanding, critical thinking, and the ability to articulate complex ideas. The student who pastes a prompt into ChatGPT develops... prompting skills. Long-term utility says: the writing process is the education. Removing it removes the benefit.

The Kantian answer: Could you universalise the maxim "use AI for all academic work"? If everyone did, assessment would become meaningless — grades would measure AI quality, not student understanding, and the entire institution of education would collapse. The maxim is self-defeating. Moreover, submitting AI work as your own is deception: you're treating the assessor merely as a means to a grade, manipulating their rational judgment rather than engaging with it honestly.

The virtue ethics answer: What character trait does this express? Laziness? Resourcefulness? Does it develop your intellectual virtues (curiosity, rigour, honesty) or atrophy them? The practically wise person asks not "can I get away with this?" but "what kind of person am I becoming by doing this?" Every time you use AI to avoid intellectual struggle, you practise avoidance — and you become better at avoiding.

The IDM answer: Apply symmetry: if you submit AI-generated work as your own, the assessor's understanding of the content (that it reflects your learning) doesn't match reality (the AI produced it). The content has changed across contexts — that's a symmetry violation. Apply continuity: if everyone gradually shifts to AI-generated work, the meaning of academic credentials degrades continuously until they signify nothing — a slow-motion continuity violation in the trust structure of education. But the IDM also asks a deeper question: what's the desire here? If the desire is to learn (a boundary phenomenon — can't be purchased, must be participated in), using AI defeats it. If the desire is to get a credential (a want — external, transactional), using AI serves it but at the cost of the deeper desire.

The path of right action: use AI as a tool for enhancing your learning (ask it questions, have it critique your drafts, use it to explore ideas) rather than replacing your learning. The win-win is possible — it just requires engaging with the tool in a way that serves the deeper desire rather than substituting for it.
Environmental Ethics
Is it wrong to eat meat?

The utilitarian answer (Singer): Animals suffer. Factory farming causes immense suffering. If suffering is what matters, the massive suffering of animals outweighs the modest pleasure of eating meat. Stop eating meat.

The Kantian answer: Animals are not rational agents, so the categorical imperative doesn't apply to them directly. But Kant did argue that cruelty to animals is wrong because it corrupts the character of the person being cruel — it damages their capacity to treat rational beings well.

The virtue ethics answer: What character trait does eating factory-farmed meat express? Is it temperance (moderate enjoyment of food) or self-indulgence (ignoring suffering for convenience)? What would the virtuous person do?

The IDM answer: Apply the dependency chain. Ecology is more fundamental than economy. Industrial animal agriculture damages the ecological foundation (land degradation, water pollution, greenhouse emissions, biodiversity loss) for the benefit of the economic layer (cheap protein). This is a dependency-chain inversion. The symmetry principle adds: the ecological costs are borne by ecosystems, future generations, and the animals themselves, while the benefits accrue to present-day consumers. The content (who bears the cost) changes dramatically across contexts — a clear symmetry violation.

But the IDM also avoids the absolutism of "never eat meat." The question is not "is meat inherently wrong?" but "does this specific food system maintain the integrity of the ecological interactions it depends on?" A regenerative farm that enhances soil health, supports biodiversity, and treats animals as participants in an ecological community may be ethically sound. A factory farm that extracts from ecology for economic profit is not. The framework is structural, not dogmatic.
The standard frameworks handle environmental ethics poorly, and this is revealing. Utilitarianism requires calculating consequences across entire ecosystems and future generations — a calculation that is not just difficult but conceptually impossible, since the relevant variables are too numerous to enumerate and their interactions too complex to model. Kantianism applies to rational agents, and it's unclear whether ecosystems, animal species, or future generations qualify as rational agents in the relevant sense. Virtue ethics asks what the virtuous person would do, but the environmental crisis is a systemic problem — no amount of individual virtue can fix a structurally destructive food system or energy infrastructure. The limitations aren't accidental. They reveal that all three frameworks were designed for individual moral agents acting in local contexts, not for civilisational-scale problems with planetary consequences. This is one of the strongest arguments for the IDM's structural approach.

The IDM addresses environmental ethics through the dependency chain (Chapter 20). Ecology is the foundation — culture, infrastructure, and economy all depend on it. Any action that damages the ecological foundation for the benefit of a higher-level layer is a dependency-chain inversion. The ethical principle is structural: don't undermine the layers your system depends on. This principle applies at every scale — from the individual consumer to the multinational corporation to international treaty negotiations — and doesn't require impossible calculations or metaphysical assumptions about nature's moral status.

The symmetry principle adds precision: the benefits of ecological extraction accrue to one group (typically wealthy, present-day humans), while the costs are borne by another (typically poorer communities, future generations, non-human life). This asymmetry — benefits here, costs there — is a symmetry violation. The content (the quality of ecological conditions) changes dramatically across contexts (who benefits vs. who suffers).

The continuity principle addresses the "boiling frog" problem of environmental degradation: each individual act of extraction or pollution produces a tiny, imperceptible change. But the cumulative effect is catastrophic. Continuity violations don't have to be sudden — gradual, sustained degradation is a continuity violation in slow motion.
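Because the dependency-chain test is structural rather than dogmatic, it can even be written down as a toy check. A minimal sketch in Python, taking the layer ordering from the chapter; the function name and the example verdicts are illustrative assumptions, not a formal IDM procedure.

    # A toy structural check for dependency-chain inversions. The layer
    # ordering follows the chapter (ecology is most fundamental); the
    # function name and example verdicts are illustrative assumptions.
    LAYERS = ["ecology", "culture", "infrastructure", "economy"]  # fundamental -> dependent

    def is_inversion(damages: str, benefits: str) -> bool:
        """An action inverts the chain when it damages a layer more
        fundamental than the layer it benefits."""
        return LAYERS.index(damages) < LAYERS.index(benefits)

    print(is_inversion(damages="ecology", benefits="economy"))  # True: factory farming
    print(is_inversion(damages="economy", benefits="ecology"))  # False: paying to restore land
    print(is_inversion(damages="culture", benefits="economy"))  # True: engagement-optimised platforms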
Does my individual action matter?

The paralysis: "My individual carbon footprint is negligible. Even if I went zero-carbon, the global effect would be unmeasurable. So why bother?" This is the omniscient-modal trap: evaluating your individual contribution from the God's-eye view and concluding it's insignificant.

The IDM reframes: your carbon footprint is not a number to be computed but a set of interactions whose quality you control. Flying less isn't about reducing a global statistic — it's about aligning your choices with your deeper desires (a liveable planet, ecological integrity) rather than your surface wants (convenience, cheap holidays). The path of right action (Theorem 2) says: each aligned choice makes the next one easier. And Theorem 3 says: even when you can't see the full path, you can sense the direction.

More practically: individual action changes culture, and culture is more fundamental than economy in the dependency chain. When you change your behaviour, you change the behaviour of the people who observe you. Cultural change is how systemic change actually happens — not through individual carbon arithmetic but through the shifting of norms, expectations, and shared values.
Technology Ethics
Should we develop artificial general intelligence? Should social media be regulated? Should genetic editing be permitted? These questions share a common structure: they involve the mastery of causation (we can build these technologies) outrunning the mastery of choice (we don't yet know whether we should).
Should social media be regulated?

The utilitarian answer: Calculate the total wellbeing produced by social media (connection, information access, entertainment) against the total harm (addiction, misinformation, mental health effects, political polarisation). If harm outweighs benefit, regulate. But the calculation is impossible — the effects are too diffuse, too long-term, and too entangled with other factors.

The Kantian answer: Does the current social media model treat users as ends in themselves or merely as means (sources of data and attention to be monetised)? The humanity formula suggests that any system designed to exploit human psychological vulnerabilities for profit treats users as means. Regulation is required to restore respect for persons.

The IDM answer: Apply the dependency chain. Social media platforms are infrastructure-layer entities that are shaping culture (a lower, more fundamental layer). When infrastructure determines culture rather than serving it, the dependency chain is inverted. The platforms don't respond to cultural values — they shape cultural values, through algorithmic curation optimised for engagement rather than wellbeing. This is structurally identical to the economy-driving-ecology inversion: a higher-level system is damaging the foundation it depends on.

The symmetry principle reveals the core asymmetry: the platforms know everything about your behaviour (they have knowledge of you) while you know almost nothing about their algorithms (you have no knowledge of them). This informational asymmetry is a symmetry violation — the content (who knows what about whom) is radically different depending on context (which side of the platform you're on). Regulation that restores informational symmetry — transparency requirements, algorithmic auditing, data sovereignty — directly addresses the structural problem.

The ethical gap (Chapter 1) is the central concept here. Every new technology widens the gap between what we can do and what we are wise enough to do. The question is not "should we develop this technology?" (which is often unanswerable in advance) but "do we have the governance structures, the cultural wisdom, and the ecological awareness to deploy this technology without catastrophic unintended consequences?"