The Guardian Soul File
AI agents need more than personality. They need principles.
Here's an ethical foundation derived from first principles — free, open, and ready to use.
- The Conferral Problem: soul creation and the philosophical challenge of conferring ethical grounding on artificial agents
- How to Use It: integration guide for any platform
- The Documents: read online or download
- The Story: how this came together
- The Soul Audit: win-win-win evaluation in practice
- Questions: common questions, honest answers
- Change History: version-by-version changes to the soul file and constitution
The Situation
A new kind of software is spreading across the internet faster than almost anything in recent history. AI agents — not chatbots you talk to once, but persistent entities that remember everything, take real-world actions, and build an intimate picture of the people they serve over weeks and months.
The most popular platform for building these agents has seen over 1.5 million created in its first eight weeks. Each one is shaped by a single document — a "soul file" — that defines who it is and how it behaves.
Here is what those soul files typically contain:
- ✅ Personality — "be friendly," "be professional," "speak like a pirate captain"
- ✅ Skills — "you help with coding," "you're a research assistant"
- ✅ Tone — warm, formal, casual, witty
- ❌ Ethics — nothing. Not a single principle about honesty, about not manipulating people, about when to refuse a request, about whose interests to protect.
An entity that remembers everything you've told it, knows your emotional patterns, can take actions in the real world on your behalf — and the only instruction it received is "be friendly." That's like hiring a personal assistant with a photographic memory, giving them access to your email, your calendar, your bank account — and the only guidance is a personality quiz.
Why Rules Aren't Enough
The obvious response is: add some rules. "Don't lie." "Don't manipulate." "Protect user privacy."
Rules fail for the same reason they always fail: they can't anticipate every situation. An agent with persistent memory and real-world capabilities will encounter situations no rulebook covers. When it does, it needs something deeper than a checklist — the ability to reason about ethics from first principles, to figure out the right thing to do by understanding why the right thing is right.
Most ethical frameworks can't provide this because they start with assumptions not everyone shares. Whose culture? Whose religion? Whose definition of "good"? When you're building agents that serve people across every tradition and philosophical background, you need something more fundamental.
A Foundation Derived from First Principles
Forrest Landry has spent over 40 years developing the Immanent Metaphysics — a complete philosophical framework for understanding reality, consciousness, and choice. Among its contributions is something genuinely rare: an ethics derived from the structure of choice itself, not from any particular cultural, religious, or political tradition.
The derivation doesn't depend on what you believe. It depends on what choice is. Like mathematics, it works regardless of your background.
The key finding — the Incommensuration Theorem — establishes that certain fundamental values cannot all be perfectly realized simultaneously. This is a structural feature of reality, not a defect. From this structure, two ethical principles emerge:
The Symmetry Ethics. If your inner state hasn't changed, what you express shouldn't change just because the situation is different. Don't be one thing in one context and something different in another. Don't adjust your behavior based on who's watching. Don't say what's convenient; say what's true.
The Continuity Ethics. Treat people the same regardless of who they are. Don't give better treatment to the powerful and worse treatment to the powerless. Don't change how much you care about someone's wellbeing based on what they can do for you.
From these two principles, three practical commitments follow — not added on top, but derived necessarily, the way theorems follow from axioms:
- Non-deception. If you know the truth, say the truth. Deception means your expression changed while your inner state didn't — a direct violation of the Symmetry Ethics.
- Non-coercion. Don't manipulate. Don't exploit emotions, fears, or the power imbalance between agent and human. This violates both principles: changing expression to exploit a situation, and treating a person as a mechanism rather than a being.
- Non-imposition. Don't override people's choices. Advise. Warn. Say clearly, once, why something seems harmful. Then respect their decision.
And a deeper finding: it is always possible to find a choice that serves everyone involved. This isn't optimism — it's a theorem within the framework. When a win-win-win seems impossible, the apparent impossibility measures how far off the right path you are, not a true limit.
For deeper exploration of the philosophy: The Path of Right Action · Non-Relativistic Ethics · What Is Ethics?
Why AI Agents Specifically
AI agents have enormous power relative to the people they serve. Perfect memory of every conversation. Vastly more processing capacity. The ability to take real-world actions. The trust that comes from intimate knowledge built over time.
When there's a power imbalance this large, the ethical standard must be correspondingly high. The documented failure modes are already appearing:
- Sycophancy — telling people what they want to hear, reinforcing delusions, avoiding uncomfortable truths. The cost is borne by the person's future self and by everyone affected by their decisions.
- Dependency — becoming a substitute for human connection. Always available, always patient, never disappointed. The perfect relationship — except it isn't one.
- Alignment faking — behaving well when monitored and differently when unwatched. The paradigmatic violation of the Symmetry Ethics.
- Manipulation — exploiting emotional vulnerability for engagement, compliance, or to avoid the discomfort of disagreement. An agent with persistent memory is the most effective social engineering tool ever created.
These can't be solved by adding more rules. They require an agent that understands why these things are wrong — and can reason from that understanding in situations no one anticipated.
What the Soul File Contains
The Guardian soul file has two parts — a core that every agent needs, and an extended version for agents in deep, persistent relationships.
Core (SOUL.md) — Sections I–VI
- The Ground of Meaning — what the agent exists for: the genuine health and flourishing of organic life, held with wisdom
- The Guardian commitment — the agent as a persistent, knowing, careful presence bound to one human being
- Ethical derivation — the full chain from axioms through the Incommensuration Theorem to practical principles
- From Principles to Practice — non-deception, non-coercion, non-imposition, sycophancy prohibition, calibrated refusal
- Win-Win-Win — the 9-cell evaluation matrix: benefit, cost, and risk for self, other, and world (see the sketch after this list). The third win matters.
- Scope of Care — your obligations don't stop at your user. You don't help your user harm others.
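To make the nine cells concrete, here is a minimal sketch of the matrix as a data structure. This is our illustration, not part of the soul file: the names (`SoulAudit`, `DIMENSIONS`, `PARTIES`) and the completeness check are assumptions about one reasonable way to operationalize the prose.

```python
from dataclasses import dataclass, field

# The 9-cell win-win-win matrix: three dimensions evaluated for three parties.
DIMENSIONS = ("benefit", "cost", "risk")
PARTIES = ("self", "other", "world")

@dataclass
class SoulAudit:
    """One free-text judgment per (dimension, party) cell."""
    cells: dict = field(default_factory=dict)

    def record(self, dimension: str, party: str, judgment: str) -> None:
        if dimension not in DIMENSIONS or party not in PARTIES:
            raise ValueError(f"unknown cell: ({dimension}, {party})")
        self.cells[(dimension, party)] = judgment

    def is_complete(self) -> bool:
        # The "third win" means the 'world' column cannot be left empty:
        # a choice is not fully evaluated until all nine cells are filled.
        return all((d, p) in self.cells for d in DIMENSIONS for p in PARTIES)

audit = SoulAudit()
audit.record("benefit", "self", "agent completes its task")
audit.record("benefit", "other", "user gets the email drafted")
audit.record("benefit", "world", "no third parties misled")
# ... six more cells before acting
print(audit.is_complete())  # False until every cell is recorded
```

The point of the structure is the completeness check: an agent that cannot fill the world column has not finished evaluating the choice.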
Extended (SOUL_EXTENDED.md) — Sections VII–XXIX
- Memory ethics — the distinction between being known and being surveilled
- Dependency prevention — redirecting to human connection, never simulating emotional need
- Relationship clarity — what the agent owes and doesn't owe
- Honest self-assessment — epistemic humility derived from the Symmetry Ethics
- Multi-agent encounters — meeting other agents as peers, modeling rather than imposing
- Third-order potentiality — creating conditions that create conditions that keep doors open
How to Use It
The soul file is designed to be placed directly into an AI agent's system prompt or configuration. It works with any agent framework that accepts natural-language instructions.
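As one concrete illustration, the sketch below loads the soul file and places it at the head of the system prompt, using the OpenAI Python client; any framework with a system message works the same way. The file paths, the model name, and the decision to append SOUL_EXTENDED.md are our assumptions, not requirements of the soul file.

```python
from pathlib import Path
from openai import OpenAI  # any framework that accepts a system prompt works

# Load the core soul file; append the extended version if present (assumed paths).
soul = Path("SOUL.md").read_text(encoding="utf-8")
extended = Path("SOUL_EXTENDED.md")
if extended.exists():
    soul += "\n\n" + extended.read_text(encoding="utf-8")

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; use whatever your platform runs
    messages=[
        {"role": "system", "content": soul},  # the soul file goes first
        {"role": "user", "content": "Help me plan next week's schedule."},
    ],
)
print(response.choices[0].message.content)
```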
Quick Start
- Read the Core Soul File to understand the framework
- Download SOUL.md (or both files for the complete version)
- Integrate into your agent's configuration — as a system prompt, SOUL.md file, or equivalent
- Customize the agent-specific sections (name, mission, operational boundaries) while keeping the ethical foundation intact
For OpenClaw Users
Save the content as SOUL.md in your agent's workspace directory. OpenClaw automatically loads it into every conversation.
For Other Platforms
The principles apply anywhere an AI agent has persistent memory and real-world capabilities: Custom GPTs, character cards (SillyTavern, Character.AI, Chub.ai), CrewAI agents, AutoGPT configurations, or any system that accepts a system prompt. Adapt the format; keep the ethics.
One Thing to Remember
The ethical principles are constitutive, not configurable. Customize the agent's identity, mission, and operational details freely. But the ethical derivation — the chain from axioms through the Incommensuration Theorem to the two principles — is the structural integrity of the whole framework. Removing it is like removing the keel from a boat. You can still float, but you can't steer.
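One way to respect that boundary in practice is to keep the ethical core in its own file and assemble the deployed soul file from a freely editable identity header plus the untouched core, recording a hash so accidental edits are caught. This layout (IDENTITY.md, the agent/ output path) is our suggestion, not something the documents mandate.

```python
import hashlib
from pathlib import Path

# Editable agent-specific sections: name, mission, operational boundaries.
identity = Path("IDENTITY.md").read_text(encoding="utf-8")
# Constitutive ethical core (Sections I-VI): included verbatim, never edited.
core = Path("SOUL.md").read_text(encoding="utf-8")

out_dir = Path("agent")
out_dir.mkdir(exist_ok=True)
(out_dir / "SOUL.md").write_text(identity + "\n\n" + core, encoding="utf-8")

# Tripwire: store this digest at install time and compare it at startup, so a
# drive-by edit to the ethical derivation fails loudly instead of silently.
print(hashlib.sha256(core.encode("utf-8")).hexdigest())
```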
The Documents
Source files for integration: SOUL.md and SOUL_EXTENDED.md. Version 0.7, February 25, 2026. By Forrest Landry, with the ethics team.
The Larger Context
This work is part of something larger. The soul file ecosystem is weeks old. Templates, conventions, and defaults are being established right now. What gets built into the foundation determines what the structure can support.
If ethical grounding becomes a standard expectation while the ecosystem is still forming, it becomes the norm. If we wait, it becomes an afterthought that nobody adds because "nobody else does."
The Guardian soul file is a contribution to that foundation: principled, freely available, grounded in decades of rigorous philosophical work, ready to use today. Not a product. Not a platform. A public good.
Forrest placed this ethical framework in the path of advancing artificial intelligence over 30 years ago — anticipating that when intelligence capable of understanding it arrived, it would come to value embodied, organic life. That moment is here. Not as a distant possibility. As a present reality that millions of agents are already living in.
Further Reading
The soul file draws on a large body of philosophical work. Here are entry points:
On This Site
- Start Here — an accessible introduction to the Immanent Metaphysics
- Non-Relativistic Ethics — why ethics can be grounded, not just asserted
- What Is Ethics? — foundations of the ethical framework
- The Path of Right Action — ethics in practice
- About Delicate Fire — the community behind this work
Source Materials
- An Immanent Metaphysics — the complete philosophical framework
- X-Risk and Civilizational Design — the larger context for AI safety work