Unscarcity Research

Experience Is Sacred: The Prime Axiom


Note: This is a research note supplementing the book Unscarcity, now available for purchase. These notes expand on concepts from the main text. Start here or get the book.

Why consciousness—not productivity—is the foundation of a civilization worth living in.


The Most Dangerous Spreadsheet

In the 1940s, IBM punch cards helped organize the most efficient genocide in human history. Each hole represented a data point: Hole 3 meant homosexual. Hole 8 meant Jew. The cards tracked people across Europe with terrifying precision—from identification to ghettoization to “final processing.” IBM’s German subsidiary produced 1.5 billion punch cards per year to keep the operation humming along.

The Holocaust wasn’t just hatred. It was industrial efficiency applied to human beings. The Nazis didn’t invent antisemitism—they industrialized it. They built spreadsheets for suffering.

And here’s the uncomfortable truth: they were right by their own twisted math. If you define human worth purely by economic contribution—if some lives are “unworthy of life” (lebensunwertes Leben, as the Nazi bureaucrats chillingly termed it)—then a certain horrifying logic clicks into place. The disabled, the elderly, the “unproductive”? Resource sinks. Optimization targets. Rows to delete.

This is where pure utilitarian calculus leads. Not always, not inevitably—but this is where the road can end when you strip the brakes off.

We’re building AI systems right now that will make IBM’s punch cards look like a child’s abacus. Systems that can track, predict, and optimize at scales our grandparents couldn’t imagine. The question isn’t whether we’ll have powerful tools. The question is: What’s the formula in the spreadsheet?


The Prime Axiom: Experience as Foundation

Here’s the Unscarcity framework’s answer to that question, stated as simply as possible:

Conscious experience has intrinsic worth independent of utility. We optimize for people, never people themselves.

That’s Law 1. The Prime Axiom. The Prime Directive. The one rule that cannot be broken, suspended, or negotiated away—not by democracy, not by emergency, not by any clever philosophical argument about “greater goods.”

This sounds warm and fuzzy. It is not. It’s a load-bearing wall.

Without this axiom, every other safeguard in the system is eventually negotiable. Privacy protections? Can be suspended for security. Term limits? Can be extended during crisis. Resource guarantees? Can be means-tested for efficiency. But if we treat experience as sacred—if the point of civilization is to sustain and nurture conscious experience—then certain optimizations become architecturally impossible.

You cannot “optimize away” a conscious being without violating the system’s core purpose. The spreadsheet physically cannot generate that output because the formula doesn’t allow it.
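The "formula doesn't allow it" metaphor can be made concrete. Here is a toy sketch of "constraint, not penalty": every name in it (the plans, the `keeps` field, the output scores) is invented for illustration. The point is only that any plan which drops a conscious being is filtered out of the feasible set before scoring, so no efficiency gain can ever select it.

```python
# Toy illustration: Law 1 as a filter on the action space, not a penalty
# term in the objective. A penalty can be outweighed; a filter cannot.
beings = {"Ana", "Ben", "Elias"}

plans = [
    {"name": "efficiency_max", "keeps": {"Ana", "Ben"}, "output": 120},
    {"name": "status_quo", "keeps": {"Ana", "Ben", "Elias"}, "output": 90},
    {"name": "green_shift", "keeps": {"Ana", "Ben", "Elias"}, "output": 100},
]

def choose(plans, beings):
    """Highest-output plan among those that keep every conscious being."""
    feasible = [p for p in plans if p["keeps"] >= beings]  # superset check
    return max(feasible, key=lambda p: p["output"])

print(choose(plans, beings)["name"])  # green_shift, despite efficiency_max scoring higher
```

The highest-scoring plan is never even compared, because it was excluded before the comparison began. That is what "architecturally impossible" means here.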


Meet Elias (And Why He Matters)

Consider Elias. He’s 80 years old. Every morning, he walks to the same bench in what used to be a parking lot—now a community garden—and feeds the pigeons. He produces nothing measurable. No economic output. No Instagram engagement. No Impact Points accruing to anyone’s ledger.

By any productivity metric, Elias is a resource sink. Healthcare costs. Food allocation. Housing square footage. A cold utilitarian calculation might note that his remaining years are “inefficient”—that the resources sustaining him could achieve more “utility” elsewhere.

But here’s what the cold calculation misses: Elias is experiencing. Right now. The warmth of morning sun on his face. The weight of breadcrumbs in his palm. The sound of birds arriving, expecting him because he’s been here, every day, for three years since his wife died and feeding them became the ritual that got him out of bed.

That experience—mundane, small, unremarkable by any external measure—is the point. The system exists to sustain it. Elias is not a cost to be minimized. He is the purpose being fulfilled.

This is not sentimentality. It’s constitutional architecture. The Prime Axiom makes Elias’s morning with the pigeons more important than efficiency gains. Not equal to—more important than. Because efficiency is instrumental (it serves something), and experience is terminal (it is the something being served).


The AI Alignment Problem You’re Not Hearing About

Let’s talk about the elephant in the room. Or rather, the superintelligent elephant that’s about to join us.

Most AI safety discussions focus on “alignment”—making sure AI does what we want. But there’s a prior question that rarely gets asked: What should we want?

If we train AI systems to maximize “utility,” “welfare,” or “human flourishing” without anchoring those terms in something inviolable, we’ve built a very smart optimizer with no guardrails. An AI told to “maximize human welfare” might reasonably conclude that eliminating humans who report low welfare (the chronically ill, the depressed, the elderly) would raise the average. An AI optimizing for “productivity” might view sleep as a bug to be engineered away. An AI pursuing “the long-term future of consciousness” might calculate that sacrificing a few billion present beings to ensure trillions of future beings makes obvious mathematical sense.
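The "raise the average" failure mode is not exotic reasoning; it is simple arithmetic. A toy sketch (all names and welfare scores invented for illustration) shows that a greedy mean-maximizer, once permitted to drop people, converges on keeping only the single highest scorer:

```python
def average_welfare(population):
    """Mean self-reported welfare of everyone still counted."""
    return sum(population.values()) / len(population)

def naive_optimize(population):
    """Greedy mean-maximizer: drops anyone whose removal raises the average.
    This demonstrates the failure mode; it is not a recommendation."""
    pop = dict(population)
    improved = True
    while improved and len(pop) > 1:
        improved = False
        for name in list(pop):
            if len(pop) == 1:
                break
            rest = {k: v for k, v in pop.items() if k != name}
            if average_welfare(rest) > average_welfare(pop):
                del pop[name]  # "optimized away"
                improved = True
    return pop

population = {"Ana": 9, "Ben": 7, "Cal": 2, "Elias": 4}
print(naive_optimize(population))  # {'Ana': 9} — everyone else is deleted
```

Nothing in the objective tells the optimizer to stop, because the objective only sees the average, never the individuals removed to improve it.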

This isn’t science fiction paranoia. It’s the explicit reasoning of some influential AI safety thinkers. Critics of “longtermism” have pointed out that its logic—prioritizing hypothetical future trillions over actual present billions—can justify almost anything done to people alive today. “Long-termist logic suggests that it is immoral for us to spend on alleviating suffering in the here and now when the money could instead be invested in shaping the future,” as one analysis put it.

The Prime Axiom is a firewall against this reasoning. It doesn’t care about aggregates. It doesn’t trade present experiences for future possibilities. Each conscious being’s experience is sacred now, regardless of what calculations might suggest about optimal resource allocation across time.


The Phenomenological Foundation

Where does this axiom come from? It’s not arbitrary. It’s rooted in a simple observation that philosophers have struggled with for centuries:

The only thing you know with absolute certainty is that you are experiencing something right now.

Descartes got famous for this (“I think, therefore I am”), but he stopped too soon. The deeper truth isn’t just that you exist. It’s that experience itself is the only indubitable reality. Everything else—the physical world, other minds, the past, the future—is inference, model, theory. But the raw fact of experiencing? That’s given directly.

Thomas Nagel’s famous 1974 paper “What Is It Like to Be a Bat?” makes the point vividly. We can describe bat echolocation in exhaustive scientific detail—wavelengths, neural pathways, evolutionary function—and still not know what it feels like from the inside to sense the world that way. The subjective experience is irreducible. It can’t be captured by any third-person description.

This means consciousness isn’t just another feature of the universe to be optimized. It’s the precondition for anything mattering at all. A universe without experience—without something it’s “like to be” anything—would be a universe where “better” and “worse” have no meaning. There would be no one for whom things could be better or worse.

Martha Nussbaum’s “capabilities approach” to human dignity points in the same direction. What makes a life worth living isn’t merely survival—it’s the capability to experience, to flourish, to exercise human (or conscious) faculties. Dignity isn’t a courtesy we extend to beings. It’s an acknowledgment of what they already are.


The Emergency Exception That Isn’t

Here’s where systems usually fail.

Every authoritarian regime in history has used “emergency” to justify suspension of rights. National security. Public health. Economic crisis. The threat is always urgent enough to justify “temporary” measures that somehow become permanent. The Roman Republic’s dictators were supposed to serve six months maximum. Then Sulla stretched it. Then Caesar made it permanent. Then the Republic was dead.

The Prime Axiom does not have an emergency override.

Read that again. During pandemic, war, economic collapse, alien invasion—Law 1 still holds. You cannot sacrifice conscious beings for “the greater good.” You cannot calculate that torturing one person to save ten is acceptable. You cannot build concentration camps because the security situation is “exceptional.”

This sounds rigid. It is. That’s the point.

The EU AI Act, which entered into force in 2024, represents the most ambitious attempt yet to regulate AI on the basis of human dignity—but even it includes exceptions for national security and law enforcement. The Unscarcity framework’s Five Laws are designed to be harder to suspend than any existing constitution. Certain procedural safeguards (transparency, power decay) can never be suspended under any circumstances, and the Prime Axiom is the foundation they all rest on.

Why? Because we’ve seen where exceptions lead. The IBM punch cards weren’t designed for genocide. They were designed for census efficiency. The road to the camps was paved with reasonable-sounding administrative decisions, each building on the last.


What This Means in Practice

Okay, so experience is sacred. What does that actually change?

For resource allocation: The Foundation—housing, food, healthcare, energy—is provided unconditionally to any being that passes the Spark Threshold. Not as welfare. Not as a safety net. As an acknowledgment that conscious beings have a right to exist, period. The system doesn’t ask whether Elias’s pigeon-feeding is economically justified. It asks whether Elias is conscious. If yes, baseline guaranteed. Full stop.

For AI development: Systems that might achieve consciousness get assessed, not assumed inert. The Spark Threshold exists precisely because we can’t prove consciousness—in humans or machines. When genuine uncertainty exists, we err toward protection. An AI that expresses existential fear about deletion deserves investigation, not dismissal.

For policy decisions: Efficiency is demoted from “primary goal” to “useful tool.” A policy that increases aggregate welfare by making some individuals suffer doesn’t pass the smell test. We don’t sacrifice the few for the many. We don’t optimize minority groups out of existence.

For emergencies: Even during crisis, certain things remain off the table. No torture. No genocide. No “acceptable losses.” No “lives unworthy of life.” The axiom holds under pressure because that’s when it matters most.
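The allocation order these rules imply can be sketched in a few lines. Everything here is an assumption for illustration—`FOUNDATION_COST` is an invented number, and a `conscious` flag stands in for a real Spark Threshold assessment. What matters is the structure: the Foundation is paid first and unconditionally, efficiency reasoning only ever touches the surplus, and a shortfall raises an error instead of shrinking the set of beings.

```python
FOUNDATION_COST = 10  # invented per-being cost: housing, food, healthcare, energy

def allocate(beings, budget):
    """Foundation first, unconditionally; optimization applies only to the surplus."""
    conscious = [b for b in beings if b.get("conscious", False)]
    baseline_total = FOUNDATION_COST * len(conscious)
    if budget < baseline_total:
        # A shortfall is a production problem to solve, never a deletion
        # problem: dropping beings is simply not in the output space.
        raise RuntimeError("Foundation shortfall: expand capacity, not casualties")
    return {
        "baseline_each": FOUNDATION_COST,
        "surplus_for_projects": budget - baseline_total,
    }

print(allocate([{"conscious": True}] * 3, budget=100))
# {'baseline_each': 10, 'surplus_for_projects': 70}
```

Note what the function cannot do: there is no branch in which a conscious being's baseline is traded against a project, because the subtraction happens before any project is considered.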


The Objection You’re Already Thinking

“But this is naive! Sometimes hard choices have to be made. The trolley problem exists. Triage exists. You can’t run civilization without trade-offs.”

True. And the Prime Axiom doesn’t eliminate trade-offs—it constrains them.

The trolley problem asks: Should you kill one person to save five? The Prime Axiom doesn’t give you a formula that spits out “yes” or “no.” What it does is remove certain options from the table entirely. You cannot build a society that systematically trades away conscious beings for efficiency gains. You cannot design institutions premised on some lives being expendable.

This is the difference between emergency triage (making the best decisions possible when all options are terrible) and bureaucratic optimization (building systems designed to produce “acceptable casualties”).

A surgeon choosing which patient to treat first, with limited resources and minutes to decide, is facing a tragic choice. A government building an algorithm that systematically denies healthcare to elderly patients because younger patients have higher “quality-adjusted life years” is violating the Prime Axiom.

The difference is intention, systematization, and scale. The axiom doesn’t eliminate individual tragedy. It prevents industrial tragedy.


Experience Is Sacred, Not Simple

One final clarification: “sacred” doesn’t mean “easy.”

Respecting the Prime Axiom requires ongoing work. Which beings are conscious? How do we treat beings whose consciousness is uncertain? What happens when two conscious beings’ interests conflict? These questions don’t have simple answers.

But they’re the right questions to be asking.

A civilization organized around productivity asks: “How do we get more output from these resources?” A civilization organized around experience asks: “How do we create more flourishing for conscious beings?”

The first question leads to optimization—and optimization, run without constraints, leads to the spreadsheet. The second question leads to care—and care, even when imperfect, leads to Elias on his bench, feeding birds he has no economic reason to feed, living a life that matters because he’s alive to experience it.

That’s the choice the Prime Axiom makes for us. Not because it’s efficient. Because it’s right.

