
Unscarcity Research

Foundational Principles: The Two Laws You Cannot Break


Note: This is a research note supplementing the book Unscarcity, now available for purchase. These notes expand on concepts from the main text. Start here or get the book.


Why “Truth Must Be Seen” and “Power Must Decay” are the architectural load-bearing walls of any civilization worth living in.


The Laws That Gravity Forgot

Isaac Newton figured out that apples fall down. Every time. No exceptions. No emergency override for special apples. No committee that can vote to suspend gravity on Tuesdays.

Most of political philosophy has been trying to build this kind of certainty into governance—and failing spectacularly. Every constitution, every bill of rights, every solemn declaration of human dignity has, at some point, been suspended, ignored, or creatively reinterpreted by someone with enough guns, enough lawyers, or enough patience.

The Five Laws—the five foundational axioms of the Unscarcity framework—include three principles that require ongoing judgment: Experience Is Sacred (the Prime Directive), Freedom Is Reciprocal, and Difference Sustains Life. These are morally inviolable but require wisdom to apply. Should this specific action be considered “harm”? Is this level of diversity “sufficient”? Reasonable people can disagree.

But two of the axioms operate differently. They’re not moral guidelines requiring interpretation. They’re procedural laws—structural constraints that function like physics, not ethics. They cannot be suspended, even with good intentions, even during emergencies, even if everyone votes unanimously to do so.

Axiom II: Truth Must Be Seen. Every decision affecting resources or rights must be observable, auditable, and traceable.

Axiom IV: Power Must Decay. All authority is temporary. Influence must be rented, never owned.

Together, these two principles form the architectural load-bearing walls of the entire system. Remove them—even temporarily—and the whole structure collapses into the same old patterns: opacity enabling corruption, authority consolidating into permanent privilege, emergencies becoming eternal.

Let’s examine why these specific principles needed to be elevated above even democratic override.


Axiom II: Truth Must Be Seen

The Problem with Black Boxes

In 2018, Reuters revealed that Amazon's experimental AI hiring tool had taught itself to penalize resumes containing the word "women's"—as in "women's chess club captain." The system had learned from a decade of hiring data that Amazon's successful employees were predominantly male, and helpfully concluded that being female was a negative indicator. The tool was scrapped, but only after recruiters had consulted its recommendations on real candidates.

Here’s the terrifying part: nobody knew this was happening until they looked. The algorithm was a black box. Data went in, decisions came out, and the reasoning stayed hidden inside a mathematical void that even its creators couldn’t fully explain.

Now imagine that black box controlling not just job applications but food distribution. Housing allocation. Healthcare access. Energy rationing.

This isn’t hypothetical. It’s coming. As AI systems take over more of civilization’s coordination functions—the Foundation infrastructure that provides housing, food, healthcare, and energy to all conscious beings—the question isn’t whether algorithms will make life-affecting decisions. It’s whether we’ll be able to see how they’re deciding.

The principle “Truth Must Be Seen” is the answer to this question, stated as absolutely as possible: No opaque system may govern conscious beings.

What Transparency Actually Means

This isn’t just about publishing source code (though that’s part of it). It’s about a complete chain of visibility:

Before the decision: What data is the system using? Where did that data come from? What biases might be baked into the training set?

During the decision: What logic is the system applying? Can a human trace the reasoning from inputs to outputs? Are there explanations that a non-expert can understand?

After the decision: What happened? To whom? Was the outcome what was intended? What patterns emerge across thousands of decisions?
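The three-stage visibility chain above can be sketched as a minimal audit record. Everything here is illustrative—field names and the record shape are assumptions, not a specified DPIF schema:

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical audit record covering the before/during/after chain.
@dataclass
class AuditRecord:
    # Before: the inputs and their provenance
    inputs: dict[str, Any]            # the data the system used
    data_sources: list[str]           # where each input came from
    # During: traceable reasoning
    decision: str                     # the output
    rationale: str                    # a human-readable explanation
    # After: the observed outcome, reviewable by anyone
    outcome: str = "pending"
    affected_parties: list[str] = field(default_factory=list)

record = AuditRecord(
    inputs={"applicant_id": "A-17", "need_score": 0.82},
    data_sources=["public-registry/2024"],
    decision="allocate housing unit 4B",
    rationale="highest need score among eligible applicants",
    affected_parties=["A-17"],
)
```

The point of the sketch: each stage is a plain field, so an auditor can challenge any link in the chain—the inputs, the reasoning, or the outcome—without needing access to the system's internals.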

The EU AI Act, which entered into force in August 2024, represents humanity’s first serious attempt at mandatory algorithmic transparency. It requires documentation, human oversight, and explainability—particularly for high-risk systems in healthcare, employment, and law enforcement. It’s a start. But the Unscarcity framework goes further: transparency isn’t just required for “high-risk” systems. It’s required for all systems that govern conscious beings, period.

Why? Because risk categories are themselves a judgment call. The algorithm deciding whether you get a job is “high-risk.” The algorithm deciding whether your neighborhood gets priority in disaster response? The algorithm optimizing energy distribution across regions? The algorithm that flags certain patterns of behavior for further review? Each of these might seem “low-risk” in isolation. Together, they constitute the infrastructure of your entire life.

The Wikipedia Model (And Why It Works)

Here’s a proof of concept that transparency at scale actually works: Wikipedia.

More than sixty million articles across hundreds of language editions. Billions of edits. Hundreds of thousands of active editors with wildly different agendas, ideologies, and levels of competence. By all logic, it should be a chaotic mess of vandalism and propaganda.

Instead, it’s one of the most accurate and comprehensive knowledge bases ever created. How?

Radical transparency. Every edit is logged. Every edit war is visible. Every dispute happens in public, with the reasoning preserved for anyone to review. Wikipedia's anti-vandalism bot, ClueBot NG, has reverted millions of vandalism edits, often within seconds—but every action is logged publicly. Any human can check its work and override it. The bot accepts corrections without arguing.

AI provides speed. Transparency preserves sovereignty. The black box has glass walls.
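The ClueBot-style pattern—act instantly, log publicly, defer to humans—can be sketched in a few lines. The names and log format are illustrative, not ClueBot NG's actual interface:

```python
# Sketch of the pattern: the bot acts at machine speed, every action is
# appended to a publicly readable log, and any human override is final.
public_log: list[dict] = []   # stands in for a public log page

def bot_revert(edit_id: str, reason: str) -> None:
    # The bot acts immediately but leaves a visible trail.
    public_log.append({"actor": "bot", "edit": edit_id,
                       "action": "revert", "reason": reason})

def human_override(edit_id: str, editor: str) -> None:
    # The bot accepts the correction without arguing: the override is
    # simply logged, and the human decision becomes the final word.
    public_log.append({"actor": editor, "edit": edit_id,
                       "action": "restore", "reason": "human override"})

bot_revert("e101", "matched vandalism classifier")
human_override("e101", "jdoe")

# The last logged action on an edit is authoritative—and visible to all.
final = [e for e in public_log if e["edit"] == "e101"][-1]
```

The design choice worth noticing: sovereignty lives in the ordering rule (last human action wins), not in any permission the bot grants.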

The MOSAIC governance system applies this same principle to civilization. Every resource allocation, every policy decision, every algorithmic choice lives on a distributed public ledger (DPIF). When disputes arise—like the watershed conflict between the Synthesis Commons and Heritage Commons described in Chapter 3—the data is just there. No subpoenas. No lawyers obscuring facts. Arguments about facts become very short when truth is visible to everyone.

The Evidence: Does Transparency Actually Reduce Corruption?

Skeptics might ask: does making information public actually change behavior?

The evidence is clear: yes, but with an important caveat.

Transparency International’s 2024 Corruption Perceptions Index shows that corruption remains a global problem affecting over 80% of the world’s population. But the research also reveals why some transparency efforts work and others don’t.

A 2025 study in Scientific Reports found that when information about corruption is not publicly shared, individuals are more likely to offer bribes. Transparency doesn’t just catch bad actors—it shapes beliefs and behaviors before violations occur. People behave differently when they know they’re being watched.

A 2024 study in Regulation & Governance found that transparency in political donations improved trust in political parties, while asset declarations reduced perceptions of corruption toward elected officials. Transparency works—when the information is actually accessible and comprehensible.

The caveat? Research consistently shows that transparency policies have a more pronounced impact in countries with a free press and in regions with denser media coverage. Information is more powerful when it’s widely shared. Raw data on a ledger that nobody reads isn’t transparency—it’s bureaucratic theater.

This is why the MOSAIC framework pairs transparency requirements with transparency infrastructure. The DPIF isn’t just a place where data is dumped. It’s a system designed to make that data comprehensible, searchable, and actionable by ordinary citizens—not just experts and journalists.

The Emergency Exception That Doesn’t Exist

Here’s where things get uncomfortable.

Every authoritarian regime in history has used “emergency” to justify suspending transparency. National security. Public health. Economic crisis. “We can’t tell you why we’re doing this, but trust us—it’s necessary.”

The Five Laws hierarchy says: no.

Axiom II—Truth Must Be Seen—is a Tier 1 Foundational Principle. It cannot be suspended under any circumstances. Not by emergency. Not by supermajority vote. Not even temporarily.

This sounds rigid. It is. That’s the point.

The Romans had a system of emergency powers that seemed reasonable. A Dictator could be appointed during genuine crises—invasions, plagues, civil unrest—with extraordinary authority to act decisively. For centuries, this worked. Dictators served their terms and stepped down.

Then Sulla stretched the system in 82 BC, using emergency powers to execute thousands of political enemies. He eventually resigned, proving (he thought) that the system was self-correcting. But he’d demonstrated that the rules could be broken. A generation later, Julius Caesar broke them permanently.

Opacity enables abuse in exactly the same way. Once you’ve established that some decisions can be made in secret “because emergency,” you’ve created a permanent temptation. Every future leader faces the question: “Is this situation emergency enough to justify secrecy?” And the answer, surprisingly often, is “yes”—because the definition of emergency keeps expanding.

The Unscarcity framework cuts this loop at the root. There is no level of emergency that justifies hiding decisions from the people those decisions affect. Period. Full stop. The algorithm must show its work, even during a pandemic. The resource allocation must be auditable, even during a war. The reasoning must be visible, even during an alien invasion.

If that sounds impractical, consider the alternative: systems that hide their logic when the stakes are highest, which is precisely when the potential for abuse is greatest.


Axiom IV: Power Must Decay

The Problem with Winning

Power accumulates. Winners rewrite rules to keep winning. This isn’t a conspiracy theory—it’s thermodynamics applied to human organization. Absent structural intervention, influence concentrates like wealth, like mass, like attention on social media. The rich get richer. The powerful get more powerful. The famous get more famous.

Robert Michels called this the “Iron Law of Oligarchy” in 1911. Even organizations founded on democratic principles—labor unions, socialist parties, revolutionary movements—inevitably develop entrenched leadership classes. The people at the top acquire information advantages, relationship networks, and institutional knowledge that make them nearly impossible to dislodge.

We’re now building AI systems and institutions that might last for centuries. In a world of potential immortality—whether biological or digital—this creates a nightmare scenario: permanent gods. Leaders who never leave. Billionaires who never die. Experts who can never be challenged because their credentials are eight hundred years old.

The principle “Power Must Decay” is the structural response: All authority is temporary. Influence must be rented, not owned.

How Decay Works in Practice

In the Unscarcity framework, temporal authority decay is built into multiple systems:

Impact Points (IMP): The currency of the Frontier decays at approximately 10% annually. You can’t stockpile influence. Today’s contributions matter more than yesterday’s. The brilliant discovery you made twenty years ago still earns respect—but it doesn’t give you permanent voting power over people who weren’t born when you did the work.

Term limits on governance: Emergency powers auto-destruct at 90 days, with a mandatory 30-day cooling-off period before any re-activation can be sought. Leaders can’t drag out crises to consolidate power because the power self-destructs whether they like it or not.

The EXIT Protocol: Even the Founder Credits granted to legacy elites who transition their wealth into the new system decay at 5% annually. Slower than standard Impact Points (to ensure a multi-generational soft landing), but still decaying. No dynasty lasts forever.
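The emergency-powers rule above reduces to a pure time check. The 90-day expiry and 30-day cooling-off come from the text; everything else in this sketch is illustrative:

```python
from datetime import date, timedelta

EMERGENCY_TTL = timedelta(days=90)   # powers auto-destruct after 90 days
COOLING_OFF = timedelta(days=30)     # mandatory gap before re-activation

def powers_active(granted_on: date, today: date) -> bool:
    # Expiry is a function of the calendar, not of anyone's decision.
    return today < granted_on + EMERGENCY_TTL

def may_reactivate(expired_on: date, today: date) -> bool:
    # Re-activation cannot even be sought until the cooling-off ends.
    return today >= expired_on + COOLING_OFF

granted = date(2030, 1, 1)
assert powers_active(granted, date(2030, 3, 1))       # day 59: active
assert not powers_active(granted, date(2030, 4, 2))   # day 91: expired
```

Because both functions are pure checks against the calendar, there is no "extend" operation for a leader to invoke: the only path back to emergency powers runs through the cooling-off period.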

The mathematical effect is profound. At 10% annual decay, half your accumulated influence is gone within 7 years. Within 23 years, 90% is gone. Within 45 years, 99% is gone. This means that to maintain influence, you must keep contributing. Yesterday’s achievements buy you less and less—you need today’s work, this year’s impact, ongoing demonstration of value.
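Those figures follow directly from compound decay: with 10% annual decay, the fraction remaining after t years is 0.9^t. A quick check, assuming the simple annual compounding the text implies:

```python
import math

# Fraction of Impact Points remaining after t years at a given decay rate.
def remaining(t: int, rate: float = 0.10) -> float:
    return (1 - rate) ** t

assert remaining(7) < 0.5     # over half gone within 7 years
assert remaining(23) < 0.10   # 90% gone within 23 years
assert remaining(45) < 0.01   # 99% gone within 45 years

# Half-lives: standard Impact Points (10%) vs Founder Credits (5%).
half_life_imp = math.log(0.5) / math.log(0.90)   # ~6.6 years
half_life_fc = math.log(0.5) / math.log(0.95)    # ~13.5 years
```

The 5% Founder Credit rate roughly doubles the half-life—the "multi-generational soft landing" is a direct consequence of halving the decay rate.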

The Evidence: Do Term Limits Actually Work?

The research on term limits shows a complicated picture—which is exactly what we’d expect from a structural intervention applied to messy human systems.

A 2024 study from the University of Chicago found that term limits have mixed effects on legislative quality—they remove experienced legislators but also remove entrenched ones. The key insight: term limits alone aren’t sufficient, but they’re a necessary component of preventing power consolidation.

Research published in the Journal of Democracy shows that open-seat elections (created by term limits) are significantly more likely to yield opposition victories than incumbent elections. When the seat is genuinely contestable, voters get real choices.

The most striking evidence comes from Africa's experience with presidential term limits. Between 1960 and 2010, more than one quarter of term-limited presidents successfully extended or violated their term limits to stay in power. And the pattern matters: term-limit violations are strongly correlated with democratic backsliding and the erosion of human rights. The problem isn't that term limits don't work—it's that they're often circumvented.

This is why the Five Laws hierarchy makes power decay architecturally enforced rather than legally enforced. You don’t rely on leaders choosing to step down, like Sulla did. You don’t rely on courts enforcing term limits against powerful executives. You build the decay into the infrastructure itself—into the very currency that measures influence.

An Impact Point decays whether the person holding it likes it or not, just as a radioactive atom decays whether the physicist watching it likes it or not. It’s physics, not policy.
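One way to make decay architectural rather than administrative is to store only timestamped contributions and compute the balance at read time. This is a sketch under assumed rules (10% annual decay applied continuously over fractional years; the framework's actual accounting may differ):

```python
from datetime import date

# Structural decay: no mutable balance exists to defend, freeze, or
# grandfather in. Influence is a pure function of timestamped earnings
# and today's date, so holders can no more opt out of decay than an
# atom can opt out of its half-life.
def balance(earnings: list[tuple[date, float]], today: date,
            rate: float = 0.10) -> float:
    total = 0.0
    for earned_on, amount in earnings:
        years = (today - earned_on).days / 365.25
        total += amount * (1 - rate) ** years
    return total

history = [(date(2010, 1, 1), 100.0),   # old achievement: mostly decayed
           (date(2024, 1, 1), 100.0)]   # recent work: nearly intact
b = balance(history, today=date(2025, 1, 1))
```

Because the ledger records events rather than balances, there is no stored number for an administrator to edit: changing your influence means adding new contributions, not protecting old ones.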

Critics: “But We’ll Lose Institutional Memory!”

The standard objection to power decay is that it destroys valuable expertise. Term limits, the argument goes, shift power to lobbyists and staffers who become the only sources of institutional memory. Experienced legislators with cross-aisle relationships get replaced by inexperienced ideologues.

This is a real concern—but it mistakes the symptom for the disease.

The problem isn’t that experienced people exist. The problem is that experienced people accumulate power rather than transmitting knowledge. The solution isn’t to let power holders stay forever; it’s to ensure that their expertise gets distributed, documented, and institutionalized in ways that don’t depend on their personal presence.

Consider how open-source software handles this. Linus Torvalds has been the “Benevolent Dictator for Life” of the Linux kernel since 1991. But the kernel’s governance isn’t actually dependent on Linus. The code is documented. The design decisions are recorded in public mailing lists. The institutional memory is distributed across thousands of developers who could continue the project if Linus disappeared tomorrow.

The MOSAIC framework applies this same principle to governance. Leaders rotate, but their decisions are recorded on the DPIF. Their reasoning is preserved. The patterns that worked can be identified and replicated. The patterns that failed can be analyzed and avoided.

You don’t need the same person in the chair forever. You need their knowledge to survive outside their skull.

The Roman Lesson (Again)

It’s worth returning to the Roman example, because it illustrates both principles—transparency and power decay—in their absence.

The Roman Republic had term limits. Consuls served one year. Dictators served six months. The system worked for centuries precisely because these limits were honored. Romans took pride in the tradition: you serve, you step down, you become a citizen again. Cincinnatus saved Rome and then went back to his farm. That was the ideal.

Then the system broke. Sulla stretched the six-month dictatorship, executing political enemies and rewriting the constitution before retiring to his estate. He thought he’d fixed the system. He was wrong—he’d demonstrated that the system could be fixed by force.

Caesar learned from Sulla. Emergency powers. Extended crisis. Promises to restore order. But Caesar never resigned. The Republic died not with an assassination (that came later, and changed nothing) but with an extension.

The MOSAIC hard-codes expiration into power itself because we’ve seen where “discretionary” limits lead. We don’t rely on leaders choosing to step down. We don’t rely on norms being respected. We make stepping down automatic, like gravity.


Why These Two Principles—And Why Together

Transparency without decay creates a surveillance panopticon where every action is visible but the watchers never rotate. China’s social credit system is transparent in a narrow sense—citizens can see their scores—but the system that assigns those scores operates without meaningful checks on its power.

Decay without transparency creates musical chairs where the faces change but the game stays rigged. New leaders inherit opaque systems, learn the hidden levers, and perpetuate the same patterns.

You need both. You need to see what power is doing and ensure that power doesn’t accumulate into permanent dynasties.

This is why the Five Laws' Axiomatic Hierarchy places these two principles at Tier 1—Foundational Principles that cannot be suspended under any circumstances, even by the other axioms. The Prime Directive (Experience Is Sacred) tells us why we're building civilization. Transparency and decay tell us how to keep it from eating itself.

The other axioms—Freedom Is Reciprocal, Difference Sustains Life—are morally binding but require ongoing judgment. They’re applied through deliberation, arbitration, and the messy process of humans (and AIs) figuring out what “harm” means in specific contexts.

But Axioms II and IV don’t require judgment. They require enforcement—structural, automatic, architecturally embedded. You cannot vote to make decisions opaque. You cannot extend your term because the situation requires it. The system enforces these constraints the way reality enforces gravity: not because we chose to, but because that’s how the infrastructure was built.


The Uncomfortable Truth

Here’s the part nobody likes to hear.

These principles will sometimes produce worse outcomes than discretionary alternatives. There will be moments when a leader with genuine wisdom could have made a better decision if they’d been allowed to operate in secret. There will be emergencies where a term extension might have served the public good.

The transparent, decaying system doesn’t promise optimal outcomes in every case. It promises something different: systematic prevention of the failure modes that destroy civilizations.

The Holocaust wasn’t caused by a lack of brilliant leadership. It was enabled by opaque bureaucracies executing decisions that nobody could see, backed by authority that nobody could challenge. The system that produced Elias feeding pigeons on a sunny morning is also the system that produced IBM punch cards tracking Jews across Europe.

The Roman Republic didn’t fall because of bad consuls. It fell because the mechanisms preventing power accumulation were gradually eroded, exception by exception, emergency by emergency, until there was nothing left to prevent a Caesar.

We’re building AI systems that will be orders of magnitude more powerful than any previous governance technology. We’re building institutions that might last for centuries. We’re building infrastructure that will coordinate the survival of billions of conscious beings.

The question isn’t whether we can trust today’s leaders to use these tools wisely. The question is whether we can build systems that prevent any leader—today’s or tomorrow’s—from using these tools to establish permanent, unchallengeable control.

Transparency and decay are the answer. Not because they’re perfect. Because they’re necessary.

