
Unscarcity Research




Note: This is a research note supplementing the book Unscarcity, now available for purchase. These notes expand on concepts from the main text. Start here or get the book.

AI as Referee, Humans as Conscience

Why the robot blows the whistle, but you decide if the game is worth playing


The Tennis Court Epiphany

In 2006, the U.S. Open became the first Grand Slam to deploy Hawk-Eye—a system using ten cameras to track a tennis ball’s trajectory with 3.6mm accuracy. Serena Williams hit a forehand. A line judge called it out. Williams challenged. Ten cameras, one algorithm, and 0.4 seconds later: the ball was in by 2 millimeters. The call was overturned.

Here’s what’s remarkable: nobody argued with the robot.

Not the line judge, not the opponent, not the crowd. When Hawk-Eye renders its verdict, the matter is settled. The machine saw what the human missed. End of debate.

But here’s what’s more remarkable: no one suggested Hawk-Eye should also decide the rules of tennis.

Nobody said, “Let the algorithm determine whether tiebreakers are fair.” Nobody proposed, “The AI should choose which tournaments count toward rankings.” Hawk-Eye measures; humans deliberate. The machine enforces; people decide what’s worth enforcing. This isn’t a bug. It’s the blueprint for governing a post-scarcity civilization.


The Division Nobody Talks About

Every governance system in history has faced the same problem: how do you scale decision-making without becoming either a bureaucratic nightmare or a tyranny?

Rome solved it with magistrates—then got emperors. The Catholic Church solved it with bishops—then got the Inquisition. Modern democracies solved it with agencies—then got regulatory capture. The pattern repeats because all these solutions conflate two fundamentally different functions:

  1. Rule enforcement (Did the ball land in or out?)
  2. Rule formation (Should the court be this size? Should we even play tennis?)

When the same entity does both, power accumulates. The rule-enforcers become rule-makers. The referees start owning teams. The line judges redesign the court to favor their friends.

The Unscarcity solution is elegant in its simplicity: separate the functions at the substrate level.

AI handles enforcement. Humans handle formation. The machine blows the whistle. The people write the rulebook.


Wikipedia: 2,500 Bots, Zero Robot Overlords

If you think this sounds utopian, you’re already using a system that works this way—and has for twenty-four years.

Wikipedia’s English edition has 7 million articles, 202,000 active editors, and approximately 2,500 approved bot tasks. Bots perform 10-16% of all edits. Nine of the ten most prolific “editors” aren’t human. ClueBot NG, the most famous anti-vandalism bot, catches 40% of all vandalism within 30 seconds. It has made millions of edits.

But here’s the thing: any human can override any bot.

ClueBot reverts your edit? You can put it back. A categorization bot mislabeled your article? You can fix it. No permissions required. No appeals board. No waiting period. Human judgment trumps algorithmic judgment, always.

This isn’t a flaw in Wikipedia’s design—it’s the core of Wikipedia’s design. Bots handle the predictable: formatting, link maintenance, vandalism detection, statistics updates. Humans handle the contested: whether a source is reliable, whether content is notable, whether an article maintains neutral point of view.

The pattern: AI for mechanics, humans for meaning.
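
If you want to see how small that override rule really is, here is a toy sketch in Python. It is not Wikipedia's actual software, and the names (`Page`, `bot_revert`, `human_edit`) are made up for illustration; the one behavior it encodes is the one that matters: once a person has overruled a bot, the bot stands down.

```python
from dataclasses import dataclass

@dataclass
class Page:
    text: str
    last_editor_is_bot: bool = False
    human_overrode_bot: bool = False   # a person restored what a bot removed

    def bot_revert(self, restored_text: str) -> bool:
        """Automated revert, refused once a human has already overruled the bot."""
        if self.human_overrode_bot:
            return False                        # human judgment trumps the algorithm
        self.text, self.last_editor_is_bot = restored_text, True
        return True

    def human_edit(self, new_text: str) -> None:
        """Human edits always apply. Editing over a bot's revert counts as an
        override, so the bot will not fight the person for the page."""
        if self.last_editor_is_bot:
            self.human_overrode_bot = True
        self.text, self.last_editor_is_bot = new_text, False
```

In this sketch, ClueBot reverting vandalism is the first call to `bot_revert`; you putting your edit back is `human_edit`; any further `bot_revert` returns `False` and leaves the dispute to people.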

When you search for “Wikipedia governance” you’ll find endless debates about administrator elections, arbitration cases, and policy changes. You’ll find almost nothing about bot decisions. That’s the point. The mechanical layer operates invisibly. The meaningful layer operates democratically.

This is exactly what the MOSAIC architecture proposes for post-scarcity civilization.


The Civic Layer: Referee, Not Ruler

In the Unscarcity framework, the Civic Layer is the AI-augmented infrastructure that coordinates resource allocation and light governance for the Foundation (the 90% baseline that covers everyone’s survival needs).

The key word is light.

The Civic Layer operates as a referee and registrar, not a ruler. It manages logistics—housing, food distribution, energy allocation—without requiring human deliberation for routine operations. When a Commons needs 50,000 more kilowatt-hours, the Civic Layer routes the energy. When a resident requests shelter, the Civic Layer identifies available housing.

But the Civic Layer never decides values.

It doesn’t determine whether your Commons should prioritize density or sprawl. It doesn’t choose whether your culture values silence or celebration. It doesn’t rule on whether art or athletics deserves more resources. These are human questions—questions of meaning, aesthetics, and philosophy.

The Civic Layer intervenes reactively only when a Five Laws Axiom is violated:

  • Someone’s blocking another person’s Foundation access? Flag.
  • A process is being hidden from public view? Flag.
  • Power is concentrating without decay? Flag.

Then humans deliberate. The AI identified the violation. Humans determine the response.
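
As a sketch of what that reactive posture could look like in code (the axiom checks, the 0.2 threshold, and the `Flag` structure below are this note's illustrations, not a published spec): every check only observes and flags. Nothing here takes corrective action, because that part belongs to people.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Flag:
    axiom: str
    subject: str
    evidence: str

# Each check only observes. None of them takes corrective action.
def foundation_blocked(event: dict) -> Optional[Flag]:
    if event.get("denies_foundation_access"):
        return Flag("Foundation access", event["actor"], event["detail"])
    return None

def process_hidden(event: dict) -> Optional[Flag]:
    if not event.get("publicly_logged", True):
        return Flag("Truth must be seen", event["actor"], event["detail"])
    return None

def power_concentrating(event: dict) -> Optional[Flag]:
    # 0.2 is a placeholder threshold, not a number from the book.
    if event.get("influence_share", 0.0) > 0.2 and not event.get("decays", True):
        return Flag("Power must decay", event["actor"], event["detail"])
    return None

CHECKS: list[Callable[[dict], Optional[Flag]]] = [
    foundation_blocked, process_hidden, power_concentrating,
]

def civic_layer_review(event: dict, deliberation_queue: list[Flag]) -> None:
    """Run every axiom check; anything flagged goes to humans, unresolved."""
    for check in CHECKS:
        flag = check(event)
        if flag is not None:
            deliberation_queue.append(flag)   # humans decide the response
```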

This is the Singapore traffic model at civilizational scale. Singapore’s GLIDE system controls 2,700 intersections with 18 regional computers. Local controllers handle split-second timing decisions. Regional computers coordinate green waves. Human operators intervene for exceptional circumstances. The AI decides when your light turns green. Humans decided whether to build roads in the first place.

Result: a 20% reduction in peak-hour delays, 15% faster commutes, $1 billion in annual savings—all without a single AI deciding urban policy.
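
A toy version of that layering, with no claim to GLIDE's real internals (every name and number below is illustrative): a local controller makes the split-second call from its own sensors, a regional function only staggers the corridor, and an operator override short-circuits both.

```python
class Intersection:
    def __init__(self, name: str, green_seconds: float = 30.0):
        self.name = name
        self.green_seconds = green_seconds
        self.operator_override: float | None = None   # set by a human for exceptions

    def local_update(self, queue_length: int) -> float:
        """Split-second layer: stretch or shrink the green from local sensors."""
        if self.operator_override is not None:
            return self.operator_override             # people handle the exceptions
        self.green_seconds = min(90.0, max(10.0, 2.0 * queue_length))
        return self.green_seconds

def regional_offsets(corridor: list[Intersection], travel_seconds: float) -> list[float]:
    """Coordination layer: stagger starts so platoons ride a green wave.
    It schedules traffic; it never decides whether the road should exist."""
    return [i * travel_seconds for i in range(len(corridor))]
```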


The Stanford Warning: “AI in the Loop, Not Humans Out”

The phrase “human-in-the-loop” has become a governance buzzword. The EU AI Act mandates it for high-risk systems. Corporate ethics boards invoke it as a safety blanket. But there’s a problem Stanford’s Human-Centered AI Institute has been documenting: humans in loops get lazy.

Here’s the paradox: if an AI is right 99% of the time, humans stop paying attention. Why override something that’s almost always correct? The human becomes a rubber stamp, nodding along until they’ve lost the facility for independent judgment.

This is called “automation complacency,” and it’s not hypothetical. Studies show that experts put in charge of overseeing automated systems “get out of practice, because they no longer engage in the routine steps that lead up to the conclusion.” Presented with conclusions rather than problems, they lose the skill to evaluate those conclusions.

The Wikipedia model avoids this trap by not making human oversight the default path.

Most bot edits are never reviewed by humans—and that’s fine. Humans don’t need to verify every formatting fix. What matters is that humans can intervene when they care to. The override button is always there. But humans exercise judgment selectively, on the cases that matter, rather than rubber-stamping everything.

The Civic Layer follows the same principle. Most resource allocations don’t need human review. The AI routes energy, distributes food, assigns housing—millions of transactions, zero drama. Humans engage when the system flags an anomaly, when a dispute arises, or when policy needs updating.

The AI handles the 99% that’s obvious. Humans focus on the 1% that requires wisdom.
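
Sketched as code (the `AUDIT_RATE` and field names are assumptions of this note, not anything from the book), the routing rule is short: log everything, send flags and disputes to people, and send a small random sample to people anyway so reviewers stay in practice rather than becoming rubber stamps.

```python
import random

AUDIT_RATE = 0.01   # illustrative: 1% of routine decisions still get a human look

def route_decision(decision: dict, flagged: bool,
                   human_queue: list[dict], auto_log: list[dict]) -> None:
    """Default path is automation; humans see flags, disputes, and a random audit sample."""
    auto_log.append(decision)                 # every action is recorded and reversible
    if flagged or decision.get("disputed"):
        human_queue.append(decision)          # the 1% that requires wisdom
    elif random.random() < AUDIT_RATE:
        human_queue.append(decision)          # a counter to automation complacency
```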


“But Won’t the AI Get It Wrong?”

Yes. Constantly.

ClueBot NG has a 0.1% false positive rate. That sounds excellent—until you realize 0.1% of millions of decisions means thousands of errors. Good-faith editors get their contributions wrongly reverted. Legitimate articles get mislabeled as vandalism.
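
The arithmetic is worth making explicit. The false positive rate is ClueBot NG's reported figure; the yearly volume below is an assumed round number, purely for illustration.

```python
false_positive_rate = 0.001        # ClueBot NG's reported 0.1%
reverts_per_year = 3_000_000       # assumed volume, for illustration only

wrongly_reverted = false_positive_rate * reverts_per_year
print(wrongly_reverted)            # 3000.0 good-faith edits undone in error
```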

Here’s how Wikipedia handles it: gracefully.

Every bot decision is immediately reversible. False positives are tracked and fed back into training. Users aren’t punished for having edits incorrectly reverted. The community tolerates the error rate because the alternative—having humans review every edit—doesn’t scale.

This is the crucial insight: the question isn’t “will the AI be perfect?” The question is “what happens when it’s wrong?”

If the cost of AI error is catastrophic and irreversible, you need more human oversight. If the cost is annoying but fixable, let the machine work and correct as needed.

The Foundation’s Civic Layer handles high-stakes decisions (someone’s shelter allocation) differently than low-stakes decisions (routing tonight’s vegetable surplus). Both use AI coordination. But the human oversight intensity scales with the consequences of error.

This is common sense we’ve somehow forgotten. We don’t require a human to approve every email spam filter decision. We don’t need a judge to validate every traffic light. But we absolutely want humans involved in parole decisions and medical diagnoses.

The Civic Layer applies the same gradient. Lettuce routing? Full automation. Housing disputes? Human review. Constitutional challenges? Full Diversity Guard deliberation.
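
That gradient is simple enough to write down. The tier names and the enum below are this note's shorthand, not the book's terminology, but the mapping is exactly the one in the examples above.

```python
from enum import Enum

class Oversight(Enum):
    FULL_AUTOMATION = "AI acts; humans may audit after the fact"
    HUMAN_REVIEW = "AI proposes; a person approves or amends"
    FULL_DELIBERATION = "Diversity Guard deliberates; AI only supplies the record"

# Oversight intensity scales with the cost of getting it wrong.
OVERSIGHT_BY_STAKES = {
    "surplus_routing":          Oversight.FULL_AUTOMATION,     # tonight's lettuce
    "housing_dispute":          Oversight.HUMAN_REVIEW,
    "constitutional_challenge": Oversight.FULL_DELIBERATION,
}

def required_oversight(decision_type: str) -> Oversight:
    """Unknown categories default to the most cautious tier."""
    return OVERSIGHT_BY_STAKES.get(decision_type, Oversight.FULL_DELIBERATION)
```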


The Alternative Is Worse

Here’s what critics miss: the choice isn’t between “AI governance” and “human governance.”

The choice is between coordinated AI + human deliberation and uncoordinated human bureaucracy + de facto AI manipulation.

We already live in a world shaped by algorithms. Your social media feed is algorithmically curated. Your job application is algorithmically screened. Your credit score is algorithmically calculated. Your news exposure is algorithmically filtered.

These systems aren’t transparent. They aren’t reversible. They aren’t accountable. They don’t have human override buttons. They’re optimized for engagement, profit, or efficiency—not for your flourishing.

The Unscarcity proposal doesn’t add AI to governance. It makes the AI that’s already governing visible, auditable, and subordinate to human values.

The Civic Layer publishes its source code. Its decisions appear on distributed ledgers (DPIF). Its logic is comprehensible, not black-boxed. Any human can audit any decision. Any Commons can challenge any pattern.

This is Axiom II: Truth Must Be Seen—applied at the infrastructure level.
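
Mechanically, "any human can audit any decision" can be as plain as a hash-chained log. The sketch below uses only Python's standard library; the record fields, and the idea that this is how DPIF works, are assumptions for illustration. The point is that recomputing the chain is something anyone can do, and a silently altered record breaks it.

```python
import hashlib
import json

def append_decision(ledger: list[dict], decision: dict) -> dict:
    """Append a decision with a hash chaining it to the previous entry."""
    prev_hash = ledger[-1]["entry_hash"] if ledger else "genesis"
    body = {"decision": decision, "prev_hash": prev_hash}
    entry_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    entry = {**body, "entry_hash": entry_hash}
    ledger.append(entry)
    return entry

def audit(ledger: list[dict]) -> bool:
    """Anyone can recompute the chain; a single altered record breaks it."""
    prev = "genesis"
    for entry in ledger:
        body = {"decision": entry["decision"], "prev_hash": entry["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["entry_hash"] != expected:
            return False
        prev = entry["entry_hash"]
    return True
```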

The alternative—pretending we can govern complex systems with human bureaucracy alone—is how you get regulatory agencies staffed by 10,000 people who still can’t track derivatives, or planning departments that take eighteen months to approve a building permit.

AI handles the speed and scale problems. Humans handle the meaning and direction problems. Neither can do both. Both are necessary.


The Game Itself

Return to the tennis court for a moment.

Hawk-Eye settles line calls with 3.6mm accuracy. No human could match it. No one tries. The technology ended a century of arguments about where the ball landed.

But Hawk-Eye didn’t end arguments about tennis. Players still debate rule changes. Fans still discuss whether tiebreakers are exciting or artificial. Tournament directors still decide whether to prioritize tradition or innovation.

The game continues. Only the disputes about mechanics have ended.

This is the vision: a civilization where the tedious arguments about “did you comply with procedure X” are settled instantly by machines—freeing humans to engage in the important arguments about “should procedure X exist in the first place.”

The AI blows the whistle. You decide if the game is worth playing.



References

  • Unscarcity, Chapters 2 and 3
  • Geoffrey C. Bowker & Susan Leigh Star, Sorting Things Out (1999)
  • Wikipedia: Bot policy and statistics (2024)
  • Hawk-Eye Innovations: Technical specifications
  • Stanford HAI: “AI in the Loop: Humans Must Remain in Charge” (2024)
  • IBM: “What Is Human In The Loop (HITL)?” (2024)
  • Singapore GLIDE traffic management system: Performance metrics
  • EU AI Act, Article 14: Human oversight requirements
