
Unscarcity Research

The MOSAIC Architecture: How to Run Civilization Without a Price Tag

A technical deep-dive into how the Civic Mesh coordinates resource allocation, validates contributions, resolves disputes, and defends against capture—without prices, central databases, or blockchain


Note: This is a research note supplementing the book Unscarcity, now available for purchase. These notes expand on concepts from the main text. Start here or get the book.


For software engineers skeptical that post-money coordination can scale


“You can’t coordinate a complex economy without prices.”

Every economist, policy wonk, and libertarian uncle has said it. Friedrich Hayek built an entire career arguing that prices are the only way to aggregate distributed knowledge. Markets are the universe’s way of saying “figure it out, nerds.”

Here’s the uncomfortable truth: We’ve already proven Hayek wrong—repeatedly.

Google Maps coordinates 154 million daily users without anyone paying for priority routing. Wikipedia produces humanity’s most comprehensive encyclopedia with 200,000 daily edits—no money changes hands. Linux runs most of the world’s servers through volunteer coordination. Your email gets from New York to Tokyo through federated protocols that nobody owns.

The question isn’t can you coordinate without prices. The question is: Can you extend these patterns from information to physical resources?

This article provides the engineering answer.


The Category Error Everyone Makes

When you first hear “civilization without money,” your brain immediately reaches for two bad models:

Bad Model #1: “So it’s blockchain?”

No. Blockchain solves the problem of achieving consensus among mutually distrusting strangers who need strict transaction ordering. It’s brilliant for “who owns this Bitcoin?” and catastrophically overengineered for “should we build a hospital here?”

Blockchain assumes adversarial anonymity. The MOSAIC assumes identity-aware cooperation among federated communities with aligned goals. Different problem, different architecture.

Blockchain’s proof-of-work makes sense when:

  • Participants are anonymous and potentially hostile
  • Transaction ordering must be globally consistent
  • Financial value transfer is the primary use case
  • Burning the energy of a small country is an acceptable trade-off

The MOSAIC needs:

  • Identity-aware consensus (you’re accountable, not anonymous)
  • Eventually consistent coordination (we don’t need atomic precision)
  • Multi-dimensional resource flows (housing ≠ healthcare ≠ compute)
  • Energy efficiency (we’re running civilization, not a speculation casino)

Bad Model #2: “So there’s a central database?”

Also no. Central databases create single points of failure (technical), single points of capture (political), and single points of “whoops, the authoritarian just seized the server room.”

The MOSAIC is federated, not centralized. Think DNS, not Oracle Database. Think the internet protocol stack, not Amazon Web Services.

The Right Model: TCP/IP for Civilization

The MOSAIC (Modular, Autonomous, Interconnected Communities) is to resource coordination what TCP/IP is to data transmission.

TCP/IP doesn’t control the internet—it enables independent networks to interoperate. Similarly:

  • The MOSAIC doesn’t control resource allocation
  • It provides protocols for communities (Commons) to coordinate
  • Each Commons maintains autonomy
  • Shared protocols enable interoperability
  • Transparency enables accountability

The Civic Mesh is the protocol layer. The MOSAIC is the civilization running on it.


The Five Data Structures That Replace Money

Money does one thing: collapse multi-dimensional value into a single scalar. That’s convenient for exchange, but it’s a lossy compression algorithm for human contribution.

The Civic Mesh operates on five richer data structures.

1. Contribution Logs (Append-Only, Distributed)

Purpose: Record who contributed what, when, in what context, and what it enabled.

ContributionLog {
  id: UUID
  contributor: IdentityHandle
  timestamp: ISO8601
  contribution_type: Enum[Labor, Resource, Coordination, Knowledge, Care]
  domain: String  // e.g., "healthcare", "housing", "energy"
  verification: {
    witnesses: [IdentityHandle]  // peer validators who saw it happen
    diversity_score: Float       // Proof-of-Diversity metric
    audit_trail: Merkle_root     // tamper detection
  }
  context: {
    mission: MissionID           // which Guild or project
    enablement: ImpactVector     // what did this make possible?
    resources_consumed: ResourceVector  // what did it cost the system?
  }
}

Why this beats a paycheck:

A paycheck says “you worked 40 hours.” A Contribution Log says “you mentored three engineers on fusion reactor design, which accelerated the Osaka project by two months, validated by six peers across three continents.”

One is a number. The other is a story.

Storage pattern: Distributed hash table with content-addressed storage (like IPFS). Each contribution is immutable, with references forming a directed acyclic graph of “who enabled what.” Think Git’s commit history—decentralized, auditable, tamper-evident, no blockchain overhead.
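That storage pattern fits in a few lines. This is an illustrative model of content addressing and DAG links, not a published Civic Mesh API: `content_address`, `log_contribution`, and `verify` are hypothetical names, and a plain dict stands in for the distributed hash table.

```python
import hashlib
import json

def content_address(record: dict) -> str:
    """The record's ID is the SHA-256 of its canonical JSON encoding."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def log_contribution(store: dict, payload: dict, enabled_by: list) -> str:
    """Append an immutable record; `enabled_by` links form the
    'who enabled what' DAG, like parent commits in Git."""
    record = {"payload": payload, "enabled_by": sorted(enabled_by)}
    cid = content_address(record)
    store[cid] = record  # a plain dict stands in for the DHT put
    return cid

def verify(store: dict, cid: str) -> bool:
    """Tamper-evidence: re-hashing the record must reproduce its ID."""
    return cid in store and content_address(store[cid]) == cid
```

Because the ID is the hash, editing any stored record silently invalidates it, and every downstream record that references it, which is exactly the tamper-evidence property described above.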

2. Resource Requests (Demand Signals)

Purpose: Surface what people need without requiring payment.

ResourceRequest {
  id: UUID
  requester: IdentityHandle
  resource_type: Enum[Housing, Healthcare, Equipment, Compute, etc.]
  quantity: ResourceVector  // multi-dimensional, not just a count
  context: {
    mission: MissionID        // what does this enable?
    urgency: Enum[Routine, Priority, Emergency]
    alternatives_considered: [ResourceID]  // did you try substitutes?
    duration: Timespan        // temporary or permanent?
  }
  impact_forecast: {
    enables: [OutcomeDescription]  // what will this make possible?
    requires: ResourceVector       // full dependency chain
    opportunity_cost: ResourceVector  // what else could these resources do?
  }
}

Why this beats a market transaction:

Markets only see effective demand—what you can pay for. They’re blind to the brilliant researcher who can’t afford lab equipment, the community that needs a clinic but has no money, the innovation that dies in someone’s head because they couldn’t quit their job to pursue it.

Resource Requests surface actual need with context. The system sees “Maria needs fabrication access to prototype a water filter that could serve 10,000 people”—not just “Maria has $0.”

Storage pattern: Time-series databases with privacy-preserving aggregation. Individual requests are ephemeral; aggregate demand patterns persist. Similar to Google Analytics—page views disappear, traffic patterns enable intelligent routing.
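A toy version of that aggregation step (field and function names are illustrative): individual requests are reduced to per-day counts per resource type, and the identifying fields never leave the function.

```python
from collections import Counter
from datetime import datetime

def aggregate_demand(requests: list) -> dict:
    """Collapse ephemeral individual requests into persistent
    (day, resource_type) -> count aggregates. Requester identities
    are dropped in the process; only the demand pattern survives."""
    buckets = Counter()
    for r in requests:
        day = datetime.fromisoformat(r["timestamp"]).date().isoformat()
        buckets[(day, r["resource_type"])] += 1
    return dict(buckets)
```

A real deployment would add differential-privacy noise or k-anonymity thresholds before publishing the aggregates; this sketch only shows the shape of the pipeline.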

3. Reputation Profiles (Decaying, Domain-Specific)

Purpose: Distinguish signal from noise without creating permanent hierarchies.

ReputationProfile {
  identity: IdentityHandle
  domains: {
    [domain_name]: {
      current_score: Float       // decays exponentially
      recent_contributions: [ContributionID]
      peer_endorsements: {
        [endorser]: Float        // weighted by endorser's reputation
      }
      failures: [FailureLog]     // mistakes matter
      diversity_bonus: Float     // extra weight for cross-domain work
      last_updated: ISO8601
      decay_rate: Float          // Five Laws Axiom IV: Power Must Decay
    }
  }
}

The decay function:

reputation(t) = base_reputation × e^(-λ × t)

Where:
  λ = decay constant (configured per domain)
  t = time since contribution
  base_reputation = peer-validated contribution value

Why decay matters:

Without decay, reputation compounds into oligarchy. The person who did great work in 2025 shouldn’t automatically dominate decisions in 2045. Decay enforces Five Laws Axiom IV (Power Must Decay)—you must continuously earn influence.

Unlike your h-index, which traps you in your past forever, reputation here is more like a tennis ranking: recent performance matters most.
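The decay formula translates directly to code. Here λ is derived from a per-domain half-life, which is an easier knob to reason about; the function name and half-life parameterization are mine, not part of the schema above.

```python
import math

def decayed_reputation(base: float, days_since: float, half_life_days: float) -> float:
    """reputation(t) = base * e^(-lambda * t), with the decay constant
    lambda derived from the domain's configured half-life."""
    lam = math.log(2) / half_life_days
    return base * math.exp(-lam * days_since)
```

With a one-year half-life, a contribution validated at 100 is worth 50 after a year and 25 after two: recent performance dominates, the tennis-ranking behavior described above.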

4. Impact Point Accounts (Non-Transferable, Decaying)

Purpose: Coordinate access to the 10% Ascent (life extension, interstellar missions, consciousness exploration) without creating markets.

ImpactPointAccount {
  identity: IdentityHandle
  balance: Float              // current IMP
  earned: {
    [contribution_id]: {
      amount: Float
      earned_at: ISO8601
      expires_at: ISO8601     // IMP decay and expire
      domain: String
      validated_by: [IdentityHandle]
    }
  }
  spent: [Transaction]        // what did you use them for?
  decay_schedule: DecayFunction // 10% annual decay (Axiom IV)
}

Critical constraints:

  • Non-transferable: You can’t sell IMP. You can’t give IMP. If you could, they’d become currency. Markets would form. Speculation would begin. We’d be right back where we started.

  • Decaying: 10% annual decay, ~7-year half-life. Without decay, early contributors become permanent elites. Decay ensures power is re-earned each generation.

Decay function:

remaining_IMP(t) = initial_IMP × e^(-0.1 × t)

Richard Castellano, the logistics billionaire from Chapter 8, gets Founder Credits through the EXIT Protocol—but even those decay at 5% annually. His grandchildren inherit nothing unless they earn it themselves.
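One sanity check worth making explicit: the "~7-year half-life" quoted above is exactly what λ = 0.1 implies, since half-life = ln(2)/λ.

```python
import math

LAMBDA = 0.1  # 10% continuous annual decay (Axiom IV)

def remaining_imp(initial: float, years: float) -> float:
    """remaining_IMP(t) = initial_IMP * e^(-0.1 * t)"""
    return initial * math.exp(-LAMBDA * years)

half_life = math.log(2) / LAMBDA  # ~6.93 years: the "~7-year half-life"
```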

5. Proof-of-Diversity Validator Sets (Randomly Selected, Statistically Validated)

Purpose: Prevent capture by ensuring major decisions require consensus from genuinely diverse groups.

ValidatorSet {
  decision_id: UUID
  selection_criteria: {
    min_dimensions: Int  // e.g., 5
    required_entropy: {
      geographic: Float   // bits of Shannon entropy
      economic: Float
      cultural: Float
      generational: Float
      professional: Float
    }
    min_validators: Int   // typically 7
  }
  selected: {
    [identity]: {
      diversity_attributes: Map[String, String]
      selection_probability: Float
      vote: Enum[Approve, Reject, Abstain]
      reasoning: Text       // why?
    }
  }
  diversity_score: Float  // aggregate Shannon entropy
  threshold: Float        // e.g., 0.67 supermajority
}

The selection algorithm ensures you can’t stack a panel with your friends. Validators are randomly selected to maximize diversity across multiple dimensions. Even if you control 20% of every demographic category (geographic, economic, cultural, generational, professional), the probability of capturing a diverse 7-person panel is:

P(capture) = (0.2)^5 = 0.00032 (0.032%)

Compare that to capturing a simple majority: 51%.

This is the Diversity Guard in action—the governance firewall that makes coordinated tyranny statistically improbable. See Diversity Guard Mathematics for the full proof.
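The entropy requirement in the ValidatorSet schema is concrete enough to compute. A sketch, with illustrative names and an illustrative acceptance rule: a candidate panel is accepted only if every dimension clears its configured entropy floor.

```python
import math
from collections import Counter

def shannon_entropy(values: list) -> float:
    """Entropy in bits of the panel's distribution over one attribute."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def panel_is_diverse(panel: list, required: dict) -> bool:
    """Accept a candidate panel only if every dimension meets its
    entropy floor, e.g. required = {"geographic": 1.4}."""
    return all(
        shannon_entropy([v[dim] for v in panel]) >= floor
        for dim, floor in required.items()
    )
```

A panel of all-urban validators scores 0 bits on the geographic dimension and is rejected outright, no matter how its members would vote.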


Proof-of-Diversity: The Consensus Mechanism That Isn’t Blockchain

Blockchain asks: “Do we agree on transaction order?”

The MOSAIC asks: “Do diverse perspectives agree this is a good idea?”

Different question, different mechanism.

How Proof-of-Diversity (PoD) Works

Step 1: Decision triggers validation

A Housing Guild proposes: “Redirect 10% of regional fabrication capacity from single-family homes to multi-unit buildings.”

This affects multiple Commons → triggers PoD validation.

Step 2: Random selection of diverse validators

The system selects 7 validators requiring:

  • Geographic diversity (urban, suburban, rural)
  • Economic diversity (different sectors)
  • Cultural diversity (different backgrounds)
  • Generational diversity (young, mid-career, senior)
  • Professional diversity (different domains of expertise)

Step 3: Validators review and vote

Each validator sees:

  • Full proposal text
  • Impact forecast (who benefits, who bears costs)
  • Resource requirements
  • Alternative proposals considered
  • Historical precedent (what happened when we did similar things?)

They vote with reasoning:

  • APPROVE: “This addresses housing shortage without excess energy cost”
  • REJECT: “Insufficient water infrastructure to support density increase”
  • ABSTAIN: “Need more data on transit capacity”

Step 4: Correlation detection

Here’s where it gets clever. The system runs statistical tests to detect coordinated capture:

def detect_bloc_voting(votes, validator_attributes):
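# Note: as written above, this sketch leans on names it doesn't define.
# A minimal assumption set that makes it runnable (the helper shown here
# is an illustrative reconstruction, not the article's canonical code):
#
#   from collections import Counter
#   from scipy.stats import chi2_contingency  # chi-squared independence test
#
#   DIVERSITY_DIMENSIONS = ["geographic", "economic", "cultural",
#                           "generational", "professional"]
#
#   def build_contingency_table(votes, validator_attributes, dimension):
#       """Rows = attribute values (urban/rural/...), cols = vote choices."""
#       counts = Counter((validator_attributes[v][dimension], vote)
#                        for v, vote in votes.items())
#       rows = sorted({key[0] for key in counts})
#       cols = sorted({key[1] for key in counts})
#       return [[counts[(r, c)] for c in cols] for r in rows]
#
# chi2_contingency needs at least a 2x2 table, so a dimension with no
# variation should be skipped before testing.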
    """
    Flag decisions where votes correlate suspiciously with
    any single demographic dimension.
    """
    for dimension in DIVERSITY_DIMENSIONS:
        contingency = build_contingency_table(votes, dimension)
        chi2, p_value, dof, expected = chi2_contingency(contingency)

        # p < 0.1 means votes are suspiciously correlated
        if p_value < 0.1:
            return {
                "suspicious": True,
                "dimension": dimension,
                "recommendation": "Resample or escalate to Five Laws review"
            }

    return {"suspicious": False}

If all 7 validators from urban areas vote YES on a pro-urban policy and the chi-squared test shows p-value = 0.03—something’s fishy. Resample or escalate.

Step 5: Decision recorded

If the vote passes threshold:

  • Decision implemented
  • Validators’ reasoning published (Five Laws Axiom II: Truth Must Be Seen)
  • Impact tracking begins (did forecast match reality?)

If rejected:

  • Decision blocked
  • Reasoning published
  • Proposers can revise and resubmit

Why This Works: The Math of Making Tyranny Hard

Byzantine Fault Tolerance says: n ≥ 3f + 1, where f is the number of potentially compromised validators.

For f = 2 (tolerating 2 bad actors), n ≥ 7.

But with diversity requirements, the probability of 2 bad actors both being selected from diverse backgrounds is multiplicatively small:

P(attack | PoD) = P(control_dimension_A) × P(control_dimension_B) × ...

For 5 dimensions with 10% control each:
P(attack) = 0.1^5 = 0.00001 (0.001%)

Compare to homogeneous majority voting:

P(attack | majority) = 0.51 (51%)

PoD is computationally cheap but socially expensive to attack. You can’t buy your way in—you must build genuine support across statistically diverse populations.
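The arithmetic above, spelled out. The independence assumption, that controlling one dimension says nothing about the others, is the model's own, carried over from the randomized selection algorithm.

```python
def p_capture(control_per_dimension: float, dimensions: int) -> float:
    """Capture requires dominating every diversity dimension at once;
    under independence the per-dimension probabilities multiply."""
    return control_per_dimension ** dimensions
```

So 10% control across five dimensions yields a 0.001% capture probability, against 51% for a homogeneous majority vote (the degenerate one-dimension case).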

Dimension            Blockchain (PoW/PoS)     Proof-of-Diversity
----------------------------------------------------------------------------
Goal                 Order transactions       Ensure diverse perspectives agree
Security parameter   Compute power or stake   Validator diversity
Energy cost          High                     Negligible
Attack vector        51% attack               Must control all demographic dimensions
Scalability          Limited                  High (only selected validators vote)

Dispute Resolution: Robots First, Humans for the Hard Stuff

Most conflicts are trivial. A scheduling conflict is not a philosophical crisis. The system handles 99% automatically, escalating only the genuinely difficult cases to human judgment.

The Dispute Resolution Hierarchy

Level 0: Automated (99% of cases)
├─ Resource conflicts → Time-sharing algorithms
├─ Scheduling conflicts → Calendar optimization
├─ Load balancing → Redistribute across capacity
└─ Metric anomalies → Flag and auto-adjust

Level 1: Peer Mediation (0.9% of cases)
├─ Guild-to-Guild negotiation
├─ AI facilitates by surfacing context
├─ Resolution within 48 hours
└─ Outcome published as precedent

Level 2: Civic Arbitration (0.09% of cases)
├─ Neutral panel (PoD-selected)
├─ Formal hearing with evidence
├─ Binding decision unless appealed
└─ Reasoning published

Level 3: Constitutional Review (0.01% of cases)
├─ Systemic issues affecting Five Laws axioms
├─ Higher diversity requirements (15+ validators)
├─ Supermajority (80%+) threshold
└─ Creates binding precedent

Example: Resource Contention

Scenario: Two Guilds want the same fabrication facility at the same time.

Automated resolution:

def resolve_fabrication_conflict(request_A, request_B, facility):
    # Check urgency: an emergency outranks everything else, in either direction
    if request_A.urgency == "Emergency" and request_B.urgency != "Emergency":
        return allocate_to(request_A), suggest_alternative(request_B)
    if request_B.urgency == "Emergency" and request_A.urgency != "Emergency":
        return allocate_to(request_B), suggest_alternative(request_A)

    # Check relative impact
    impact_A = forecast_impact(request_A)
    impact_B = forecast_impact(request_B)

    if impact_A > impact_B * 1.5:  # Clear winner
        return allocate_to(request_A), defer(request_B)
    if impact_B > impact_A * 1.5:
        return allocate_to(request_B), defer(request_A)

    # Check deadline flexibility: the later deadline can afford to wait
    if request_A.deadline > request_B.deadline:
        return defer(request_A), allocate_to(request_B)
    if request_B.deadline > request_A.deadline:
        return defer(request_B), allocate_to(request_A)

    # No clear algorithmic winner → escalate
    return escalate_to_mediation([request_A, request_B])

If escalated to peer mediation:

Representatives meet with AI facilitation:

  • AI surfaces relevant context (past projects, consumption patterns, historical precedent)
  • Each Guild explains why their request is time-sensitive
  • Mediator proposes options:
    • Time-share (Guild A: mornings, Guild B: afternoons)
    • Delay lower-priority project by 2 weeks
    • Use alternative facility with longer transit time

Guilds negotiate. Outcome published: “Resource conflict resolved via time-sharing; neither project delayed.”

Guild Failure Detection

Guilds can drift from mission, waste resources, or just… stop working. The system detects and responds.

from dataclasses import dataclass

@dataclass
class GuildHealthMetrics:
    mission_drift: float             # output alignment with stated mission
    resource_efficiency: float       # resources consumed vs. impact created
    contributor_satisfaction: float  # are members fulfilled?
    external_reputation: float       # peer Guild assessments
    delivery_consistency: float      # promises vs. outcomes

    def health_score(self) -> float:
        weighted = [
            (self.mission_drift, 0.3),
            (self.resource_efficiency, 0.2),
            (self.contributor_satisfaction, 0.2),
            (self.external_reputation, 0.15),
            (self.delivery_consistency, 0.15),
        ]
        # Weights sum to 1.0, so this is a weighted average
        return sum(value * weight for value, weight in weighted)

If health score falls below threshold:

  1. Flag for peer review (PoD-selected panel)
  2. Panel reviews metrics, history, member testimonials, comparable Guilds
  3. Decision: Restructure? New leadership? Deprecate and redirect members?

Similar to Kubernetes health checks—automated monitoring, human decisions for complex failures.


Attack Vectors and Defenses

Any coordination system must defend against gaming, capture, and bad actors. Here’s how the MOSAIC handles the classic attacks.

Attack 1: Sybil Attacks (Fake Identities)

Threat: Create thousands of fake identities to manipulate voting or resource allocation.

Defense: Identity isn’t anonymous—it’s persistent and accountable.

  • Web-of-trust model: New identities require sponsorship from 3+ established members
  • Proof-of-personhood: Periodic verification (video call, in-person gathering, biometric)
  • Reputation staking: Sponsors risk their own reputation when vouching
  • Time delay: New identities can’t vote immediately

Cost to attacker: Creating one fake identity requires compromising 3+ established members who have their own reputation at stake. Creating an army of fakes is O(n) in compromised sponsors.
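A minimal gate implementing the sponsorship rule. Thresholds and field names are illustrative, and a real implementation would also stake, and slash, sponsor reputation.

```python
def can_admit(sponsors: list, min_sponsors: int = 3,
              min_reputation: float = 0.5) -> bool:
    """Web-of-trust admission: count distinct established sponsors
    whose staked reputation clears the floor."""
    qualified = {s["handle"] for s in sponsors if s["reputation"] >= min_reputation}
    return len(qualified) >= min_sponsors
```

Deduplicating by handle matters: one compromised member vouching three times still counts once, which is what makes a Sybil army cost O(n) distinct compromised sponsors.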

Attack 2: Capture (Control Validator Selection)

Threat: Manipulate who gets selected as validators to approve favorable decisions.

Defense: Randomized selection + mandatory diversity.

  • Selection algorithm optimizes for Shannon entropy across dimensions
  • Diversity thresholds prevent homogeneous panels
  • Statistical correlation detection flags suspicious voting patterns

Even with 20% control across all 5 diversity dimensions:

P(capture) = (0.2)^5 = 0.00032 (0.032%)

Attack 3: Collusion (Coordinate Across Validators)

Threat: Validators secretly agree to vote as a bloc regardless of merit.

Defense: Transparency + statistical detection.

  • All votes and reasoning are public (pseudonymous but auditable)
  • Chi-squared tests detect correlation between votes and demographics
  • If p < 0.1, decision is flagged for review
  • Repeated collusion destroys reputation

Attack 4: Goodhart’s Law (Gaming Metrics)

Threat: Optimize for measured metrics while ignoring actual goals.

Defense: Multi-dimensional metrics + peer review.

  • No single metric determines outcomes
  • Metrics are context-rich (not reducible to one number)
  • Peer review catches gaming that algorithms miss
  • Gaming harms long-term reputation

Example:

A Guild reports “high impact” by counting people served. But peer Guilds notice: “They’re handing out tokens without checking if anyone benefits.”

Peer review → reputation hit → future requests scrutinized.


What This Actually Looks Like: The Stack

┌─────────────────────────────────────────────────────────┐
│  Civic Mesh Protocol Layer (Open Standard)              │
│  - Resource Request Protocol (RRP)                      │
│  - Contribution Validation Protocol (CVP)               │
│  - Proof-of-Diversity Protocol (PoDP)                   │
│  - Dispute Resolution Protocol (DRP)                    │
└─────────────────────────────────────────────────────────┘
                          │
        ┌─────────────────┼─────────────────┐
        ▼                 ▼                 ▼
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│  Commons A    │ │  Commons B    │ │  Commons C    │
│  (Local Node) │ │  (Local Node) │ │  (Local Node) │
└───────────────┘ └───────────────┘ └───────────────┘
        │                 │                 │
        └─────────────────┴─────────────────┘
                          │
                ┌─────────┴─────────┐
                ▼                   ▼
        ┌──────────────┐    ┌──────────────┐
        │ DHT Storage  │    │ Time-Series  │
        │ (IPFS-like)  │    │ (InfluxDB)   │
        └──────────────┘    └──────────────┘

Data Flow: Resource Request

  1. User submits request via local Commons interface → logged with timestamp, identity, context

  2. Local AI analyzes → Routine? Auto-approve and route. Unusual? Flag for review.

  3. If flagged, initiate PoD → Select 7 diverse validators → collect votes + reasoning

  4. Aggregate votes → check diversity metrics → run correlation detection → publish decision

  5. Resource allocation → If approved: route to Guild. If rejected: notify with reasoning.

  6. Impact tracking → Did this enable expected outcomes? Feedback loop adjusts models.
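Steps 2 through 5 of that flow reduce to a small dispatcher. The callables are injected because each is a subsystem of its own; the names are illustrative, not a defined protocol surface.

```python
def handle_request(request: dict, is_routine, run_pod, allocate, notify):
    """Route a resource request: auto-approve the routine path,
    send flagged requests through Proof-of-Diversity validation."""
    if is_routine(request):
        return allocate(request)                   # step 2: fast path
    decision = run_pod(request)                    # steps 3-4: validators vote
    if decision["approved"]:
        return allocate(request)                   # step 5: route to a Guild
    return notify(request, decision["reasoning"])  # step 5: rejection + reasons
```

The important property is that the slow, human-in-the-loop PoD path is only taken for the flagged minority; the routine majority never waits on validators.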

API Sketch

# Resource Request
civic_mesh.request_resource(
    resource_type="fabrication_facility",
    quantity=ResourceVector(hours=20, energy_kwh=500),
    mission="Prototype water desalination system",
    impact_forecast="Enable testing of 10x more efficient membrane",
    deadline=datetime(2048, 3, 15)
)

# Log Contribution
civic_mesh.log_contribution(
    contributor=identity.handle,
    contribution_type="knowledge_transfer",
    domain="water_systems",
    description="Mentored 5 engineers on reverse osmosis design",
    witnesses=[eng1.handle, eng2.handle, eng3.handle],
    enablement="Accelerated desalination project by 2 months"
)

# PoD Validation
decision = civic_mesh.validate_decision(
    proposal="Increase water allocation to agriculture by 15%",
    affected_population=["farmers", "urban_residents", "industry"],
    diversity_requirements={
        "geographic": 2.0,  # bits of entropy
        "economic": 1.5,
        "cultural": 1.8
    },
    threshold=0.67
)

Performance: Can This Actually Scale?

TL;DR: Yes. The existence proof is all around you.

Scalability

Operation                      Complexity
------------------------------------------------------------
Contribution logging           O(1) per contribution
Resource request routing       O(log n), where n = Guilds
PoD validator selection        O(k × d), where k = 7, d = 5
Automated dispute resolution   O(1) for most cases

Latency

Operation                      Typical        Maximum
------------------------------------------------------------
Routine resource request       < 1 second     10 seconds
Contribution logging           < 1 second     5 seconds
PoD validation (cold start)    24-48 hours    7 days
Automated dispute resolution   < 10 seconds   1 minute
Peer mediation                 1-3 days       1 week
Constitutional review          2-4 weeks      90 days

Throughput Targets

  • 1 billion+ resource requests/day — Google Maps handles 1.5 billion routes daily
  • 100 million+ contribution logs/day — Wikipedia scale: 597 million edits/year across Wikimedia
  • 1 million+ PoD validations/day — rare; most decisions are routine
  • 10,000+ dispute resolutions/day — mostly automated

Storage

  • Contribution logs: ~100 GB/day → ~36 TB/year active, 100+ TB historical
  • Resource requests (aggregated): ~1 TB active (90-day rolling window)
  • Reputation scores: ~10 KB × 1B identities = 10 TB
  • Total active: ~50-100 TB distributed across nodes

Modern infrastructure handles this trivially. We’re not asking for miracles—we’re asking for what Google does for ads, but for resource coordination.


Relationship to What Already Works

The MOSAIC isn’t entirely novel. It composes proven patterns.

What We Borrow From Bitcoin

Distributed consensus, tamper-evident logs, transparent validation.

What We Change

Identity-aware (not anonymous), PoD (not PoW), multi-dimensional (not just transactions), energy-efficient (no mining).

What We Borrow From Kubernetes

Declarative resource requests, automated scheduling, health monitoring, namespace isolation.

What We Change

Coordinates humans + physical resources (not containers), Byzantine fault tolerance via diversity (not simple majority), reputation-weighted (not admin-controlled).

What We Borrow From DNS

Federated architecture, hierarchical authority with local autonomy, caching, eventual consistency.

What We Change

Coordinates resource flows (not name resolution), PoD prevents capture (not just signatures), transparent reasoning (not just lookups).

What We Borrow From Google Maps

Real-time information aggregation, AI-augmented routing, distributed decision-making, feedback loops.

What We Change

Multi-dimensional resources (not just travel time), contribution tracking (not anonymous GPS), dispute resolution (not just alternative routes).


Open Questions

No system is complete. Here’s what still needs work.

Identity Without Government

The current design assumes cryptographic identity + social attestation. But:

  • How do you bootstrap trust in a brand-new Commons?
  • What if social attestation networks are compromised?
  • How do you handle key loss without central authority?

Potential: Social recovery protocols (Argent wallet model), biometric fallbacks, reputation escrow.
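At its core, the Argent-style social recovery mentioned here is a guardian quorum check. A sketch; real schemes add time delays, guardian rotation, and on-protocol key reissuance.

```python
def can_recover(approvals: set, guardians: set, threshold: int = 3) -> bool:
    """A lost key is reissued only when enough pre-designated guardians
    approve; approvals from non-guardians are ignored."""
    return len(approvals & guardians) >= threshold
```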

Cross-Commons Coordination

Each Commons runs independently. How do they interoperate on complex multi-Commons projects? What about disputes between Commons?

Potential: Federated dispute resolution, inter-Commons treaties, shared infrastructure layers.

AI Bias

AI systems inherit training data biases. How do we detect when AI recommendations systematically favor certain groups?

Potential: Adversarial testing, diverse training data, transparent model auditing, human review.

Transition

Most infrastructure runs on money. How do you migrate existing systems? Handle hybrid periods where money and MOSAIC coexist?

Potential: Free Zones as Phase Zero pilots, dual-track systems, gradual deprecation.

See: The Transition: EXIT or the Fire and Enterprise EXIT Protocol.


The Bottom Line: We Already Do This

Skeptics say: “You can’t coordinate complex systems without markets.”

The MOSAIC answers: “We already do—for traffic, knowledge, code, and content. Now let’s do it for physical resources.”

Google Maps coordinates 154 million people’s daily movement without anyone paying for priority routing. Wikipedia’s 200,000 daily edits produce better information than any corporation. Linux runs 96% of the world’s top million web servers through volunteer coordination.

The question isn’t whether post-price coordination is possible.

The question is whether we’ll build it before scarcity-era systems collapse under the weight of technological abundance.

The Labor Cliff arrives in the 2030s. Fusion goes commercial between 2045 and 2055. AI already outperforms humans on most cognitive tasks.

The infrastructure for abundance exists. The question is whether we’ll build the coordination layer to distribute it—or watch it get captured by the same structures that created scarcity in the first place.

The MOSAIC is one answer. The code patterns exist. The engineering is tractable.

The only question is will.


References and Further Reading

Distributed Systems Foundations:

  • Lamport, L., Shostak, R., & Pease, M. (1982). “The Byzantine Generals Problem.” ACM Transactions
  • Castro, M., & Liskov, B. (1999). “Practical Byzantine Fault Tolerance.” OSDI
  • Maymounkov, P., & Mazières, D. (2002). “Kademlia: A Peer-to-Peer Information System Based on the XOR Metric”

Coordination Without Markets:

  • Benkler, Y. (2006). The Wealth of Networks
  • Ostrom, E. (1990). Governing the Commons
  • Raymond, E. S. (1999). The Cathedral and the Bazaar


© 2025 Patrick Deglon. All Rights Reserved.
