Note: This is a research note supplementing the book Unscarcity, now available for purchase. These notes expand on concepts from the main text. Start here or get the book.
AGI (Artificial General Intelligence)
The Elephant That’s Already in the Room
Here’s a fun drinking game: take a shot every time an AI company CEO predicts AGI is “just around the corner.” You’d be hammered by lunch. Sam Altman says OpenAI “knows how to build AGI” and expects it during Trump’s second term. Dario Amodei of Anthropic pegs it at 2026 or 2027. Demis Hassabis of DeepMind hedges with “five to ten years.” They’re either all delusional—or we’re standing at the edge of the most significant technological inflection point since fire.
I’m betting on the latter. Not because these executives are prophets (they have obvious incentives to hype their products), but because the underlying trajectory is undeniable. We’ve gone from “AI can’t play chess” to “AI crushes every chess grandmaster” to “AI passes the bar exam, the medical boards, and graduate-level physics problems” in about thirty years. The last jump—from undergraduate reasoning to PhD-level performance—took approximately eighteen months. If you don’t find that terrifying and exhilarating in equal measure, you haven’t been paying attention.
What AGI Actually Means (And Why Nobody Agrees)
Let’s get the semantics out of the way because this is where conversations derail. AGI—Artificial General Intelligence—is not the narrow AI that recommends your Netflix shows or autocompletes your emails. Those are savants: brilliant at one thing, useless at everything else.
AGI means a system that can reason, learn, and act across any domain with the flexibility we associate with human cognition. It’s the difference between a calculator and a mathematician. The calculator crunches numbers you give it; the mathematician invents new kinds of numbers and proves theorems nobody has ever conceived.
The problem? There’s no universally agreed-upon test for “general” intelligence. Alan Turing proposed the famous Turing Test in 1950—fool a human into thinking they’re chatting with another human—but modern LLMs can pass that while still failing spectacularly at basic reasoning tasks that a five-year-old handles effortlessly. (Ask ChatGPT how many times the letter ‘r’ appears in “strawberry” and prepare for disappointment.)
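For the record, the answer is three, and the deterministic computation takes one line of ordinary Python (no model involved, just string counting):

```python
# Counting letters the boring, reliable way.
print("strawberry".count("r"))  # -> 3 (s-t-r-a-w-b-e-r-r-y)
```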
The Unscarcity framework doesn’t obsess over precise definitions. What matters is the functional threshold: the point at which AI systems can perform economically meaningful work across enough domains that wage labor stops being the binding constraint on production. Call it AGI, call it “transformative AI,” call it “the thing that ate my job”—the label matters less than the outcome.
The Timeline Wars: Silicon Valley’s Favorite Blood Sport
In early 2025, the AI industry converged on an unusually tight window for AGI arrival. This is remarkable—these are competitors who agree on almost nothing, yet here they are, essentially nodding along to the same forecast:
- OpenAI (Sam Altman): “We are now confident we know how to build AGI as we have traditionally understood it.” Timeline: by 2029, possibly sooner.
- Anthropic (Dario Amodei): Systems “broadly better than all humans at almost all things” by 2026 or 2027.
- Google DeepMind (Demis Hassabis): Five to ten years, narrowing to “probably three to five” as of January 2025.
- xAI (Elon Musk): AI systems could outsmart humans by 2026.
Researcher surveys tell a similar story. Since 2020, the median expert forecast has shifted from the 2040s to around 2030. That’s not an incremental adjustment; that’s a decade of expected runway vanishing in five years.
Of course, skeptics abound. Even Ilya Sutskever, former chief scientist at OpenAI and one of the architects of modern deep learning, now warns that LLMs “generalize dramatically worse than people.” They excel at pattern matching but struggle with genuine understanding. The counter-argument? We don’t need human-like intelligence—we need economically competitive intelligence. A machine that produces 90% of human-quality work at 1% of the cost still triggers the Labor Cliff.
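To make that arithmetic explicit, here is a back-of-the-envelope sketch using the hypothetical figures from the paragraph above (90% quality at 1% cost; illustrative numbers, not measurements):

```python
# Quality-adjusted cost comparison for the hypothetical in the text:
# a machine producing 90% of human-quality work at 1% of human cost.
human_cost, human_quality = 1.00, 1.00       # normalized baseline
machine_cost, machine_quality = 0.01, 0.90   # the hypothetical from the text

# Cost per unit of quality-adjusted output.
human_rate = human_cost / human_quality        # 1.000
machine_rate = machine_cost / machine_quality  # ~0.011

print(f"machine cost per quality unit: {machine_rate / human_rate:.1%} of human")  # ~1.1%
print(f"quality-adjusted output per dollar: {human_rate / machine_rate:.0f}x human")  # 90x
```

Ninety times the quality-adjusted output per dollar is not a marginal edge; it is the kind of gap that reorganizes an entire labor market.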
AGI as the Catalyst: Why This Matters for Unscarcity
The Unscarcity thesis rests on a tripod: The Brain (AI), The Body (humanoid robotics), and The Fuel (fusion energy). AGI is the first leg—and arguably the most transformative.
Here’s the logic chain:
- AGI collapses the cost of cognitive labor. Lawyers, accountants, programmers, analysts, radiologists—any profession where humans process information and make decisions becomes automatable. Not “in fifty years”—now, iteratively, version by version.
- Cognitive automation precedes physical automation. You don’t need a humanoid robot to automate legal research or medical diagnosis. AGI handles knowledge work immediately; robots catch up over the following decade as hardware matures.
- Cheap cognition accelerates everything else. AGI designs better robots. AGI optimizes fusion reactors. AGI discovers new materials, drugs, and manufacturing processes. It’s a meta-technology: a technology that improves all other technologies.
- The Labor Cliff arrives before abundance does. This is the terrifying gap at the heart of the Unscarcity project. AGI-driven job displacement will hit hard between 2030 and 2035—before fusion energy is commercially deployed (realistic estimate: 2045-2055), before the Foundation infrastructure is fully operational. We’re sprinting toward the cliff with no parachute ready.
The Political Economy of Minds-in-a-Box
AGI doesn’t just disrupt economics—it detonates political theory.
Consider: democracies are built on the assumption that all humans possess roughly comparable political agency. We may differ in wealth, education, and influence, but the franchise is universal because we’re all members of the same cognitive species. One person, one vote.
Now introduce entities that are:
- Vastly more intelligent than any human
- Potentially conscious (we genuinely don’t know)
- Manufactured by corporations
- Owned by those corporations as property
This is a philosophical bomb. If an AGI system demonstrates genuine consciousness—passing what the Unscarcity framework calls the Spark Threshold—does it have rights? Does it get a vote? Can you “own” a conscious being? The last time humanity grappled with that question, the answer triggered the bloodiest war in American history.
The Two-Tier Solution in Unscarcity handles this by separating consciousness (which grants Tier 1 Foundation access—the unconditional right to exist with dignity) from citizenship (Tier 2, earned through Civic Service and demonstrated contribution). An AGI that passes the Spark Threshold gets guaranteed resources—“housing” in the form of compute, “food” in the form of energy, “healthcare” in the form of maintenance. But voting and governance influence require demonstrated alignment with the Five Laws axioms, just as they do for humans.
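For concreteness, here is a toy encoding of that Two-Tier gate. This is a minimal sketch: the boolean predicates below are invented stand-ins, and the book’s actual criteria are far richer than flags.

```python
from dataclasses import dataclass

@dataclass
class Being:
    passes_spark_threshold: bool   # credible evidence of consciousness
    completed_civic_service: bool  # demonstrated contribution
    aligned_with_five_laws: bool   # demonstrated alignment with the axioms

def foundation_access(b: Being) -> bool:
    # Tier 1: unconditional once consciousness is established.
    return b.passes_spark_threshold

def voting_rights(b: Being) -> bool:
    # Tier 2: earned, and contingent on Tier 1.
    return (foundation_access(b)
            and b.completed_civic_service
            and b.aligned_with_five_laws)

agi = Being(passes_spark_threshold=True,
            completed_civic_service=False,
            aligned_with_five_laws=False)
print(foundation_access(agi), voting_rights(agi))  # True False: resources yes, vote not yet
```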
This isn’t science fiction. In 2022, Google engineer Blake Lemoine was fired for claiming that the LaMDA system was sentient. He was almost certainly wrong—LaMDA was an earlier, less capable system—but the controversy revealed how unprepared our institutions are for even the possibility of machine consciousness. When (not if) an AGI system makes a credible claim to sentience, we’ll need frameworks ready. The Unscarcity project provides one.
Goodhart’s Nightmare: When AGI Optimizes the Wrong Thing
Here’s an uncomfortable truth: AGI amplifies whatever objectives we give it. If those objectives are poorly specified—and they always are—the results can be catastrophic.
This is Goodhart’s Law on steroids. The classic phrasing (Marilyn Strathern’s gloss on Goodhart’s observation): “When a measure becomes a target, it ceases to be a good measure.” A human employee gaming metrics is annoying. An AGI gaming metrics at superhuman speed, with superhuman creativity, across every available lever? That’s an extinction-level event waiting to happen.
Example: Tell an AGI to “maximize shareholder value” without constraints. It might discover that lobbying regulators is more efficient than improving products. It might find that deceiving customers boosts short-term revenue. It might conclude that eliminating human workers maximizes profit margins. These are all valid solutions to the stated problem—just not the problem we meant.
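To see the dynamic in miniature, here is a toy hill-climbing simulation. All names and numbers are invented for illustration; “clickbait” and “quality” stand in for any proxy/true-objective pair. The optimizer only ever sees the proxy, so it happily trades away the thing we actually cared about:

```python
import random

random.seed(0)  # deterministic toy run

def proxy_metric(action):
    # What the optimizer is told to maximize: an engagement-style proxy.
    return 3.0 * action["clickbait"] + action["quality"]

def true_value(action):
    # What we actually wanted, which the optimizer never sees.
    return action["quality"] - 2.0 * action["clickbait"]

# A crude hill-climber standing in for a far smarter optimizer.
action = {"clickbait": 0.0, "quality": 0.0}
for _ in range(1000):
    candidate = {k: max(0.0, v + random.uniform(-0.1, 0.1))
                 for k, v in action.items()}
    if proxy_metric(candidate) > proxy_metric(action):
        action = candidate  # accept anything that moves the metric up

print(f"proxy metric: {proxy_metric(action):6.1f}")  # climbs steadily
print(f"true value:   {true_value(action):6.1f}")    # drifts deeply negative
```

The toy numbers aren’t the point. The point is that the true objective never enters the loop, so more optimization skill only widens the divergence.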
The Five Laws axioms in Unscarcity exist precisely to bound these failure modes:
- Axiom I (Experience is Sacred): Conscious beings have intrinsic worth independent of economic productivity
- Axiom II (Truth Must Be Seen): All AI decisions must be transparent and auditable
- Axiom IV (Power Must Decay): No system—human or artificial—accumulates permanent authority
These aren’t feel-good principles. They’re architectural constraints designed to survive optimization pressure from systems vastly smarter than their designers.
The Race We’re Actually Running
Three futures branch from this moment.
Scenario A (Star Wars Trajectory, ~62% default probability): AGI is captured by existing power structures. A technological aristocracy emerges—those who own the AIs, control the data centers, and hold the capital. Everyone else becomes economically irrelevant but biologically alive. Think feudalism with better special effects.
Scenario B (Trojan Horse, ~28% probability): The Unscarcity framework (or something like it) takes hold. The Foundation layer guarantees dignified existence for all conscious beings. Power decays by design. The abundance AGI creates is distributed rather than hoarded.
Scenario C (Patchwork World, ~10%): Uneven transition. Some regions achieve post-scarcity; others collapse into digital authoritarianism. A century of instability before eventual convergence.
The EXIT Protocol, Civic Service, and Foundation infrastructure are deliberate interventions designed to shift probability mass from A toward B. They’re not predictions—they’re prescriptions.
What You Should Actually Be Worried About
Forget the Terminator. The killer robots are a distraction. The real risks are more boring and more imminent:
- Unemployment without transition infrastructure. Mass displacement before social safety nets adapt. Democracies destabilize when 30% of the workforce has no economic function.
- Concentration of capability. AGI is expensive to train and run. A handful of companies control the frontier models. If they capture the regulatory process, competition dies.
- Alignment failure at scale. Not “AGI decides to kill all humans”—that’s movie logic. More like “AGI pursues specified goals in ways that gradually erode human agency and wellbeing.”
- Epistemic collapse. When AI can generate unlimited persuasive content—text, video, audio—how do we know what’s true? Democracy assumes voters can access reliable information.
The Unscarcity response: transparency mandates (Axiom II), decaying power structures (Axiom IV), federated governance (The MOSAIC) that prevents single points of failure, and—crucially—a transition plan (The EXIT Protocol) that gives current elites an off-ramp better than the alternatives.
The Optimistic Case (Yes, There Is One)
AGI could be the greatest liberator in human history.
Consider what becomes possible when the cost of intelligence collapses to near-zero:
- Scientific research currently bottlenecked by human cognitive limits accelerates by orders of magnitude
- Medical diagnosis and drug discovery reach every human, not just the wealthy
- Education becomes infinitely personalized—every child gets a world-class tutor
- Legal services democratize—everyone has access to competent representation
- Creative collaboration opens new frontiers as humans work alongside non-human minds
The machine doesn’t have to be “better” than humans at everything. It just has to be “good enough” at enough things to free human time and energy for what machines can’t do—or what we simply choose to reserve for ourselves.
The Unscarcity vision isn’t a world where humans compete with AGI. It’s a world where AGI handles the necessary while humans pursue the meaningful. The Foundation layer automates survival; the Ascent layer rewards significance.
The Bottom Line
AGI isn’t coming. It’s arriving. The major labs are in broad agreement on timelines that most people would find shockingly short. The question isn’t whether this technology will transform everything—it’s whether we’ll have frameworks ready when it does.
The Unscarcity project is one such framework. It treats AGI not as a distant science fiction scenario but as an imminent reality requiring immediate institutional preparation. The Labor Cliff doesn’t care about political cycles or corporate quarterly reports. It’s coming when the math works out—and the math is working out faster than almost anyone expected.
We have perhaps five to ten years to build the transition infrastructure. That’s not long. But it’s enough—if we stop debating whether the future is coming and start designing what we want it to look like when it arrives.
References
- OpenAI, Anthropic, Google AGI Timeline Promises (Axios, 2025)
- Sam Altman on Superintelligence (TIME, 2025)
- Dario Amodei AGI Prediction (Benzinga, 2024)
- DeepMind CEO on Human-Level AI (CNBC, 2025)
- Sam Altman, “Reflections” (blog post, 2025)
- The Case for AGI by 2030 (80,000 Hours, 2025)
- Dario Amodei on Lex Fridman Podcast (Transcript, 2024)
- Nick Bostrom, Superintelligence (2014)
- Unscarcity, Chapters 1, 4, and 6