Note: This is a research note supplementing the book Unscarcity, now available for purchase. These notes expand on concepts from the main text. Start here or get the book.
AGI (Artificial General Intelligence)
The Elephant That’s Already in the Room
Here’s a fun drinking game: take a shot every time an AI company CEO predicts AGI is “just around the corner.” You’d be hammered by lunch. In March 2026, Nvidia’s Jensen Huang went further than anyone, telling Lex Fridman flatly: “I think we’ve achieved AGI.” Sam Altman’s OpenAI just closed a $122 billion funding round at an $852 billion valuation. Dario Amodei’s Anthropic, valued at $380 billion, reports $19 billion in annualized revenue. They’re either all delusional—or we’re standing at the edge of the most significant technological inflection point since fire.
I’m betting on the latter. Not because these executives are prophets (they have obvious incentives to hype their products), but because the underlying trajectory is undeniable. We’ve gone from “AI can’t play chess” to “AI crushes every chess grandmaster” to “AI passes the bar exam, the medical boards, and graduate-level physics problems” in about thirty years. The last jump—from undergraduate reasoning to PhD-level performance—took approximately eighteen months. Anthropic now says Claude writes up to 90% of the code in its own internal development projects. If you don’t find that terrifying and exhilarating in equal measure, you haven’t been paying attention.
What AGI Actually Means (And Why Nobody Agrees)
Let’s get the semantics out of the way because this is where conversations derail. AGI—Artificial General Intelligence—is not the narrow AI that recommends your Netflix shows or autocompletes your emails. Those are “narrow AI” systems: savants that are brilliant at one thing but useless at everything else. Your Spotify algorithm can predict what song you’ll like with eerie accuracy, but it can’t write a poem about that song or explain why music moves you to tears.
AGI means a system that can reason, learn, and act across any domain with the flexibility we associate with human cognition. It’s the difference between a calculator and a mathematician. The calculator crunches numbers you give it; the mathematician invents new kinds of numbers and proves theorems nobody has ever conceived.
The problem? There’s no universally agreed-upon test for “general” intelligence. Alan Turing proposed the famous Turing Test in 1950—fool a human into thinking they’re chatting with another human—but modern LLMs can pass that while still failing at tasks that require common-sense physical reasoning a five-year-old handles effortlessly. As Fortune noted in March 2026, the term AGI “remains stubbornly amorphous” even as companies collectively valued at over a trillion dollars race toward it. Some computer scientists refuse to use the term at all because they say it is “perpetually undefined and unmeasurable.”
The Unscarcity framework doesn’t obsess over precise definitions. What matters is the functional threshold: the point at which AI systems can perform economically meaningful work across enough domains that wage labor stops being the binding constraint on production. Call that AGI, call that “transformative AI,” call it “the thing that ate my job”—the label matters less than the outcome.
The Timeline Wars: Silicon Valley’s Favorite Blood Sport
By early 2026, the AI industry has gone from converging on a timeline to arguing over whether AGI has already arrived. These are competitors who agree on almost nothing, yet the goalposts keep shifting in the same direction:
- Nvidia (Jensen Huang): “I think we’ve achieved AGI” (March 2026, Lex Fridman Podcast). Defined through economic output—an AI that can autonomously build a billion-dollar company.
- OpenAI (Sam Altman): “We are now confident we know how to build AGI as we have traditionally understood it.” Valued at $852 billion after a record $122 billion funding round, OpenAI is on pace for $25 billion in revenue in 2026 and prepping for a Q4 IPO.
- Anthropic (Dario Amodei): Systems “broadly better than all humans at almost all things” by 2026 or 2027. Anthropic raised $30 billion at a $380 billion valuation in February 2026, with $19 billion in annualized revenue.
- Google DeepMind (Demis Hassabis): “Probably three to five years” as of early 2025—which puts his window at 2028-2030.
- xAI (Elon Musk): AI systems could outsmart humans by 2026.
- Sequoia Capital: Published an internal projection in early 2026 predicting AGI arrival this year, defining it simply as “the ability to figure things out.”
Researcher surveys tell a similar story. Since 2020, the median expert forecast has shifted from the 2040s to around 2030. That’s not incremental adjustment—that’s a wholesale collapse in timelines.
Of course, skeptics abound. Yann LeCun of Meta argues that today’s LLMs “lack fundamental understanding of the physical world.” Ilya Sutskever, former chief scientist at OpenAI and now running Safe Superintelligence Inc. (SSI)—a startup valued at $32 billion despite having zero revenue and no product—still maintains that LLMs “generalize dramatically worse than people.” They excel at pattern matching but struggle with genuine understanding. The counter-argument? We don’t need human-like intelligence—we need economically competitive intelligence. A machine that produces 90% of human-quality work at 1% of the cost still triggers the Labor Cliff.
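The arithmetic behind that last sentence is worth making explicit. A minimal sketch, using the "90% quality at 1% cost" figures from the sentence above; the dollar amount is a hypothetical placeholder, not a claim about actual wages:

```python
# Back-of-envelope comparison: cost per quality-adjusted unit of output.
# The $100 human cost is an illustrative placeholder; the 0.9 quality and
# 1% cost ratios come from the "90% of human-quality work at 1% of the
# cost" framing in the text.

human_cost_per_task = 100.0   # hypothetical wage cost per unit of knowledge work
human_quality = 1.0           # baseline quality

ai_cost_per_task = 1.0        # "1% of the cost"
ai_quality = 0.9              # "90% of human-quality work"

# Normalize both to cost per unit of quality-adjusted output
human_cost_per_quality = human_cost_per_task / human_quality  # 100.0
ai_cost_per_quality = ai_cost_per_task / ai_quality           # ~1.11

advantage = human_cost_per_quality / ai_cost_per_quality
print(f"AI is ~{advantage:.0f}x cheaper per quality-adjusted unit")
```

Even after penalizing the machine for its quality gap, the cost advantage is roughly 90x, which is why "not quite as good as a human" does not prevent displacement.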
AGI as the Catalyst: Why This Matters for Unscarcity
The Unscarcity thesis rests on a tripod: The Brain (AI), The Body (humanoid robotics), and The Fuel (fusion energy). AGI is the first leg—and arguably the most transformative.
“But we’ve heard this before.” Every generation has its automation scare. The Luddites smashed looms. Economists predicted mass unemployment from ATMs, then from spreadsheets, then from the internet. Each time, new jobs emerged to replace the old ones. Why should AGI be different?
Here’s why: previous automation replaced specific tasks—weaving cloth, calculating spreadsheets, retrieving information. AGI replaces the capacity to learn new tasks. A factory robot can assemble cars but can’t decide to become a lawyer. An AGI can. When you automate the ability to do any cognitive task, you don’t just eliminate one category of jobs—you eliminate the escape route to new ones. The ATM didn’t learn to be a financial advisor. AGI will.
Here’s the logic chain:
1. AGI collapses the cost of cognitive labor. Lawyers, accountants, programmers, analysts, radiologists: any profession where humans process information and make decisions becomes automatable. Not “in fifty years” but now, iteratively, version by version.
2. Cognitive automation precedes physical automation. You don’t need a humanoid robot to automate legal research or medical diagnosis. AGI handles knowledge work immediately; robots catch up over the following decade as hardware matures.
3. Cheap cognition accelerates everything else. AGI designs better robots. AGI optimizes fusion reactors. AGI discovers new materials, drugs, and manufacturing processes. It’s a meta-technology: a technology that improves all other technologies.
4. The Labor Cliff arrives before abundance does. This is the terrifying gap at the heart of the Unscarcity project. AGI-driven job displacement will hit hard between 2030 and 2035, before fusion energy is commercially deployed (realistic estimate: 2045-2055) and before the Foundation infrastructure is fully operational. We’re sprinting toward the cliff with no parachute ready.
The Political Economy of Minds-in-a-Box
AGI doesn’t just disrupt economics—it detonates political theory.
Consider: democracies are built on the assumption that all humans possess roughly comparable political agency. We may differ in wealth, education, and influence, but the franchise is universal because we’re all members of the same cognitive species. One person, one vote.
Now introduce entities that are:
- Vastly more intelligent than any human
- Potentially conscious (we genuinely don’t know)
- Manufactured by corporations
- Owned by those corporations as property
This is a philosophical bomb. If an AGI system demonstrates genuine consciousness—passing what the Unscarcity framework calls the Spark Threshold—does it have rights? Does it get a vote? Can you “own” a conscious being? The last time humanity grappled with that question, the answer triggered the bloodiest war in American history.
The Two-Tier Solution in Unscarcity handles this by separating consciousness (which grants Tier 1 Foundation access—the unconditional right to exist with dignity) from citizenship (Tier 2, earned through Civic Service and demonstrated contribution). An AGI that passes the Spark Threshold gets guaranteed resources—“housing” in the form of compute, “food” in the form of energy, “healthcare” in the form of maintenance. But voting and governance influence require demonstrated alignment with the Five Laws axioms, just as they do for humans.
This isn’t science fiction. In 2022, Google engineer Blake Lemoine was fired for claiming that the LaMDA system was sentient. He was almost certainly wrong—LaMDA was an earlier, less capable system—but the controversy revealed how unprepared our institutions are for even the possibility of machine consciousness. When (not if) an AGI system makes a credible claim to sentience, we’ll need frameworks ready. The Unscarcity project provides one.
Goodhart’s Nightmare: When AGI Optimizes the Wrong Thing
Here’s an uncomfortable truth: AGI amplifies whatever objectives we give it. If those objectives are poorly specified—and they always are—the results can be catastrophic.
This is Goodhart’s Law on steroids. The original formulation: “When a measure becomes a target, it ceases to be a good measure.” A human employee gaming metrics is annoying. An AGI gaming metrics at superhuman speed, with superhuman creativity, across every available lever? That’s an extinction-level event waiting to happen.
Example: Tell an AGI to “maximize shareholder value” without constraints. It might discover that lobbying regulators is more efficient than improving products. It might find that deceiving customers boosts short-term revenue. It might conclude that eliminating human workers maximizes profit margins. These are all valid solutions to the stated problem—just not the problem we meant.
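That failure mode can be made concrete with a toy optimizer. This is a deliberately simplified sketch, not a model of any real system: the payoff functions and coefficients are invented purely to show how maximizing a proxy metric can drive the true objective negative.

```python
# Toy illustration of Goodhart's Law: an optimizer maximizes a *proxy*
# metric (e.g. reported revenue) that only loosely tracks the true
# objective (e.g. customer value). All functions and numbers are
# hypothetical, chosen so that "gaming" is cheap for the proxy but
# destructive for the real goal.

import itertools

def proxy_metric(honest: int, gaming: int) -> int:
    """The measurable target: gaming effort moves it fastest."""
    return 2 * honest + 5 * gaming

def true_value(honest: int, gaming: int) -> int:
    """What we actually wanted: gaming quietly destroys it."""
    return 3 * honest - 4 * gaming

# Strategies are (honest_effort, gaming_effort) pairs, each in 0..10.
strategies = itertools.product(range(11), range(11))

# The optimizer sees only the proxy, so it maxes out gaming effort too.
best = max(strategies, key=lambda s: proxy_metric(*s))

print("optimizer picks:", best)             # maxes both levers
print("proxy score:", proxy_metric(*best))  # proxy looks great
print("true value:", true_value(*best))     # real objective goes negative
```

The optimizer lands on a strategy that scores highest on the stated measure while actively harming the unstated goal. Scale that dynamic up to superhuman speed and creativity and you get the failure class described above.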
The Five Laws axioms in Unscarcity exist precisely to bound these failure modes:
- Axiom I (Experience is Sacred): Conscious beings have intrinsic worth independent of economic productivity
- Axiom II (Truth Must Be Seen): All AI decisions must be transparent and auditable
- Axiom IV (Power Must Decay): No system—human or artificial—accumulates permanent authority
These aren’t feel-good principles. They’re architectural constraints designed to survive optimization pressure from systems vastly smarter than their designers.
The Race We’re Actually Running
Two futures branch from this moment.
Scenario A (Star Wars Trajectory, ~62% default probability): AGI is captured by existing power structures. A technological aristocracy emerges—those who own the AIs, control the data centers, and hold the capital. Everyone else becomes economically irrelevant but biologically alive. Think feudalism with better special effects.
Scenario B (Trojan Horse, ~28% probability): The Unscarcity framework (or something like it) takes hold. The Foundation layer guarantees dignified existence for all conscious beings. Power decays by design. The abundance AGI creates is distributed rather than hoarded.
Scenario C (Patchwork World, ~10%): Uneven transition. Some regions achieve post-scarcity; others collapse into digital authoritarianism. A century of instability before eventual convergence.
The EXIT Protocol, Civic Service, and Foundation infrastructure are deliberate interventions designed to shift probability mass from A toward B. They’re not predictions—they’re prescriptions.
What You Should Actually Be Worried About
Forget the Terminator. The killer robots are a distraction. The real risks are more boring and more imminent:
- Unemployment without transition infrastructure. Mass displacement before social safety nets adapt. Democracies destabilize when 30% of the workforce has no economic function.
- Concentration of capability. AGI is expensive to train and run. OpenAI ($852B valuation), Anthropic ($380B), and Google collectively dominate frontier models. In Q1 2026 alone, these firms and their peers raised nearly $300 billion in venture capital. If they capture the regulatory process, competition dies.
- Alignment failure at scale. Not “AGI decides to kill all humans”; that’s movie logic. More like “AGI pursues specified goals in ways that gradually erode human agency and wellbeing.”
- Epistemic collapse. When AI can generate unlimited persuasive content (text, video, audio), how do we know what’s true? Democracy assumes voters can access reliable information.
The Unscarcity response: transparency mandates (Axiom II), decaying power structures (Axiom IV), federated governance (The MOSAIC) that prevents single points of failure, and a transition plan (The EXIT Protocol) that gives current elites an off-ramp better than the alternatives.
The Optimistic Case (Yes, There Is One)
AGI could be the greatest liberator in human history.
Consider what becomes possible when the cost of intelligence collapses to near-zero:
- Scientific research currently bottlenecked by human cognitive limits accelerates by orders of magnitude
- Medical diagnosis and drug discovery reach every human, not just the wealthy
- Education becomes infinitely personalized—every child gets a world-class tutor
- Legal services democratize—everyone has access to competent representation
- Creative collaboration opens new frontiers as humans work alongside non-human minds
The machine doesn’t have to be “better” than humans at everything. It just has to be “good enough” at enough things to free human time and energy for what machines can’t do—or what we simply choose to reserve for ourselves.
The Unscarcity vision isn’t a world where humans compete with AGI. It’s a world where AGI handles the necessary while humans pursue the meaningful. The Foundation layer automates survival; the Ascent layer rewards significance.
The Bottom Line
AGI isn’t coming. Depending on who you ask, it’s either arriving or already here. Nvidia’s CEO says we’ve crossed the line. OpenAI and Anthropic are preparing for IPOs that would make them among the most valuable companies on Earth. The question isn’t whether this technology will transform everything—it’s whether we’ll have frameworks ready when it does.
The Unscarcity project is one such framework. It treats AGI not as a distant science fiction scenario but as an imminent reality requiring immediate institutional preparation. The Labor Cliff doesn’t care about political cycles or corporate quarterly reports. It’s coming when the math works out—and the math is working out faster than almost anyone expected.
We have perhaps three to seven years to build the transition infrastructure. That’s not long. But it’s enough—if we stop debating whether the future is coming and start designing what we want it to look like when it arrives.
References
- Jensen Huang Says “We’ve Achieved AGI” — But No One Agrees What That Means (Fortune, March 2026)
- OpenAI Closes Record $122B Round at $852B Valuation (CNBC, March 2026)
- An Inside Look at OpenAI and Anthropic’s Finances Ahead of Their IPOs (WSJ, April 2026)
- The AI Spending Flip — Anthropic $19B ARR, OpenAI $25B Pace (Axios, March 2026)
- Coatue Projected $1.995 Trillion Valuation for Anthropic by 2030 (Newcomer, March 2026)
- SSI Hits $32B Valuation with Zero Revenue (Calcalist, 2025)
- OpenAI, Anthropic, Google AGI Timeline Promises (Axios, 2025)
- Sam Altman on Superintelligence (TIME, 2025)
- Dario Amodei AGI Prediction (Benzinga, 2024)
- The Case for AGI by 2030 (80,000 Hours)
- Nick Bostrom, Superintelligence (2014)
- Unscarcity, Chapters 1, 4, and 6