Note: This is a research note supplementing the book Unscarcity, now available for purchase. These notes expand on concepts from the main text. Start here or get the book.
AGI Timeline 2026: What Altman, Hassabis, and Amodei Predict
The Most Expensive Consensus in History
Here’s something that should make you deeply uncomfortable: the CEOs of OpenAI, Anthropic, Google DeepMind, xAI, and Meta — companies that collectively spend over $100 billion a year on AI infrastructure — all agree on roughly the same thing. AGI is coming, and it’s coming fast. Not “in our lifetimes.” Not “eventually.” They’re talking about this decade. Some of them are talking about next year.
Now, you might say: “Of course they’d say that — they’re selling the hype.” Fair point. Nvidia’s stock price is basically an AGI futures contract. But here’s the thing about consensus among competitors: these people agree on almost nothing else. They fight about safety, about regulation, about open-source vs. closed-source, about whether to testify before Congress or ignore it. If Sam Altman says the sky is blue, Elon Musk will call it mauve just out of spite. Yet on AGI timelines, they’ve converged to a window so narrow you could park a Tesla in it.
Let’s break down who’s saying what, why they might be right, why they might be wrong, and why — ultimately — the exact date matters far less than what we’re building in the meantime.
The Predictions: A Who’s Who of “It’s Coming”
Sam Altman (OpenAI)
Altman has been the most aggressively bullish of the bunch. In January 2025, he wrote on his blog that OpenAI is “now confident we know how to build AGI as we have traditionally understood it.” Not “we think we might figure it out” — we know how. He predicted AGI could arrive during Trump’s second term, which means by January 2029 at the latest. By mid-2025, he’d sharpened the estimate further, suggesting that AGI could arrive as early as 2027.
In a September 2024 essay titled “The Intelligence Age,” Altman wrote that superintelligence — not just AGI, but systems beyond human-level — could be achievable “in a few thousand days.” Do the math. A few thousand days from September 2024 puts you at 2027-2031.
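As a rough sanity check on that arithmetic, here is a minimal sketch in Python, taking September 23, 2024 (the essay's publication date) as the start. The 1,000-3,000-day reading of "a few thousand" is an assumption, not Altman's figure:

```python
from datetime import date, timedelta

essay_date = date(2024, 9, 23)  # publication of "The Intelligence Age"

# Reading "a few thousand days" as 1,000-3,000 (an assumption, not Altman's figure)
for days in (1_000, 2_000, 3_000):
    print(f"{days:>5} days -> {essay_date + timedelta(days=days)}")

#  1000 days -> 2027-06-20  (~mid-2027)
#  2000 days -> 2030-03-16  (~early 2030)
#  3000 days -> 2032-12-10  (~late 2032)
```

On that reading, the 2027-2031 window corresponds to the shorter end of "a few thousand," roughly 1,000-2,500 days.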
What makes Altman’s predictions notable isn’t just their aggressiveness — it’s OpenAI’s institutional bet. The company restructured from a nonprofit to a capped-profit entity, raised over $13 billion from Microsoft, and is reportedly planning a $100 billion compute buildout. You don’t spend that kind of money on “maybe.”
Credibility check: Altman has a history of ambitious timelines. He also runs a company whose valuation depends on the market believing AGI is imminent. Discount accordingly — but don’t dismiss entirely.
Dario Amodei (Anthropic)
Anthropic’s CEO has been the most specific. In a November 2024 appearance on the Lex Fridman podcast, Amodei said he expects AI systems to be “broadly better than all humans at almost all things” by 2026 or 2027. That’s not a hedge — that’s a date range you could put in your calendar.
In his essay “Machines of Loving Grace” (October 2024), Amodei laid out what he called “the compressed 21st century” — the idea that AI could compress a hundred years of scientific progress into five to ten years. He wasn’t talking about some distant future. He was describing what happens after AGI arrives in the late 2020s.
What makes Amodei’s prediction especially striking is that Anthropic is simultaneously the most safety-focused of the major labs. This isn’t a man saying “full speed ahead, consequences be damned.” He’s saying “this is coming, it’s dangerous, and we need to prepare” — which is, incidentally, exactly what the Unscarcity framework argues.
Amodei has also warned that AI could eliminate 50% of entry-level white-collar jobs within five years. When the CEO of an AI company tells you his product will destroy half the entry-level job market, you should probably listen.
Credibility check: Amodei left OpenAI over safety disagreements and co-founded Anthropic explicitly to build AI more carefully. He has less incentive to overhype and more credibility on risks. His timeline estimates are worth taking seriously.
Demis Hassabis (Google DeepMind)
Hassabis, a neuroscientist turned AI researcher who shared the 2024 Nobel Prize in Chemistry for AlphaFold's protein structure predictions, has historically been the most measured voice. In early 2025, he said "human-level AI" was five to ten years away, while leaning toward the shorter end: "probably three to five." That lean translates to 2028-2030 as the sweet spot.
At the World Economic Forum in Davos in January 2025, Hassabis described AGI as “probably the most transformative and also potentially dangerous technology in human history.” Coming from the man whose team solved a fifty-year biology problem with AI, that assessment carries weight.
DeepMind’s approach is noteworthy: they’re not just scaling language models. They’re building reasoning systems, scientific discovery tools, and planning algorithms. When Hassabis says AGI is close, he’s looking at a broader portfolio of capabilities than anyone else.
Credibility check: Hassabis has the strongest scientific credentials of any major AI CEO. His lab has produced genuine breakthroughs (AlphaGo, AlphaFold, Gemini). He’s also the least prone to hype cycles, making his convergence with Altman and Amodei all the more significant.
Elon Musk (xAI)
Musk, never one for understatement, predicted in late 2024 that AI would be “smarter than any single human” by the end of 2025, and “smarter than all humans combined” by 2028-2029. He launched xAI in 2023, built the massive Colossus compute cluster in Memphis with 100,000 GPUs, and has been racing to catch up with OpenAI.
Musk has a complicated relationship with AI predictions. He was an early OpenAI board member, left acrimoniously, sued the company, and then started his own. His timelines tend to be more aggressive than his peers’ — but he also has a track record of being directionally right about technology even when his specific dates are optimistic. (SpaceX was “going to Mars by 2024.” They didn’t. But they did revolutionize space launch.)
Credibility check: Musk’s AGI predictions should be taken with the same grain of salt as his Mars timelines and Full Self-Driving promises. Directionally useful; chronologically unreliable.
Mark Zuckerberg (Meta)
Zuckerberg pivoted Meta hard toward AI in 2023-2024, open-sourcing the Llama series of models and declaring that Meta’s long-term goal was to build general intelligence. While he’s been less specific about dates than his peers, he told the Acquired podcast in early 2025 that he expected AI systems to be able to “reason like a mid-level engineer” by 2025-2026 and that the path to AGI was “increasingly clear.”
Meta’s approach is interesting because it’s the most open. Llama models are available to anyone. Zuckerberg has argued that open-sourcing AI is both safer (more eyes on the code) and strategically smart (prevents concentration of power). Whether you agree with that logic, it means Meta’s progress is more visible and verifiable than that of closed labs.
Credibility check: Zuckerberg has burned credibility before (the Metaverse pivot that cost tens of billions). But Meta’s AI team — led by Yann LeCun, a Turing Award winner — is genuinely world-class. The combination of open models and top talent makes Meta’s signals worth tracking.
The Prediction Markets: What the Crowd Thinks
Individual executives have incentives to hype. What do aggregated forecasts say?
Metaculus — a prediction platform with a strong track record — had its community median for “when will AGI arrive?” collapse from the 2040s in 2020 to around 2030 by mid-2025. That’s a fifteen-year shift in five years. Not a gradual update — a paradigm collapse.
Polymarket, the decentralized prediction market, has shown similar trends. Bets on “AGI by 2030” have climbed steadily, with implied probabilities hovering around 30-40% as of early 2026.
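For readers unfamiliar with how those percentages fall out of market prices: in a binary market, a "Yes" share pays $1 if the event occurs, so its price is, to a first approximation (ignoring fees, spread, and the time value of locked-up capital), the crowd's probability estimate. A minimal sketch; the $0.35 price below is illustrative, not an actual Polymarket quote:

```python
def expected_profit(yes_price: float, my_probability: float, shares: int = 100) -> float:
    """Expected profit from buying `shares` binary Yes shares at `yes_price`,
    given your own probability estimate; each share pays $1 if Yes resolves.
    Ignores fees, spread, and the time value of locked-up capital."""
    return shares * my_probability - shares * yes_price

# A Yes share trading at $0.35 implies the crowd puts ~35% on the event.
# If your own estimate is 50%, the market is (by your lights) underpriced:
print(expected_profit(yes_price=0.35, my_probability=0.50))  # 15.0 dollars per 100 shares
```

This is also why a rising price is a meaningful signal: someone has to keep paying more for the "Yes" side than the previous consensus thought it was worth.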
The 2023 Expert Survey on AI Progress (conducted by Katja Grace et al.) surveyed over 2,700 AI researchers. The median respondent put a 50% probability on “human-level machine intelligence” arriving by 2047 — but the distribution had a fat left tail, with 10% of respondents putting it before 2027. More importantly, these estimates have been shifting leftward with each successive survey. The 2016 version of the same survey had the median at 2061.
Here’s the uncomfortable pattern: every time researchers update their AGI estimates, they move closer, not further away.
The Skeptics: Why They Might Be Right
Not everyone is drinking the AGI Kool-Aid, and the skeptics make legitimate points.
Yann LeCun (Meta’s Chief AI Scientist, ironically) has consistently argued that current LLM architectures will not achieve AGI. He believes we’re missing fundamental breakthroughs in world models, planning, and causal reasoning. LLMs, he argues, are sophisticated pattern-matchers, not thinkers. He might be right — but notice that even LeCun doesn’t say AGI is impossible, just that we need new architectures to get there.
Gary Marcus (NYU cognitive scientist) has been the most vocal critic, pointing out that LLMs still fail at basic reasoning, make confident errors, and lack genuine understanding. He argues that we’re in a “scaling is all you need” bubble that will eventually burst.
Ilya Sutskever (former OpenAI chief scientist, now at Safe Superintelligence Inc.) has said that LLMs “generalize dramatically worse than people.” Coming from one of the architects of modern deep learning, that’s a significant data point.
The skeptics’ strongest argument: we’ve been here before. AI has gone through multiple “winters” where hype outpaced reality. The 1960s promised thinking machines by the 1980s. The 1980s expert systems boom crashed in the 1990s. Maybe LLMs are just the latest hype cycle.
The counter-argument? Previous hype cycles lacked the commercial infrastructure to sustain themselves. Today's AI companies have real revenue, real customers, and real economic impact. Anthropic crossed $1 billion in annualized revenue in 2024. OpenAI is north of $4 billion. These aren't research grants — they're business empires. The capital flowing into AI is not speculation on a dream; it's investment in technology that is already generating returns.
The Date Is a Distraction (And Here’s What Actually Matters)
Let’s pause the prediction game and ask a different question: Does it matter whether AGI arrives in 2027 or 2035?
For investors and stock traders? Sure. For the rest of us? Not nearly as much as you’d think.
Here’s why: the Labor Cliff doesn’t wait for some official “AGI achieved” announcement. It’s already happening. Every quarter, the systems get more capable. Every quarter, more jobs become automatable. The displacement curve is smooth, not a step function. Whether AGI is formally “achieved” in 2027 or 2032, the economic disruption is already underway by 2026, and it will intensify regardless of what label we put on the technology.
Look at the employment data. U.S. employers announced nearly 700,000 job cuts in just the first five months of 2025 — an 80% increase year-over-year. Computer science graduates face 6.1% unemployment. Entry-level knowledge work is evaporating. This isn’t because AGI arrived; it’s because narrow AI got good enough at specific tasks to make humans redundant in those roles.
AGI isn’t a switch that flips. It’s a dimmer that’s been turning up for years.
The obsession with pinpointing the AGI date is, frankly, a way of avoiding the harder conversation: What are we building to catch the people who fall off the cliff?
The Book’s Argument: Build the Foundation Before You Need It
This is where the Unscarcity framework parts company with both the techno-optimists (“AGI will solve everything!”) and the doomers (“AGI will destroy everything!”).
The framework’s position is simple and uncomfortable: Whether AGI arrives in 2027 or 2035, we need the Foundation infrastructure operational before it hits — not after.
Here’s the logic:
- AGI accelerates job displacement. Even the optimistic scenario — where AGI creates more jobs than it destroys — requires a transition period. People don't instantly retrain. Industries don't instantly restructure. The gap between "old jobs gone" and "new jobs available" is where the suffering happens.
- The Foundation is not UBI. Universal Basic Income is a band-aid — a check in the mail. The Foundation is infrastructure: guaranteed housing, food, healthcare, energy, and compute delivered as public utilities. It's the difference between giving someone a fish and building a fishery. (See the three scenarios analysis for why UBI alone leads to the "Star Wars" trajectory — bread and circuses for the masses while a technological aristocracy owns everything that matters.)
- Infrastructure takes time. You can't build the Foundation overnight. Free Zones need to be piloted. The EXIT Protocol needs early adopters among existing elites. Civic Service programs need to be designed and tested. All of this requires a decade of preparation, minimum. If AGI arrives in 2027 and we haven't started, we're already too late. If it arrives in 2035 and we started in 2026, we might just make it.
- The prediction market is screaming at us. When Metaculus shifts its AGI median by fifteen years in five years, that's not noise — that's a signal. When every major AI CEO converges on the same window, that's not hype — that's institutional knowledge becoming public. The people building the technology are telling us how fast it's coming. We should probably believe them, or at least hedge against the possibility they're right.
So What Do We Actually Do?
Forget trying to predict whether Altman or Hassabis has the better crystal ball. Focus on the actions that make sense regardless of the exact timeline:
If AGI arrives in 2027 (aggressive scenario): We’re already behind. The Foundation needs emergency deployment. The EXIT Protocol becomes a crisis response, not a planned transition. Political disruption will be severe. This is the scenario where every year of preparation we’ve done pays off a hundredfold.
If AGI arrives in 2032 (moderate scenario): We have time to pilot Free Zones, build political coalitions, and demonstrate that the Foundation model works. This is the scenario where starting now means arriving on time.
If AGI arrives in 2040+ (conservative scenario): We have a generous runway. But narrow AI will still displace millions of jobs in the interim, so the Foundation infrastructure is valuable regardless. We don’t need AGI for the Labor Cliff to hit — we just need AI that’s “good enough.”
In all three scenarios, the answer is the same: start building now.
The AI CEOs have told us what’s coming. The prediction markets have priced it in. The employment data confirms it’s already happening. The only remaining question is whether we build the parachute before we go off the cliff, or after.
I’d prefer before. Wouldn’t you?
Related Articles
- AGI (Artificial General Intelligence) — What AGI means and why it matters for the Labor Cliff
- The 2025-2030 Labor Cliff — Why job displacement arrives before abundance does
- The Potential Timeline — When AI + robots + fusion change everything
- Compute Clusters — The physical infrastructure powering the AGI race
- Employment Statistics 2025 — The numbers behind the displacement
- Three Scenarios Analysis — Star Wars, Trojan Horse, or Patchwork World
The Unscarcity blueprint argues we need new infrastructure for a world where human labor is no longer the bottleneck. Curious? Read the book or start with the preamble.