
Unscarcity Research

"The 100x Future: When Exponential Growth Stops Being a Metaphor"

"A deep dive into the history of 100x technological leaps—from Moore's Law to the AI explosion—and why the next decade will make the last fifty years look like a warm-up act. We're not just getting faster; we're rewriting the rules of what's possible."


Note: This is a research note supplementing the book Unscarcity, now available for purchase. These notes expand on concepts from the main text. Start here or get the book.

The 100x Future: When Exponential Growth Stops Being a Metaphor

Here’s a thought experiment: imagine explaining your smartphone to someone from 1995. Not the interface—the specs. “Yes, it has more computing power than every computer that existed when you were born, combined. It fits in my pocket. I mostly use it to argue with strangers and look at pictures of cats.”

We throw around “exponential growth” like it’s a TED Talk buzzword, but our linear monkey brains genuinely cannot comprehend what it means. Human intuition evolved to track prey across savannas, not to understand why a technology that doubled in power every two years would be a billion times more powerful after sixty years. So we nod along when someone says “AI is advancing exponentially” while picturing something like a steep hill, not the vertical cliff it actually is.
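If you want to feel that cliff rather than nod at it, the arithmetic is short enough to run yourself. A minimal sketch of the doubling math, using nothing but the two-year doubling period quoted above:

```python
# Doubling every two years: the cumulative multiplier after N years
def multiplier(years: float, doubling_period_years: float = 2.0) -> float:
    return 2 ** (years / doubling_period_years)

for years in (10, 20, 40, 60):
    print(f"{years:>2} years -> {multiplier(years):>15,.0f}x")
# 10 years ->              32x
# 60 years ->   1,073,741,824x  (the "billion times more powerful" above)
```

The first few doublings look like a hill; the last few are the cliff.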

To understand where we’re going, we need to look backward at the 100x moments—the leaps that didn’t just improve technology but rewrote what was physically possible. Then we need to understand why the next 100x is different: faster, broader, and carrying the weight of civilization on its shoulders.

A Brief History of “That’s Impossible, Until It Wasn’t”

Moore’s Law: The Original Exponential

In 1965, Gordon Moore observed that the number of transistors on a chip was doubling roughly every year; a decade later he revised the pace to every two years. He was describing what seemed like a temporary trend. It held for fifty years.

The math of absurdity:

  • 1971: Intel 4004 had 2,300 transistors
  • 1989: Intel 486 had 1.2 million (500x increase)
  • 2010: Intel Core i7 had 2.3 billion (1 million times the original)
  • 2023: Apple M2 Ultra has 134 billion transistors

The Apollo 11 guidance computer had 74 KB of memory and ran at 0.043 MHz. Your smartphone has 128 GB (1.7 million times more) and runs at 3 GHz (70,000 times faster). We put a man on the moon with hardware that couldn’t run Tetris, and we use hardware that could run the entire NASA mission control to watch YouTube shorts.
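As a sanity check, here are the ratios implied by the figures just quoted, computed from those numbers and nothing else (decimal units, as in the text):

```python
# Ratios implied by the figures quoted above (decimal units, as in the text)
transistors_4004_1971 = 2_300
transistors_m2_ultra_2023 = 134e9

agc_memory_kb, phone_memory_kb = 74, 128e6        # 74 KB vs. 128 GB
agc_clock_mhz, phone_clock_mhz = 0.043, 3_000     # 0.043 MHz vs. 3 GHz

print(f"Transistors, 1971 -> 2023: {transistors_m2_ultra_2023 / transistors_4004_1971:,.0f}x")
print(f"Memory, Apollo -> phone:   {phone_memory_kb / agc_memory_kb:,.0f}x")
print(f"Clock, Apollo -> phone:    {phone_clock_mhz / agc_clock_mhz:,.0f}x")
# Roughly 58 million, 1.7 million, and 70,000 times respectively
```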

This wasn’t just “computers got better.” It was the quiet rewriting of economic physics. Tasks that required a building now required a desk. Tasks that required a desk now required a phone. Tasks that required a phone now require… well, we’re about to find out.

Bandwidth: From Library to Nervous System

Internet speed is another 100x story we’ve sleepwalked through (a quick back-of-the-envelope check follows the list).

  • 1996 (56k modem): Downloading a single MP3 took 10 minutes. A movie would take days.
  • 2000 (DSL/Cable): 100x faster. Suddenly music streaming became possible.
  • 2010 (4G): Another 100x. Video streaming went mobile.
  • 2020 (5G): Up to 100x theoretical improvement over 4G, with latency dropping from ~50ms to ~1ms.
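Here’s that back-of-the-envelope check. The speeds and file sizes below are rough, era-typical assumptions on my part (a 5 MB MP3, a 1.5 GB movie), not figures from the text, so treat the output as an illustration rather than a measurement:

```python
# Rough transfer times per era; speeds and file sizes are ballpark assumptions
speeds_mbps = {
    "56k modem (1996)": 0.056,
    "DSL/Cable (2000)": 5,
    "4G (2010)": 100,
    "5G (2020)": 1_000,
}
files_mb = {"MP3 (5 MB)": 5, "movie (1.5 GB)": 1_500}

for era, mbps in speeds_mbps.items():
    times = {name: size * 8 / mbps for name, size in files_mb.items()}
    summary = ", ".join(f"{name}: {seconds / 60:,.1f} min" for name, seconds in times.items())
    print(f"{era:17} {summary}")
# On a 56k modem the MP3 takes ~12 minutes and the movie ~2.5 days of uninterrupted
# connection, which is why "days" was the honest answer in 1996.
```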

The 4G-to-5G jump illustrates why “100x” isn’t just “more of the same.” At 50ms latency, remote surgery is a liability lawsuit waiting to happen. At 1ms, it’s viable. The internet went from a library (static pages you retrieve) to a nervous system (real-time sensation and response). That’s not evolution; that’s metamorphosis.

Koomey’s Law: The Silent Enabler

Here’s a law you’ve never heard of that shaped your entire life: the number of computations per joule of energy has doubled every 1.57 years since the 1940s.

Why does this matter? Because without it, your laptop would need a car battery and your phone would be tethered to a wall outlet like a landline. Koomey’s Law is why mobile computing exists. It’s why wearables exist. It’s why the devices that consume our lives don’t literally consume our electrical grid (yet—we’ll get to that).

Every 100x leap in efficiency meant devices could do more while burning less. The first computers used vacuum tubes and drank power like data centers drink water today. Now a chip the size of your fingernail outperforms them while running on a watch battery.
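To see what a 1.57-year doubling period compounds to, here’s the same arithmetic applied to Koomey’s rate; a minimal sketch using only the figure quoted above:

```python
# Computations per joule doubling every 1.57 years: cumulative efficiency gain
def efficiency_gain(years: float, doubling_period_years: float = 1.57) -> float:
    return 2 ** (years / doubling_period_years)

for span in (10, 20, 40, 60):
    print(f"{span:>2} years -> {efficiency_gain(span):>18,.0f}x more work per joule")
# Roughly 80x per decade; over six decades the same joule buys on the order of
# 10^11 times more computation, which is why your phone isn't towing a car battery.
```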

The Current Leap: Welcome to the Vertical Portion of the Curve

Now we come to the part where the curve goes vertical. The AI explosion isn’t following Moore’s Law—it’s making Moore’s Law look leisurely.

The Brain Gets Bigger (LLMs)

The evolution of large language models reads like the timeline of a biological species, compressed from millions of years into a coffee break:

| Model | Year | Parameters | Capability |
|-------|------|------------|------------|
| GPT-1 | 2018 | 117 million | A curiosity. Could barely complete sentences. |
| GPT-2 | 2019 | 1.5 billion | Coherent paragraphs. OpenAI worried it was “too dangerous to release.” |
| GPT-3 | 2020 | 175 billion | 100x leap. Poetry, code, essays. The “holy shit” moment. |
| GPT-4 | 2023 | ~1.76 trillion | Reasoning, humor, nuance. Passed the bar exam. |
| GPT-5 | 2025 | Undisclosed | 94.6% on AIME math, 74.9% on SWE-bench coding, 45% fewer factual errors. |

That last line isn’t speculation—GPT-5 launched August 7, 2025, and the benchmarks are real. On the AIME (American Invitational Mathematics Examination), designed to challenge the top 5% of high school math students, GPT-5 scores 94.6% without using external tools. On SWE-bench Verified—a test of real-world coding ability where the AI must understand codebases, identify bugs, and write fixes—it hits 74.9%.

And by December 2025, we already have GPT-5.2. The iteration speed itself is accelerating.

The significance: We went from “AI can barely string a sentence together” to “AI outperforms most humans on professional-level reasoning tasks” in seven years. That’s not a trend line; that’s a phase transition.
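Put differently, the table above implies a doubling time far shorter than Moore’s two years. A quick sketch of that implied rate (GPT-4’s parameter count is an unconfirmed estimate, so treat the result as order-of-magnitude):

```python
import math

# Parameter counts from the table above (GPT-4's figure is an unconfirmed estimate)
params = {2018: 117e6, 2023: 1.76e12}

years = 2023 - 2018
growth = params[2023] / params[2018]
doubling_time_years = years / math.log2(growth)

print(f"Growth 2018 -> 2023:   {growth:,.0f}x")
print(f"Implied doubling time: {doubling_time_years:.2f} years (about {doubling_time_years * 12:.0f} months)")
# ~15,000x in five years, i.e. parameters doubling roughly every four months,
# versus the two years of classic Moore's Law.
```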

The Eyes Get Sharper (Image Generation)

Remember when AI image generation produced nightmare fuel—faces with too many teeth, hands with seven fingers, the general aesthetic of “uncanny valley by way of Salvador Dalí having a stroke”?

  • 2021 (DALL-E 1): Low-res blurs. A “llama” looked like a beige accident.
  • 2022 (DALL-E 2/Midjourney v3): Artistic but wrong. Faces that almost worked. Text that never did.
  • 2023 (Midjourney v5/DALL-E 3): Photorealism that fooled experts. Text started working.
  • 2024-2025: Video generation (Sora, Runway Gen-3, Kling). Full cinematic scenes from text prompts.

Four years. From “abstract smudge” to “photographic evidence that never happened.” We’re now generating videos indistinguishable from footage shot on physical cameras. The philosophical and legal implications are still catching up to the technological reality.

The Body Gets Faster (Inference Hardware)

Software miracles require hardware miracles. Enter Groq.

The Groq LPU (Language Processing Unit) processes language differently from traditional GPUs. While Nvidia’s chips excel at training models, Groq’s architecture is purpose-built for inference—running trained models at speed. The results:

  • Llama 3 70B: 284 tokens/second on Groq vs. ~25-50 tokens/second on typical GPU clouds (6-11x faster)
  • Llama 3 8B: 877 tokens/second
  • Time to First Token: 0.22 seconds, consistently (deterministic architecture means no variability)

Why does speed matter? Because the difference between a 2-second response and a 0.2-second response isn’t just a 10x improvement; it’s the difference between “AI assistant” and “extension of thought.” When the AI responds faster than you can finish your sentence, the UX becomes telepathic.
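A rough sketch of the response-time arithmetic behind that claim. The 150-token reply length and the 0.8 s time-to-first-token for a generic GPU endpoint are illustrative assumptions; the throughput figures come from the list above:

```python
# End-to-end response time: time-to-first-token plus generation at a given throughput
def response_seconds(tokens: int, tokens_per_sec: float, ttft_sec: float) -> float:
    return ttft_sec + tokens / tokens_per_sec

reply_tokens = 150   # assumed length of a short assistant reply, for illustration

typical_gpu = response_seconds(reply_tokens, tokens_per_sec=35, ttft_sec=0.8)   # ttft assumed
groq_lpu = response_seconds(reply_tokens, tokens_per_sec=284, ttft_sec=0.22)    # figures from above

print(f"Typical GPU cloud: {typical_gpu:.2f} s")   # ~5.1 s
print(f"Groq LPU:          {groq_lpu:.2f} s")      # ~0.75 s
# One feels like waiting for an answer; the other lands before you finish
# rereading your own question.
```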

The Wall: When Physics Fights Back

Here’s where optimistic futurism meets thermodynamics. Exponential curves don’t run forever—they hit walls. And the AI explosion is currently sprinting toward several of them.

The Energy Wall: AI’s Insatiable Hunger

The International Energy Agency’s 2025 report reads like a warning shot:

  • Current state: Data centers consumed 415 TWh globally in 2024—1.5% of world electricity
  • US specifically: 183 TWh in 2024 (4% of national consumption, equivalent to Pakistan’s total demand)
  • 2030 projection: 945 TWh globally (doubling), with AI-optimized centers quadrupling their share
  • The math that breaks things: By 2030, the US will consume more electricity processing data than manufacturing all energy-intensive goods combined—including aluminum

AI isn’t just energy-hungry; it’s energy-ravenous. A single GPT-4 query uses 10-50x more electricity than a Google search. As models get larger and inference gets more common, the curves don’t add—they multiply.
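To see how that multiplication plays out, here’s an intentionally crude sketch. The ~0.3 Wh per Google search baseline and the one-billion-queries-per-day volume are outside assumptions, not figures from this article; only the 10-50x multiplier comes from the text:

```python
# Intentionally crude scale-up of per-query energy. The ~0.3 Wh Google-search
# baseline and the 1 billion queries/day volume are assumptions, not article figures.
search_wh = 0.3
queries_per_day = 1e9

for label, multiplier in (("low (10x)", 10), ("high (50x)", 50)):
    query_wh = multiplier * search_wh
    twh_per_year = query_wh * queries_per_day * 365 / 1e12
    print(f"{label}: {query_wh:.0f} Wh/query -> {twh_per_year:.1f} TWh/year")
# Roughly 1-5 TWh/year from chat queries alone, before training runs, video
# generation, or agents that make dozens of model calls per task.
```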

Goldman Sachs estimates $720 billion in grid upgrades needed by 2030 just to keep the lights on. About 20% of planned data center projects face delays because the power grid literally cannot supply them.

This isn’t a hypothetical. Microsoft is paying to restart a mothballed nuclear plant, Amazon bought a data-center campus wired straight into one, and Google is signing deals for geothermal. The AI industry isn’t pivoting to clean energy out of environmental virtue; it’s doing it because it has run out of dirty energy to buy.

The fusion imperative: This is why the timeline for commercial fusion suddenly matters enormously. When Commonwealth Fusion Systems, Helion, TAE Technologies, and others talk about “fusion by 2035-2040,” they’re not describing a nice-to-have. They’re describing the only way to power a fully realized AI infrastructure without rebuilding the entire electrical grid every five years.

The Brain (AI) demands the Fuel (Fusion). The technologies aren’t parallel developments—they’re co-dependent.

The Material Wall: Scarcity at the Foundation

Every AI chip contains rare earth elements with names that sound like rejected Harry Potter spells: Neodymium, Dysprosium, Yttrium, Terbium. Plus critical minerals like Lithium, Cobalt, and Gallium.

The problem: these materials aren’t evenly distributed.

  • China controls ~60% of rare earth mining and ~90% of processing
  • The Democratic Republic of Congo supplies 70% of global cobalt
  • Supply chains are geopolitically fragile and environmentally catastrophic

An iPhone contains about 50 different elements. A single EV battery requires about 8 kg of lithium and 6 kg of cobalt. An AI data center requires… well, multiply by several million and keep going.
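Taking the per-battery figures above at face value, the “multiply by several million” step looks like this; the ten-million-vehicle fleet is an assumed round number, purely for scale:

```python
# Scaling the quoted per-battery figures (8 kg lithium, 6 kg cobalt) to a fleet
lithium_kg_per_ev = 8
cobalt_kg_per_ev = 6
fleet_size = 10_000_000   # assumed: ten million EVs, a round number for illustration

lithium_tonnes = lithium_kg_per_ev * fleet_size / 1_000
cobalt_tonnes = cobalt_kg_per_ev * fleet_size / 1_000

print(f"Lithium: {lithium_tonnes:,.0f} tonnes")   # 80,000 tonnes
print(f"Cobalt:  {cobalt_tonnes:,.0f} tonnes")    # 60,000 tonnes
# That's one notional year of EVs, before a single data center, robot, or wind
# turbine gets built from the same supply chains.
```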

Two solutions, both requiring 100x leaps:

  1. Space Mining: The asteroid belt contains an estimated $700 quintillion in mineral wealth. A single asteroid (16 Psyche) holds ~$10 quintillion in metals. This sounds like science fiction until you realize the economics: if you can get extraction costs below terrestrial mining, the game changes entirely.

  2. Perfect Recycling: Every phone, battery, and chip contains the elements we’re running out of. With robotic disassembly and advanced separation, we could close the loop. We’d need automation sophisticated enough to economically recover every atom of gold and dysprosium from e-waste. Which brings us to…

The Labor Wall: Who Builds the Builders?

Here’s the paradox: we need millions of robots to build the infrastructure for abundant AI. We need abundant AI to design and coordinate those millions of robots. Each technology requires the other to scale.

This is why the 100x future isn’t one breakthrough—it’s three interlocking ones:

  • The Brain (AI): Intelligence without biological limits
  • The Body (Robotics): Labor without human exhaustion
  • The Fuel (Fusion): Energy without scarcity

Miss any one, and the others stall. Hit all three, and the compounding effects become civilizational.

The Timeline Compression Problem

The most unsettling aspect of the current moment isn’t the technology—it’s the speed.

Previous 100x leaps took decades to propagate. Moore’s Law gave us 50 years to adjust. The internet took 30 years to become ubiquitous. Society had time to build institutions, retrain workers, develop norms.

The AI 100x is happening in years, not decades. GPT-3 to GPT-5 was five years. The jump from “AI can’t draw hands” to “AI generates photorealistic video” was three years. The shift from “AI assists programmers” to “46% of production code is AI-generated” took two years.

Social institutions move at the speed of legislation—years to decades. Technology moves at the speed of deployment—months to years. Economic disruption moves at the speed of quarterly earnings—weeks to months.

We’re trying to navigate a civilization-scale transition with Bronze Age maps and Steam Age institutions. The roads we’re traveling didn’t exist when we started the journey.

Conclusion: The Fork in the Road

We stand at a genuine inflection point. The 100x improvements are real. The timeline compression is real. The infrastructure demands are real.

Two paths diverge:

Path One: We fumble the transition. The Brain scales but the Fuel doesn’t, so AI remains energy-constrained and geographically concentrated. The Body doesn’t scale, so the benefits accrue only to those who already own the infrastructure. We get what Unscarcity calls the “Star Wars Trajectory”—incredible technology captured by existing power structures, creating a new kind of scarcity: access to abundance itself.

Path Two: We treat the Three Engines as co-dependent and invest accordingly. Fusion gets the Manhattan Project urgency it deserves. Robotics proliferates to the point where physical labor truly becomes optional. AI governance ensures the Brain serves all consciousness, not just its owners. We get the “Republican Star Trek” outcome—federated abundance, where technology liberates rather than stratifies.

The difference isn’t technical. The technical trajectories are largely determined by physics and investment. The difference is political and social: who decides how the 100x is deployed, and for whom?

We’ve never navigated a transition this fast. We’ve never had tools this powerful. And we’ve never faced a choice this consequential.

The next decade isn’t about whether the 100x happens. It’s about whether we’re ready for it—and whether “we” includes everyone.


