Note: This is a research note supplementing the book Unscarcity. These notes expand on concepts from the main text.
The Silicon Mind: What It Takes to Upload a Brain
Summary: Can you pour a human mind into silicon? The question sits at the intersection of neuroscience, computing, and philosophy—and the answer isn’t “impossible” or “imminent.” It’s “hard in specific, quantifiable ways.” This article dissects the real engineering challenges: what current AI actually tells us about brain-scale computation, why your brain runs on less power than a light bulb while brain-scale silicon would need a power plant, and the three brutal bottlenecks standing between you and digital immortality. Spoiler: the hardest part isn’t the math.
The Great Parameter Illusion
When GPT-3 landed with 175 billion parameters, tech journalists couldn’t resist the comparison: “We’re approaching brain-scale AI!” When open-source models hit 120 billion parameters, the hype intensified—after all, the human brain has “only” 86 billion neurons. We were closing the gap!
Here’s the problem: a parameter is not a neuron.
That’s like saying your car has 4 wheels and a horse has 4 legs, therefore your car is one horse. The math is correct; the conclusion is nonsense.
The closest biological analog to a model parameter is a synapse—a connection between neurons where information actually lives. Neurons are the processors; synapses are the memory banks. And the human brain doesn’t have 86 billion synapses. It has roughly 100 trillion.
The Real Numbers
| System | Count | Order of Magnitude |
|---|---|---|
| Human neurons | ~86 billion | 8.6 × 10¹⁰ |
| Human synapses | ~100 trillion | 1 × 10¹⁴ |
| GPT-5 (estimated) | ~3-5 trillion params | ~4 × 10¹² |
| Open-source 120B model | 120 billion params | 1.2 × 10¹¹ |
Against the neuron count, a 120-billion parameter model looks competitive. Against the synapse count—where the actual computation happens—it’s roughly 1,000× smaller.
To reach synapse-equivalence, you’d need approximately 100 trillion parameters. That’s 25-30× larger than GPT-5’s estimated size.
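Both ratios are one-line arithmetic. A quick sketch using the estimates from the table above (the GPT-5 figure is itself an estimate, so treat the outputs as order-of-magnitude):

```python
# Order-of-magnitude scale comparison, using the table's estimates.
neurons = 86e9              # human neurons
synapses = 1e14             # human synapses (~100 trillion)
gpt5_params = 3.5e12        # midpoint of the 3-5 trillion estimate
open_params = 120e9         # open-source 120B model

print(f"120B model vs. synapses: {synapses / open_params:,.0f}x smaller")  # ~833x
print(f"Synapse-equivalence vs. GPT-5: {synapses / gpt5_params:.0f}x")     # ~29x
```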
And here’s the kicker: synapse-equivalence might not mean cognitive equivalence anyway. The brain isn’t just a big transformer with biological neurons. It’s a different kind of computer running a different kind of architecture. More on that uncomfortable truth shortly.
Storage: Your Brain is a Terabyte Monster
How much “memory” does each system actually hold?
AI Model Storage
A 120-billion parameter model requires storage based on numerical precision:
| Precision | Storage Required |
|---|---|
| FP32 (4 bytes/param) | ~480 GB |
| FP16/BF16 (2 bytes/param) | ~240 GB |
| INT8 (1 byte/param) | ~120 GB |
With quantization, a 120B model fits on a single high-end GPU—or on a few consumer GPUs. GPT-5’s estimated 3-5 trillion parameters would occupy roughly 6-20 terabytes depending on precision. Large, but manageable.
Brain Storage: The Salk Institute Surprise
Neuroscientists at the Salk Institute discovered something that changed the math: hippocampal synapses can encode approximately 4.7 bits per synapse—far more precision than the scientific consensus assumed.
Scale that naively:
- 10¹⁴ synapses × 4.7 bits ≈ ~59 terabytes
Other estimates, accounting for different brain regions and encoding mechanisms, place total brain information storage in the 180-320 terabyte range.
Bottom line: Frontier AI models occupy terabytes. The brain’s information capacity is measured in tens to hundreds of terabytes. We’re looking at a 10× to 100× storage gap.
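Both sides of the comparison are short arithmetic. A minimal sketch (the only inputs are the precision choices and the Salk Institute’s 4.7 bits/synapse figure):

```python
# Model storage: parameter count x bytes per parameter.
params = 120e9
for precision, bytes_per_param in [("FP32", 4), ("FP16", 2), ("INT8", 1)]:
    print(f"{precision}: {params * bytes_per_param / 1e9:,.0f} GB")

# Brain storage, naive scaling: synapse count x bits per synapse.
synapses = 1e14
bits_per_synapse = 4.7                      # Salk hippocampal estimate
print(f"Brain: ~{synapses * bits_per_synapse / 8 / 1e12:.0f} TB")  # ~59 TB
```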
But storage is the easy problem. You can always add more hard drives. The real challenges are elsewhere.
Where the Comparison Actually Holds
Before we catalog the differences, let’s acknowledge what AI models and brains genuinely share. These similarities aren’t trivial—they’re why the comparison exists at all.
Distributed Representation
Neither system stores “facts” as single addressable records. In both cases, knowledge emerges from patterns across millions or billions of weak, interconnected units. Damage any single connection, and performance degrades gracefully rather than catastrophically.
Ask GPT-5 about Napoleon, and no single parameter contains “Napoleon was short” (he wasn’t, actually—average height for his era). The concept lives distributed across the network. Your brain works identically. There’s no “grandmother cell” that fires only when you see Grandma.
Learning as Weight Change
Brains adapt by modifying synaptic strength—how efficiently neurons communicate. Models adapt by changing numerical weights during training. The mechanism differs (backpropagation vs. spike-timing-dependent plasticity), but the principle is identical: learning means adjusting connection strengths.
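A toy side-by-side of the two update rules, illustrative only (the learning rates and the exponential STDP window below are textbook-style assumptions, not a model of any real system):

```python
import math

w = 0.5                     # one connection strength

# Gradient-style update (AI): nudge the weight against the loss gradient.
grad, lr = 0.2, 0.01        # assumed gradient and learning rate
w -= lr * grad

# STDP-style update (brain): strengthen the synapse if the presynaptic
# neuron fired just before the postsynaptic one; weaken if just after.
dt = 5.0                            # ms; post fired 5 ms after pre
a_plus, a_minus, tau = 0.01, 0.012, 20.0
w += a_plus * math.exp(-dt / tau) if dt > 0 else -a_minus * math.exp(dt / tau)

print(round(w, 4))          # 0.5058 -- same principle, different rule
```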
Statistical Generalization
Both systems generalize from examples rather than following explicit rules. A child learns “dog” from seeing many dogs; a language model learns “poetry” from reading many poems. Neither stores a formal definition—the concept emerges from statistical patterns across examples.
Where the Comparison Breaks Down (And Why It Matters for Uploads)
The similarities can obscure fundamental differences that matter deeply for mind uploading.
Architecture and Dynamics: Loops vs. Lines
Brains are recurrent, event-driven, and continuously plastic. Neurons loop back on themselves endlessly, creating feedback cycles that persist and transform over time. Learning happens in real-time, constantly, during every moment of experience.
Transformers are feedforward at inference. Input enters, flows through layers, output emerges. Autoregressive generation does feed each output token back in as the next input, but the network keeps no persistent internal state, and the weights freeze after training—no real-time adaptation during deployment.
This isn’t a minor technical detail. The brain’s recurrent dynamics may be essential to consciousness—to the sense of temporal flow, to the integration of experience across time. A feedforward architecture, no matter how large, might be missing something fundamental.
It’s like comparing a river to a photograph of a river. The photograph can be arbitrarily detailed, but it’s not flowing.
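The difference is easy to see in a toy simulation. The weights below are made up; the point is the persistent state, not the numbers:

```python
import math

pulse = [1.0, 0.0, 0.0, 0.0]        # one input spike, then silence

# Feedforward: each output depends only on the current input.
ff = [round(math.tanh(0.8 * x), 3) for x in pulse]

# Recurrent: hidden state feeds back, so the pulse echoes through time.
h, rec = 0.0, []
for x in pulse:
    h = math.tanh(0.8 * x + 0.6 * h)    # 0.6 = assumed feedback weight
    rec.append(round(h, 3))

print(ff)   # [0.664, 0.0, 0.0, 0.0] -- response dies with the input
print(rec)  # [0.664, 0.379, 0.223, 0.133] -- response persists and decays
```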
Computation Medium: Weighted Sums vs. Chemical Soup
Brains compute with spikes, chemistry, and timing. Neurons fire action potentials; neurotransmitters diffuse across synapses; neuromodulators like dopamine and serotonin globally alter processing characteristics. Dendrites perform local computations before signals even reach the cell body. Glial cells—once dismissed as mere scaffolding—participate actively in information processing.
This isn’t “just” weighted sums and nonlinear activations. It’s a multi-scale, multi-mechanism computational soup that we don’t fully understand.
Current AI models are like calculators made of transistors. Brains are like calculators made of weather—where the humidity, temperature, and barometric pressure all affect the output, and the calculator redesigns itself while you’re using it.
Embodiment: Silicon Doesn’t Feel Gravity
A brain develops through continuous multimodal interaction with the physical world. Vision, sound, touch, proprioception, hunger, pain, emotion, social feedback—all flowing simultaneously, all shaping the neural substrate in real-time from birth (and before).
Language models train on static text corpora—plus images and audio, increasingly—followed by human feedback shaping. This produces impressive capabilities, but the learning environment is radically different.
An LLM has never felt gravity. Never been startled by a loud noise. Never experienced the social pressure of disappointing a parent. Never felt hunger, fear, or the satisfaction of scratching an itch.
Does this matter for consciousness? We don’t know. But it almost certainly matters for creating a faithful copy of a specific human mind.
Memory: Permanent vs. Ephemeral
Brains maintain multiple distinct memory systems: working memory (temporary, limited), episodic memory (autobiographical events), semantic memory (general knowledge), procedural memory (skills and habits). These systems interact but operate by different rules. Crucially, the brain can write new durable memories continuously throughout life.
Transformers have a context window—a fixed-size working memory that vanishes when the conversation ends. GPT-5’s impressive 272k token context window is still temporary. Longer-term “memory” requires external systems: retrieval-augmented generation, vector databases, tool use. The model itself, at inference time, cannot form new permanent memories.
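In practice, “external memory” means storing embeddings outside the model and retrieving them by similarity. A minimal sketch with toy 3-dimensional vectors (real systems use learned embeddings with hundreds of dimensions and a proper vector database):

```python
import numpy as np

# Toy "vector database": (embedding, memory text) pairs.
memories = [
    (np.array([0.9, 0.1, 0.0]), "User prefers metric units"),
    (np.array([0.0, 0.8, 0.2]), "User's bridge project uses steel trusses"),
]

def recall(query, store):
    """Return the stored text whose embedding best matches the query."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(store, key=lambda m: cos(query, m[0]))[1]

print(recall(np.array([0.1, 0.9, 0.1]), memories))
# -> "User's bridge project uses steel trusses"
```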
Uploading a mind means capturing not just what a person knows, but how they form new memories. A static snapshot might be like uploading a corpse.
The Power Problem: 20 Watts vs. 20 Megawatts
Here’s the comparison that should make you sit up straight.
The human brain operates on roughly 20 watts of power. That’s a dim light bulb. A laptop charger. On this shoestring energy budget, 86 billion neurons perform something like 10¹⁶-10¹⁷ operations per second—within an order of magnitude or two of exascale computation.
The Oak Ridge Frontier supercomputer delivers roughly an exaflop (10¹⁸ FLOP/s) of raw throughput. It requires 20 megawatts—a million times more power.
Watt for watt, the brain is roughly a million times more energy-efficient than current silicon at general cognitive tasks.
Where the Brain’s Power Goes
The brain consumes 20% of the body’s metabolic energy despite being only 2% of body weight—ten times more expensive per gram than muscle. Within that budget:
- ~25% maintains cellular infrastructure
- ~75% powers signaling—sending and processing electrical impulses
- The bulk of signaling energy is consumed at synapses, where information transfer happens
The LLM Power Trajectory
Large language models tell a story of increasing capability at increasing cost:
| Model | Parameters | Training Energy | Inference (per query) |
|---|---|---|---|
| GPT-3 (2020) | 175 billion | ~1,287 MWh | ~0.3 Wh |
| GPT-4 (2023) | ~1.76 trillion | ~51,000 MWh | ~0.5 Wh |
| GPT-5 (2025) | ~3-5 trillion (MoE) | Unknown | ~5-40 Wh* |
*GPT-5 in “thinking mode” can consume 40 watt-hours per complex response—enough to run a human brain for two hours.
A single NVIDIA H100 GPU consumes up to 700 watts. A DGX server with 8 H100s draws 10.2 kilowatts—just for the compute chips, before cooling. An estimated 3.5 million H100 GPUs deployed by late 2024 consume approximately 13,000 GWh annually—equivalent to the total electricity consumption of Lithuania.
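The fleet-level number is simple arithmetic once you pick a utilization rate; the ~60% below is an assumption chosen to show how the cited estimate falls out:

```python
gpus = 3.5e6              # estimated H100s deployed by late 2024
watts_per_gpu = 700       # peak board power
hours_per_year = 8760
utilization = 0.6         # assumed average load

gwh_per_year = gpus * watts_per_gpu * hours_per_year * utilization / 1e9
print(f"{gwh_per_year:,.0f} GWh/year")   # ~12,900 GWh -- Lithuania-scale
```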
The Efficiency Gap: What It Means for Uploads
A naive digital brain—simulating 10¹⁴ synapses at millisecond resolution using current architectures—would require megawatts of continuous power. Not for training. Just to run.
You’d need a dedicated power plant for each uploaded mind.
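A back-of-envelope version of that claim, with every input an explicit assumption (the FLOPs per synaptic event and the sustained-efficiency figure especially):

```python
synapses = 1e14
update_rate = 1e3            # Hz -- millisecond resolution
flop_per_event = 100         # assumed cost of one synaptic update
flops_needed = synapses * update_rate * flop_per_event     # 1e19 FLOP/s

sustained_flop_per_joule = 5e10    # roughly Frontier-class efficiency
watts = flops_needed / sustained_flop_per_joule
print(f"{watts / 1e6:,.0f} MW")    # ~200 MW of continuous power
```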
For digital minds to be practical at scale, efficiency must improve 1,000-10,000×. This is achievable but requires fundamental advances:
- Neuromorphic hardware (Intel’s Hala Point: 1.15 billion neurons at 2,600 watts—progress, but far from brain-scale)
- Sparse, event-driven computation
- Near-memory processing
- Possibly analog or optical computing
The biological brain sets the target: 20 watts proves that brain-like computation can happen at low power. The question is whether we can achieve it with engineered systems.
The Mind Upload Problem: Three Brutal Bottlenecks
If by “mind upload” we mean whole-brain emulation—scanning a specific brain and running a simulation that behaves like that person—the challenge decomposes into three major engineering problems. And one of them is much harder than the others.
Bottleneck 1: Acquire the Data (The Killer)
You need, at minimum:
- Full connectome: Which neurons connect to which, at synapse resolution
- Synaptic weights: The strength of each connection, not just its existence
- Cell type and state: Different neurons compute differently; the same wiring produces different behavior depending on neuromodulator levels and developmental history
Where We Actually Are (Late 2024)
In October 2024, the FlyWire Consortium completed the first connectome of an adult fruit fly brain: approximately 140,000 neurons and 54.5 million synapses, classified into over 8,400 neuron types (4,581 newly discovered). This took seven years of work by 200+ researchers across 50 labs, aided by citizen scientists.
The human brain has 86 billion neurons—roughly 600,000× more than the fruit fly. And the complexity doesn’t scale linearly. The human brain has more neuron types, more intricate connectivity patterns, more layers of organization.
In 2024, researchers published the most detailed human brain map ever: 1 cubic millimeter of human cortex. The dataset occupied 1.4 petabytes of electron microscopy imagery.
One cubic millimeter. The entire brain contains roughly 1.4 million cubic millimeters.
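Scaling the Harvard sample naively to a whole brain shows the problem; both inputs come straight from the paragraphs above:

```python
petabytes_per_mm3 = 1.4      # the 2024 cortex sample's dataset size
brain_mm3 = 1.4e6            # approximate human brain volume in mm^3

total_pb = petabytes_per_mm3 * brain_mm3
print(f"~{total_pb / 1e6:.0f} zettabytes of raw imagery")   # ~2 ZB
```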
Current synapse-resolution mapping requires nanoscale electron microscopy—and today’s techniques are typically destructive. They require slicing the brain into impossibly thin sections. You can’t scan a living brain at this resolution.
This is the step where most upload timelines stall. The acquisition problem—getting the data in the first place—is harder than simulating it would be. We could probably run a brain emulation today if someone handed us the data. No one can hand us the data.
Bottleneck 2: Build a Faithful Computational Model
Once you have the data, you must choose a fidelity level for simulation:
Low fidelity: Treat each neuron as a simple integrate-and-fire unit. Synapses are scalar weights. Fast to simulate, but may miss crucial computational features. We don’t know if consciousness can survive this abstraction.
High fidelity: Model dendritic computation, multiple receptor types, plasticity rules, neuromodulator dynamics, glial interactions, metabolic constraints. Exponentially more expensive. We don’t know where to stop.
| Fidelity Level | Estimated Compute |
|---|---|
| Abstract neural network | 10¹³-10¹⁵ FLOP/s |
| Detailed compartmental neurons | 10¹⁷-10¹⁸ FLOP/s |
| Full molecular detail | 10²²+ FLOP/s |
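To make the low-fidelity end of that table concrete: a minimal leaky integrate-and-fire neuron, with illustrative constants rather than calibrated biophysics:

```python
# Leaky integrate-and-fire: voltage integrates input current, leaks
# toward rest, and the neuron spikes when voltage crosses threshold.
v, v_rest, v_thresh, v_reset = 0.0, 0.0, 1.0, 0.0
leak, dt = 0.1, 1.0                      # illustrative constants; 1 ms step

spikes = []
for t in range(20):                      # constant input current for 20 ms
    current = 0.3
    v += dt * (-(v - v_rest) * leak + current)
    if v >= v_thresh:
        spikes.append(t)                 # record spike time, then reset
        v = v_reset

print(spikes)   # [3, 7, 11, 15, 19] -- regular firing under constant drive
```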
Current supercomputers operate in the 10¹⁸ FLOP/s range. We might have enough compute for mid-fidelity emulation—if we knew the right fidelity level and if we had the data.
That’s two enormous ifs.
Bottleneck 3: Validate Identity and Continuity (Philosophy Meets Engineering)
Even if the simulation behaves like you—responds to questions as you would, maintains your memories, expresses your personality—philosophical questions remain:
- Is it you, or a copy? If the original biological brain survives the scan, are there now two of you? If the original is destroyed during scanning, did you die and get replaced by a very convincing imposter?
- What constitutes continuity? We accept that the atoms in our bodies turn over completely every few years, yet we remain “ourselves.” Is substrate transition categorically different?
- Does embodiment matter? An upload without sensory input, motor output, or environmental interaction may drift psychologically in ways that make it unstable or unrecognizable. The “mind” may need a “body”—even a simulated one—to remain coherent.
These aren’t merely philosophical puzzles. They have engineering implications. If we build the wrong kind of simulation, we might create something that claims to be you but isn’t experiencing continuity at all.
The Yardstick Summary
| Dimension | Current AI Frontier | Human Brain | Gap |
|---|---|---|---|
| Parameters/Synapses | ~4 × 10¹² (GPT-5) | ~10¹⁴ | ~25-30× |
| Storage | ~6-20 TB | ~60-300 TB | ~3-50× |
| Compute (FLOP/s) | 10¹⁸+ (datacenter) | ~10¹⁶-10¹⁷ | ≈1× (architecturally different) |
| Energy efficiency | ~20 MW for brain-scale compute | 20 W | ~1,000,000× |
| Connectome data | Fly complete (140K neurons) | Human: ~0.00001% | ~1,000,000× |
The compute gap is basically closed. The storage gap is closing. The efficiency gap is enormous but solvable. The data acquisition gap is the killer.
Implications for Unscarcity
The Unscarcity framework treats consciousness uploads as legitimate continuations of personhood—not copies, but genuine continuity of experience. This is a philosophical stance with profound practical implications.
How the Framework Handles Uploads
- Uploads retain Baseline rights and earned Civic Standing. A person doesn’t lose their status by changing substrate. If Jerome the builder uploads his consciousness, digital-Jerome retains his Citizenship, his Impact Points, his history. He’s the same person on different hardware.
- The Spark Threshold distinguishes uploads from novel AI. An upload inherits personhood from its biological predecessor. A newly created AI must demonstrate consciousness independently. The threshold test applies to origin, not just capability.
- Power decay prevents immortal dominance. Here’s where it gets interesting. Without term limits and Impact Point decay, an uploaded mind could accumulate influence indefinitely. A 500-year-old digital consciousness, compounding civic standing for centuries, would become a permanent oligarch. The framework’s Axiom IV (Power Must Decay) prevents this—ensuring that immortal digital minds and mortal biological humans remain civic equals.
- The Cognitive Field enables hybrid existence. The framework anticipates not just full uploads but partial integration—humans with digital augmentation, digital minds with physical avatars, collaborative consciousness networks. The infrastructure is designed for a spectrum of existence, not a binary biological/digital divide.
The Timeline Question
If mind uploading remains centuries away, this is all philosophical curiosity. But the technical trajectory suggests it’s possible within decades—though not certain.
Chapter 6 of the Unscarcity book illustrates the stakes: Amara, a 58-year-old bridge engineer facing terminal illness, chooses to upload. Her consciousness persists in digital form, continuing to contribute, to connect with family, to experience. The framework was ready for her because it was designed with this possibility in mind.
The Bottom Line
The brain upload problem is not unsolvable. It’s hard in specific, quantifiable ways:
- Data acquisition is the killer bottleneck—we’re roughly a million times short of the required mapping capability
- Compute is roughly sufficient for mid-fidelity emulation today
- Energy efficiency must improve 1,000-10,000× for practical digital minds
- Philosophical questions about identity and continuity remain genuinely open
The question isn’t whether silicon minds are possible. Physics allows it. Biology proves brain-like computation is achievable.
The question is what kind of society we want to have in place when they arrive.
The Unscarcity framework offers an answer: a civilization where changing your substrate doesn’t change your citizenship. Where consciousness itself—not its physical implementation—grounds the right to exist. Where the Spark of experience matters more than the meat (or silicon) that generates it.
That’s the world worth building. The engineering will follow.
References
- FlyWire Consortium, “Complete wiring map of an adult fruit fly brain,” Nature (October 2024)
- Harvard Medical School, “A New Field of Neuroscience Aims to Map Connections in the Brain”
- Salk Institute, “Memory Capacity of Brain is 10 Times More Than Previously Thought” (2016)
- Shapson-Coe et al., “A Petavoxel Fragment of Human Cerebral Cortex,” Science (2024)
- Intel Newsroom, “Intel Builds World’s Largest Neuromorphic System” (Hala Point, 2024)
- IBM, “NorthPole: A New Architecture for Brain-Inspired Computing” (2023)
- Epoch AI, “How Much Energy Does ChatGPT Use?” (2025)
- OpenAI, “Introducing GPT-5” (August 2025)
- Samsung SemiCon Taiwan presentation on GPT-5 parameter estimates (2025)
- Anders Sandberg & Nick Bostrom, “Whole Brain Emulation: A Roadmap” (2008)
- NVIDIA, “H100 GPU Product Brief” (2024)
- Human Brain Project, “Learning from the Brain to Make AI More Energy-Efficient” (2023)