Crazy: Scientists Compute With Human Brain Cells
Based on Sabine Hossenfelder's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Human brain cells can be used to compute with a fraction of the energy consumed by today’s AI systems—about 100,000 times less—yet the field is still largely sidelined compared with flashier approaches like quantum computing. A key example is Cortical Labs’ biological computer, the CL1, which grows roughly 800,000 living human neurons on a chip. The neurons connect to electronics that can stimulate them and record their activity, while a life-support module keeps the cells alive for several months. Launched in 2025 and reportedly costing around $35,000 (with cloud access available), the CL1 is designed less for consumer products and more for research labs exploring how far “wet” computation can go.
Cortical Labs’ earlier breakthrough came in 2022, when researchers trained a cluster of live neurons to play Pong. The setup encoded the ball’s position as a stimulus and decoded the resulting neural activity into movements of a virtual paddle. Importantly, the learning mechanism relied on external electronic feedback: the system delivered corrective signals based on how close the neurons’ firing was to the desired outcome, rather than letting the neurons independently discover the task. A similar effort is underway at Switzerland’s FinalSpark, which grows smaller brain chunks—about 10,000 neurons each—on electronic chips. FinalSpark also offers remote access for as little as $5,000 per month and has reported research showing one neuron chunk can encode Braille with about 80% accuracy.
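The external-feedback loop described above can be sketched as a toy simulation. Nothing here models real neurons: `ToyCulture`, its single `gain` parameter, and the update rule are illustrative stand-ins for how corrective electronic signals could gradually shape a system's stimulus-response mapping, with the "teaching" done by the surrounding electronics rather than by the culture itself.

```python
import random

random.seed(0)

# Toy stand-in for a neuron culture: a single gain that maps a stimulus
# (ball position) to a response (paddle position). Real cultures adapt
# through synaptic rewiring; here adaptation is a simple gain update.
class ToyCulture:
    def __init__(self):
        self.gain = 0.1  # starts far from the ideal gain of 1.0

    def respond(self, stimulus):
        return self.gain * stimulus

    def feedback(self, stimulus, response, target):
        # External electronic feedback: nudge the culture toward responses
        # that land closer to the target. The loop, not the culture,
        # decides what counts as "correct".
        error = target - response
        self.gain += 0.05 * error * stimulus

def train(culture, steps=200):
    errors = []
    for _ in range(steps):
        ball = random.uniform(-1, 1)          # encoded ball position (stimulus)
        paddle = culture.respond(ball)        # decoded activity (paddle position)
        culture.feedback(ball, paddle, ball)  # corrective signal from electronics
        errors.append(abs(ball - paddle))
    return errors

culture = ToyCulture()
errors = train(culture)
early = sum(errors[:20]) / 20
late = sum(errors[-20:]) / 20
print(early > late)  # tracking error shrinks under external feedback
```

The point of the sketch is the dependency it makes explicit: remove the `feedback` call and the mapping never improves, which mirrors why the Pong result does not by itself show autonomous learning.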
The promise extends beyond energy savings. Unlike conventional AI, where learning is largely software-driven, these organic systems adapt through changes in their physical connections while they operate. That physical learning loop—embedded in living tissue—could, in theory, point toward more general forms of intelligence. The field also overlaps with neuromorphic chips, which aim to mimic aspects of brain-like computation but remain early-stage.
Still, three practical hurdles loom. First, the neuron cultures don’t last long—typically only a few months. Second, scaling up from small brain chunks to larger, controllable systems is an open engineering problem. Third, reproducibility is weak because organically grown tissue varies from sample to sample. Even the question of whether these systems can learn to “think” on their own remains unsettled; demonstrations so far depend heavily on engineered feedback.
Then come ethical and philosophical concerns. Growing and training human brain organoids raises questions about sentience, suffering, and what researchers should even look for if consciousness emerges. The transcript also challenges the premise of building human-like intelligence from human tissue: if the goal is a new mind, why not start with a baby? For now, the research continues as a high-risk, highly interdisciplinary path—one that may eventually lead to computing systems that feel less like software and more like living substrates, perhaps even cognition offered as a cloud service. But widespread use remains far off, and the immediate reality is experimental, fragile, and tightly controlled.
Cornell Notes
Biological computers aim to use living human neurons as computing hardware, motivated by the brain’s extreme energy efficiency—around 100,000× less than today’s AI. Cortical Labs’ CL1 grows about 800,000 neurons on a chip and uses electronics to stimulate and record activity, with life support keeping cells alive for months. Training examples like Pong and Braille rely on external electronic feedback to shape neural responses, not fully autonomous learning. The approach could matter because learning happens through physical rewiring in living tissue, unlike software-only training. Major barriers remain: short lifespans, scaling and control challenges, poor reproducibility, and unresolved questions about whether these systems can learn independently, alongside ethical risks such as sentience.
How does Cortical Labs’ CL1 turn living neurons into a functioning computing system?
What does “training” look like in the Pong demonstration, and why doesn’t it prove the neurons learned independently?
How does FinalSpark’s approach differ from Cortical Labs’, and what results have been reported?
Why are short lifespan, scaling, and reproducibility considered the biggest technical obstacles?
What ethical risks and open questions come with growing human brain organoids for computation?
What makes this line of research distinct from conventional AI and from neuromorphic chips?
Review Questions
- What are the three major technical challenges facing neuron-based biological computers, and how does each one limit progress?
- In the Pong example, what role does external electronic feedback play, and what does that imply about the neurons’ autonomy?
- How do energy efficiency and physical rewiring differentiate biological computation from today’s software-trained AI systems?
Key Points
1. Biological computing using living neurons is motivated by the brain’s estimated ~100,000× lower energy use than current AI systems.
2. Cortical Labs’ CL1 grows about 800,000 human neurons on a chip and uses electronics to stimulate and record neural activity, supported by a life-support module lasting several months.
3. Cortical Labs reported training live neurons to play Pong in 2022, but the learning depended on external electronic feedback that corrected neural responses.
4. FinalSpark grows smaller neuron chunks (~10,000 neurons each) on chips, offers remote access, and has reported research results including Braille encoding at about 80% accuracy.
5. Neuron-based systems face major hurdles: short culture lifetimes, difficulty scaling while maintaining control, and poor reproducibility due to biological variability.
6. Learning in these systems happens through physical rewiring in living tissue, which differs from software-only learning and may offer a different route toward general intelligence.
7. Ethical questions, especially around sentience, suffering, and what researchers should even look for if consciousness emerges, remain unresolved as the field advances.