I spent four years building AI inference hardware at a semiconductor startup before joining Neuron Factory as a Partner. In that time, I watched the industry's relationship with energy efficiency evolve from an afterthought to an existential constraint. The arithmetic of modern AI training and inference is brutal: the energy cost of running large language models at scale is already straining the grid capacity of major data center regions, and the compute demands of next-generation AI systems will be orders of magnitude larger.
The GPU, which powered the deep learning revolution of the past fifteen years, is a magnificent machine for a specific class of workloads. But it was designed for graphics rendering and later adapted for the dense matrix multiplications that underlie modern neural network training. It is not, in any meaningful sense, designed for intelligence. Neuromorphic computing takes the opposite approach: it starts by asking how biological brains achieve such extraordinary intelligence at such extraordinarily low energy cost, and builds silicon from those first principles.
What Neuromorphic Actually Means
The term "neuromorphic" was coined by Carver Mead at Caltech in the late 1980s to describe analog circuits that mimic the electrophysiological behavior of neurons and synapses. The field has evolved substantially since then, but the core insight remains: biological neural computation is fundamentally different from the digital, synchronous, clock-driven computation that dominates today's processors.
Biological neurons are sparse, event-driven, and asynchronous. A typical neuron in the human cortex fires at an average rate of perhaps 1 Hz under normal conditions, and each spike lasts on the order of a millisecond, meaning the neuron is electrically active less than 0.1% of the time. The brain's extraordinary efficiency comes precisely from this sparsity: computation happens only where and when it is needed. By contrast, a GPU runs every multiply-accumulate unit in every clock cycle regardless of whether useful work is being done, burning power continuously.
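The energy argument above can be made concrete with back-of-envelope arithmetic. All the figures below (network size, fan-out, firing rate, inference rate) are illustrative assumptions, not measurements from any particular chip:

```python
# Rough comparison of synaptic operations per second in an
# event-driven network versus a dense accelerator evaluating
# the same network. All numbers are illustrative assumptions.

neurons = 1_000_000      # neurons in the network
fan_out = 1_000          # synapses per neuron
firing_rate_hz = 1.0     # average spikes per neuron per second (sparse)
inference_rate_hz = 100  # dense accelerator evaluates the full net 100x/sec

# Event-driven: a synapse does work only when its presynaptic
# neuron actually spikes.
event_ops_per_sec = neurons * fan_out * firing_rate_hz

# Dense: every evaluation touches every synapse, whether or not
# the corresponding activation carries any information.
dense_ops_per_sec = neurons * fan_out * inference_rate_hz

ratio = dense_ops_per_sec / event_ops_per_sec
print(f"event-driven: {event_ops_per_sec:.2e} synaptic ops/s")
print(f"dense:        {dense_ops_per_sec:.2e} synaptic ops/s")
print(f"dense performs {ratio:.0f}x more synaptic work")
```

With these particular assumptions the dense machine does 100x more synaptic work for the same network, which is the flavor of gap behind the efficiency claims below; real ratios depend heavily on workload sparsity.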
Neuromorphic chips replicate this event-driven paradigm in silicon through spiking neural networks (SNNs). Rather than propagating continuous-valued activations through dense matrices, SNNs transmit discrete spikes — binary events — that propagate through a network only when a neuron's membrane potential crosses a threshold. This sparsity translates directly into energy efficiency: chips like Intel's Loihi 2 and IBM's NorthPole demonstrate inference energy costs that are 100x to 1000x lower than equivalent GPU implementations for certain workloads.
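The threshold-crossing mechanism described above is simple enough to sketch directly. Below is a minimal discrete-time leaky integrate-and-fire (LIF) neuron, the basic unit of most SNNs; the leak factor, threshold, and input values are arbitrary illustrative choices, not any particular chip's neuron model:

```python
def lif_step(v, input_current, beta=0.9, threshold=1.0):
    """One discrete time step of a leaky integrate-and-fire neuron.

    v: membrane potential carried over from the previous step
    input_current: weighted sum of incoming spikes this step
    beta: leak factor (fraction of potential retained per step)
    """
    v = beta * v + input_current        # leaky integration
    spike = 1 if v >= threshold else 0  # fire on threshold crossing
    if spike:
        v -= threshold                  # soft reset after a spike
    return v, spike

# Drive the neuron with a constant input and record its spike train.
v, spikes = 0.0, []
for _ in range(20):
    v, s = lif_step(v, input_current=0.3)
    spikes.append(s)
print(spikes)  # a sparse train of binary events, not continuous activations
```

The output is a mostly-zero binary sequence: downstream synapses do work only at the time steps where a 1 appears, which is the sparsity that the energy savings come from.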
The Scalability Question
The criticism most often leveled at neuromorphic computing is that spiking neural networks are harder to train than conventional deep neural networks and achieve lower accuracy on benchmark tasks. This criticism was substantially valid three years ago. It is much less valid today.
The training difficulty of SNNs stems from the non-differentiability of the spike function, which complicates the application of backpropagation. Researchers have developed a range of surrogate gradient methods, temporal coding schemes, and direct training algorithms that have dramatically closed the accuracy gap with conventional networks. On the ImageNet image classification benchmark, SNN implementations have reached top-1 accuracy above 80% — competitive with many production CNN deployments.
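The surrogate gradient idea can be sketched in a few lines: the forward pass keeps the hard, non-differentiable threshold, while the backward pass substitutes a smooth approximation of its derivative. The fast-sigmoid surrogate and the slope parameter below are one common choice among several, used here purely for illustration:

```python
def spike_forward(v, threshold=1.0):
    """Forward pass: hard threshold (non-differentiable step)."""
    return 1.0 if v >= threshold else 0.0

def spike_surrogate_grad(v, threshold=1.0, slope=5.0):
    """Backward pass: derivative of a fast sigmoid centered on the
    threshold, used in place of the true gradient, which is zero
    almost everywhere and so would let no learning signal through."""
    x = slope * (v - threshold)
    return slope / (1.0 + abs(x)) ** 2

# The surrogate gradient is largest at the threshold and falls off
# smoothly on both sides, so neurons near firing receive the
# strongest weight updates during backpropagation.
for v in (0.5, 1.0, 1.5):
    print(v, spike_forward(v), round(spike_surrogate_grad(v), 4))
```

In practice, frameworks implement this as a custom autograd function: the step is applied in the forward pass and the surrogate is returned in the backward pass, leaving the rest of standard backpropagation untouched.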
More importantly, the accuracy comparison is somewhat misleading because it ignores the task domain most relevant to neuromorphic hardware's competitive advantage: continuous sensor processing in real time with tight energy budgets. Applications like always-on keyword detection, continuous health monitoring, industrial anomaly detection, and robotic proprioception are precisely the domains where neuromorphic architectures are not just competitive but categorically superior.
The Commercial Landscape
The neuromorphic hardware ecosystem is maturing rapidly. Intel's Loihi 2 processor is now available to research partners and represents a genuinely impressive capability demonstration. BrainChip's Akida platform is shipping in commercial products including automotive and industrial IoT applications. SpiNNaker 2, developed at TU Dresden, is being deployed in several European research networks as a large-scale brain simulation platform. And a cohort of well-funded startups — including our own portfolio company CortexLabs — are building the next generation of commercial neuromorphic silicon.
On the software side, the ecosystem is still nascent but developing quickly. PyTorch-compatible SNN frameworks like snnTorch and SpikingJelly have made it substantially easier for conventional deep learning engineers to begin experimenting with spiking networks. ONNX extensions for neuromorphic model interchange are under active development. And several research groups are publishing production-quality toolchains for mapping conventional ANN models to neuromorphic hardware.
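The simplest ANN-to-SNN mapping such toolchains support is rate coding: a ReLU activation is approximated by the firing rate of an integrate-and-fire neuron over a window of timesteps. A toy sketch of that idea, with an illustrative window length and the simplest possible neuron model (not any specific toolchain's algorithm):

```python
def relu(x):
    """The conventional ANN activation being approximated."""
    return max(0.0, x)

def rate_coded(x, timesteps=100, threshold=1.0):
    """Approximate relu(x) for x in [0, 1] by the firing rate of an
    integrate-and-fire neuron driven by a constant input x."""
    v, spike_count = 0.0, 0
    for _ in range(timesteps):
        v += x                  # integrate the (constant) input
        if v >= threshold:
            v -= threshold      # soft reset preserves residual charge
            spike_count += 1
    return spike_count / timesteps  # firing rate over the window

for x in (-0.5, 0.25, 0.5, 0.9):
    print(x, relu(x), rate_coded(x))
```

Negative inputs never reach threshold and yield a rate of zero, mirroring ReLU; positive inputs produce a firing rate that converges to the activation value as the window grows. The cost of the approach is latency: accuracy improves with more timesteps, which is one reason direct SNN training has attracted so much research attention.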
Investment Implications
For Neuron Factory, neuromorphic computing is a core investment thesis. We believe the following will be true within this decade:
- Edge AI applications requiring always-on processing — wearables, industrial sensors, autonomous robots — will be dominated by neuromorphic inference hardware due to the prohibitive energy cost of running conventional silicon in battery-powered or thermally constrained environments.
- A new class of AI-native sensor will emerge in which sensing and neural computation are integrated at the physical level, enabling perception systems that respond at the speed of the sensor itself on the energy budget of biology.
- Large-scale neuromorphic systems will find application in scientific computing, particularly for brain simulation, drug discovery, and climate modeling, where their ability to model complex dynamical systems efficiently creates substantial research value.
We are actively looking for seed-stage companies building in this space. If you are working on neuromorphic hardware, SNN training infrastructure, or applications designed specifically for neuromorphic deployment, we would like to talk.
Conclusion
The GPU revolution created trillions of dollars of value by unlocking a new class of computation. Neuromorphic computing is not a replacement for GPUs — it is a complement that will unlock the next class. The applications that require intelligence in the physical world, operating continuously, in real time, with the energy budget of a watch battery rather than a power plant, will be built on brain-inspired silicon. The companies building that silicon today are the ones we want to back.