
Why We Invest in Neuromorphic Computing: The Energy Efficiency Imperative

The semiconductor industry is approaching a wall it cannot compute its way through. Within five years, training a single frontier AI model will consume more electricity than a mid-sized city uses in a week. Neuromorphic computing is not a research curiosity — it is the only credible path forward. Here is why we are betting on it early, aggressively, and with full conviction.

[Image: Neuromorphic chip architecture visualization]

In 2012, training AlexNet — the convolutional neural network that launched the modern deep learning era — required roughly 670 kilowatt-hours of electricity and took about a week on two consumer NVIDIA GPUs. In 2023, training GPT-4 required an estimated 50 gigawatt-hours — a 75,000-fold increase in eleven years. The trajectory is unsustainable, and every serious hardware engineer knows it. The question is not whether the GPU era will end, but what replaces it. At Neuron Factory, we believe the answer is neuromorphic computing — and we are investing accordingly.
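The headline multiple follows directly from the two published estimates cited above. A quick sanity check on the arithmetic (the energy figures are the estimates quoted in the text, not independent measurements):

```python
import math

# Energy estimates cited above -- both are published approximations, not measurements.
alexnet_kwh = 670                  # ~670 kWh to train AlexNet (2012)
gpt4_kwh = 50_000_000              # 50 GWh = 50 million kWh to train GPT-4 (2023 estimate)

ratio = gpt4_kwh / alexnet_kwh
print(f"Energy increase: ~{ratio:,.0f}x")      # ~74,627x, i.e. roughly 75,000-fold

# Implied doubling cadence, assuming smooth exponential growth over 11 years:
doublings = math.log2(ratio)                   # ~16.2 doublings
print(f"One doubling every ~{11 * 12 / doublings:.1f} months")
```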

The Scale of the Problem

Let us be precise about what we mean by the energy efficiency crisis in AI. The problem operates at three distinct levels, each reinforcing the others.

Training Costs

The cost of training frontier models has been doubling roughly every six months for the past five years. OpenAI's most recent estimates suggest that training their next flagship model will cost over $1 billion in compute alone — a substantial share of which is electricity. At current US commercial electricity prices and the energy mix of major data center hubs, this translates to a carbon footprint comparable to thousands of transatlantic flights. For global AI deployment at scale — billions of inference calls per day, models running on edge devices in resource-constrained environments — the energy math simply does not work under the current GPU paradigm.

Inference at the Edge

Training costs are large but episodic. Inference costs are continuous and growing exponentially. Every AI-assisted search query, every recommendation, every real-time autonomous system decision requires compute. As AI moves from cloud-hosted services to edge deployment — embedded in vehicles, industrial robots, medical devices, smartphones — the energy constraints become even more acute. A data center can be connected to grid power and cooled efficiently. A surgical robot, a prosthetic limb, or a remote industrial sensor cannot. The future of AI is not purely in the cloud. It requires computing architectures that work in the physical world.

The Data Center Buildout

Major technology companies are collectively planning to spend over $500 billion on data center construction and expansion in the 2024–2028 period. A significant fraction of this — roughly 40% by most estimates — will go toward electrical infrastructure and cooling systems. Several US states have already begun limiting new data center construction permits due to grid strain. The International Energy Agency projects that data centers will account for 4-6% of global electricity consumption by 2026, up from under 1.5% in 2020. This is not a trend that can be sustained through incremental efficiency improvements to the existing GPU architecture. It requires a fundamentally different paradigm.

What Neuromorphic Computing Actually Is

The term "neuromorphic computing" was coined by Carver Mead at Caltech in the late 1980s. It describes computing architectures that are inspired by the biological structure of the brain — specifically, the way biological neurons process information through discrete electrical spikes rather than continuous analog signals, and the way synaptic connections between neurons are modified over time through a process analogous to learning.

In practical terms, modern neuromorphic computing research focuses on two related paradigms:

Spiking Neural Networks (SNNs)

Conventional artificial neural networks process information in continuous floating-point arithmetic — every neuron in every layer fires on every forward pass, regardless of whether there is meaningful signal to propagate. Spiking neural networks, by contrast, operate on temporal pulse sequences: neurons only "fire" when their membrane potential exceeds a threshold, producing sparse, event-driven computation. The biological analogy is precise — the human brain processes an enormous amount of sensory information while consuming just 20 watts, largely because most neurons are inactive at any given moment.
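The threshold-and-fire dynamic described above can be made concrete with a minimal leaky integrate-and-fire (LIF) neuron, the standard building block of SNNs. This is an illustrative sketch; the threshold, leak constant, and input pattern are invented for the example, not drawn from any particular chip:

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire neuron (illustrative).

    The membrane potential accumulates input, decays by `leak` each
    timestep, and emits a spike (1) only when it crosses `threshold`.
    Otherwise the neuron is silent -- and on event-driven neuromorphic
    hardware, a silent neuron consumes essentially no dynamic power.
    """
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x        # integrate with leak
        if v >= threshold:
            spikes.append(1)    # fire...
            v = 0.0             # ...and reset
        else:
            spikes.append(0)
    return spikes

# Sparse input, as produced by an event-based sensor: mostly zeros.
train = lif_neuron([0.0, 0.6, 0.0, 0.7, 0.0, 0.0, 0.0, 1.2])
print(train)  # [0, 0, 0, 1, 0, 0, 0, 1] -- only 2 of 8 timesteps do any work
```

The sparsity is the whole point: in a dense network every unit computes on every step, while here most timesteps produce no event at all.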

The energy savings from sparsity are dramatic. Early benchmarks from Intel's Loihi 2 neuromorphic chip and IBM's NorthPole architecture show inference energy reductions of 10x to 100x compared to equivalent GPU implementations, depending on the task and the degree to which the SNN architecture can exploit temporal sparsity. For certain classes of tasks — particularly temporal sequence processing, sensory event processing, and reinforcement learning in physical environments — SNNs demonstrate energy efficiency advantages that are not merely incremental but categorical.

In-Memory Computing

A secondary but related paradigm involves eliminating the separation between memory and compute that defines conventional von Neumann architectures. In a standard GPU cluster, the "memory wall" — the latency and bandwidth cost of moving data between processing units and memory — accounts for a substantial fraction of total energy consumption. Neuromorphic chips that integrate analog memory elements (such as phase-change memory or resistive RAM) with compute units can perform matrix operations locally, eliminating this bottleneck. The energy savings from eliminating data movement can match or exceed the savings from sparse computation alone.
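The crossbar idea can be sketched numerically. In a resistive array, the weight matrix is stored as cell conductances, input activations are applied as voltages, and Ohm's law plus Kirchhoff's current law yield the matrix-vector product in place: the weights never move. A toy idealized model, with invented values and none of the analog non-idealities real devices contend with:

```python
def crossbar_mvm(conductances, voltages):
    """Idealized analog in-memory matrix-vector multiply.

    Each output current is the Kirchhoff sum of per-cell currents
    I = G * V along a column. The weights (conductances) stay in
    the array, so there is no memory-to-compute data movement.
    """
    return [
        sum(g * v for g, v in zip(row, voltages))
        for row in conductances
    ]

# Weights stored as conductances (e.g. RRAM cell states), inputs as voltages.
G = [[0.2, 0.5, 0.1],
     [0.4, 0.0, 0.3]]
V = [1.0, 0.5, 2.0]
print(crossbar_mvm(G, V))  # [0.65, 1.0] -- output currents = G @ V
```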

"The brain achieves computational feats that no silicon system has matched, while running on less power than a dim light bulb. That is not an accident of biology. It is the result of architectural principles that we are only beginning to understand well enough to engineer. The companies that engineer them first will define the next thirty years of computing." — Dr. Elena Vasquez

The Market Opportunity

We estimate the total addressable market for neuromorphic computing hardware, software, and associated systems at $45–65 billion by 2032. This estimate accounts for:

  • Edge AI chips for automotive and robotics: The transition to autonomous vehicles and industrial robots creates massive demand for low-power, high-performance inference chips that can run continuously in power-constrained environments. We estimate this segment at $12–18B by 2032.
  • Medical and wearable device compute: The next generation of medical-grade AI — real-time cardiac monitoring, neural prosthetics, continuous glucose prediction, surgical assistance — requires compute architectures that can run on battery-sized power budgets. This segment represents $8–12B by 2032.
  • Edge IoT and industrial sensing: Smart factories, precision agriculture, and infrastructure monitoring all require local intelligence at enormous scale. Neuromorphic chips optimized for event-based sensor fusion represent a $10–15B opportunity.
  • Data center acceleration: Even in cloud infrastructure, neuromorphic co-processors can handle specific workloads — particularly time-series prediction and reinforcement learning — with dramatic efficiency advantages. We estimate $15–20B in data center adoption by 2032.

These are conservative estimates. They do not account for applications we cannot yet anticipate, which is historically where the largest markets in computing have emerged.

Where We Are in the Technology Curve

Understanding our investment timing requires an honest assessment of the technology readiness level. Neuromorphic computing is not yet a general-purpose replacement for GPU-based AI. It is a specialized architecture that outperforms conventional silicon on a specific and growing set of tasks. The technology readiness landscape in 2025 looks roughly like this:

What Works Today

Event-based vision processing (where neuromorphic architectures are already commercially superior), temporal sequence modeling for sensor fusion, reinforcement learning in physical simulations, and certain classes of energy-constrained inference at the edge. Intel's Loihi 2, BrainChip's Akida, and a handful of academic spinouts are producing commercially validated hardware in these categories today.

What Is 2-4 Years Away

General-purpose edge inference competitive with current-generation NPUs, training on neuromorphic hardware for medium-complexity tasks, and the first full-stack neuromorphic development tools that allow software engineers to target neuromorphic hardware without PhD-level expertise in SNN theory. This is the transition we are investing in front of.

What Is 5-10 Years Away

Full-stack general-purpose neuromorphic computing that competes with GPUs across the majority of AI workloads. We believe this transition will occur, and that the companies that establish platform positions in the 2024-2027 window will capture disproportionate value when it does. But we are not investing in that future directly — we are investing in the near-term applications that can achieve commercial traction today and build the technical moats that will matter in 2030.

Our Portfolio Thesis: CortexLabs as a Case Study

Nothing illustrates our investment framework better than our decision to lead CortexLabs' $1.8M seed round in January 2025. CortexLabs was founded by a team of four — two PhD engineers from the MIT Research Laboratory of Electronics, a former Intel Loihi architecture lead, and a go-to-market executive with a decade of enterprise chip sales experience. The technical-commercial balance was exceptional from day one.

What convinced us was not the chip architecture, impressive as it was. It was the specific commercial beachhead the team had identified: edge inference for automotive LIDAR and radar sensor fusion, where power constraints are severe (the entire sensor suite must run on under 15W in most EV architectures) and latency requirements are unforgiving. CortexLabs had already secured two automotive OEM evaluation agreements — not letters of intent, actual hardware-in-loop testing partnerships — before they had a product. That told us something important about both the founder quality and the urgency of the market need.

Since our investment, CortexLabs has:

  • Validated a 100x energy efficiency improvement over GPU-equivalent inference in controlled benchmark conditions
  • Advanced two automotive OEM pilots from evaluation to supplier qualification process
  • Scheduled chip tape-out for Q3 2025, which will produce the first full production samples
  • Expanded the team from 8 to 23 people, including two additional senior architects from Intel and Qualcomm

This is the pattern we look for: a technically differentiated team, a specific commercial beachhead where the energy efficiency advantage translates directly to customer value, and early evidence of customer urgency that validates both the technology and the timing.

What We Look For in Neuromorphic Investments

We have now evaluated over sixty companies in the neuromorphic and near-neuromorphic space. Our diligence framework has evolved considerably from our first investment. The six factors we weight most heavily are:

1. Architectural Differentiation That Is Patent-Defensible

The neuromorphic space is active in both industry and academia. Intel, IBM, Samsung, and dozens of university labs are all publishing in this space. We want to see teams with intellectual property that is genuinely novel — not obvious extensions of published work, but architectural innovations that provide durable competitive advantage. We work with technical advisors from MIT, Caltech, and ETH Zurich to evaluate IP quality before investment.

2. A Clear Path From Research to Product

The graveyard of deep tech is full of brilliant research that never achieved commercial form. We look hard at the specific product roadmap — what is the first commercially deployable version, what does it do, who buys it, and what does the sales process look like? Teams that cannot answer these questions with specificity are not ready for capital deployment, regardless of how impressive the underlying science is.

3. Team Completeness

A neuromorphic chip company needs at minimum: world-class hardware architects, software engineers who understand SNN-specific toolchain challenges, and a go-to-market leader who has actually sold silicon to enterprise customers. All three components are rare. All three together in a single founding team is exceptional. We will not lead a round without confidence that the team can actually ship — and shipping silicon is a fundamentally different challenge from publishing research.

4. A Target Market With Existing Urgency

We are not interested in educational applications of neuromorphic computing or in teams whose market thesis depends on customers changing behavior they have no current incentive to change. We want to see markets where the energy efficiency advantage of neuromorphic hardware maps directly onto an existing, painful, expensive problem. Automotive sensor fusion is one such market. Medical device compute is another. Industrial edge inference in safety-critical environments is a third.

5. Manufacturing Readiness

Many academic spinouts in this space have beautiful simulation results and terrible tape-out plans. We require a credible path to TSMC or GlobalFoundries tape-out on a competitive process node before we will lead a round. This is not about demanding that founders have it all figured out — it is about ensuring they have thought rigorously about the hardware engineering, not just the algorithm research.

6. Software Ecosystem Strategy

Hardware companies that do not build software ecosystems become commodity suppliers. The neuromorphic companies with long-term platform potential are those that build the SNN compiler toolchains, the training frameworks, and the application libraries that make it easy for software engineers to target their hardware. We look for teams that understand this and have a coherent software strategy — even if they are currently focused on hardware development.

The Investment Landscape: Why Now

One question we receive regularly is why 2024-2025 is the right moment to invest aggressively in neuromorphic computing, rather than waiting for greater technical maturity. Our answer has three components.

First, the commercialization window is opening now. Intel's Loihi 2 research program has published enough benchmark data to validate the energy efficiency claims at commercial scale. BrainChip's Akida has achieved commercial deployments in smart city sensor networks. The technology works — not universally, but in specific domains. The first commercial wave is underway, and the companies that establish customer relationships and reference architectures in this wave will be nearly impossible to displace.

Second, the talent is available. A cohort of PhD-trained neuromorphic engineers who spent the last decade in well-funded academic programs and at Intel and IBM research labs is now ready to found companies. This is the generation of engineers who understand both the science and the engineering challenges deeply enough to build commercial products. That cohort will not be larger or more available in two years — they will mostly have founded companies or been recruited into large incumbents.

Third, the regulatory and infrastructure tailwinds are strengthening. The EU AI Act's energy efficiency provisions, California's data center energy regulations, and the US Department of Energy's stated priority on advanced computing efficiency are all creating procurement incentives for neuromorphic hardware in government and regulated enterprise markets. These tailwinds will be much stronger in 2027 than they are today. Companies entering the market in 2025 will have two years of customer relationship building before those tailwinds peak.

Our Commitments

At Neuron Factory, we commit to being the most technically informed investors in the neuromorphic space. Every member of our investment team maintains active research relationships with at least one leading university laboratory. We retain technical advisors with direct expertise in SNN algorithm design, analog CMOS circuit design, and neuromorphic software toolchain development. We do not invest in areas we cannot evaluate rigorously.

We also commit to long-term partnership with our portfolio companies. Deep tech hardware development timelines are longer than SaaS — we invest knowing that the companies we back today may take five to eight years to reach their full potential. We reserve significant capital for follow-on investment and structure our initial checks to ensure our portfolio companies have enough runway to reach technical milestones, not just fundraising milestones.

If you are building in the neuromorphic or near-neuromorphic space, we want to hear from you. The energy efficiency imperative is real, the technology is ready for its first commercial wave, and the teams that build platform companies in this window will define the next era of computing. We want to be part of building that future.


Dr. Elena Vasquez

Partner, Neuron Factory. PhD Electrical Engineering, Stanford (Neuromorphic Systems). Former Research Scientist, Intel Labs. Leads hardware and systems investments.
