Analog computing from waste heat
MIT researchers have figured out how to use waste heat from electronics as a computing medium, performing matrix math at over 99% accuracy without any electrical current. This could reshape how AI hardware handles one of its most fundamental bottlenecks: energy consumption.
MIT Technology Review broke the story in April 2026, reporting on work led by Giuseppe Romano, a research scientist at MIT's Institute for Soldier Nanotechnologies. The research introduces an analog computing method that treats heat not as a nuisance to be cooled away, but as raw material for doing actual math. No author byline was attached to the MIT Tech Review piece, so full credit goes to the publication for surfacing this research.
Why This Matters
Heat is the single biggest physical constraint on scaling AI hardware right now, and every major chip company from NVIDIA to Intel spends enormous engineering resources on keeping it under control. Data centers already consume between 1 and 2 percent of global electricity, with neural network inference and training driving a growing share of that. A method that turns waste heat into computation rather than into a cooling bill is not a minor footnote; it is a direct attack on that constraint. If Romano's team can scale this beyond simple matrix operations, the energy economics of running large language models could shift in ways that chip designers have not planned for.
The Full Story
The core idea sounds almost counterintuitive the first time you hear it. Electronic devices generate heat constantly, and most engineering effort goes into moving that heat away from sensitive components as fast as possible. Romano's team asked a different question: what if the heat itself could carry information?
The method works by encoding input data as a set of temperatures, drawn from heat that the device is already generating, rather than as the binary 1s and 0s that traditional digital computing relies on. Tiny silicon structures, whose shapes are determined by a physics-based optimization algorithm that the team developed themselves, guide and distribute that heat in specific patterns. Those patterns are not random. They are the computation. The output is read as the amount of thermal power collected at the far end of the structure.
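To make that pipeline concrete, here is a minimal numerical sketch. It assumes the structure's steady-state behavior can be idealized as a fixed linear map from input temperatures to collected output power; the matrix G and every number below are invented for illustration, not drawn from the paper.

```python
import numpy as np

# Toy model only: idealize the structure's steady-state thermal
# response as a linear map G from input temperatures to the thermal
# power collected at each output terminal. The real device realizes
# G in its physical geometry; this is not MIT's code or physics model.
rng = np.random.default_rng(0)

G = rng.uniform(0.0, 1.0, size=(4, 8))    # the matrix, "baked into" the geometry
T_in = rng.uniform(300.0, 350.0, size=8)  # the input vector, encoded as temperatures (K)

P_out = G @ T_in                          # the output vector, read as collected power

print(P_out.round(1))
```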
The mathematical operation the team demonstrated is matrix vector multiplication. If you have spent any time reading about how large language models work, you already know this operation is not optional. It is the fundamental arithmetic that lets a neural network process a prompt and generate a response. Every transformer layer in GPT-4, Claude, or Gemini is doing enormous volumes of matrix math. Demonstrating that heat-based analog structures can handle this specific operation, even at small scale, is the reason researchers and hardware engineers should pay attention.
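To see where that arithmetic lives, here is an illustrative feed-forward step for a single token, written in plain NumPy with made-up dimensions (no relation to any production model's actual weights):

```python
import numpy as np

# One transformer feed-forward block for a single token's hidden
# state h: two matrix-vector multiplies around a nonlinearity.
# Dimensions here are illustrative; production models are far larger.
d_model, d_ff = 768, 3072
rng = np.random.default_rng(1)

W1 = rng.standard_normal((d_ff, d_model)) / np.sqrt(d_model)
W2 = rng.standard_normal((d_model, d_ff)) / np.sqrt(d_ff)
h = rng.standard_normal(d_model)

h_out = W2 @ np.maximum(W1 @ h, 0.0)  # W @ h is the workload heat-based hardware targets
print(h_out.shape)                    # (768,)
```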
The accuracy numbers from initial demonstrations exceeded 99 percent in many test cases. Caio Silva, an undergraduate student in MIT's Department of Physics and the lead author of the paper, put the conceptual shift plainly. "Most of the time, when you are performing computations in an electronic device, heat is the waste product," Silva said. "You often want to get rid of as much heat as you can. But here, we've taken the opposite approach by using heat as a form of information itself."
The research was first announced through MIT News in January 2026, with MIT Technology Review publishing extended coverage in April 2026. The silicon structures are manufactured using geometries that are discovered automatically by the optimization algorithm, not hand-designed by engineers. That algorithm searches through possible physical configurations to find the shapes that naturally map a thermal input to a desired mathematical output, exploiting the laws of thermodynamics rather than fighting them. Silicon was deliberately chosen as the material because its thermal properties are well understood and because existing semiconductor fabrication infrastructure can work with it.

There are real obstacles ahead. As matrix dimensions grow larger and more complex, accuracy drops. The degradation is especially pronounced when input and output terminals are physically far apart, which becomes unavoidable when you try to tile millions of these structures together to handle the matrix sizes that modern deep learning actually requires. The team is direct about the fact that scaling to production-grade AI workloads is not solved.
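The coverage does not detail the team's inverse-design algorithm, but the general flavor of such a search can be sketched with a stand-in. In this toy, a sigmoid plays the role of the physics solver and plain gradient descent plays the role of the search; every function and number is hypothetical:

```python
import numpy as np

# Loose stand-in for inverse design: adjust raw "design parameters" p
# until a toy thermal response G(p) matches a target matrix M. The
# sigmoid below is a made-up differentiable placeholder; the actual
# algorithm optimizes real geometry against a heat-transport solver.
rng = np.random.default_rng(2)
M = rng.uniform(0.1, 0.9, size=(3, 5))    # the matrix we want the device to compute
p = rng.standard_normal(M.shape)          # hypothetical design parameters

def response(p):
    return 1.0 / (1.0 + np.exp(-p))       # placeholder for the thermal solver

for _ in range(3000):
    G = response(p)
    grad = 2.0 * (G - M) * G * (1.0 - G)  # gradient of sum((G - M)**2) w.r.t. p
    p -= 0.5 * grad                       # plain gradient descent

print(np.abs(response(p) - M).max())      # residual shrinks toward zero
```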
Key Details
- Lead researcher: Giuseppe Romano, MIT's Institute for Soldier Nanotechnologies
- Lead paper author: Caio Silva, undergraduate student, MIT Department of Physics
- Operation demonstrated: matrix vector multiplication, the core operation in large language model inference
- Accuracy achieved: greater than 99 percent in multiple test cases
- Material used: silicon, compatible with existing semiconductor fabrication processes
- Research first announced: January 2026 via MIT News
- Extended coverage published: April 21, 2026, MIT Technology Review
- Global data center electricity share: approximately 1 to 2 percent of total global consumption
What's Next
The team's most achievable near-term application is not running LLMs but rather using waste-heat computation for on-chip temperature sensing and anomaly detection, eliminating the need for dedicated temperature sensors that currently consume physical space on chip designs. Scaling the tiling of these structures to handle matrices of the size used in modern neural networks is the primary engineering problem that needs to be solved before broader AI applications are realistic, and that work will require progress on maintaining thermal gradient precision across large physical distances. Researchers and hardware companies watching this space should track whether the MIT group publishes follow-up results on tiling accuracy within the next 12 to 18 months.
How This Compares
This work sits alongside a broader wave of non-digital computing research that has been gaining momentum as Moore's Law runs into hard physical limits. Photonic computing, which routes light through silicon waveguides to perform matrix math, is probably the closest analog. Companies like Lightmatter have raised substantial funding to commercialize optical matrix accelerators, and they share the same target workload: transformer inference. The key difference is that photonic approaches still require electrical input and output conversion, while Romano's heat-based method draws its input energy from waste heat that the device is generating anyway. That is a meaningful distinction for edge devices and embedded systems where power budgets are tight.
IBM and Intel have both invested in neuromorphic computing, which takes biological inspiration to reshape how chips handle certain AI workloads. IBM's NorthPole chip, announced in late 2023, demonstrated that moving computation closer to memory could cut energy use dramatically for inference tasks. But neuromorphic chips are still fundamentally electrical and digital in their operation. The MIT heat-computing approach operates on entirely different physics and does not require transistors in the computing structure at all, which is a more radical departure from the current paradigm than anything the large chip companies have shipped.
The honest comparison point is that Romano's research is years behind commercial viability relative to photonics or neuromorphic chips, which already have prototype hardware running real workloads. What the MIT work does is open a genuinely new physical mechanism with a credible proof of concept. The 99 percent accuracy figure for matrix vector multiplication, achieved in January 2026, gives it a foundation that pure theoretical proposals lack. It belongs in the same conversation as these other approaches but is currently at an earlier stage.
FAQ
Q: What is analog computing and how does it differ from regular computing? A: Regular computers process information as binary digits, either a 1 or a 0, using transistors that switch between two discrete states. Analog computing uses continuous physical quantities, in this case temperature and heat flow, to represent and process data. The MIT approach lets heat distribution through a physical structure perform the math directly, without transistors doing the switching. A toy illustration of this distinction appears after the FAQ.
Q: Why does matrix vector multiplication matter for AI? A: Every time a large language model processes text and generates a response, it performs billions of matrix vector multiplication operations across its neural network layers. This operation is so central to how modern AI works that any hardware designed to accelerate or replace the electrical version of it has direct implications for the cost and energy footprint of running AI at scale.
Q: Can this technology run ChatGPT or other large language models today? A: Not yet. The current demonstration handles small, simple matrix operations with high accuracy, but accuracy drops as matrix size and complexity increase. The researchers also need to solve the challenge of connecting millions of these tiny silicon structures together before the approach could handle the scale of computation that production large language models require.
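Here is the representational difference from the first FAQ answer, reduced to a few lines of Python (a loose illustration, not a device model):

```python
# Digital vs. analog representation, as a loose illustration only.
x = 0.7316                             # a continuous physical quantity

digital = format(int(x * 255), "08b")  # digital: quantize to 8 bits before logic acts on it
analog = x                             # analog: the quantity itself is the operand

print(digital, analog)                 # 10111010 0.7316
```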
The MIT team's heat-computing proof of concept will not replace GPUs next year, but it has done something more valuable in the short term: demonstrated that the laws of thermodynamics can be engineered into a legitimate computational tool. That changes how hardware researchers think about waste heat as a resource.