
MIT's silicon metastructures calculate using heat: waste energy becomes a signal for calculations

Learn how MIT researchers developed microscopic silicon metastructures that use excess heat in a chip, instead of electricity, for analog matrix-vector multiplication – a key operation in machine learning. We look at what the paper published in Physical Review Applied shows, what applications it promises for thermal monitoring of electronics, and where the current limits lie.

MIT
Photo by: Domagoj Skledar - illustration / own archive

Computing with Heat: MIT's approach that turns waste heat into a “signal” for calculations

In electronics, heat is mostly a byproduct: the more powerful the chip, the larger the share of its energy that ends up heating the casing, the packaging, and the surrounding layers of material. Engineers have therefore spent decades dealing with cooling, component layout, and consumption optimization, because overheating shortens device life and limits performance. A team from the Massachusetts Institute of Technology (MIT) proposes a twist that sounds counterintuitive at first glance. Instead of treating excess heat solely as a problem to be diverted away from the chip, heat can be used as an information carrier, and the propagation of the heat flow itself can "perform" part of the calculation. The idea is simple at its core, but technically demanding: the material must be shaped so that its thermal behavior corresponds to a given mathematical operation.

In a paper available at the time of writing (January 30, 2026), with a preprint version dated January 29, 2026, authors Caio Silva and Giuseppe Romano describe microscopic silicon metastructures that can perform selected linear calculations by relying on heat conduction. The input data are not zeros and ones, but a set of temperatures imposed on the input "ports" of the structure. The heat then spreads through a specially shaped pattern, and the distribution of temperature and heat flow becomes the physical realization of the calculation. At the output, the result is not read as a digital number, but as the power or heat flow collected at output terminals held at a reference temperature, like a kind of thermostat on a microscale. In practice, this could, at least in certain scenarios, allow computing without additional electrical energy, using the "waste" that the device produces anyway.

Such an approach belongs to analog computing, where information is processed by continuous physical quantities instead of digital switching between logical states. Analog systems have gained new attention in recent years due to the growing energy requirements of modern data processing and bottlenecks in moving data between memory and processors. In the case of heat, the signal carrier already exists in every processor, voltage regulator, or densely packed system-on-chip. The authors therefore emphasize that they are not aiming to replace classic processors, but rather at a new class of passive thermal elements that can serve for thermal mapping, heat-source detection, and local signal processing in microelectronics. Heat thus stops being viewed only as a limitation and becomes an active part of the device architecture.

How the calculation is “written” into the material

The key to the technology is not in exotic material, but in geometry. In the classic engineering approach, the structure is designed first, and then how it behaves is checked. Here, the process is reversed: first, the exact mathematical transformation to be generated is specified, and a computer algorithm then looks for the geometry that will produce the desired relationship between input and output through heat propagation. This approach is called inverse design because the engineering starts from the target function and works backward to the shape. The MIT team has used this concept before to design nanomaterials that conduct heat in specific ways, and is now applying it to structures that, instead of “just” directing heat, perform calculations.

In practice, the pattern is represented as a grid of tiny elements whose "density" can vary continuously from solid silicon to empty space, resulting in a porous pattern. An algorithm iteratively adjusts the arrangement of these elements so that the solution of the heat-conduction equation gives the required output for a series of input excitations. The authors describe the process as topology optimization supported by a differentiable thermal solver and automatic differentiation, which enables stable calculation of gradients and speeds up the search through the design space. In such a framework, the computer does not "guess" randomly, but uses information about how a small change in geometry affects the result, systematically bringing the design closer to the target function.
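To make this loop more concrete, below is a minimal sketch of the ingredients named here: a differentiable heat-conduction solver whose conductivities depend on a continuous density field, with automatic differentiation supplying the gradients for a simple topology optimization. The 1D geometry, the SIMP-style density interpolation, the target flux value, and the use of the JAX library are illustrative assumptions, not the authors' actual pipeline.

```python
import jax
import jax.numpy as jnp

N = 32                      # number of interior nodes in a 1D toy "structure"
T_in, T_ref = 1.0, 0.0      # imposed input temperature and output reference
target_flux = 0.02          # desired heat flow at the output (arbitrary units)

def conductivity(rho):
    # SIMP-style interpolation (an assumption): density 0 -> near-void, 1 -> solid
    return 1e-3 + (1.0 - 1e-3) * rho**3

def solve_temperature(rho):
    # Steady-state finite-difference conduction with Dirichlet boundaries:
    # T_in imposed on the left, T_ref on the right; rho holds link densities.
    k = conductivity(rho)               # N + 1 link conductances
    main = k[:-1] + k[1:]               # tridiagonal balance at interior nodes
    A = jnp.diag(main) - jnp.diag(k[1:-1], 1) - jnp.diag(k[1:-1], -1)
    b = jnp.zeros(N).at[0].set(k[0] * T_in).at[-1].set(k[-1] * T_ref)
    return jnp.linalg.solve(A, b)

def output_flux(rho):
    # Heat flow collected at the output terminal held at T_ref
    T = solve_temperature(rho)
    return conductivity(rho)[-1] * (T[-1] - T_ref)

def loss(rho):
    return (output_flux(rho) - target_flux) ** 2

rho = jnp.full(N + 1, 0.5)              # start from a uniform half-density
grad_loss = jax.grad(loss)              # automatic differentiation of the solver
for _ in range(500):                    # plain projected gradient descent
    rho = jnp.clip(rho - 500.0 * grad_loss(rho), 0.0, 1.0)

print(float(output_flux(rho)), "target:", target_flux)
```

The same pattern, with a 2D solver, many ports, and a matrix-valued target instead of a single flux, is what inverse design means in this context: the gradient of the mismatch with respect to every density element tells the optimizer how to reshape the material.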

The scales are micrometric: the authors show designs comparable to a dust particle, but within that miniature surface, the geometry is extremely rich. Micropores and channels change local conductivity and create preferred “corridors” through which heat passes more easily. Thus, in analogy with electrical networks, it is possible to control how the heat flow is distributed towards the output ports. When the system is excited by given temperatures at the input, the heat seeks the path of least resistance, and the design is such that this path implements the coefficients of a given matrix. Ultimately, the mathematical function is not written in software, but in the arrangement of pores and “bridges” within the silicon.

Why matrix–vector multiplication is key for AI and signal processing

At the center of the demonstration is matrix–vector multiplication (MVM), a fundamental operation in signal processing, control, and machine learning. Much of the work of modern artificial intelligence models can be reduced to a huge number of such products, so industry and academia are constantly looking for ways to perform them faster and with less energy. Because of this, other analog platforms, such as memristor arrays or photonic chips, have become important research topics. Matrix–vector multiplication is also of interest outside of AI: it appears in signal filtering, control system stabilization, image processing, and in a range of diagnostic procedures where small samples of data are transformed into useful features. If such operations can be performed with minimal additional consumption, a tool is gained that unburdens the digital part of the system.

The MIT approach differs in that it uses heat as the signal carrier. The input vector is represented as a set of imposed temperatures at multiple input ports, while the output vector is read as a set of collected powers at output ports maintained at a reference temperature. This means that the input “enters” as a temperature pattern, and the output is “extracted” as a flow of energy that can be measured. Such a hybrid representation (temperatures at the input, powers at the output) is linked by the authors to the expansion of the concept of effective thermal conductivity to systems with multiple input and output points. In real devices, this opens up the possibility for part of the data processing to take place directly where heat is already being generated, for example, next to voltage regulators, dense logic, or peripheral sensor blocks. Instead of the hot spot being only detected and cooled, it could become a source of information that is processed immediately.
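As a toy illustration of this convention (not taken from the paper), the relationship between imposed input temperatures and collected output powers in a passive, linear heat-conducting structure can be written as a matrix of effective conductances acting on the temperature vector; the numbers below are made up purely to show the shape of the calculation.

```python
import numpy as np

G = np.array([[0.8, 0.1],      # effective conductances linking input ports
              [0.2, 0.7]])     # to output ports (arbitrary units)
T_in = np.array([3.0, 1.5])    # imposed input temperatures, relative to T_ref

P_out = G @ T_in               # heat flows read at the output terminals
print(P_out)                   # the structure has physically "computed" G @ T_in
```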

The problem of negative coefficients and the solution with separation of contributions

Heat conduction in its simplest description has a natural limitation: heat spontaneously flows from warmer to colder regions, so it is difficult to directly realize a "negative" contribution in a linear mapping. This is important because many matrices in signal processing, and especially those related to transforms, have both positive and negative coefficients. The researchers therefore split the target matrix into a positive and a negative part. Instead of one structure carrying the entire matrix, separate structures are designed that realize only non-negative contributions, and the result is then combined by subtracting the output powers to obtain the effective negative values in the total calculation. In this way, a limitation of the physics of heat flow is turned into a design step: negativity is not "invented" in the material, but achieved as the difference between two positive measurements.
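A sketch of that sign-handling step, with an arbitrary example matrix: the target is split into two non-negative parts, each of which could in principle be mapped to its own structure, and the signed result is recovered by subtracting the two output readings.

```python
import numpy as np

A = np.array([[ 1.0, -0.5],
              [-0.3,  2.0]])
A_pos = np.clip(A, 0.0, None)   # realizable by one heat-conducting structure
A_neg = np.clip(-A, 0.0, None)  # realizable by a second structure

x = np.array([2.0, 4.0])        # input temperature pattern
y = A_pos @ x - A_neg @ x       # subtract the two measured output powers
assert np.allclose(y, A @ x)    # recovers the signed matrix-vector product
```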

An additional degree of freedom is also given through the thickness of the structure. Thicker samples, with the same lateral arrangement of pores, can conduct more heat, which changes the effective weight of individual paths and expands the set of matrices that can be approximated. The authors state that finding the “right topology” for a given matrix is demanding because the optimization must simultaneously hit the desired mathematical relationship and avoid impractical or unstable shapes. For this purpose, they develop an optimization procedure that keeps the geometry close to the target mapping but prevents the appearance of “strange” parts that would increase heat leakage or create unwanted shortcuts. This, at least in simulations, results in a design that is mathematically faithful and physically meaningful.

Results: high accuracy on small matrices and examples of useful transformations

According to data from the paper, the system in simulations achieved an accuracy greater than 99 percent in most cases on a set of matrices of dimensions 2×2 and 3×3. Although these dimensions seem small at first glance, such operators are exactly what is relevant for a series of tasks in electronics, including local filters, simple transforms, and diagnostic procedures. The authors do not stop at abstract examples, but show matrices with a clear practical application. Among them are the Hadamard matrix, convolutional operators described by Toeplitz matrices, and the discrete Fourier transform, the foundation of frequency analysis in signal processing. In the context of chips, these are precisely the transformations that are performed a very large number of times, so even a small energy saving per operation is potentially significant.
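For concreteness, here are the standard definitions of two of the small operators mentioned above: the 2×2 Hadamard matrix and a 3×3 Toeplitz operator for a hypothetical two-tap averaging kernel (the kernel is an illustrative choice, not one of the paper's designs).

```python
import numpy as np

H2 = np.array([[1.0,  1.0],
               [1.0, -1.0]])           # 2x2 Hadamard matrix (note the -1 entry)

kernel = np.array([0.5, 0.5])          # hypothetical two-tap averaging kernel
T = np.array([[0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])        # Toeplitz matrix implementing that kernel

x = np.array([1.0, 2.0, 3.0])
assert np.allclose(T @ x, np.convolve(x, kernel)[:3])   # matrix product = convolution
```

The Hadamard matrix already contains a negative entry, which is exactly the situation handled by the positive/negative split described earlier.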

An important part of the demonstration is the way more complex objects are decomposed into multiple contributions. The discrete Fourier transform also includes imaginary components, so the target matrix is decomposed into multiple real matrices which are then realized by separate structures, and the result is combined afterward. The authors use this approach both for realizing signed coefficients and for controlling interconnected outputs, where the number of required structures grows with the "weight" of the task. In the examples, they compare the target and the obtained matrix and report deviations on the order of a percent, depending on the complexity and the chosen geometries. It is also important that this is a method which, at least as presented, does not rely on a single "magic" structure, but on a principle that can be applied to different matrices by adjusting the number of structures and the way their outputs are combined.
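The sketch below illustrates that kind of decomposition for a small discrete Fourier transform: the complex DFT matrix is split into real and imaginary parts, each further split into non-negative components that could each be assigned to a separate structure, and the outputs are recombined afterwards. The 3-point size and the software recombination are illustrative choices, not the paper's specific designs.

```python
import numpy as np

N = 3
W = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)  # DFT matrix

parts = {}
for name, M in (("re", W.real), ("im", W.imag)):
    parts[name + "+"] = np.clip(M, 0.0, None)   # one non-negative matrix per structure
    parts[name + "-"] = np.clip(-M, 0.0, None)

x = np.array([1.0, 2.0, 0.5])                   # real-valued input signal
re = parts["re+"] @ x - parts["re-"] @ x        # signed real part from two readings
im = parts["im+"] @ x - parts["im-"] @ x        # signed imaginary part likewise
assert np.allclose(re + 1j * im, W @ x)         # matches the complex DFT of x
```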

In the paper, they emphasize the difference compared to classic proposals for “thermal logic” that seek to build thermal diodes or transistors as an analogy to digital circuits. Such systems usually work with discrete “hot/cold” states and thus mimic digital switching. Here, the goal is a continuous regime, where geometry directs heat so that the desired linear relationship between input and output is obtained. Such continuity is essential because it allows signal processing without conversion into a series of digital switches, which is often a source of consumption and delay. At the same time, the authors do not hide that these are linear operations and that for broader forms of computing, additional nonlinearities or sequential connection of multiple blocks would be needed.

Applications in microelectronics: thermal mapping without additional consumption

The most direct application highlighted by the authors is not the immediate running of large artificial intelligence models, but smarter handling of heat in microelectronics. Devices today often contain multiple temperature sensors, because local hotspots are critical for reliability: large gradients can cause mechanical stresses, accelerated material aging, and failures. If part of the thermal mapping and gradient detection can be done passively, using heat itself as the signal, an additional layer of monitoring is gained without burdening the power supply and without taking up extra chip area for classic sensors. In such a scenario, the thermal metastructure acts as an "analyzer" of the thermal pattern, and the output terminals provide a measurable signal that can warn of an unwanted heat source or a change in operating conditions.

The approach is particularly interesting in scenarios where available energy is limited or where any additional heating worsens the problem to be monitored. Instead of adding a series of classic sensors and analog-to-digital converters to the chip, thermal structures can be integrated into the design that locally convert temperature patterns into output signals that can be read at edge terminals. These signals can then be used for load distribution, for cooling activation, or for early failure detection. The authors also mention the possibility of these blocks being “plugged” into existing systems without additional digital components, precisely because they use the heat that the system produces anyway. Ultimately, it is the idea of “in-situ” processing: processing takes place where the problem arises, instead of everything being measured, sent, and processed on a distant digital block.

In the broader context of heat research, MIT has in recent years published papers on tools for the automatic design of nanomaterials that direct heat flow, as well as methods that speed up the prediction of thermal properties of materials. The new work adds to this line the idea that heat flow does not have to be viewed only through the prism of cooling, but also as an information resource in electronics itself. Such a shift in perspective is especially relevant in an era when computing demand is rapidly increasing and energy efficiency becomes a limiting factor in data centers, portable devices, and industrial electronics. In this sense, “computing with heat” is not just an exotic demonstration, but an attempt to turn a physical byproduct into a functional design element.

Limitations and next steps: throughput, scaling, and programmability

Despite the high accuracy on small matrices, the road to scaling is long. To use this approach for large deep learning models, a large number of structures would need to be tiled and connected, along with control of noise, manufacturing tolerances, and material variability. In the paper itself, the authors also analyze dynamic limits, because the spreading of heat is not instantaneous: the signal propagates by diffusion, so throughput is limited by the time the system needs to "settle" after an input change. For an example structure with a lateral dimension of about 100 micrometers, they state a settling time of approximately 83.7 microseconds, which corresponds to a throughput on the order of about 1.9 kilohertz. Such figures are sufficient for some sensing and control tasks, but are far from the speeds expected in classic computing or in training large models.
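The diffusion-limited time scale can be checked with a back-of-the-envelope estimate: a structure of lateral size L settles on a time of roughly L² / α, where α is the thermal diffusivity. The diffusivity used below is an approximate bulk-silicon value; porous, nanostructured silicon conducts heat considerably worse, so this is only an order-of-magnitude consistency check, not a figure from the paper.

```python
L = 100e-6          # lateral size of the structure, metres
alpha = 9e-5        # approximate thermal diffusivity of bulk silicon, m^2/s

tau = L**2 / alpha  # characteristic diffusive settling time
print(f"tau = {tau*1e6:.0f} microseconds")   # ~110 us, same order as the ~84 us quoted
```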

Additionally, as the complexity of the matrix and the distance between input and output terminals increase, precision can drop and the optimization becomes more sensitive. As a natural upgrade, the authors mention developing an architecture in which the output of one structure can be used as the input of the next, which would enable sequential operations. This is important because neural networks and many algorithms do not perform just one product, but a series of successive transformations that together make up a model. Another important direction is programmability: instead of making a new structure for each matrix, the goal is an element that can be reconfigured and thus encode different matrices without starting from scratch. This could include active thermal elements, variable boundary conditions, or other mechanisms that change the effective conductivity and the geometric function.

If these steps prove feasible, thermal analog computing could become a specialized technology for thermally active environments. It would not be a replacement for classic processors, but an addition in parts of the system where heat is the dominant factor and where information is already “hidden” in the temperature field. In such niches, from hotspot detection to more compact thermal sensors and local signal processing, the idea of turning waste heat into useful information could open space for different, more energy-efficient electronics designs.

Sources:
  • arXiv – preprint “Thermal Analog Computing: Application to Matrix-vector Multiplication with Inverse-designed Metastructures” (PDF, version dated January 29, 2026) (link)
  • American Physical Society (Physical Review Applied) – page of accepted paper with DOI 10.1103/5drp-hrx1 (acceptance status; lists acceptance December 23, 2025) (link)
  • Zenodo – repository of data/software accompanying the paper (version published December 24, 2025) (link)
  • MIT News – article on the system for computer design of nanomaterials that direct heat conduction (background of the inverse design method) (link)
  • MIT News – text on methods for faster prediction of thermal properties of materials (broader research context) (link)



Science & tech desk

Our Science and Technology Editorial Desk was born from a long-standing passion for exploring, interpreting, and bringing complex topics closer to everyday readers. It is written by employees and volunteers who have followed the development of science and technological innovation for decades, from laboratory discoveries to solutions that change daily life. Although we write in the plural, every article is authored by a real person with extensive editorial and journalistic experience, and deep respect for facts and verifiable information.

Our editorial team bases its work on the belief that science is strongest when it is accessible to everyone. That is why we strive for clarity, precision, and readability, without oversimplifying in a way that would compromise the quality of the content. We often spend hours studying research papers, technical documents, and expert sources in order to present each topic in a way that will interest rather than burden the reader. In every article, we aim to connect scientific insights with real life, showing how ideas from research centres, universities, and technology labs shape the world around us.

Our long experience in journalism allows us to recognize what is truly important for the reader, whether it is progress in artificial intelligence, medical breakthroughs, energy solutions, space missions, or devices that enter our everyday lives before we even imagine their possibilities. Our view of technology is not purely technical; we are also interested in the human stories behind major advances – researchers who spend years completing projects, engineers who turn ideas into functional systems, and visionaries who push the boundaries of what is possible.

A strong sense of responsibility guides our work as well. We want readers to trust the information we provide, so we verify sources, compare data, and avoid rushing to publish when something is not fully clear. Trust is built more slowly than news is written, but we believe that only such journalism has lasting value.

To us, technology is more than devices, and science is more than theory. These are fields that drive progress, shape society, and create new opportunities for everyone who wants to understand how the world works today and where it is heading tomorrow. That is why we approach every topic with seriousness but also with curiosity, because curiosity opens the door to the best stories.

Our mission is to bring readers closer to a world that is changing faster than ever before, with the conviction that quality journalism can be a bridge between experts, innovators, and all those who want to understand what happens behind the headlines. In this we see our true task: to transform the complex into the understandable, the distant into the familiar, and the unknown into the inspiring.
