
Revolution in robotics: MIT's new method allows machines to 'feel' the weight and softness of objects just by touching them

Researchers from MIT, Amazon Robotics, and the University of British Columbia have developed an innovative technique that allows robots to assess the weight, softness, and other physical properties of objects using only internal sensors and advanced differentiable simulation. The method, based on proprioception, works without external cameras.

Photo: Domagoj Skledar / own archive

In a world increasingly reliant on automation, the ability of robots to understand and interact with physical objects in their environment is becoming crucial. New research opens the door to a future where machines can assess the properties of objects, such as weight or softness, simply by lifting and shaking them, similar to how humans do. This advancement, stemming from a collaboration of scientists from prestigious institutions like the Massachusetts Institute of Technology (MIT), Amazon Robotics, and the University of British Columbia, promises to revolutionize how robots learn and operate in complex environments.


Senses from within: A new paradigm in robotic perception


Traditionally, robots have heavily relied on external sensors, such as cameras and computer vision systems, to gather information about objects. However, the new method shifts the focus to internal sensors, allowing robots to "feel" the physical properties of objects. This technique does not require expensive external measurement tools or cameras, making it extremely useful in situations where visibility is limited or where cameras might be less effective. Imagine a robot sorting objects in a dark basement or clearing debris after an earthquake – it is precisely in such scenarios that this innovation shows its full potential.


The core of this approach lies in the use of proprioception, the ability of a robot (or human) to sense its own movement and position in space. Just as a person lifting a weight in the gym feels its heaviness through the muscles and joints of their arm, a robot can "feel" the weight of an object through the multiple joints of its robotic arm. Researchers point out that while humans do not have extremely precise measurements of joint angles or the exact amount of torque they apply, robots possess these capabilities thanks to advanced sensors built into their motors.


How do robots "learn" by touch?


When a robot lifts an object, the system collects signals from joint encoders. Encoders are sensors that detect the rotational position and speed of joints during movement. Most modern robots already have encoders within the motors that drive their moving parts, making this technique more cost-effective compared to approaches that require additional components like tactile sensors or complex vision tracking systems.


The system relies on two key models to estimate object properties during interaction: one that simulates the robot itself and its movements, and another that simulates the object's dynamics. Peter Yichen Chen, a postdoctoral fellow at MIT and lead author of the paper on this technique, emphasizes the importance of having an accurate "digital twin" of the real world for the method's success. The algorithm observes the movement of the robot and the object during physical interaction and uses data from the joint encoders to work backward and identify the object's properties. For example, a heavier object will move more slowly than a lighter one if the robot applies the same amount of force.
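
The "heavier objects move more slowly" intuition can be made concrete with a small, hedged sketch in Python (the numbers and the single-force model below are illustrative assumptions, not the authors' pipeline): if the applied force is known and the encoders report position over time, differentiating those readings twice yields an acceleration from which the mass follows directly.

```python
import numpy as np

# Toy illustration (assumed numbers, not the authors' method): with a known
# applied force and joint-encoder positions sampled over time, the mass can be
# recovered from how quickly the object accelerates, via F = m * a.

dt = 0.01            # encoder sampling period [s] -- assumed
applied_force = 4.0  # net force the arm exerts on the object [N] -- assumed
true_mass = 2.0      # ground truth, used only to synthesize the encoder data

t = np.arange(0.0, 1.0, dt)
encoder_positions = 0.5 * (applied_force / true_mass) * t ** 2  # simulated readings

velocity = np.gradient(encoder_positions, dt)   # differentiate positions ...
acceleration = np.gradient(velocity, dt)        # ... and again for acceleration

estimated_mass = applied_force / np.mean(acceleration[5:-5])  # trim edge samples
print(f"estimated mass: {estimated_mass:.2f} kg")             # close to 2.0
```

A real robot arm involves many joints, friction, and gravity rather than a single straight-line push, which is why the researchers rely on a full simulation of the robot and the object instead of a one-line formula, but the underlying idea of inverting observed motion to recover a physical parameter is the same.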


This process allows the robot to accurately estimate parameters such as the object's mass in just a few seconds. The research team has shown that their technique is as good at estimating an object's mass as some more complex and expensive methods involving computer vision. An additional advantage is the robustness of the approach, which is data-efficient and can handle previously unseen scenarios in which the robot encounters objects it has not "met" before.


The power of differentiable simulation


A key element enabling this rapid and precise estimation is a technique called differentiable simulation. This advanced simulation process allows the algorithm to predict how small changes in object properties, such as mass or softness, affect the final position of the robot's joints. In other words, the simulation can "differentiate" the effects of different physical parameters on the robot's movement.
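
What "differentiate" means here can be illustrated with a hedged toy sketch (a closed-form point-mass model with assumed numbers, not the researchers' simulator): the sensitivity of the simulated outcome to the mass can be estimated by nudging the mass and re-running the model, and it matches the analytic derivative of the same expression.

```python
# Assumed toy model: a point mass pushed from rest by a constant force.
FORCE = 2.0      # applied force [N] -- assumed
DURATION = 1.0   # motion duration [s] -- assumed


def final_position(mass: float) -> float:
    """Closed-form displacement of a point mass pushed from rest."""
    return 0.5 * (FORCE / mass) * DURATION ** 2


mass = 1.0
eps = 1e-6

# Numerical sensitivity: perturb the mass slightly and see how the outcome shifts.
numerical = (final_position(mass + eps) - final_position(mass - eps)) / (2.0 * eps)

# Analytic derivative of the same expression: d x / d m = -0.5 * F * t^2 / m^2
analytic = -0.5 * FORCE * DURATION ** 2 / mass ** 2

print(numerical, analytic)   # both approximately -1.0
```

A differentiable simulator produces this kind of derivative automatically for every physical parameter, which is what lets the algorithm adjust mass or softness until the simulated joint trajectory matches the measured one.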


To build these complex simulations, the researchers used the NVIDIA Warp library, an open-source tool for developers that supports differentiable simulations. Warp allows developers to write GPU-accelerated programs for simulation, artificial intelligence, and machine learning directly in Python, offering performance comparable to native CUDA code while maintaining Python's productivity. Once the differentiable simulation aligns with the robot's actual movements, the system has successfully identified the correct property. The algorithm can achieve this in a few seconds and needs only one real-world trajectory of the robot in motion to perform the calculations.
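
As a rough illustration of how such a fit might look in Warp (a minimal sketch under assumed toy dynamics, not the authors' code), the example below uses wp.Tape to record a kernel that predicts displacement from a candidate mass, back-propagates a trajectory-matching loss, and nudges the mass until simulation and "measurement" agree. The force, duration, and ground-truth mass are invented for the illustration.

```python
import warp as wp

wp.init()

FORCE = 2.0        # constant applied force [N] -- assumed
DURATION = 1.0     # length of the recorded motion [s] -- assumed
TRUE_MASS = 1.5    # ground truth, used only to synthesize the "measurement"


@wp.kernel
def predict_loss(mass: wp.array(dtype=float),
                 force: float,
                 duration: float,
                 measured_x: float,
                 loss: wp.array(dtype=float)):
    tid = wp.tid()
    # Closed-form displacement of a point mass pushed from rest:
    # x = 0.5 * (F / m) * t^2  (a stand-in for a full robot/object simulation)
    x = 0.5 * (force / mass[tid]) * duration * duration
    d = x - measured_x
    loss[tid] = d * d   # squared trajectory-matching error


# Synthetic "observed" displacement, standing in for joint-encoder data.
measured_x = 0.5 * (FORCE / TRUE_MASS) * DURATION * DURATION

mass_guess = 1.0
learning_rate = 0.5

for _ in range(100):
    mass = wp.array([mass_guess], dtype=float, requires_grad=True)
    loss = wp.zeros(1, dtype=float, requires_grad=True)

    tape = wp.Tape()
    with tape:
        wp.launch(predict_loss, dim=1,
                  inputs=[mass, FORCE, DURATION, measured_x, loss])

    # Reverse-mode pass: gradient of the loss with respect to the mass.
    tape.backward(loss=loss)
    grad = mass.grad.numpy()[0]

    mass_guess -= learning_rate * grad   # gradient-descent update

print(f"estimated mass: {mass_guess:.3f} kg (true value {TRUE_MASS} kg)")
```

In the actual system the closed-form expression would be replaced by a full differentiable simulation of the arm and the object, but the optimization loop keeps the same shape: simulate, compare with encoder data, back-propagate, and update the parameter.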


Chao Liu, also a postdoctoral fellow at MIT and one of the co-authors of the study, explains: "Technically, as long as you know the model of the object and how the robot can apply force to that object, you should be able to figure out the parameter you want to identify." Although the researchers primarily focused on learning an object's mass and softness, their technique has the potential to determine other properties as well, such as moment of inertia or the viscosity of a liquid inside a container.


Advantages and future directions


One of the significant advantages of this approach is its independence from extensive training datasets, unlike some methods that rely on computer vision or external sensors. This makes it less prone to failure when faced with unfamiliar environments or new objects. Robots equipped with this capability could be considerably more adaptable and resourceful.


In the future, the research team plans to combine their method with computer vision to create a multimodal perception technique that would be even more powerful. "This work is not trying to replace computer vision. Both methods have their pros and cons. But here we've shown that even without a camera, we can already figure out some of these properties," says Chen. Integrating different sensory modalities could lead to robots with extremely sophisticated environmental perception.


There is also interest in exploring applications with more complicated robotic systems, such as soft robots, whose flexible bodies present unique challenges and opportunities for sensory interaction. Similarly, there are plans to extend the technique to more complex objects, including sloshing liquids or granular media like sand. Understanding the dynamics of such materials solely through tactile interaction would be a significant step forward.


The long-term goal is to apply this technique to enhance robot learning, enabling future generations of robots to quickly develop new manipulation skills and adapt to changes in their environments. "Determining the physical properties of objects from data has long been a challenge in robotics, especially when only limited or noisy measurements are available," commented Miles Macklin, senior director of simulation technology at NVIDIA, who was not involved in this research. "This work is significant because it shows that robots can accurately infer properties like mass and softness using only their internal joint sensors, without relying on external cameras or specialized measurement tools."


This advancement opens up a vision of robots independently exploring the world, touching and moving objects in their environment, and thereby learning about the properties of everything they interact with. Such a capability would not only advance industrial automation but also have a profound impact on areas like household assistance, medical care, and research in hazardous environments. The ability of robots to "feel" and understand the world around them in a more intuitive, human-like way is key to their fuller integration into our daily lives. Funding for this promising work was partly provided by Amazon and the GIST-CSAIL research program, signaling industry interest in the practical applications of such technologies.


The development of such technologies also encourages reflection on the future of human-robot interaction. As robots become increasingly capable of perceiving and reacting to their environment in subtle ways, new possibilities for collaboration and teamwork open up. Peter Yichen Chen's vision of robots independently exploring and learning about object properties through touch is not just a technical goal, but also a step towards creating more intelligent and autonomous systems that can help humanity solve complex problems.

Source: Massachusetts Institute of Technology


Creation time: 9 May 2025

Science & tech desk

Our Science and Technology Editorial Desk was born from a long-standing passion for exploring, interpreting, and bringing complex topics closer to everyday readers. It is written by employees and volunteers who have followed the development of science and technological innovation for decades, from laboratory discoveries to solutions that change daily life. Although we write in the plural, every article is authored by a real person with extensive editorial and journalistic experience, and deep respect for facts and verifiable information.

Our editorial team bases its work on the belief that science is strongest when it is accessible to everyone. That is why we strive for clarity, precision, and readability, without oversimplifying in a way that would compromise the quality of the content. We often spend hours studying research papers, technical documents, and expert sources in order to present each topic in a way that will interest rather than burden the reader. In every article, we aim to connect scientific insights with real life, showing how ideas from research centres, universities, and technology labs shape the world around us.

Our long experience in journalism allows us to recognize what is truly important for the reader, whether it is progress in artificial intelligence, medical breakthroughs, energy solutions, space missions, or devices that enter our everyday lives before we even imagine their possibilities. Our view of technology is not purely technical; we are also interested in the human stories behind major advances – researchers who spend years completing projects, engineers who turn ideas into functional systems, and visionaries who push the boundaries of what is possible.

A strong sense of responsibility guides our work as well. We want readers to trust the information we provide, so we verify sources, compare data, and avoid rushing to publish when something is not fully clear. Trust is built more slowly than news is written, but we believe that only such journalism has lasting value.

To us, technology is more than devices, and science is more than theory. These are fields that drive progress, shape society, and create new opportunities for everyone who wants to understand how the world works today and where it is heading tomorrow. That is why we approach every topic with seriousness but also with curiosity, because curiosity opens the door to the best stories.

Our mission is to bring readers closer to a world that is changing faster than ever before, with the conviction that quality journalism can be a bridge between experts, innovators, and all those who want to understand what happens behind the headlines. In this we see our true task: to transform the complex into the understandable, the distant into the familiar, and the unknown into the inspiring.
