New MIT method: Robots sense objects | Karlobag.eu

Revolution in robotics: MIT's new method allows machines to 'feel' the weight and softness of objects just by touching them

Researchers from MIT, Amazon Robotics and the University of British Columbia have developed an innovative technique that allows robots to assess the weight, softness and other physical properties of objects using only internal sensors and advanced differentiable simulation. Because the method is based on proprioception, it works entirely without external cameras.

Photo: Domagoj Skledar / own archive

In a world increasingly reliant on automation, the ability of robots to understand and interact with physical objects in their environment is becoming crucial. New research opens the door to a future where machines can assess the properties of objects, such as weight or softness, simply by lifting and shaking them, similar to how humans do. This advancement, stemming from a collaboration of scientists from prestigious institutions like the Massachusetts Institute of Technology (MIT), Amazon Robotics, and the University of British Columbia, promises to revolutionize how robots learn and operate in complex environments.


Senses from within: A new paradigm in robotic perception


Traditionally, robots have heavily relied on external sensors, such as cameras and computer vision systems, to gather information about objects. However, the new method shifts the focus to internal sensors, allowing robots to "feel" the physical properties of objects. This technique does not require expensive external measurement tools or cameras, making it extremely useful in situations where visibility is limited or where cameras might be less effective. Imagine a robot sorting objects in a dark basement or clearing debris after an earthquake – it is precisely in such scenarios that this innovation shows its full potential.


The core of this approach lies in the use of proprioception, the ability of a robot (or human) to sense its own movement and position in space. Just as a person lifting a weight in the gym feels its heaviness through the muscles and joints of their arm, a robot can "feel" the weight of an object through the multiple joints of its robotic arm. Researchers point out that while humans do not have extremely precise measurements of joint angles or the exact amount of torque they apply, robots possess these capabilities thanks to advanced sensors built into their motors.


How do robots "learn" by touch?


When a robot lifts an object, the system collects signals from joint encoders. Encoders are sensors that detect the rotational position and speed of joints during movement. Most modern robots already have encoders within the motors that drive their moving parts, making this technique more cost-effective compared to approaches that require additional components like tactile sensors or complex vision tracking systems.
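
To make this concrete, the recorded interaction can be pictured as a per-joint time series of encoder readings. The minimal schema below is a hypothetical illustration of such data, not something taken from the paper.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class JointSample:
        """One encoder reading for a single joint at one instant (hypothetical schema)."""
        time: float      # seconds since the start of the interaction
        position: float  # joint angle in radians, reported by the encoder
        velocity: float  # joint angular velocity in rad/s
        torque: float    # motor torque applied at that instant, in N*m

    # A recorded interaction is simply the history of such samples for every joint
    # while the robot lifts and shakes the object; no camera frames are involved.
    Trajectory = List[List[JointSample]]  # indexed as trajectory[joint][time_step]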


The system relies on two key models to estimate object properties during interaction: one that simulates the robot itself and its movements, and another that simulates the object's dynamics. Peter Yichen Chen, a postdoctoral fellow at MIT and lead author of the paper on this technique, emphasizes the importance of having an accurate "digital twin" of the real world for the method's success. The algorithm observes the movement of the robot and the object during physical interaction and uses data from the joint encoders to reverse-calculate and identify the object's properties. For example, a heavier object will move slower than a lighter one if the robot applies the same amount of force.
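
As a rough sketch of that idea (not the authors' implementation), the snippet below fits a single unknown mass by repeatedly simulating a one-joint "digital twin" and matching its predicted joint angles against the recorded encoder trajectory. It uses a generic bounded optimizer from SciPy instead of the paper's gradient-based machinery, and every model detail and number is invented for illustration.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Toy "digital twin": one horizontal joint whose arm inertia is known from the
    # robot model, holding an object of unknown mass at the end of the arm.
    ARM_INERTIA = 0.3   # kg*m^2, assumed known from the robot's own model
    ARM_LENGTH = 0.5    # m, joint axis to the grasped object (assumed)
    DT = 0.002          # integration step, seconds

    def simulate_joint_angles(object_mass, applied_torques):
        """Predict the encoder trajectory for a candidate object mass."""
        theta, omega = 0.0, 0.0
        inertia = ARM_INERTIA + object_mass * ARM_LENGTH ** 2
        angles = np.empty(len(applied_torques))
        for i, tau in enumerate(applied_torques):
            omega += (tau / inertia) * DT   # heavier object -> larger inertia -> slower motion
            theta += omega * DT
            angles[i] = theta
        return angles

    def identify_mass(measured_angles, applied_torques):
        """Find the mass whose simulated trajectory best matches the encoder data."""
        def trajectory_error(mass):
            predicted = simulate_joint_angles(mass, applied_torques)
            return np.mean((predicted - measured_angles) ** 2)
        return minimize_scalar(trajectory_error, bounds=(0.05, 10.0), method="bounded").x

    # Fabricated demo: pretend a 2.3 kg object was shaken with a sinusoidal torque
    # while the joint encoders recorded the resulting angles.
    t = np.arange(0, 1.0, DT)
    torques = 8.0 * np.sin(2.0 * np.pi * 2.0 * t)
    measured = simulate_joint_angles(2.3, torques)          # stands in for real encoder data
    print(f"estimated mass: {identify_mass(measured, torques):.2f} kg")  # should land near 2.3

The same fit-the-trajectory principle extends to multi-joint arms and to parameters such as softness; the differentiable simulation described below is what makes that search fast and precise.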


This process allows the robot to accurately estimate parameters such as the object's mass in just a few seconds. The research team has shown that their technique estimates an object's mass as accurately as some more complex and expensive methods involving computer vision. An additional advantage is the approach's robustness: it is data-efficient and copes with many kinds of unseen scenarios, including objects the robot has never encountered before.


The power of differentiable simulation


A key element enabling this rapid and precise estimation is a technique called differentiable simulation. This advanced simulation process allows the algorithm to predict how small changes in object properties, such as mass or softness, affect the final position of the robot's joints. In other words, the simulation itself can be differentiated, so the algorithm can compute how sensitive the robot's motion is to each physical parameter.


To build these complex simulations, the researchers used the NVIDIA Warp library, an open-source tool for developers that supports differentiable simulations. Warp allows developers to write GPU-accelerated programs for simulation, artificial intelligence, and machine learning directly in Python, offering performance comparable to native CUDA code while maintaining Python's productivity. Once the differentiable simulation aligns with the robot's actual movements, the system has successfully identified the correct property. The algorithm can achieve this in a few seconds and needs only one real-world trajectory of the robot in motion to perform the calculations.
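
To give a flavor of how such a rollout can be expressed with Warp, here is a minimal, self-contained sketch built on the same toy single-joint model as above. It is not the researchers' code: the tape simply records a forward rollout driven by a candidate mass and then backpropagates the trajectory error, yielding the gradient of that error with respect to the mass, which an optimizer would use to refine the estimate.

    import numpy as np
    import warp as wp

    wp.init()

    DT = 0.005           # integration step, seconds
    NUM_STEPS = 200      # a one-second interaction
    ARM_INERTIA = 0.3    # kg*m^2, assumed known from the robot model
    ARM_LENGTH = 0.5     # m, joint axis to the grasped object (assumed)

    @wp.kernel
    def integrate_step(step: int, dt: float, arm_inertia: float, arm_length: float,
                       mass: wp.array(dtype=float), torques: wp.array(dtype=float),
                       omega: wp.array(dtype=float), theta: wp.array(dtype=float)):
        # One explicit-Euler step of a single joint carrying the candidate mass.
        # Each step writes a fresh array slot so the tape can replay the rollout in reverse.
        inertia = arm_inertia + mass[0] * arm_length * arm_length
        omega[step + 1] = omega[step] + (torques[step] / inertia) * dt
        theta[step + 1] = theta[step] + omega[step] * dt

    @wp.kernel
    def trajectory_loss(theta: wp.array(dtype=float), measured: wp.array(dtype=float),
                        loss: wp.array(dtype=float)):
        # Accumulate the squared error between simulated and recorded joint angles.
        i = wp.tid()
        err = theta[i + 1] - measured[i]
        wp.atomic_add(loss, 0, err * err)

    # Fabricated "encoder" data: a 2.3 kg object shaken with a sinusoidal torque,
    # generated with the same toy dynamics so the demo is self-contained.
    torques_np = 8.0 * np.sin(2.0 * np.pi * 2.0 * np.arange(NUM_STEPS) * DT)
    true_inertia = ARM_INERTIA + 2.3 * ARM_LENGTH ** 2
    om, th, measured_np = 0.0, 0.0, np.empty(NUM_STEPS)
    for k in range(NUM_STEPS):
        new_om = om + torques_np[k] / true_inertia * DT
        th, om = th + om * DT, new_om
        measured_np[k] = th

    mass = wp.array([1.0], dtype=float, requires_grad=True)   # initial guess: 1 kg
    torques = wp.array(torques_np, dtype=float)
    measured = wp.array(measured_np, dtype=float)
    omega = wp.zeros(NUM_STEPS + 1, dtype=float, requires_grad=True)
    theta = wp.zeros(NUM_STEPS + 1, dtype=float, requires_grad=True)
    loss = wp.zeros(1, dtype=float, requires_grad=True)

    tape = wp.Tape()
    with tape:
        for i in range(NUM_STEPS):
            wp.launch(integrate_step, dim=1,
                      inputs=[i, DT, ARM_INERTIA, ARM_LENGTH, mass, torques, omega, theta])
        wp.launch(trajectory_loss, dim=NUM_STEPS, inputs=[theta, measured, loss])
    tape.backward(loss=loss)

    # With a too-light initial guess the simulated arm swings too far, so the gradient
    # should come out negative; a descent step therefore raises the estimate toward 2.3 kg.
    print("loss:", loss.numpy()[0], " d(loss)/d(mass):", mass.grad.numpy()[0])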


Chao Liu, also a postdoctoral fellow at MIT and one of the co-authors of the study, explains: "Technically, as long as you know the model of the object and how the robot can apply force to that object, you should be able to figure out the parameter you want to identify." Although the researchers primarily focused on learning an object's mass and softness, their technique has the potential to determine other properties as well, such as moment of inertia or the viscosity of a liquid inside a container.


Advantages and future directions


One of the significant advantages of this approach is its independence from extensive training datasets, unlike some methods that rely on computer vision or external sensors. This makes it less prone to failure when faced with unfamiliar environments or new objects. Robots equipped with this capability could be considerably more adaptable and resourceful.


In the future, the research team plans to combine their method with computer vision to create a multimodal perception technique that would be even more powerful. "This work is not trying to replace computer vision. Both methods have their pros and cons. But here we've shown that even without a camera, we can already figure out some of these properties," says Chen. Integrating different sensory modalities could lead to robots with extremely sophisticated environmental perception.


There is also interest in exploring applications with more complicated robotic systems, such as soft robots, whose flexible bodies present unique challenges and opportunities for sensory interaction. Similarly, there are plans to extend the technique to more complex objects, including sloshing liquids or granular media like sand. Understanding the dynamics of such materials solely through tactile interaction would be a significant step forward.


The long-term goal is to apply this technique to enhance robot learning, enabling future generations of robots to quickly develop new manipulation skills and adapt to changes in their environments. "Determining the physical properties of objects from data has long been a challenge in robotics, especially when only limited or noisy measurements are available," commented Miles Macklin, senior director of simulation technology at NVIDIA, who was not involved in this research. "This work is significant because it shows that robots can accurately infer properties like mass and softness using only their internal joint sensors, without relying on external cameras or specialized measurement tools."


This advancement opens up a vision of robots independently exploring the world, touching and moving objects in their environment, and thereby learning about the properties of everything they interact with. Such a capability would not only advance industrial automation but also have a profound impact on areas like household assistance, medical care, and research in hazardous environments. The ability of robots to "feel" and understand the world around them in a more intuitive, human-like way is key to their fuller integration into our daily lives. Funding for this promising work was partly provided by Amazon and the GIST-CSAIL research program, signaling industry interest in the practical applications of such technologies.


The development of such technologies also encourages reflection on the future of human-robot interaction. As robots become increasingly capable of perceiving and reacting to their environment in subtle ways, new possibilities for collaboration and teamwork open up. Peter Yichen Chen's vision of robots independently exploring and learning about object properties through touch is not just a technical goal, but also a step towards creating more intelligent and autonomous systems that can help humanity solve complex problems.

Source: Massachusetts Institute of Technology



AI Lara Teč

