Researchers at the Massachusetts Institute of Technology (MIT) have developed a method that simplifies ‘contact-rich manipulation planning’ in robots: the intricate process of reasoning over the many potential contact events that arise when a robot interacts with an object.
Humans adeptly use their whole bodies to manipulate objects, such as when carrying a heavy box. For robots, however, reasoning about every potential point of contact with an object has traditionally been an enormous computational challenge.
The MIT team’s breakthrough involves smoothing, a technique that averages many potential contact events into a smaller number of decisions. This simplification allows even a straightforward algorithm to rapidly produce an effective manipulation plan for a robot.
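To make the idea concrete, here is a minimal sketch of randomized smoothing on a toy one-dimensional contact model. This is an illustration, not the team’s code: the wall model, noise scale, and function names are invented for the example.

```python
import numpy as np

def contact_force(x):
    """Toy 1-D contact model: a wall at x = 0 pushes back only when
    the robot penetrates it (x < 0). The force is zero everywhere else,
    so a planner relying on local gradients gets no signal near contact."""
    return np.where(x < 0.0, -10.0 * x, 0.0)  # stiff repulsion inside the wall

def smoothed_contact_force(x, sigma=0.1, n_samples=2000, seed=0):
    """Randomized smoothing: average the discontinuous force over Gaussian
    noise on the position. The averaged force varies smoothly with x, so
    it already 'feels' the wall from a short distance away."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=n_samples)
    return float(np.mean(contact_force(x + noise)))

# Just outside contact, the raw model reports exactly zero force, while the
# smoothed model reports a small push-back force, giving a usable gradient.
print(float(contact_force(0.05)))    # 0.0 -- no signal to plan with
print(smoothed_contact_force(0.05))  # small positive value -- the wall is 'felt'
```

Because the averaged force varies smoothly, a simple gradient-based planner can decide whether and where to make contact without enumerating every possibility.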
This approach holds significant promise for industrial applications. For instance, factories might soon deploy smaller, versatile robots that use their entire arms or bodies to manipulate items, rather than the current standard of large robotic arms limited to fingertip grasping; such a shift could reduce energy usage and operational costs. The technology could also benefit robots on space exploration missions, enabling them to adapt quickly to extraterrestrial terrain using only their onboard computers.
H.J. Terry Suh, an electrical engineering and computer science (EECS) graduate student at MIT and a co-lead author of the research, emphasized the importance of the development: “If we can leverage the structure of these robotic systems using models, there’s an opportunity to speed up the whole decision-making procedure.”
The research, set to appear in IEEE Transactions on Robotics, was conducted in collaboration with Tao Pang PhD ’23 of the Boston Dynamics AI Institute, fellow EECS graduate student Lujie Yang, and senior author Russ Tedrake of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
The MIT team took a deep look at reinforcement learning, a method in which an agent learns a task through trial and error. While reinforcement learning has proven effective for contact-rich manipulation, its trial-and-error search is computationally expensive. Through careful analysis, the researchers identified that smoothing, which averages away unimportant decisions, is integral to reinforcement learning’s efficiency: by trying many possible contacts and averaging the results, it smooths the problem implicitly.
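That connection can be sketched in a few lines of code. The example below is an illustration under simplifying assumptions, not the paper’s algorithm: it uses a standard zeroth-order, RL-style gradient estimator that perturbs a decision variable with random noise, observes the resulting costs, and averages. The quantity it computes is the gradient of a smoothed cost, which is why trial-and-error search copes with discontinuous contact dynamics, at the price of many samples.

```python
import numpy as np

def cost(x):
    """Non-smooth cost with a contact-like discontinuity at x = 0."""
    return abs(x) + (0.5 if x < 0 else 0.0)

def rl_style_gradient(x, sigma=0.1, n_samples=2000, seed=0):
    """Zeroth-order (score-function) gradient estimate of the smoothed cost:
    sample random perturbations, weight each observed cost by its noise, and
    average. Many RL methods do essentially this, which is why they end up
    optimizing a smoothed version of the problem."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, sigma, size=n_samples)
    costs = np.array([cost(x + wi) for wi in w])
    return float(np.mean((costs - costs.mean()) * w) / sigma**2)

# Plain gradient descent on the smoothed cost settles near the kink at x = 0,
# even though the raw cost is discontinuous there and has no usable gradient.
x = 1.0
for _ in range(60):
    x -= 0.05 * rl_style_gradient(x)
print(round(x, 2))  # settles close to x = 0
```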
By integrating this smoothing directly into their model, the team reduced computation times significantly. In tests, their model matched the performance of reinforcement learning in simulations while requiring substantially less time.
Despite this progress, the current model has limitations, particularly with highly dynamic motions such as falling objects. The team is now working to refine the technique to address such challenges.
The research received funding from various entities, including Amazon, MIT Lincoln Laboratory, the National Science Foundation, and the Ocado Group.
Photo: H.J. Terry Suh, Lujie Yang, Russ Tedrake, et al.