Researchers at the Massachusetts Institute of Technology have developed a robotic system designed to identify and prioritize objects in a physical environment based on their relevance to human objectives. The system, called “Relevance,” aims to improve how robots interpret and respond to human needs by filtering visual and audio inputs to focus on contextually important items, potentially enhancing their utility in settings such as households, workplaces, and warehouses.
The system draws inspiration from the human brain’s Reticular Activating System (RAS), which helps manage sensory input by filtering out extraneous information. Similarly, the Relevance method enables a robot to selectively process scene elements by integrating a suite of artificial intelligence tools. These tools include large language models for processing spoken language and algorithms for recognizing and classifying objects, actions, and task goals.
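The article does not describe the team's models in detail, but the general idea of combining an interpreted spoken request with object detections can be sketched as follows. The goal categories, object classes, and scoring rule below are hypothetical stand-ins for the large language model and classifiers the researchers actually use.

```python
# Hypothetical sketch: score detected objects against a goal inferred from speech.
# The goal/object vocabulary and the scoring rule are illustrative assumptions,
# not the researchers' actual models.

# Assumed association strengths between inferred goals and object classes.
GOAL_OBJECT_AFFINITY = {
    "make_coffee": {"mug": 0.9, "coffee_pod": 0.95, "spoon": 0.6, "stapler": 0.05},
    "prepare_cereal": {"bowl": 0.9, "milk_carton": 0.9, "spoon": 0.8, "mug": 0.2},
}

def rank_relevant_objects(goal: str, detections: list[tuple[str, float]],
                          threshold: float = 0.5) -> list[tuple[str, float]]:
    """Combine a goal inferred from speech with object detections.

    detections: (class_label, detector_confidence) pairs from a vision model.
    Returns objects whose combined relevance exceeds the threshold, best first.
    """
    affinities = GOAL_OBJECT_AFFINITY.get(goal, {})
    scored = [
        (label, affinities.get(label, 0.0) * conf)  # relevance = affinity x detection confidence
        for label, conf in detections
    ]
    return sorted([s for s in scored if s[1] >= threshold], key=lambda s: -s[1])

if __name__ == "__main__":
    detections = [("mug", 0.97), ("coffee_pod", 0.88), ("stapler", 0.92)]
    print(rank_relevant_objects("make_coffee", detections))
    # -> [('mug', 0.873), ('coffee_pod', 0.836)] -- the stapler falls below threshold
```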
The system operates in four stages. First, it runs a passive perception phase, continuously collecting and processing background data from the scene. A trigger check then detects the presence of a human, which activates the core relevance algorithm. That algorithm estimates how likely each object class, and each individual item, is to be useful given the interpreted human intent. In the final stage, the robot plans and executes motions to reach the selected items and offer them to the user.
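As a rough illustration of that four-stage flow, the loop below shows one way such a pipeline might be organized. Every stage function here is a placeholder stub, assumed for illustration rather than taken from the team's implementation.

```python
# Illustrative four-stage loop: passive perception -> trigger check ->
# relevance determination -> action. All stage functions are placeholder
# stubs standing in for the perception, language, and planning components.

def perceive_background() -> dict:
    """Stage 1: passively accumulate scene data (object detections, speech, etc.)."""
    return {"objects": [("mug", 0.97), ("coffee_pod", 0.88)],
            "speech": "I'd like some coffee"}

def human_trigger_present(scene: dict) -> bool:
    """Stage 2: cheap check for a person or a request before heavier reasoning runs."""
    return bool(scene.get("speech"))

def determine_relevance(scene: dict) -> list[str]:
    """Stage 3: infer the goal from speech and pick the relevant objects."""
    goal = "make_coffee" if "coffee" in scene["speech"].lower() else "unknown"
    return [label for label, _ in scene["objects"]] if goal == "make_coffee" else []

def act_on(items: list[str]) -> None:
    """Stage 4: reach for and offer the selected items (stubbed as a print)."""
    print(f"Offering items to the user: {items}")

def run_once() -> None:
    scene = perceive_background()
    if human_trigger_present(scene):
        act_on(determine_relevance(scene))

if __name__ == "__main__":
    run_once()
```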
The research team conducted a series of tests using a robotic arm in a scenario modeled after a conference breakfast buffet. In these tests, the system predicted human objectives with 90 percent accuracy and identified the relevant objects with 96 percent accuracy, and it reduced task-related collisions by more than 60 percent compared with approaches that do not use relevance filtering.
The results suggest that the Relevance system may facilitate more intuitive and efficient human-robot interactions without requiring constant verbal instructions. The research team plans to explore applications in manufacturing and warehouse environments and to evaluate performance in everyday household contexts.
The work will be presented at the IEEE International Conference on Robotics and Automation (ICRA) in May and builds on a previous paper presented at the same conference in the prior year. The research was led by mechanical engineering professor Kamal Youcef-Toumi, along with graduate students Xiaotong Zhang and Dingcheng Huang.