The Robot Report


MIT framework allows robots to learn faster in new environments

By Brianna Wessling | July 18, 2023

Researchers at MIT have developed a system that allows people without technical knowledge to fine-tune a robot’s ability to perform tasks. | Source: MIT

A group of researchers at MIT has developed a framework that could help robots learn faster in new environments without requiring technical knowledge from the user. The technique helps non-technical users understand why a robot failed to perform a task and then lets them fine-tune the robot with minimal effort.

The software is aimed at home robots that are built and trained in a factory on certain tasks but have never seen the items in a particular user's home. Although these robots are trained in controlled environments, they often fail when presented with objects and spaces they didn't encounter during training.

“Right now, the way we train these robots, when they fail, we don’t really know why. So you would just throw up your hands and say, ‘OK, I guess we have to start over.’ A critical component that is missing from this system is enabling the robot to demonstrate why it is failing so the user can give it feedback,” Andi Peng, an electrical engineering and computer science (EECS) graduate student at MIT, said.

Peng collaborated with other researchers at MIT, New York University, and the University of California at Berkeley on the project. 

To tackle this problem, the MIT team’s system uses an algorithm to generate counterfactual explanations whenever a robot fails. These counterfactual explanations describe what needed to change for the robot to succeed in its task.

The system then shows these counterfactuals to the user and asks for feedback on why the robot failed. It combines this feedback with the counterfactual explanations to generate new data it can use to fine-tune the robot. Fine-tuning means tweaking a machine-learning model that has already been trained to perform one task so that it can perform a second, similar task.
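The loop described in the article can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the authors' actual system: `generate_counterfactual`, `augment`, and `fine_tune` are stand-in names, and a scene is represented as a simple attribute dictionary.

```python
# Hypothetical sketch of the counterfactual-feedback loop: propose a minimal
# scene change for a failed task, ask the user whether that attribute matters,
# synthesize new examples along the irrelevant axis, and fine-tune.
# All names and data structures are illustrative, not the paper's API.

def generate_counterfactual(scene):
    """Stand-in for the algorithm's minimal-edit search: propose changing
    one attribute of the scene."""
    attribute = next(iter(scene))  # e.g. "color"
    changed = dict(scene, **{attribute: "<different value>"})
    return attribute, changed

def augment(scene, attribute, values):
    """Create synthetic scenes that vary only the flagged attribute."""
    return [dict(scene, **{attribute: v}) for v in values]

def fine_tune(model, dataset):
    """Placeholder fine-tuning step: just count examples consumed."""
    model = dict(model)
    model["seen_examples"] = model.get("seen_examples", 0) + len(dataset)
    return model

def feedback_loop(scene, model, user_irrelevant_attrs):
    attribute, counterfactual = generate_counterfactual(scene)
    # The counterfactual is shown to the user; if they confirm the attribute
    # does not matter for the task, it becomes an augmentation axis.
    if attribute in user_irrelevant_attrs:
        synthetic = augment(scene, attribute, ["red", "blue", "green"])
        model = fine_tune(model, synthetic)
    return model

scene = {"color": "white", "logo": "present", "shape": "mug"}
model = feedback_loop(scene, {"seen_examples": 0}, user_irrelevant_attrs={"color"})
print(model["seen_examples"])  # 3 synthetic examples consumed
```

The key design point the sketch captures is that the human only labels which attributes are irrelevant; the system does the heavy lifting of generating the extra training data.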

For example, imagine asking a home robot to pick up a mug with a logo on it from a table. Never having seen the logo, the robot might fail to grasp the mug. Traditional training methods would fix this kind of issue by having the user retrain the robot, demonstrating how to pick up that specific mug, but this method isn't very effective at teaching the robot to pick up any kind of mug.

“I don’t want to have to demonstrate with 30,000 mugs. I want to demonstrate with just one mug. But then I need to teach the robot so it recognizes that it can pick up a mug of any color,” Peng said.

The new framework, however, can take the user's demonstration and identify what about the situation would need to change for the robot to succeed, such as the color of the mug. These counterfactual explanations are presented to the user, who can then tell the system which elements are unimportant to the task, like the mug's color.

The system uses this information to generate new, synthetic data by changing these unimportant visual concepts through a process called data augmentation. 
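As an illustration of this kind of augmentation, the sketch below takes a single demonstration image and synthesizes variants that change only an attribute the user flagged as unimportant — here, color, via a simple channel permutation. This is a generic sketch under assumed conventions (NumPy RGB arrays), not the paper's actual pipeline.

```python
import numpy as np

def color_variants(image):
    """Return copies of `image` (H x W x 3, RGB) with the color channels
    permuted, leaving the geometry of the scene intact. Varying only the
    irrelevant attribute multiplies one demonstration into several."""
    perms = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]  # identity plus two rotations
    return [image[..., list(p)] for p in perms]

demo = np.zeros((4, 4, 3), dtype=np.uint8)
demo[..., 0] = 255                      # a pure-red "mug" patch
augmented = color_variants(demo)
print(len(augmented))  # 3 synthetic views from a single demonstration
```

A real pipeline would perturb richer visual concepts (texture, lighting, background), but the principle is the same: only the attributes the user marked as irrelevant are varied, so the task-relevant content survives in every synthetic example.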

MIT’s team tested the framework with human users, since it makes them an important part of the training loop. The team found that users could easily identify elements of a scenario that could be changed without affecting the task.

When tested in simulation, this system was able to learn new tasks faster than other techniques and with fewer demonstrations from users. 

The research was completed by Peng, the lead author, as well as co-authors Aviv Netanyahu, an EECS graduate student; Mark Ho, an assistant professor at the Stevens Institute of Technology; Tianmin Shu, an MIT postdoc; Andreea Bobu, a graduate student at UC Berkeley; and senior authors Julie Shah, an MIT professor of aeronautics and astronautics and the director of the Interactive Robotics Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Pulkit Agrawal, a professor in CSAIL.

This research is supported, in part, by a National Science Foundation Graduate Research Fellowship, Open Philanthropy, an Apple AI/ML Fellowship, Hyundai Motor Corporation, the MIT-IBM Watson AI Lab, and the National Science Foundation Institute for Artificial Intelligence and Fundamental Interactions.

About The Author

Brianna Wessling

Brianna Wessling is an Associate Editor, Robotics, WTWH Media. She joined WTWH Media in November 2021, after graduating from the University of Kansas with degrees in Journalism and English. She covers a wide range of robotics topics, but specializes in women in robotics, robotics in healthcare, and space robotics.

She can be reached at [email protected]

Copyright © 2025 WTWH Media LLC. All Rights Reserved. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of WTWH Media