The Robot Report

MIT CSAIL teaches robots to do chores using Real-to-Sim-to-Real

By The Robot Report Staff | July 30, 2024

To work in a wide range of real-world conditions, robots need to learn generalist policies. To that end, researchers at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory, or MIT CSAIL, have created a Real-to-Sim-to-Real model.

The goal of many developers is to create hardware and software so that robots can work everywhere under all conditions. However, a robot that operates in one person’s home doesn’t need to know how to operate in all of the neighboring homes.

MIT CSAIL’s team chose to focus on RialTo, a method to easily train robot policies for specific environments. The researchers said it improved policies by 67% over imitation learning with the same number of demonstrations.

The team taught the system to perform everyday tasks, such as opening a toaster, placing a book on a shelf, putting a plate on a rack, placing a mug on a shelf, opening a drawer, and opening a cabinet.

“We aim for robots to perform exceptionally well under disturbances, distractions, varying lighting conditions, and changes in object poses, all within a single environment,” said Marcel Torne Villasevil, MIT CSAIL research assistant in the Improbable AI lab and lead author on a new paper about the work.

“We propose a method to create digital twins on the fly using the latest advances in computer vision,” he explained. “With just their phones, anyone can capture a digital replica of the real world, and the robots can train in a simulated environment much faster than the real world, thanks to GPU parallelization. Our approach eliminates the need for extensive reward engineering by leveraging a few real-world demonstrations to jumpstart the training process.”



RialTo builds policies from reconstructed scenes

Torne’s vision is exciting, but RialTo is more complicated than simply waving a phone and summoning a home robot. First, the user scans the chosen environment with their phone, using tools like NeRFStudio, ARCode, or Polycam.

Once the scene is reconstructed, users can upload it to RialTo’s interface to make detailed adjustments, add necessary joints to the robots, and more. 

Next, the refined scene is exported and brought into the simulator. Here, the goal is to create a policy based on real-world actions and observations. These real-world demonstrations are replicated in the simulation, providing valuable data for reinforcement learning (RL).

“This helps in creating a strong policy that works well in both the simulation and the real world,” said Torne. “An enhanced algorithm using reinforcement learning helps guide this process, to ensure the policy is effective when applied outside of the simulator.”
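The pipeline described above can be sketched in pseudocode-style Python. This is a toy illustration, not the actual RialTo code: the class and function names are hypothetical, the "simulator" is a random jitter over demonstration states, and the "RL" step is a stand-in random search over a one-parameter policy.

```python
# Illustrative sketch of a real-to-sim-to-real loop: replay real-world
# demonstrations in a simulator with randomized conditions, then improve
# a policy against the combined data. Names are hypothetical.
from dataclasses import dataclass
import random

@dataclass
class Demo:
    states: list   # scalar observations, for illustration only
    actions: list  # boolean actions matching each state

def replay_in_sim(demos, noise=0.05):
    """Replicate real demos in simulation, jittering observations to
    mimic randomized object poses (a crude domain randomization)."""
    replayed = []
    for demo in demos:
        jittered = [s + random.uniform(-noise, noise) for s in demo.states]
        replayed.append(Demo(jittered, list(demo.actions)))
    return replayed

def train_policy(demos, iterations=100):
    """Toy stand-in for RL fine-tuning: search for the threshold policy
    that best reproduces the demonstrated actions."""
    best_t, best_score = 0.5, -1
    for _ in range(iterations):
        t = random.uniform(0.0, 1.0)
        score = sum(
            (s > t) == a
            for d in demos
            for s, a in zip(d.states, d.actions)
        )
        if score > best_score:
            best_t, best_score = t, score
    return best_t

random.seed(0)
real_demos = [Demo([0.2, 0.8], [False, True]), Demo([0.1, 0.9], [False, True])]
sim_demos = replay_in_sim(real_demos)           # sim-replicated demos
policy_threshold = train_policy(real_demos + sim_demos)
```

The point of the structure, per the researchers, is that cheap simulated replays of a few real demonstrations supply the bulk of the training data, so the policy transfers back to the real environment without extensive reward engineering.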

Researchers test model’s performance

In testing, MIT CSAIL found that RialTo created strong policies for a variety of tasks, whether in controlled lab settings or in more unpredictable real-world environments. For each task, the researchers tested the system’s performance under three increasing levels of difficulty: randomizing object poses, adding visual distractors, and applying physical disturbances during task execution.
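An evaluation protocol along these lines can be sketched as a loop over the three difficulty levels. This is a hypothetical harness, not the researchers’ benchmark: the episode rollout is a placeholder in which harder levels simply lower a toy success probability.

```python
# Hypothetical evaluation harness over the three difficulty levels
# described above. run_episode is a placeholder rollout.
import random

DIFFICULTY_LEVELS = [
    "randomized object poses",
    "visual distractors",
    "physical disturbances",
]

def run_episode(policy_strength, level, rng):
    """Placeholder rollout: each harder level cuts the toy success rate."""
    base_success = 0.9
    penalty = 0.2 * DIFFICULTY_LEVELS.index(level)
    return rng.random() < policy_strength * (base_success - penalty)

def evaluate(policy_strength, trials=200, seed=0):
    """Return the per-level success rate over repeated trials."""
    rng = random.Random(seed)
    return {
        level: sum(run_episode(policy_strength, level, rng)
                   for _ in range(trials)) / trials
        for level in DIFFICULTY_LEVELS
    }

success_rates = evaluate(policy_strength=1.0)
```

Reporting a per-level success rate, as sketched here, is what lets the team compare RialTo against imitation-learning baselines as the disturbances escalate.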

“To deploy robots in the real world, researchers have traditionally relied on methods such as imitation learning from expert data which can be expensive, or reinforcement learning, which can be unsafe,” said Zoey Chen, a computer science Ph.D. student at the University of Washington who wasn’t involved in the paper. “RialTo directly addresses both the safety constraints of real-world RL, and efficient data constraints for data-driven learning methods, with its novel real-to-sim-to-real pipeline.”

“This novel pipeline not only ensures safe and robust training in simulation before real-world deployment, but also significantly improves the efficiency of data collection,” she added. “RialTo has the potential to significantly scale up robot learning and allows robots to adapt to complex real-world scenarios much more effectively.”

When paired with real-world data, the system outperformed traditional imitation-learning methods, especially in situations with lots of visual distractions or physical disruptions, the researchers said.

MIT CSAIL’s RialTo system at work on a robot arm trying to open a cabinet. | Source: MIT CSAIL

MIT CSAIL continues work on robot training

While the results so far are promising, RialTo isn’t without limitations. Currently, the system takes three days to be fully trained. To speed this up, the team hopes to improve the underlying algorithms using foundation models.

Training in simulation also has limitations. Sim-to-real transfer and simulating deformable objects or liquids are still difficult. The MIT CSAIL team said it plans to build on previous efforts by working on preserving robustness against various disturbances while improving the model’s adaptability to new environments. 

“Our next endeavor is to extend this approach by using pre-trained models, accelerating the learning process, minimizing human input, and achieving broader generalization capabilities,” said Torne.

Torne wrote the paper alongside senior authors Abhishek Gupta, assistant professor at the University of Washington, and Pulkit Agrawal, an assistant professor in the department of Electrical Engineering and Computer Science (EECS) at MIT.

Four other CSAIL members within that lab are also credited: EECS Ph.D. student Anthony Simeonov SM ’22, research assistant Zechu Li, undergraduate student April Chan, and Tao Chen Ph.D. ’24. This work was supported, in part, by the Sony Research Award, the U.S. government, and Hyundai Motor Co., with assistance from the WEIRD (Washington Embodied Intelligence and Robotics Development) Lab. 

Copyright © 2025 WTWH Media LLC. All Rights Reserved. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of WTWH Media