The Robot Report

NVIDIA teaches dexterity to a robot hand

By Brianna Wessling | December 7, 2022

The setup for NVIDIA’s DeXtreme project using a Kuka robotic arm and an Allegro Hand. | Source: NVIDIA

Robotic hands are notoriously complex and difficult to control. The human hands they imitate consist of 27 different bones, 27 joints and 34 muscles, all working together to help us perform our daily tasks. Translating this process into robotics is more challenging than developing robots that use legs to walk, for example. 

The approaches typically used for robot control, whether precisely pre-programmed grasps and motions or deep reinforcement learning (RL) techniques, fall short when it comes to operating a robotic hand.

Pre-programmed motions are too limited for the generalized tasks a robotic hand would ideally be able to perform, and deep RL techniques that train neural networks to control robot joints require millions or even billions of real-world samples to learn from.

NVIDIA instead used its Isaac Gym RL robotics simulator to train an Allegro Hand, a lightweight, anthropomorphic robotic hand with three off-the-shelf cameras attached, as part of its DeXtreme project. The Isaac simulator can run simulations more than 10,000 times faster than the real world, according to the company, while still obeying the laws of physics.

With Isaac Gym, NVIDIA was able to teach the Allegro Hand to manipulate a cube and match provided target positions, orientations or poses. The neural network learned to do all of this in simulation, and the team then transferred it to control a robot in the real world.
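
The reward for this kind of pose-matching task is typically based on how far the cube's current orientation is from the target. As a rough illustration only (this is not NVIDIA's DeXtreme code, and the function names and scaling constant are assumptions), a quaternion-based orientation reward could look like this:

```python
import torch

def quat_conjugate(q: torch.Tensor) -> torch.Tensor:
    # q has shape (..., 4) in (x, y, z, w) order; negate the vector part.
    return torch.cat([-q[..., :3], q[..., 3:]], dim=-1)

def quat_mul(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Hamilton product of two batches of quaternions in (x, y, z, w) order.
    ax, ay, az, aw = a.unbind(-1)
    bx, by, bz, bw = b.unbind(-1)
    return torch.stack([
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
        aw * bw - ax * bx - ay * by - az * bz,
    ], dim=-1)

def orientation_reward(cube_quat: torch.Tensor, target_quat: torch.Tensor) -> torch.Tensor:
    # Rotation that takes the current cube orientation to the target orientation.
    diff = quat_mul(target_quat, quat_conjugate(cube_quat))
    # Magnitude of that rotation in radians; a smaller angle earns a larger reward.
    angle = 2.0 * torch.asin(torch.clamp(diff[..., :3].norm(dim=-1), max=1.0))
    return 1.0 / (angle + 0.1)   # 0.1 keeps the reward finite at a perfect match
```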

Training the neural network

To train the hand, NVIDIA paired its end-to-end simulation environment, Isaac Gym, with its PhysX simulator, which simulates the world on the GPU and keeps the data in GPU memory while the deep learning control policy network is being trained.
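
The practical point is that observations, actions and rewards for many simulated hands all live on the GPU, so nothing has to round-trip through CPU memory between the simulator and the learning code. A minimal sketch of that pattern is below; the environment update is a placeholder, not the PhysX API, and the batch and observation sizes are assumptions:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
NUM_ENVS, OBS_DIM, ACT_DIM = 4096, 50, 16   # illustrative sizes, not DeXtreme's

# Stand-in for GPU-resident simulation state; a real pipeline reads this from the simulator.
obs = torch.zeros(NUM_ENVS, OBS_DIM, device=device)

policy = torch.nn.Sequential(
    torch.nn.Linear(OBS_DIM, 256), torch.nn.ELU(),
    torch.nn.Linear(256, ACT_DIM),
).to(device)

with torch.no_grad():
    for step in range(1000):
        actions = policy(obs)                                   # inference on the GPU
        # Placeholder dynamics: in the real setup the physics engine advances the
        # state on the GPU, so the tensors never leave device memory.
        obs = (obs + 0.01 * actions.mean(dim=-1, keepdim=True)).clamp(-1.0, 1.0)
        rewards = -obs.pow(2).mean(dim=-1)                      # placeholder reward, also on the GPU
        # An RL update such as PPO would consume these GPU tensors directly here.
```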

Training in simulation provides a number of benefits for robotics. Besides letting NVIDIA run simulations much faster than they would play out in the real world, it spares robot hardware that is prone to breaking after heavy use.

According to NVIDIA, after prolonged use the team often had to stop and repair the robotic hand: tightening screws, replacing ribbon cables and resting the hand to let it cool. This makes it difficult to accumulate the kind of training experience the robot needs in the real world.

To train the robot's neural network, NVIDIA's Omniverse Replicator generated around five million frames of synthetic data, meaning the team didn't have to use any real images. The network was trained using a technique called domain randomization, which varies lighting and camera positions to make it more robust.
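
As a rough sketch of what domain randomization looks like in code (the parameter names and ranges here are invented for illustration and are not Omniverse Replicator settings), each synthetic frame gets its own randomized rendering parameters before it is rendered and paired with the ground-truth cube pose:

```python
import random

def randomize_scene_params() -> dict:
    """Draw one set of randomized rendering parameters for a synthetic frame."""
    return {
        "light_intensity": random.uniform(200.0, 3000.0),             # arbitrary illustrative range
        "light_color": [random.uniform(0.7, 1.0) for _ in range(3)],  # warm-to-white RGB tint
        "camera_position": [                                          # meters, jittered around a nominal pose
            0.50 + random.uniform(-0.05, 0.05),
            0.00 + random.uniform(-0.05, 0.05),
            0.40 + random.uniform(-0.02, 0.02),
        ],
        "cube_texture_id": random.randrange(100),                     # pick from a pool of textures
    }

# Each parameter set would drive one rendered frame in the synthetic dataset.
for params in (randomize_scene_params() for _ in range(3)):
    print(params)
```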

All of the training was done on a single Omniverse OVX server, and the system can train a good policy in about 32 hours. According to NVIDIA, it would take a robot 42 years to get the same experience in the real world.
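
As a quick consistency check on those figures (using only the numbers quoted above), compressing 42 years of continuous real-world experience into 32 hours of training implies an effective speedup on the order of the 10,000x simulation-speed claim:

```python
# Back-of-the-envelope check of the figures quoted above.
train_hours = 32
real_world_hours = 42 * 365 * 24          # about 368,000 hours in 42 years
speedup = real_world_hours / train_hours
print(f"Implied effective speedup: roughly {speedup:,.0f}x")   # roughly 11,500x
```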

About The Author

Brianna Wessling

Brianna Wessling is an Associate Editor, Robotics, WTWH Media. She joined WTWH Media in November 2021, after graduating from the University of Kansas with degrees in Journalism and English. She covers a wide range of robotics topics, but specializes in women in robotics, autonomous vehicles, and space robotics.

She can be reached at [email protected]
