The Robot Report


Robot hand uses machine learning to detect wearer’s intention

By The Robot Report Staff | February 2, 2019


A Korean research team at Seoul National University has created a wearable hand robot, the Exo-Glove Poly II, that can aid people who have lost hand mobility. The robot detects the user's intention by observing the user's behavior with a machine learning algorithm.

The research team at the Soft Robotics Research Center (SRRC) in Seoul is led by Prof. Sungho Jo of the Korea Advanced Institute of Science & Technology (KAIST) and Prof. Kyu-Jin Cho of Seoul National University. Collaborators include Daekyum Kim and Jeesoo Ha from KAIST, as well as Brian Byunghyun Kang, Kyu Bum Kim, and Hyungmin Choi from Seoul National University.

The SRRC team has proposed a new intention-detection paradigm for soft wearable hand robots. It predicts grasping and releasing intentions based on user behaviors, enabling spinal cord injury (SCI) patients who have lost hand mobility to pick and place objects.

The researchers developed an algorithm that predicts user intentions for wearable hand robots using a first-person-view camera. Their approach is based on the hypothesis that user intentions can be inferred from the user's arm behaviors and hand-object interactions.

The machine learning model used in this study, the Vision-based Intention Detection network from an EgOcentric view (VIDEO-Net), is designed around this hypothesis. VIDEO-Net is composed of spatial and temporal sub-networks: the temporal sub-network recognizes user arm behaviors, and the spatial sub-network recognizes hand-object interactions.
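The article does not give VIDEO-Net's exact layer configuration, but a minimal two-stream sketch in PyTorch illustrates the general idea of pairing a spatial sub-network over a single egocentric frame with a temporal sub-network over a short motion history. The class name, layer sizes, and the choice of stacked optical flow as the temporal input are assumptions for illustration, not the published architecture.

import torch
import torch.nn as nn

class TwoStreamIntentNet(nn.Module):
    """Minimal two-stream (spatial + temporal) intention classifier.

    Hypothetical sizes; VIDEO-Net's actual architecture is described in the
    Science Robotics paper and is not reproduced here.
    """
    def __init__(self, num_intents=3):  # e.g. grasp, release, no action
        super().__init__()
        # Spatial stream: a single egocentric frame, capturing
        # hand-object interaction cues.
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Temporal stream: 8 stacked optical-flow fields (x and y components),
        # capturing arm behavior over the recent past.
        self.temporal = nn.Sequential(
            nn.Conv2d(2 * 8, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fuse both streams and classify the intent.
        self.classifier = nn.Linear(32 + 32, num_intents)

    def forward(self, frame, flow_stack):
        # frame: (B, 3, H, W); flow_stack: (B, 16, H, W)
        feats = torch.cat([self.spatial(frame), self.temporal(flow_stack)], dim=1)
        return self.classifier(feats)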

An SCI patient wearing the Exo-Glove Poly II, a soft wearable hand robot, successfully picked and placed various objects and performed essential activities of daily living, such as drinking coffee, without any additional help.

The vision-based machine learning algorithm. (Credit: Soft Robotics Research Center, Seoul National University)

Their approach is advantageous in that it detects user intentions without requiring per-user calibration or any additional user actions. This enables the wearable hand robot to interact with its wearer seamlessly.

The research was published as a focus article in the 26th issue of Science Robotics on January 30, 2019. The research team explained more about the system in the following Q&A.

How does this system work?
This technology aims to predict user intentions, specifically grasping and releasing intent toward a target object, using a first-person-view camera mounted on glasses (something like Google Glass could be used in the future). VIDEO-Net, a deep learning-based algorithm, predicts user intentions from the camera footage based on user arm behaviors and hand-object interactions. The camera captures the environment and the user's movements, and this data is used to train the machine learning algorithm.

Instead of using bio-signals, which are often used for intention detection in disabled people, we use a simple camera to determine the user's intention, i.e., whether the person is trying to grasp an object or not. This works because the target users are able to move their arms, but not their hands. We can predict the user's intention to grasp by observing the arm movement and the distance between the hand and the object, and interpreting those observations using machine learning.
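As a toy illustration of that intuition, and not the team's actual method (which learns the mapping from egocentric video), the sketch below flags grasp intent when the hand is close to the object and the gap has been shrinking. The function name, thresholds, and units are invented for the example.

import numpy as np

def grasp_intent(hand_positions, object_position,
                 closing_thresh=0.05, distance_thresh=0.10):
    """Toy rule-based illustration: grasp intent is likely when the hand is
    moving toward the object and the hand-object distance is small.
    Thresholds are made up for illustration (meters).
    """
    hand_positions = np.asarray(hand_positions, dtype=float)    # (T, 3) recent hand trajectory
    object_position = np.asarray(object_position, dtype=float)  # (3,)

    distances = np.linalg.norm(hand_positions - object_position, axis=1)
    closing = distances[0] - distances[-1]  # how much the gap shrank over the window

    return bool(distances[-1] < distance_thresh and closing > closing_thresh)

# Example: hand approaching from 30 cm away to 8 cm away from the object.
print(grasp_intent([[0.3, 0, 0], [0.2, 0, 0], [0.08, 0, 0]], [0, 0, 0]))  # True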

Who can benefit from this technology?
As mentioned earlier, this technology detects user intentions from human arm behaviors and hand-object interactions. It can be used by anyone who has lost hand mobility, whether from spinal cord injury, stroke, cerebral palsy, or other injuries, as long as they can move their arm voluntarily. This concept of using vision to estimate human behavior is what makes the approach applicable to such a broad range of users.

What are the limitations and future works?
Most of the limitations come from the drawbacks of using a monocular camera. For example, if a target object is occluded by another object, the performance of the technology decreases. Also, if the user's hand is not visible in the camera's view, the technology is not usable. To overcome this lack of generality, the algorithm needs to be improved by incorporating other sensor information or existing intention-detection methods, such as electromyography (EMG) sensors or eye-gaze tracking.
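As a rough sketch of how such additional signals might be folded in, the following hypothetical late-fusion function averages per-modality intent probabilities with fixed weights. It illustrates the idea of falling back on EMG or gaze when vision is unsure; it is not the team's published method, and all names and weights are assumptions.

def fused_grasp_probability(p_vision, p_emg=None, p_gaze=None,
                            w_vision=0.6, w_emg=0.25, w_gaze=0.15):
    """Hypothetical late fusion of intent estimates from multiple sensors.

    Each p_* is one modality's probability of grasp intent; missing
    modalities are skipped and the remaining weights are renormalized.
    """
    terms = [(w_vision, p_vision)]
    if p_emg is not None:
        terms.append((w_emg, p_emg))
    if p_gaze is not None:
        terms.append((w_gaze, p_gaze))
    total_weight = sum(w for w, _ in terms)
    return sum(w * p for w, p in terms) / total_weight

# Vision alone is unsure because the object is occluded; EMG tips the decision.
print(fused_grasp_probability(p_vision=0.45, p_emg=0.9))  # ~0.58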

To use this technology in daily life, what do you need?
For this technology to be used in daily life, three devices are needed: a wearable hand robot with an actuation module, a computing device, and glasses with a camera mounted on them. We aim to decrease the size and weight of the computing device so that the robot is portable enough for everyday use. So far, we have not found a compact computing device that fulfills our requirements, but we expect that neuromorphic chips able to perform deep-learning computations will become commercially available.
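Putting those three devices together, the runtime behavior implied by the description is a simple perception-to-actuation loop: the glasses-mounted camera feeds the computing device, which runs the intention detector and commands the glove's actuation module. The sketch below shows that loop; every interface name (read_frame, predict, grasp, release) is a hypothetical placeholder rather than an API from the actual system.

import time

def control_loop(camera, intent_model, glove, period_s=0.1):
    """High-level sketch of the wearable system's runtime loop.

    camera, intent_model, and glove are hypothetical objects standing in for
    the glasses-mounted camera, the intention detector, and the soft glove's
    actuation module.
    """
    while True:
        frame = camera.read_frame()           # egocentric image from the glasses
        intent = intent_model.predict(frame)  # "grasp", "release", or "none"
        if intent == "grasp":
            glove.grasp()                     # tension tendons to close the hand
        elif intent == "release":
            glove.release()                   # relax tendons to open the hand
        time.sleep(period_s)                  # run at ~10 Hz on the wearable computer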

Editor’s Note: This article was republished from Seoul National University.
