Telerobotics — the area of robotics focused on controlling robots from a distance — will play an increasingly important role in society in the wake of the COVID-19 pandemic. Telerobotics lets a remote human operator program, control, and monitor one or more robots. This means a surgeon in New York can perform a robot-assisted surgery on a patient in India, or an engineer in London can program and monitor a robotic workcell in Shenzhen, China.
The ability to decouple a human (the brain) from a robot (the body that physically interacts with its environment) can be transformative, especially as we look to maintain productivity while keeping a safe distance from one another and while travel is highly restricted.
Telerobotics solutions in healthcare
The healthcare challenges of early 2020 highlighted the need for telerobotics. Many countries were caught unprepared at the peak of the outbreak, with several hospitals being overrun by COVID-19 cases, struggling with shortages of qualified healthcare providers and personal protective equipment (PPE).
Telerobotic solutions, had they been available, could have been a force multiplier. Hospital-based telerobotic systems controlled by remote healthcare workers would take providers off the front line and out of reach of the virus. Such systems could be used for medical material handling and diagnostics, such as taking a patient’s vital signs or administering vitally needed COVID-19 tests. The more we deploy this technology now, the more prepared we’ll be for the next crisis.
Robotic telehealth solutions are valuable for many medical applications off the frontline as well. One example is remote diagnostic ultrasound, where a robotic system holding an ultrasound transducer is remotely controlled by an ultrasound technician or radiologist.
Energid Technologies worked with Dr. Jeff Soble and Sarah Doherty, co-founders of startup Telehealth Robotics, to develop a system for medical kiosks, where patients could go to a local kiosk to get a carotid ultrasound performed by a remote technician. In one demonstration of the system, the subject was in Chicago, and the operator was in Boston.
Telerobotics in industry
Though telehealth applications are the first to gain popularity, they certainly won’t be the last. Social distancing is forcing us to rethink manufacturing as well. The days where factories are filled with tightly packed workers are likely coming to an end, and industries must determine how to reduce human density without the commensurate reduction in productivity.
Here again, telerobotics can help. Industrial robots that can be remotely controlled, programmed, and monitored allow manufacturers to maintain productivity even with a significant percentage of the workforce offsite. This allows reduced human density at manufacturing facilities with “smart” automation systems that rely on remote workers for guidance — injecting human intelligence into otherwise “dumb” automation systems.
While replacing human workers with automation often means reducing productivity per square meter — since robots take more floor space than humans — advanced software that allows robots to work in close proximity to one another can bring productivity levels back up to human level.
Automation solutions with multiple robots in overlapping workspaces yield significant floorspace savings for the customer but also increase the challenges of robot motion control. Real-time adaptive motion control software lets robots work collaboratively and avoid collisions, even in dynamic conditions.
To accomplish this, all robots in the shared environment must be aware of each other and the trajectory of each predicted in advance. This isn’t knowable at the time of programming for applications where random parts are presented, such as when picking parts from a bin for a machine-tending operation or sorting parts from a conveyor in preparation for downstream kitting. In these cases, the software schedules motion in a collision-free manner by assessing the trajectories of the other robots and planning motions appropriately in real time.
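As a simplified illustration of the scheduling idea (not any vendor's actual implementation), consider each robot's planned tool path as a list of time-stamped positions, with one robot's start delayed until its path keeps a minimum clearance from the other's. The clearance value, time step, and all function names below are hypothetical.

```python
import math

CLEARANCE = 0.15   # minimum allowed distance between tool centers, meters (assumed)
DT = 0.05          # time resolution for the collision check, seconds (assumed)

def position_at(traj, t):
    """Linearly interpolate the (x, y, z) position at time t, clamped to endpoints.

    traj is a list of (t, x, y, z) samples sorted by time.
    """
    if t <= traj[0][0]:
        return traj[0][1:]
    if t >= traj[-1][0]:
        return traj[-1][1:]
    for (t0, *p0), (t1, *p1) in zip(traj, traj[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))

def conflict(traj_a, traj_b):
    """Return True if the two trajectories ever violate the clearance distance."""
    t0 = min(traj_a[0][0], traj_b[0][0])
    t1 = max(traj_a[-1][0], traj_b[-1][0])
    steps = int((t1 - t0) / DT) + 1
    for i in range(steps + 1):
        t = t0 + i * DT
        if math.dist(position_at(traj_a, t), position_at(traj_b, t)) < CLEARANCE:
            return True
    return False

def schedule_delay(traj_a, traj_b, step=0.1, max_delay=5.0):
    """Find the smallest start delay for robot B that avoids conflict with robot A."""
    delay = 0.0
    while delay <= max_delay:
        shifted = [(t + delay, x, y, z) for (t, x, y, z) in traj_b]
        if not conflict(traj_a, shifted):
            return delay
        delay += step
    return None  # no feasible delay; a geometric replan is needed
```

A production system would check full robot geometry rather than a single tool point, and would replan trajectories rather than only delaying them, but the structure is the same: predict, check, and adjust in real time.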
Modes of telerobotic operation
Telerobotics applications typically fall into one of three high-level modes of operation, with many moving between these modes depending on the application.
During programming of the robot task, for instance, a human often works closely with the robot to set and test waypoints and tasks that will ultimately become the robot program. A technician may be co-located with the robot during these high-touch programming steps, but that technician doesn't need specialized knowledge of the robot or application.
Autonomous operation with monitoring
During normal operation, a human is not required to intervene in a robot’s workflow. Monitoring can typically be a hands-off activity, and well-designed systems can allow one remote operator to monitor several automation systems from a distance.
Semi-autonomous control
With semi-autonomous control, a robot performs automatic motions but periodically engages a remote operator for decision making or pattern recognition. An example is an inspection robot that sends low-confidence images back to a human for review.
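The escalation logic behind such a system can be sketched in a few lines. This is a hypothetical illustration: the threshold value, class names, and callback interface are all assumptions, not any specific product's API.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # below this, the robot asks a human (assumed value)

@dataclass
class InspectionResult:
    image_id: str
    label: str         # classifier's best guess, e.g. "pass" / "fail"
    confidence: float  # classifier's confidence in [0, 1]

def route(result, ask_human):
    """Accept high-confidence results automatically; escalate the rest.

    ask_human is a callback that sends the image to a remote operator
    and returns the operator's label.
    """
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return result.label, "auto"
    return ask_human(result.image_id), "human"
```

The key design choice is that the human is in the loop only for the ambiguous cases, so one operator can serve many robots.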
Manual control
With manual control, a human takes direct control of the robot to perform a task. An example is a robot used for diagnostic ultrasound, where a remote radiologist directly controls a robot equipped with an ultrasound transducer.
Often, all three of these modes are used in coordination. Manual mode may be used to program the robot and to intervene when automated motion gets stuck, for instance. And remote operators may check diagnostics periodically and be alerted when an automated decision-making algorithm (e.g., for inspection) gets confused.
Telerobotics technology enablers
Developing a telerobotic solution requires careful planning and some key technology enablers.
Safe robots
Robots designed for remote operation have less onsite support to fix problems, apply preventive maintenance, and reset controllers. The robots may also have contact with humans who are not familiar with them, such as with patients and technicians in telehealth situations, or workers in a manufacturing facility.
Robots with built-in safety mechanisms protect nearby humans as well as the robot itself and the equipment around it. Starting with a robot that is fundamentally safe, such as a collaborative robot (cobot) that already has extensive safety features built in, reduces the chance of a catastrophic system failure and makes building the system less complex.
Software can add another layer of safety for remotely operated robots by allowing the operator to create virtual “keep-out” zones. The keep-out zones prevent a remote operator from unintentionally moving the robot into an area containing fragile equipment, for instance. This software can also be used to make sure the robot can’t collide with itself — something that often happens during remote programming and less-structured applications.
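In its simplest form, a keep-out zone is just a region of space against which every commanded target is checked before the robot is allowed to move. The sketch below uses axis-aligned boxes and checks only a single Cartesian point; real implementations check the robot's full geometry. All names here are illustrative.

```python
def in_keep_out(point, zones):
    """Return True if a Cartesian point falls inside any axis-aligned keep-out box.

    Each zone is a pair ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    """
    x, y, z = point
    return any(
        lo[0] <= x <= hi[0] and lo[1] <= y <= hi[1] and lo[2] <= z <= hi[2]
        for lo, hi in zones
    )

def clamp_command(target, zones, current):
    """Reject an operator command that would enter a keep-out zone.

    Returns the target if it is safe; otherwise holds the current pose.
    """
    return current if in_keep_out(target, zones) else target
```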
Companies in the remote healthcare space are developing safe, intuitive telerobotic applications, using real-time adaptive motion control software that provides mechanisms for creating these virtual keep-out zones, adding a layer of safety for remote robots being guided in manual control mode.
Remote-control infrastructure
Controlling a robot remotely requires an infrastructure that supports two-way communication between the remote operator, the robot, and any tools required for the procedure. Before high-speed networks, connectivity and bandwidth issues made direct remote control of a robot virtually impossible.
Today’s modern infrastructure makes telerobotics much more viable, and upcoming 5G will provide a step-function improvement in wireless network capability. Wireless telerobotics adds a new level of flexibility and portability not achievable with dedicated wired networks. 5G won’t just improve bandwidth; it will greatly reduce latency as well.
Acceptable network latency, jitter and bandwidth are highly application-dependent. In some applications, like telesurgery, haptic (or force) feedback is often conveyed back to the remote operator along with 3D video, providing a highly immersive experience to the remote surgeon. Such applications are among the most demanding since they require high bandwidth (for at least two, often HD, synchronized video streams), low latency and jitter to keep the haptic feedback loop from becoming unstable.
One telesurgery architecture, developed for a project with the U.S. Army, used a custom “smart” codec to provide low latency, low jitter, and synchronization for the video, haptic feedback, and audio channels. The architecture was part of a prototype telesurgery system, intended for use on the battlefield, that used high-intensity focused ultrasound (HIFU) to rapidly cauterize internal wounds.
Applications like this typically require latency below 10 ms and jitter below 2 ms to ensure stability of the haptic feedback loop and a natural, immersive sensation for the remote operator. 5G is expected to improve on 4G latency by 30 to 50 times, with a latency goal of 1 ms and minimal jitter.
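Checking a channel against such a budget comes down to measuring per-packet latency and its variation. The sketch below assumes the two clocks are synchronized (e.g., via PTP) and treats jitter as the standard deviation of per-packet latencies; other definitions, such as RFC 3550 interarrival jitter, are also common. Function names and thresholds are illustrative.

```python
import statistics

def latency_stats(send_times, recv_times):
    """Compute mean latency and jitter from matched send/receive timestamps.

    Jitter here is the standard deviation of the per-packet latencies.
    Times are in seconds on a shared (synchronized) clock.
    """
    latencies = [r - s for s, r in zip(send_times, recv_times)]
    return statistics.mean(latencies), statistics.stdev(latencies)

def meets_haptic_budget(send_times, recv_times, max_latency=0.010, max_jitter=0.002):
    """Check a measured channel against a 10 ms latency / 2 ms jitter haptic budget."""
    mean_lat, jitter = latency_stats(send_times, recv_times)
    return mean_lat <= max_latency and jitter <= max_jitter
```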
Another equally important component of the infrastructure is the software layer that translates high-level operator commands to low-level robot and tool commands. The software must be flexible enough to support all the modes of operation for a telerobot and should be capable of presenting data to the operator to give sufficient situational awareness of the environment. This typically means streamed high-resolution video, at a minimum. Often multiple cameras are used with a telerobotic system so the operator can see the workcell from different vantage points.
Intuitive operator interface
Early telerobot systems required operators to turn a series of knobs on the input device to move each joint in the remote robot. Such systems were difficult to use and didn’t scale well, especially when the robot had more than a few joints. Today, software can translate simple, intuitive, human gestures into natural and meaningful robot motions.
Energid got its start developing software to more intuitively control advanced robots for NASA and the U.S. Department of Defense. In one project, we worked with the U.S. Army to develop a telerobotic solution for the detection of chemical/biological agents, including toxic gases and the chemical components of improvised explosive devices (IEDs). We needed to iterate many times on the control strategies to ensure that the operator had immediate, intuitive control and feedback.
The same is true for surgical robots, where surgeons need the interface to feel as natural and intuitive as performing the surgery without a robot. Ideally, manipulating a tool — even indirectly through a robot — should feel natural.
The operator interface, therefore, can only be as good as the robot control strategies allow it to be. A robot may have many joints or just a few, but in almost every case the kinematics (i.e., the articulated structure) of a robot differs materially from that of a person. This makes mapping a person’s control intentions to the motion of a robot no easy task.
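As a toy illustration of this mapping problem, consider a two-link planar arm: the operator commands a Cartesian tool velocity, and software must solve for joint velocities through the arm's Jacobian. Real systems handle many more joints, full inverse kinematics, and redundancy, but the core idea is the same. Link lengths and function names below are assumptions for the example.

```python
import math

def jacobian(q1, q2, l1=0.4, l2=0.3):
    """Jacobian of a 2-link planar arm's tool position w.r.t. its joint angles."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def joint_velocities(q1, q2, vx, vy):
    """Map an operator's Cartesian tool velocity to joint velocities via J^-1."""
    (a, b), (c, d) = jacobian(q1, q2)
    det = a * d - b * c
    if abs(det) < 1e-6:   # near a singularity: stop rather than command huge rates
        return 0.0, 0.0
    return (d * vx - b * vy) / det, (-c * vx + a * vy) / det
```

Even in this two-joint case, a small hand motion near a singular pose would demand enormous joint speeds, which is exactly the kind of behavior good control software must smooth over before the operator ever feels it.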
The upside is that when the control mapping is done well, telerobotic systems can have advantages over traditional methods beyond remote operation. Since a robot can be controlled precisely and repeatably, it can augment the abilities of the human operator in many cases.
Given an enabling robot control system, attention needs to be paid to developing the actual operator interface. The operator interface is a critical component of a telerobotic system, as this is what the remote operator will interact with day in and day out. The operator presumably knows how to perform the task being teleroboticized, but not much about robotics. Because of this, successful telerobotic interfaces are almost always designed specifically for each unique application.
Successful operator interfaces come together through significant testing and iteration — ideally with the operators that will be using the system (e.g., surgeons in the case of telesurgery systems).
Simulation is an important tool that developers use to create remote interfaces. A comprehensive simulation environment can reproduce what the remote operator will see, how the robot will move, and even the latency inherent in the remote connection. This cuts down significantly on development time and reduces risk in complex telerobotic applications.
The simulation environment offered by advanced real-time adaptive software has been used to simulate many telerobotic systems, including space-based and undersea applications, where latency and automatic collision detection play a critical role.
Adaptive real-time software enables the enablers
In addition to simulating telerobotic systems, real-time adaptive software can also allow a physical remote robotic workcell to be replaced by an interactive graphical simulation, allowing developers to test the operation of their telerobotic application without the need to procure or fabricate hardware.
Advancements with this enabling technology, and an increased knowledge of the capabilities offered, should encourage the implementation of telerobotics across myriad industries as the pandemic continues — and long into the future.
About the Author
Neil Tardella is the CEO of Energid Technologies Corp. He provides leadership and strategic direction for the company. Tardella has spearheaded commercial initiatives in the energy, medical, and collaborative robotics sectors.
Prior to joining Energid, Tardella held key research and development and management roles at Northrop Grumman Norden Systems, Sikorsky Aircraft Corp., and Technology Service Corp. He has a B.S. in electrical engineering from the University of Hartford and an M.S. in computer science from the Polytechnic Institute of New York University.
Editor’s Note: The opinions expressed in this article are those of the author alone. They are not necessarily shared by The Robot Report. To submit a guest column to The Robot Report, e-mail Senior Editor Eugene Demaitre.