Vision systems have long been employed in manufacturing for parts inspection, parts alignment, quality control, part identification, and part picking. Now, new vision technology is making it safe for industrial robots to work alongside humans.
Robotics standards outline four different methods of collaboration: safety-rated monitored stop, hand guiding, power and force limiting (PFL), and speed and separation monitoring (SSM). The most commonly understood form of collaborative robotics in manufacturing applications is the PFL robot, often known as a “collaborative robot” or “cobot.”
With PFL, the robot system controls hazards by limiting the power or force the robot can exert before stopping. PFL has had a major impact on how we think about collaborative manufacturing, but the technique is fundamentally limited. A stop is triggered only in response to a collision detected in the robot hardware.
This approach only works for smaller, slower, lightweight robots that won’t harm a person by coming into contact. Even a small, lightweight robot carrying a sharp object would still be hazardous, so PFL robots are also limited in end-effector designs and types of payloads.
Speed and separation monitoring — a perception challenge
SSM, as defined by ISO/TS 15066, is another form of collaboration that has great promise and addresses some of the limitations of PFL. SSM works with standard industrial robots and has fewer limitations on end effectors, speed, and payloads.
With SSM, no contact is allowed between the robot and human while the robot is moving. A moving robot is assumed to be hazardous; a stationary robot is assumed to be safe. SSM requires a protective separation distance (PSD) between the robot and human so that it is always possible to bring the robot to a stop before contact with a human.
The PSD must take into account the time the robot takes to stop and the distance it will travel during that time, as well as the distance that a human can move while the robot is stopping.
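As a rough illustration of that bookkeeping, the sketch below computes a PSD along the lines of the ISO/TS 15066 formula: the human closes distance during the entire reaction-plus-stopping interval, while the robot closes distance until it is fully at rest. The function and all numeric values are illustrative assumptions, not any vendor’s implementation.

```python
def protective_separation_distance(
    v_human: float,     # worst-case human approach speed (m/s)
    v_robot: float,     # worst-case robot approach speed (m/s)
    t_react: float,     # sensing + processing + command latency (s)
    t_stop: float,      # robot stopping time after the stop command (s)
    d_stop: float,      # distance the robot travels while stopping (m)
    c_intrusion: float = 0.0,    # intrusion distance, e.g. a reaching hand (m)
    z_uncertainty: float = 0.0,  # combined position-measurement uncertainty (m)
) -> float:
    """Conservative PSD: every term uses worst-case values."""
    human_travel = v_human * (t_react + t_stop)  # human moves the whole time
    robot_travel = v_robot * t_react + d_stop    # robot moves until at rest
    return human_travel + robot_travel + c_intrusion + z_uncertainty

# Illustrative numbers: 1.6 m/s walking speed (the ISO 13855 default),
# a 1.0 m/s robot with a 0.1 s reaction time and a 0.3 s / 0.2 m stop.
psd = protective_separation_distance(1.6, 1.0, 0.1, 0.3, 0.2)
print(f"required separation: {psd:.2f} m")  # -> 0.94 m
```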
SSM is fundamentally a perception problem because it relies on understanding where humans and robots are in the scene. The system needs to identify the position of each robot joint as well as all the places the robot could reach before it is brought to a stop. It must also understand the location of any humans in the proximity of the robot and where they could move.
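One conservative way to represent “all the places the robot could reach” is to inflate a bounding sphere around each monitored point on the robot by the distance that point could travel before the robot is at rest. The sketch below is a hypothetical over-approximation of the swept volume, not FreeMove’s actual robot model:

```python
import numpy as np

def stopping_envelope(points: np.ndarray,   # (N, 3) monitored robot points (m)
                      speeds: np.ndarray,   # (N,) worst-case speed of each point (m/s)
                      radii: np.ndarray,    # (N,) bounding radius of each link (m)
                      t_react: float,
                      t_stop: float) -> list[tuple[np.ndarray, float]]:
    """Inflate each link's bounding sphere by the distance it could travel
    before the robot is at rest; the union of spheres over-approximates
    the true swept volume, which is what a safety system needs."""
    horizon = t_react + t_stop
    return [(p, r + v * horizon) for p, v, r in zip(points, speeds, radii)]
```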
Challenges of creating safe vision systems
Not only is SSM a perception problem; it is also a safe perception problem. Systems that provide safeguarding functionality in industrial robot workcells, such as FreeMove, the 3D safeguarding solution from Veo Robotics, are required to comply with functional safety standards as described in ISO 13849.
These standards require that no single hardware failure can lead to an unsafe situation and that both hardware and software development follow a structured process with traceability from requirements to testing, including for third-party software.
Reliable data and algorithms
To create a safe perception system, we need reliable data and reliable algorithms. FreeMove uses 3D time-of-flight sensors that are positioned on the periphery of the workcell to capture rich image data of the entire space. The architecture of the sensors ensures reliable data with novel dual imagers that observe the same scene so the data can be validated at a per-pixel level.
With this approach, higher-level algorithms do not need to perform additional validation. The 3D data can then be used to identify key elements in the workcell, including the robot, workpieces, and humans.
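A minimal sketch of what per-pixel cross-validation between two imagers could look like, assuming the two depth images are co-registered; the tolerance and the NaN convention are illustrative assumptions, not details of the FreeMove architecture:

```python
import numpy as np

def cross_validate(depth_a: np.ndarray,  # depth image from imager A (m), NaN = no return
                   depth_b: np.ndarray,  # co-registered depth image from imager B (m)
                   tol: float = 0.05) -> np.ndarray:
    """Keep a pixel only when both imagers report a return that agrees
    within `tol`; everything else becomes NaN so that downstream logic
    must treat it conservatively instead of trusting one measurement."""
    agree = np.abs(depth_a - depth_b) <= tol  # any NaN comparison yields False
    return np.where(agree, (depth_a + depth_b) / 2.0, np.nan)
```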
Accounting for occupancy and occlusion for safety
In addition to using reliable data, the data must be processed with safety in mind. Most algorithms that use depth images from active infrared (IR) sensing identify regions of space as either empty or occupied.
However, this is inadequate for a safety system because safety requires that humans be sensed affirmatively: a part of a human body not showing up in sensor data does not mean there isn’t a human there.
Because all active sensing requires some amount of return to detect objects, variability in reflectivity of surfaces can cause systems to output false negatives. Dark fabrics, for example, sometimes have very low reflectivity, so active IR sensors may not be able to “see” the legs of someone wearing dark jeans.
This is unsafe, so FreeMove classifies spaces as one of three states: empty (something can be seen behind it), occupied, or unknown. When examining volumes of space, if the sensors do not get a return from a space but cannot see through the space, that space is classified as unknown and treated as occupied until the system can determine it to be otherwise.
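The three-state logic can be sketched as a per-voxel classification along each sensor ray. The structure below is hypothetical and the depth tolerance is illustrative, but it captures the rule from the text: anything not provably empty is treated as occupied.

```python
from enum import Enum

class Occupancy(Enum):
    EMPTY = 1     # a return was measured *behind* this voxel
    OCCUPIED = 2  # the return came from inside this voxel
    UNKNOWN = 3   # no return, or the voxel is occluded: treat as occupied

TOL = 0.05  # illustrative depth tolerance (m)

def classify(voxel_depth: float, measured_depth: float | None) -> Occupancy:
    """Classify one voxel along a single sensor ray. `voxel_depth` is the
    sensor-to-voxel distance; `measured_depth` is the depth return on
    that ray, or None when there was no return at all (e.g. dark fabric)."""
    if measured_depth is None:
        return Occupancy.UNKNOWN              # can't see into or through it
    if measured_depth > voxel_depth + TOL:
        return Occupancy.EMPTY                # the ray passed through the voxel
    if measured_depth >= voxel_depth - TOL:
        return Occupancy.OCCUPIED             # the surface is in this voxel
    return Occupancy.UNKNOWN                  # voxel lies behind the surface

def treated_as_occupied(state: Occupancy) -> bool:
    return state is not Occupancy.EMPTY  # the safety rule described above
```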
This approach also addresses static and dynamic occlusions. In a workcell with a standard-size six-axis robot arm moving workpieces around, there will always be some volumes of space that are, either temporarily or permanently, occluded from or outside the field of view of all of the sensors.
Those spaces could at some point contain a human body part; a person could, for example, be reaching an arm into a space near the robot that none of the sensors can observe at that moment. Such spaces are therefore also treated as occupied for SSM purposes.
Human until proven otherwise for safety
Humans excel at identifying humans in images — even if the image is blurry or only shows a human body part. Although there are many advanced computer vision algorithms that can label humans in images, they are not necessarily reliable for safeguarding applications.
To resolve this issue, we turn the problem around: all volumes are considered human until proven otherwise. Workcells are carefully designed and controlled spaces, so unless specific objects are labeled as workpieces during commissioning, any observed object that is large enough to contain a human is treated as a human. This “human-until-proven-otherwise” approach ensures that the system can never fail to recognize a human.
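In code, the rule reduces to a simple default. The thresholds and labels below are illustrative assumptions, not Veo’s classifier:

```python
def classify_cluster(volume_m3: float, registered_workpiece: bool) -> str:
    """'Human until proven otherwise': only objects registered as
    workpieces during commissioning escape the human label; anything
    else big enough to contain a body part is treated as a human."""
    MIN_HUMAN_VOLUME_M3 = 0.002  # illustrative threshold, roughly a hand
    if registered_workpiece:
        return "workpiece"
    if volume_m3 >= MIN_HUMAN_VOLUME_M3:
        return "human"
    return "small object"  # too small to contain a body part; still monitored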
Next-generation SSM
SSM is not only a perception problem, but it is also a control problem. PSDs are affected by the robot’s reaction and stopping time, so robot controllers with low latencies and faster stopping times enable smaller PSDs, which in turn enable closer human-machine interaction and more efficient and effective collaboration.
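To make the effect concrete, the comparison below plugs two hypothetical controllers into the same PSD structure sketched earlier, with a 1.6 m/s human and a 1.0 m/s robot; every number is an assumption:

```python
# Illustrative only: lower latency and faster stopping shrink the PSD.
for label, t_react, t_stop, d_stop in [("slow controller", 0.3, 0.5, 0.4),
                                       ("fast controller", 0.1, 0.3, 0.2)]:
    psd = 1.6 * (t_react + t_stop) + 1.0 * t_react + d_stop
    print(f"{label}: PSD = {psd:.2f} m")  # 1.98 m vs. 0.94 m
```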
To improve the next generation of SSM systems for collaborative applications in manufacturing automation, perception solutions providers will need to work closely with robot manufacturers to optimize robot controllers for vision-based SSM. Improving SSM through optimizing robot control will truly enable the two most flexible resources in a factory, humans and robots, to work safely and dynamically together in the same space.
About the author
Clara Vu has been building autonomous robots for over 20 years. She began her career at iRobot Corp. in its early days, where she developed robots for oil well exploration and wrote the programming language behind Roomba.
After iRobot’s IPO, she went on to found Harvest Automation, where she led software development for its autonomous agricultural materials handling system, the world’s first product to combine fully autonomous mobility and manipulation in an unconstrained environment. Vu connected with Patrick Sobalvarro through Rethink Robotics, where she became interested in the challenge of human-robot interaction for manufacturing.
As chief technology officer of Veo Robotics Inc., Vu leads the Waltham, Mass.-based company’s advanced technology development and product roadmap planning to solve fundamental problems in durable goods manufacturing. Veo Robotics won a 2020 RBR50 innovation award for FreeMove, and Vu will be speaking as part of RoboBusiness Direct.