There’s plenty of room for improvement in robot grasping, notes UC Berkeley Prof. Ken Goldberg. He gives a preview of his session, in which he will describe an approach with wide commercial application.
There’s a host of robot grasping solutions on the market, but most offer limited dexterity and can handle only a narrow range of objects, according to Ken Goldberg, a professor at the University of California, Berkeley. This has confined robots to factories and warehouses, when they could be helping more in hospitals, homes, and other dynamic environments, he said.
The market potential for robotics depends on improving sensing, manipulation, and mobility technologies. The global market for end-of-arm tooling will experience a compound annual growth rate (CAGR) of 9% between 2016 and 2020, according to TechNavio, which noted that the market is fragmented by application.
Goldberg is an expert in developing and refining robot grasping, having developed the first provably complete algorithms for part feeding and part fixturing, as well as the first robot on the Internet. He is the chair of UC Berkeley’s Industrial Engineering and Operations Research Department.
Goldberg is also director of the CITRIS People and Robots Initiative and the university’s AUTOLAB, where he and his students conduct research into machine learning for robotics and automation.
At RoboBusiness 2018 next week, Goldberg will deliver a keynote address on “The New Wave in Robot Grasping Technology” in the Robo Design and Engineering Forum. He and Ph.D. student Jeff Mahler recently spoke with Robotics Business Review about the challenges of improving robot grasping, their own commercialization efforts, and RoboBusiness.
Q: What grasping models do most commercial robots use today?
Mahler: A lot of robots in industry today use explicit knowledge of the exact shape and position of objects and execute pre-programmed motions. You often have human “fixers” who will adjust the position of parts that robots will then pick off a conveyor belt.
Q: What do developments in deep learning promise to accomplish?
Mahler: They promise to make robots better able to operate in less-structured environments, such as warehouses that handle a wide variety of products. We need robots to manipulate objects they haven’t seen before or that are in the corners of bins.
Technology is on the way, but existing systems today don’t have that level of adaptability.
Some robots can manage pallet boxes, with planned motions based on planar surfaces or images. Piece-picking robots promise a leap forward, but machine learning is not yet widely put into practice.
Q: How does the “new wave” of robot grasping research differ from previous approaches?
Mahler: Our idea is to combine two previous threads of research. The first category is analytic: in this wave of robot grasping, robots are given precise knowledge of an object’s shape, friction, and other characteristics, and grasps are computed from physics and geometry.
The second category of robot grasping includes purely empirical approaches, like the Google “arm farm.” [The researchers] try out random grasps and use the outcomes of physical trials to learn representations that generalize to new objects. This deep learning approach requires data from many examples.
The new wave combines the scalability and interpretability of analytic methods with the generality of the empirical approach.
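To make the analytic side concrete, here is a minimal sketch of a classic antipodal grasp test for a two-finger gripper with point contacts and Coulomb friction. The block geometry, contact points, and friction coefficient are illustrative assumptions, not part of Dex-Net:

```python
import numpy as np

def antipodal_force_closure(p1, n1, p2, n2, mu):
    """Classic analytic grasp test: the axis between the two contact
    points must lie inside both friction cones (an antipodal grasp,
    which yields force closure for point contacts with friction)."""
    axis = (p2 - p1) / np.linalg.norm(p2 - p1)
    half_angle = np.arctan(mu)  # friction cone half-angle
    angle1 = np.arccos(np.clip(axis @ (n1 / np.linalg.norm(n1)), -1.0, 1.0))
    angle2 = np.arccos(np.clip(-axis @ (n2 / np.linalg.norm(n2)), -1.0, 1.0))
    return bool(angle1 <= half_angle and angle2 <= half_angle)

# Illustrative contacts on opposite faces of a 4 cm block, with
# inward-pointing surface normals and friction coefficient mu = 0.5.
p1, n1 = np.array([0.0, -0.02, 0.0]), np.array([0.0, 1.0, 0.0])
p2, n2 = np.array([0.0, 0.02, 0.0]), np.array([0.0, -1.0, 0.0])
print(antipodal_force_closure(p1, n1, p2, n2, mu=0.5))  # True
```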
We can generate training data using the cloud and GPU kernel processing within a few days rather than waiting years. This hybrid method can generate data on millions of grasps across thousands of objects.
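As a toy illustration of that hybrid recipe (sample grasp candidates on a known shape, label each with an analytic score like the test above, and keep the results as training data), here is a sketch; the sphere object and sampling scheme are placeholder assumptions, not the actual Dex-Net pipeline:

```python
import numpy as np

RADIUS, MU = 0.03, 0.5  # toy sphere object (3 cm) and friction coefficient

def sample_candidate(rng):
    """Sample two random surface points as a parallel-jaw candidate."""
    d1, d2 = rng.normal(size=3), rng.normal(size=3)
    return RADIUS * d1 / np.linalg.norm(d1), RADIUS * d2 / np.linalg.norm(d2)

def analytic_label(p1, p2):
    """Antipodal friction-cone test from the sketch above; on a
    sphere, the inward normal at point p is simply -p normalized."""
    axis = (p2 - p1) / np.linalg.norm(p2 - p1)
    n1, n2 = -p1 / np.linalg.norm(p1), -p2 / np.linalg.norm(p2)
    cone = np.arctan(MU)
    a1 = np.arccos(np.clip(axis @ n1, -1.0, 1.0))
    a2 = np.arccos(np.clip(-axis @ n2, -1.0, 1.0))
    return float(a1 <= cone and a2 <= cone)

rng = np.random.default_rng(0)
data = [(p1, p2, analytic_label(p1, p2))
        for p1, p2 in (sample_candidate(rng) for _ in range(10_000))]
positives = sum(label for _, _, label in data)
print(f"{len(data)} examples, {positives:.0f} labeled graspable")
```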
Q: Can you give an example of how the cloud can make grasping robots more dexterous?
Mahler: The cloud is important to the way in which we’re leveraging current research in distributed computing. We can launch analyses to generate training data in thousands of virtual instances in the cloud, with millions of data points for robots to learn how to pick up a variety of objects.
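And a rough sketch of the scale-out step: here a local process pool stands in for the thousands of cloud instances he describes, and the toy scoring function stands in for a full analytic grasp analysis.

```python
import numpy as np
from multiprocessing import Pool

def label_batch(seed, batch_size=1_000):
    """One worker's job: sample and label a batch of grasp candidates.
    The toy score below stands in for a full grasp analysis."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(-0.05, 0.05, size=(batch_size, 3))
    scores = np.exp(-np.linalg.norm(centers, axis=1) / 0.02)
    return centers, scores

if __name__ == "__main__":
    # Each seed is one work item; in the cloud, one per virtual instance.
    with Pool(processes=8) as pool:
        batches = pool.map(label_batch, range(100))
    total = sum(len(scores) for _, scores in batches)
    print(f"labeled {total} grasps across {len(batches)} batches")
```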
We’re also studying how robots can share data. Different robots operating in different environments can contribute to a shared representation that helps them learn better in the future.
We have a number of robots in the lab, varying from industrial arms to mobile manipulators. Some are essentially prototype home robots that can move around and pick up objects in a room.
Robot grasping differs between those two types of platforms.
Also, there’s the issue of how to enforce security and reduce latency with cloud robotics, which has led to the new idea of “Fog Robotics.”
In Fog Robotics, we’re using edge computing to do some of the continuous learning without communicating all the data back up to the cloud.
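A highly simplified sketch of that split (the class and update scheme below are hypothetical, purely to illustrate the data flow): the robot adapts a small model from local grasp outcomes at the edge and ships only a compact parameter delta to the cloud, rather than streaming raw sensor data.

```python
import numpy as np

class EdgeLearner:
    """Hypothetical edge node: learns a small grasp-success model
    from local outcomes and syncs compact updates, not raw images."""

    def __init__(self, dim=16):
        self.weights = np.zeros(dim)   # local model parameters
        self.pending = np.zeros(dim)   # updates accumulated since last sync

    def observe(self, features, success, lr=0.1):
        """One step of local online learning from a grasp attempt."""
        pred = 1.0 / (1.0 + np.exp(-features @ self.weights))
        step = lr * (success - pred) * features
        self.weights += step
        self.pending += step

    def sync_to_cloud(self):
        """Upload only the parameter delta (kilobytes, not the video)."""
        delta, self.pending = self.pending, np.zeros_like(self.pending)
        return delta

# Simulate local experience on the edge, then one sync to the cloud.
rng = np.random.default_rng(0)
node = EdgeLearner()
for _ in range(100):
    f = rng.normal(size=16)
    node.observe(f, success=float(f[0] > 0))  # toy outcome signal
print("uploaded delta norm:", float(np.linalg.norm(node.sync_to_cloud())))
```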
Goldberg: We’re excited to get the word out about Fog Robotics at RoboBusiness. We’ll be talking about cloud robotics and edge computing, building on a number of new ideas, especially around continuous learning.
Q: What are the biggest challenges that remain in applying big data analytics to robot grasping and manipulation?
Goldberg: Speed: you want to increase the number of picks per hour. This depends on the speed of the sensing, computing, and robot motion, as well as the success rate, or reliability, of the algorithm.
All have the potential to be improved. Humans can make 400 to 600 picks per hour in e-commerce order fulfillment. That level is an upper bound we aspire to, and we’re using mean picks per hour, or MPPH, as a performance metric.
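As a back-of-the-envelope illustration of how those factors compose (our decomposition for illustration, not a formula from the interview), successful picks per hour are capped by cycle time and reliability together:

```python
def mean_picks_per_hour(t_sense, t_plan, t_motion, reliability):
    """Illustrative MPPH decomposition: successful picks per hour =
    attempts per hour x success rate. Times are seconds per attempt;
    reliability is a success probability in [0, 1]."""
    cycle_time = t_sense + t_plan + t_motion
    return 3600.0 / cycle_time * reliability

# E.g., 1 s sensing, 0.5 s grasp planning, 8 s arm motion, 90% reliable:
print(round(mean_picks_per_hour(1.0, 0.5, 8.0, 0.90)))  # ~341 MPPH
```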
We’re not talking about replacing humans, who have extremely sophisticated manipulation skills. Rather, we’re working to support them.
For robot grasping, we want to increase reliability, range, and rate. We understand that in industry, they’re comfortable with these “three Rs.”
Recent innovations include composite policies for a dual-arm robot that coordinate suction-grasp, or single-point, policies with parallel-jaw-grasp, or two-point, policies.
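Schematically, a composite policy can be as simple as querying each arm’s grasp policy for its best proposal and predicted success probability, then executing the more confident one. The function names and scores below are stand-ins for illustration, not the AUTOLAB implementation:

```python
import random

def suction_policy(point_cloud):
    """Stand-in single-point (suction) policy: proposes a target
    point plus a predicted probability of grasp success."""
    return {"type": "suction", "target": point_cloud[0]}, random.uniform(0.5, 0.95)

def parallel_jaw_policy(point_cloud):
    """Stand-in two-point (parallel-jaw) policy."""
    return {"type": "parallel_jaw", "target": point_cloud[-1]}, random.uniform(0.5, 0.95)

def composite_policy(point_cloud):
    """Query both arm-specific policies and pick the proposal with
    the higher predicted reliability."""
    proposals = [suction_policy(point_cloud), parallel_jaw_policy(point_cloud)]
    return max(proposals, key=lambda proposal: proposal[1])

points = [(0.01, 0.02, 0.10), (0.03, -0.01, 0.12)]
grasp, confidence = composite_policy(points)
print(grasp["type"], round(confidence, 2))
```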
We’re also benefiting from ultra-high-resolution 3D sensing.
Q: The human element in robot design is often overlooked. How do your backgrounds as artists inform your work?
Goldberg: Mostly, it helps me think about things from a variety of perspectives. Artists have to challenge assumptions; that’s what’s essential for good scientists and researchers. I think of them as quite aligned.
In our lab, we’re always trying to buck the trend and test assumptions. Jeff’s work on Dex-Net, the Dexterity Network, is very much in that spirit.
The idea behind the Dex-Net dashboard is to use deep learning and train robot grasping on synthetic data sets. We’ve had surprisingly effective performance because of the technical decisions we’ve made.
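To give a flavor of training on purely synthetic data (a toy sketch; Dex-Net’s actual GQ-CNN architecture, rendered depth images, and dataset are far richer), the snippet below fits a tiny classifier to procedurally generated “depth patches”:

```python
import numpy as np

rng = np.random.default_rng(42)

def synthetic_patch(graspable):
    """Toy stand-in for a rendered depth-image grasp patch (meters):
    'graspable' patches get a raised region under the gripper center,
    non-graspable ones a depth discontinuity at the edge."""
    patch = rng.normal(0.70, 0.01, size=(8, 8))
    if graspable:
        patch[2:6, 2:6] -= 0.05
    else:
        patch[:, :4] -= 0.05
    return patch.ravel()

X = np.stack([synthetic_patch(i % 2 == 0) for i in range(2000)])
X = np.hstack([X, np.ones((len(X), 1))])   # append a bias feature
y = np.array([i % 2 == 0 for i in range(2000)], dtype=float)

# A tiny logistic-regression "grasp quality" model trained purely on
# synthetic labels: the train-on-synthetic idea, minus the deep net.
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.05 * X.T @ (y - p) / len(y)

accuracy = ((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == (y == 1.0)).mean()
print(f"training accuracy: {accuracy:.2f}")
```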
Mahler: Being a musician, I understand going against the status quo. For researchers, this is how big innovations are made.
I’ve been working on this project for four years. We’re working to make the solution more extensible to cameras, grippers, and other commercial applications.
Q: What do you look forward to seeing at RoboBusiness?
Goldberg: I’m always interested to see the range of new companies and products at the show. I’ve made good contacts in the past, and I’m interested in seeing cutting-edge robotics.
We’re also very open to finding partners for Ambidextrous Laboratories, our new startup for commercializing the technology behind Dex-Net. We’re looking to talk with some of the bigger companies that are looking at moving into deep learning and manipulation.
We believe there’s a variety of applications for universal picking with robots. Being able to grasp and transport a wide variety of objects without object-specific training could be useful for e-commerce, logistics, and services.