A famous viral video about the DARPA Robotics Challenge shows all sorts of humanoid robots clumsily falling down. Bipedal movement is inherently unstable, which is a problem not only because a fall interrupts the robot’s task, but also because falling can damage a very expensive piece of machinery.
Roboticists across the globe are tackling this problem in myriad ways. While some add a series of corrective steps after a robot loses its balance, much like a person stumbling after a trip, Duke University’s Kris Hauser wants robots to be able to use the environment around them to catch themselves.
“If a person gets pushed toward a wall or a rail, they’ll be able to use that surface to keep themselves upright with their hands. We want robots to be able to do the same thing,” says Hauser, associate professor of electrical and computer engineering and of mechanical engineering and materials science at Duke. “We believe that we’re the only research group working on having a robot dynamically choose where to place its hands to prevent falling.”
While such decisions and actions are second nature to us, programming them into a robot’s reflexes is deceptively difficult. To streamline the process and save computation time, Hauser’s software focuses only on the robot’s hip and shoulder joints. Hauser demonstrates the technique on a ROBOTIS Darwin Mini humanoid robot equipped with a Raspberry Pi 3 microcomputer, an Adafruit BNO055 IMU, and a ROBOTIS TS-10 sensor.
As long as the robot isn’t twisting as it falls, this creates only three angles that the stabilization algorithm has to take into account—the foot to the hip, the hip to the shoulder, and the shoulder to the hand. The robot must identify nearby surfaces within reach and then quickly calculate the best combination of angles to catch itself.
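The geometry described above can be sketched as a three-link planar chain. The code below is a minimal illustration, not the researchers’ implementation: the link lengths, angle conventions, and wall model are all hypothetical, chosen only to show how three joint angles determine where the hand lands.

```python
import math

# Hypothetical link lengths in meters for a small humanoid's sagittal-plane model:
# foot-to-hip, hip-to-shoulder, and shoulder-to-hand.
LEG, TORSO, ARM = 0.30, 0.25, 0.20

def hand_position(theta_leg, theta_torso, theta_arm):
    """Forward kinematics of the three-link chain.

    Each angle is measured in radians relative to the previous link
    (starting from vertical at the foot), so the absolute orientation
    of each link is the running sum of the angles before it.
    """
    a1 = theta_leg
    a2 = a1 + theta_torso
    a3 = a2 + theta_arm
    x = LEG * math.sin(a1) + TORSO * math.sin(a2) + ARM * math.sin(a3)
    y = LEG * math.cos(a1) + TORSO * math.cos(a2) + ARM * math.cos(a3)
    return x, y

def hand_error_to_wall(angles, wall_x):
    """Horizontal distance from the hand to a vertical wall at x = wall_x.

    A planner searching for a bracing pose would drive this toward zero
    while also scoring impact speed and slip risk.
    """
    x, _ = hand_position(*angles)
    return abs(x - wall_x)
```

With the robot standing straight (all angles zero), the hand sits directly above the foot at the chain’s full height; leaning the whole chain 90 degrees at the ankle swings the hand out to that same distance horizontally, where it could meet a wall.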
The final solution minimizes impact when the robot’s hands make contact, and also minimizes the chance of its hands or feet slipping. The algorithm takes its best guess and then progressively optimizes it using a method called direct shooting. You can read more about this technique in the research paper “Realization of a Real-time Optimal Control Strategy to Stabilize a Falling Humanoid Robot with Hand Contact.”
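The “best guess, then progressively optimize” loop can be illustrated with a toy direct-shooting example. This is not the controller from the paper: the one-dimensional dynamics, the cost weights, and the derivative-free search below are all stand-in assumptions, kept deliberately simple to show the pattern of simulating a candidate control forward and refining it by its cost.

```python
def simulate(u, x0=0.0, v0=1.0, dt=0.01, steps=100):
    """Shoot: integrate a 1-D point mass forward under a constant control u
    (an acceleration), returning its final position and velocity."""
    x, v = x0, v0
    for _ in range(steps):
        v += u * dt
        x += v * dt
    return x, v

def cost(u, target_x=0.5):
    """Score a candidate control: penalize missing the contact point
    and penalize a high impact speed when contact is made."""
    x, v = simulate(u)
    return (x - target_x) ** 2 + 0.1 * v ** 2

def direct_shooting(u0=0.0, step=1.0, iters=60):
    """Crude derivative-free refinement of an initial guess:
    at each iteration try u - step, u, and u + step, keep whichever
    simulates to the lowest cost, then shrink the step size."""
    u = u0
    for _ in range(iters):
        u = min((u - step, u, u + step), key=cost)
        step *= 0.8
    return u
```

Because the current guess is always among the candidates, each iteration can only keep or lower the cost, which is why the method improves steadily from whatever initial guess it is handed.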
After stabilizing, the robot remains in a steady state and can either wait to be repositioned by a human before resuming its gait or recover to an upright position on its own by pushing off the wall. This approach uses a flexing motion of the elbow to give the robot sufficient momentum to return to a standing posture.
In its current state, the robot must be fed information about its environment and cannot navigate on its own. But in the near future, Hauser plans to upgrade to a larger robot with its own camera sensors so that it can see its surroundings.
“Hopefully by the end of the year we should be doing experiments with the robot actually working in a live obstacle course,” Hauser said. “Then we’ll be trying to have the robot both dynamically map what’s around it and reason about how to protect itself from falling in arbitrary environments.”
Editor’s Note: This article was republished with permission from Duke University’s Pratt School of Engineering.