Nanyang Technological University, Singapore (NTU Singapore) scientists have developed a robot that can autonomously assemble an IKEA chair without interruption.
Designed by Assistant Professor Pham Quang Cuong and his team from NTU’s School of Mechanical and Aerospace Engineering, the robot comprises an IDS Imaging Ensenso N35 3D camera and two Denso robotic arms equipped with grippers to pick up objects. The team developed algorithms on top of three open-source libraries to enable the robot to assemble the IKEA chair.
It assembled IKEA’s Stefan chair in 8 minutes and 55 seconds. Prior to the assembly, the robot took 11 minutes and 21 seconds to independently plan the motion pathways and 3 seconds to locate the parts.
Asst Prof Pham said, “For a robot, putting together an IKEA chair with such precision is more complex than it looks. The job of assembly, which may come naturally to humans, has to be broken down into different steps, such as identifying where the different chair parts are, the force required to grip the parts, and making sure the robotic arms move without colliding into each other. Through considerable engineering effort, we developed algorithms that will enable the robot to take the necessary steps to assemble the chair on its own.
“We are looking to integrate more artificial intelligence into this approach to make the robot more autonomous so it can learn the different steps of assembling a chair through human demonstration or by reading the instruction manual, or even from an image of the assembled product.”
The NTU team of Asst Prof Pham, research fellow Dr. Francisco Suárez-Ruiz and alumnus Mr. Zhou Xian believe that their robot could be of greatest value in performing specific tasks with precision in industries where tasks are varied and do not merit specialized machines or assembly lines.
How it works
The robot is designed to mimic the general-purpose “hardware” humans use to assemble objects: “eyes” in the form of the Ensenso 3D camera, and “arms” in the form of industrial robotic arms capable of six-axis motion. Each arm is equipped with a parallel gripper to pick up objects. Mounted on the wrists are force sensors that measure how strongly the “fingers” are gripping and how hard they press objects into contact with each other.
The robot starts the assembly process by taking 3D photos of the parts laid out on the floor to generate a map of their estimated positions. This replicates, as closely as possible, the cluttered environment left after humans unbox a build-it-yourself chair and prepare to put it together. The challenge is to localize the parts in that clutter quickly, reliably, and with sufficient precision. The Ensenso N35 works on the projected-texture stereo vision principle, which mimics human vision: two cameras capture the same scene from two different positions, so each object appears at a slightly different position in the two images, according to the cameras’ projection rays.
The N35 uses matching algorithms to compare the two images, search for corresponding points, and record every point’s displacement in a disparity map. From this map, the Ensenso software determines the 3D coordinates of each image pixel or object point, in this case the chair components.
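The geometry behind this step is standard stereo triangulation: in a rectified stereo pair, a point’s depth Z follows from its disparity d as Z = f·B/d, where f is the focal length in pixels and B is the baseline between the two cameras. The Python sketch below illustrates only this general principle; it is not the Ensenso software, and the function and parameter names are illustrative.

```python
import numpy as np

def disparity_to_depth(disparity: np.ndarray,
                       focal_length_px: float,
                       baseline_m: float) -> np.ndarray:
    """Convert a disparity map to a depth map via standard stereo
    triangulation: Z = f * B / d.

    disparity       -- per-pixel displacement between the two
                       rectified images, in pixels
    focal_length_px -- camera focal length, in pixels
    baseline_m      -- distance between the two cameras, in metres
    """
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0  # zero disparity means no stereo match
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth
```

Back-projecting each valid pixel through the camera model then yields a 3D point cloud, against which the known part models can be matched to estimate each component’s pose.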
Next, using algorithms developed by the team, the robot plans a two-handed motion that is fast and collision-free. This motion pathway needs to be integrated with visual and tactile perception, grasping and execution.
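The article does not detail the team’s planner, but sampling-based planners such as rapidly-exploring random trees (RRTs) are a standard approach to this kind of problem. The hedged sketch below plans in the combined joint space of both six-axis arms, treated as a single 12-dimensional configuration, so collisions between the two arms are caught by the same check; the collision checker is a stub, and every name and parameter here is an illustrative assumption, not the team’s code.

```python
import random
import numpy as np

def in_collision(q: np.ndarray) -> bool:
    """Hypothetical stand-in for a real collision checker that knows
    the geometry of both arms, the chair parts, and the workspace."""
    raise NotImplementedError

def plan_rrt(start, goal, joint_limits, step=0.1, iters=5000, goal_bias=0.05):
    """Grow a random tree from `start` until it reaches `goal`.
    `joint_limits` is a (12, 2) array of per-joint (min, max) bounds."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    nodes, parents = [start], {0: None}
    for _ in range(iters):
        # Sample a random configuration, occasionally biased toward the goal.
        if random.random() < goal_bias:
            target = goal
        else:
            target = np.random.uniform(joint_limits[:, 0], joint_limits[:, 1])
        # Extend the nearest tree node a small step toward the sample.
        i = min(range(len(nodes)), key=lambda k: np.linalg.norm(nodes[k] - target))
        d = target - nodes[i]
        q_new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-9)
        if in_collision(q_new):
            continue
        nodes.append(q_new)
        parents[len(nodes) - 1] = i
        if np.linalg.norm(q_new - goal) < step:  # close enough to the goal
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parents[j]
            return path[::-1]  # joint-space waypoints, start to goal
    return None  # no collision-free path found within the budget
```

Planning in the combined joint space is one simple way to guarantee the two arms never collide with each other, since every candidate step is checked with both arm poses taken into account at once.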
To make sure that the robotic arms are able to grip the pieces tightly and perform tasks such as inserting wooden plugs, the amount of force exerted has to be regulated. This is challenging because industrial robots, designed to be precise at positioning, are bad at regulating forces, Asst Prof Pham explained.
The force sensors mounted on the wrists help to determine the amount of force required, allowing the robot to precisely and consistently detect holes by sliding the wooden plug on the surfaces of the work pieces, and perform tight insertions.
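In outline, that search can be a simple guarded-motion loop: press the plug lightly against the surface, slide it along, and watch for a sudden drop in the measured normal force, which signals that the plug tip has fallen into a hole. The sketch below assumes a hypothetical robot API (move_relative, read_wrist_force, current_pose); the article does not describe the team’s actual controller, and the thresholds are invented for illustration.

```python
# Assumed values -- the article does not give the team's actual settings.
CONTACT_FORCE_N = 5.0    # downward force held while sliding
DROP_THRESHOLD_N = 2.0   # a sharp drop in normal force suggests a hole
STEP_MM = 0.5            # sliding increment along the surface

def find_hole(robot, max_steps=400):
    """Slide the gripped plug along the surface until the wrist sensor
    reports a drop in normal force, i.e. the plug has entered a hole."""
    for _ in range(max_steps):
        # Hypothetical API: step sideways while a force controller keeps
        # the plug pressed against the surface at CONTACT_FORCE_N.
        robot.move_relative(dx_mm=STEP_MM, press_force_n=CONTACT_FORCE_N)
        fz = robot.read_wrist_force().z  # normal force component, newtons
        if fz < DROP_THRESHOLD_N:
            return robot.current_pose()  # hole located at this pose
    return None  # no hole found along this search line
```

Keying the search to a force drop rather than a position target makes it robust to the millimetre-level pose uncertainty left over from the vision step.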
An example of successful autonomous dexterous manipulation
The robot developed by the NTU Singapore scientists is being used to explore dexterous manipulation, an area of robotics that requires precise control of forces and motions with fingers or specialised robotic hands. As a result, the robot is more human-like in its manipulation of objects.
So far, autonomous demonstration of dexterous manipulation has been limited to elementary tasks, said Asst Prof Pham.
“One reason could be that complex manipulation tasks in human environments require many different skills. This includes being able to map the exact locations of the items, plan a collision-free motion path, and control the amount of force required. On top of these skills, you have to be able to manage the complex interactions between the robot and the environment,” he explained.
“The way we have built our robot, from the parallel grippers to the force sensors on the wrists, all work towards manipulating objects in a way humans would,” he added.
Now that the team has achieved its goal of demonstrating the assembly of an IKEA chair, they are working with companies to apply this form of robotic manipulation to a range of industries.
The team is now working to deploy the robot to do glass bonding that could be useful in the automotive industry, and drilling holes in metal components for the aircraft manufacturing industry. Cost is not expected to be an issue as all the components in the robotic setup can be bought off the shelf.
The research, which took three years, was supported by grants from the Ministry of Education, NTU’s innovation and enterprise arm NTUitive, and the Singapore-MIT Alliance for Research & Technology.
Editor’s Note: This article was republished from the NTU News Hub page.