Automatica 2018 was all about smart automation and robotics for a fast-changing industry. I visited Automatica in Munich, the leading exhibition for smart automation and robotics in Europe. This year, the trade fair had 890 exhibitors and more than 46,000 visitors. These are my main takeaways.
For successful automation, robots must become more flexible and easier to train and set up, and the need for complicated, custom-made jigs or magazines must be reduced. This will enable small and medium-sized enterprises to produce smaller production batches more efficiently and to improve or update their end-products more frequently.
Small, flexible, and safe cobots will offload dull, dangerous, and repetitive tasks, and enable new types of human-robot collaboration. To solve increasingly complex tasks, and to be easily trained to handle new products, robots need more processing power, and they need to see the world as we see it.
Cobots, cobots, cobots
At Automatica, it was clear that collaborative robots (cobots) are hot! Cobots are robots designed to work together with humans without the need for the traditional safety fence. Cobots typically have built-in proximity sensors to slow down or stop operations to avoid collisions with humans.
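To make the principle concrete, here is a minimal Python sketch of the speed-and-separation idea behind such proximity sensing. The distance thresholds and the scaling rule are my own illustrative assumptions, not any vendor's actual safety implementation.

```python
# Minimal sketch of proximity-based speed scaling, the core idea behind
# many cobot safety systems (speed-and-separation monitoring).
# The thresholds below are assumed values for illustration only.

STOP_DISTANCE = 0.3    # meters: halt if a human is closer than this (assumed)
SLOW_DISTANCE = 1.0    # meters: start slowing down below this (assumed)

def speed_scale(human_distance_m: float) -> float:
    """Return a velocity scaling factor in [0, 1] based on human proximity."""
    if human_distance_m <= STOP_DISTANCE:
        return 0.0                      # protective stop
    if human_distance_m >= SLOW_DISTANCE:
        return 1.0                      # full programmed speed
    # Linear ramp between the two thresholds
    return (human_distance_m - STOP_DISTANCE) / (SLOW_DISTANCE - STOP_DISTANCE)

for d in (1.5, 0.8, 0.2):
    print(f"human at {d} m -> speed factor {speed_scale(d):.2f}")
```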
Cobots do not necessarily replace humans; instead, they can complement humans during monotonous or complex tasks. That way, the robot can do what it is good at, while a human can step in to carry out the parts of the operation that require higher motor precision or human judgment.
Practically every major robot vendor displayed cobots at the fair. Franka Emika brought its Panda robot, Universal Robots displayed the new e-Series, and there were demos with ABB’s single-arm YuMi, Denso’s Cobotta, and cobots from Doosan Robotics.
One cobot I found particularly interesting was from Kassow Robots, a Danish startup founded by Kristian Kassow (also a co-founder of Universal Robots). Kassow exhibited a 7-DoF cobot with a 10 kg payload and a maximum joint speed of 225°/s.
Moon Gravity – the future of robot programming?
During “The Future of Robot Programming,” an event organized by Euclid Labs on the first evening of Automatica, several of the ways robots will be programmed in the future were demonstrated.
Franka Emika’s Panda cobot learns new tasks as the human operator simply pulls the robot arm around. In “moon gravity” mode, the robot behaves in a natural way, with the joints slowly sinking towards the ground, while operators can still push the joints around with relatively gentle movements. Franka Emika also demonstrated the high sensitivity and safety of the robot’s force-torque sensors by stopping the robot with a balloon.
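Conceptually, this kind of teaching by demonstration amounts to recording joint states while the arm is freely movable in gravity compensation, then replaying them. The sketch below illustrates the idea with a hypothetical robot interface; it is not Franka Emika’s actual API.

```python
import time

# Conceptual sketch of teaching by demonstration ("hand-guiding"):
# sample joint positions while the arm is freely movable in
# gravity-compensation mode, then replay them. The `robot` object and
# its methods are hypothetical placeholders, not a real vendor API.

def record_trajectory(robot, duration_s=5.0, rate_hz=50):
    """Sample joint angles while the operator pulls the arm around."""
    robot.set_mode("gravity_compensation")   # joints feel weightless
    waypoints = []
    for _ in range(int(duration_s * rate_hz)):
        waypoints.append(robot.get_joint_positions())  # e.g. 7 joint angles
        time.sleep(1.0 / rate_hz)
    return waypoints

def replay_trajectory(robot, waypoints, rate_hz=50):
    """Stream the recorded joint positions back to the controller."""
    robot.set_mode("position_control")
    for q in waypoints:
        robot.set_joint_positions(q)
        time.sleep(1.0 / rate_hz)
```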
Robot programming with simple drag-and-drop elements was demonstrated by ArtiMinds. By combining basic motions with task wizards, even complex tasks can be solved without writing a single line of code. ArtiMinds also demonstrated how robots can follow paths on a CAD model, e.g. for polishing, gluing, and grinding applications.
Finally, Euclid Labs showed a demo of how VR goggles and hand controllers can be used to easily teach a robot a task in a virtual world, which can then be deployed to the real-life robot.
As companies become able to teach robots operations, and to program and simulate tasks easily, we’ll see drastically reduced time from installation to production readiness, and faster robot adoption.
Smarter robots
To make robots more flexible and to enable them to work amidst humans, they are typically equipped with numerous sensors. To analyze, reason, and make decisions based on such sensor data, the robots also need improved processing power: better brains.
One trend at Automatica was the increased use of AI and machine learning. In some cases, the intelligence was embedded directly into the product; in other instances, the processing ran on an external computer.
Recent advancements in deep learning algorithms, along with the availability of powerful commodity hardware such as the graphics cards used by the gaming industry, open up a range of new possibilities in robotics and machine vision. Objects can now be reliably recognized in cluttered scenes, defective products can be detected and sorted out, and a robot can learn which grasps work best in a pick-and-place operation, improving over time.
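As an illustration of that last point, the sketch below shows the kind of small network that can score grasp candidates from depth-image patches. The architecture and data are made up for the example and do not represent any exhibitor’s system.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not any exhibitor's actual system): a small CNN
# that scores grasp candidates from depth-image patches. Networks in
# this spirit are trained on recorded grasp attempts, which is how a
# robot can improve its pick success rate over time.

class GraspQualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),  # depth patch in
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, depth_patch):
        # Output: predicted probability that this grasp will succeed
        return self.head(self.features(depth_patch))

net = GraspQualityNet()
candidates = torch.rand(8, 1, 64, 64)     # 8 candidate grasp patches (dummy data)
scores = net(candidates)
best = scores.argmax().item()
print(f"best grasp candidate: {best}, score {scores[best].item():.2f}")
```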
Robots sensing the world in 3D
An essential element in achieving higher flexibility is to equip robots with human-like vision. Traditional 2D cameras have been used for a while to give robots vision abilities, but they only provide a flat projection of the world. For robots to see and interact with a three-dimensional world, they also need 3D vision.
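The step from a flat projection to 3D is worth spelling out: given a depth image and the camera intrinsics, each pixel can be back-projected into a 3D point using the pinhole camera model. The intrinsics and depth map in the sketch below are made-up example values.

```python
import numpy as np

# Back-projecting a depth image into a 3D point cloud with the pinhole
# camera model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
# The intrinsics below are made-up example values.

fx, fy = 600.0, 600.0      # focal lengths in pixels (assumed)
cx, cy = 320.0, 240.0      # principal point (assumed)

depth = np.random.uniform(0.5, 1.5, size=(480, 640))   # dummy depth map [m]

v, u = np.indices(depth.shape)          # per-pixel row and column indices
x = (u - cx) * depth / fx
y = (v - cy) * depth / fy
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)

print(points.shape)   # (307200, 3): one XYZ point per pixel
```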
A large number of companies were exhibiting solutions with 3D vision sensors for robots at Automatica. Examples of tasks requiring vision solutions include picking and placing of randomly organized items (out of containers, totes or shelves) as well as a range of inspection and quality control tasks.
One demo showed an ABB YuMi robot performing bin-picking of small, shiny parts scattered in a small container, with the parts located using the Zivid One 3D camera.
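In outline, a bin-picking cycle like this alternates between 3D capture, part localization, grasp selection, and robot motion. The sketch below illustrates that loop with hypothetical placeholders for the camera SDK and robot controller; it is not the actual Zivid or ABB API.

```python
# Outline of a bin-picking cycle like the one in the demo. The camera,
# robot, and helper functions passed in are hypothetical placeholders
# standing in for a 3D camera SDK and a robot controller API.

def bin_picking_cycle(camera, robot, detect_part_poses, select_grasp, place_pose):
    """Pick parts from a bin until no more are recognized."""
    while True:
        point_cloud = camera.capture()           # 3D snapshot of the bin
        poses = detect_part_poses(point_cloud)   # locate parts in the cloud
        if not poses:
            break                                # bin empty or nothing found
        grasp = select_grasp(poses)              # reachable, collision-free pick
        robot.move_to(grasp.approach_pose)       # hover above the part
        robot.move_to(grasp.pick_pose)
        robot.close_gripper()
        robot.move_to(grasp.retreat_pose)        # lift clear of the bin
        robot.move_to(place_pose)
        robot.open_gripper()
```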
Editor’s Note: This article was republished with permission from Zivid Labs’ blog.
About the Author
Øystein Skotheim is co-founder and senior software architect of Zivid Labs, which was founded in 2015. With more than 60 years of in-house experience in optical sensors and 3D machine vision, Zivid Labs enables new and existing applications to be automated, helping customers improve efficiency and reduce cost in areas like quality control, bin-picking, logistics, and inspection.