Earlier this week, Micropsi Industries, a Berlin, Germany-based robotics software startup founded in 2014, closed $6.08 million in Series A funding, bringing its total funding to $9 million across six rounds. Micropsi, which at press time had 15 employees, said ABB, NVIDIA, OnRobot and Universal Robots are among its key partners.
We did not know much about Micropsi prior to the funding news. So we reached out via email to co-founder and CEO Ronnie Vuine to learn more about the company, its robot control system, the benefits of trainable robots, learning from demonstration and more.
Tell us about Micropsi Industries
Micropsi Industries is a VC-backed robotics software company with offices in Berlin, Germany (R&D) and New York City. Founded in 2014, the company is at the forefront of innovation in robotic automation for manufacturing, with a particular emphasis on assembly tasks.
What is MIRAI?
MIRAI is Micropsi Industries’ unique robot control system, giving hand-eye coordination to industrial robots. It allows robotic arms to perform flexible, fully sensor-driven movements: Instead of following pre-defined trajectories, MIRAI-controlled arms react dynamically to the world and can handle variance and unpredictable dynamics.
This is made possible by neural networks that make fast decisions (15-30 a second) on how to move the arm based on input from cameras and force sensors.
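The loop described above can be sketched in a few lines. This is an illustrative assumption of how such a sensor-driven controller might be structured, not Micropsi's actual API: a stand-in `policy` function takes the place of the trained network, and `send_velocity`, `get_image` and `get_forces` are hypothetical callbacks for the robot and its sensors.

```python
import time

import numpy as np

def policy(image, forces):
    """Stand-in for the trained neural network: returns a Cartesian
    velocity command (vx, vy, vz, wx, wy, wz) for the robot's wrist."""
    return np.zeros(6)  # a real policy would evaluate a learned model here

def control_loop(get_image, get_forces, send_velocity, hz=20, cycles=100):
    """Repeatedly read sensors, ask the policy for a command, send it.
    At hz=20 this makes 20 movement decisions per second, in the
    15-30 Hz range mentioned above."""
    period = 1.0 / hz
    for _ in range(cycles):
        command = policy(get_image(), get_forces())
        send_velocity(command)
        time.sleep(period)
```

The point of the sketch is that every cycle re-reads the sensors, so the commanded motion can change as the world changes, rather than following a fixed trajectory.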
MIRAI skills aren’t programmed like classic robot movements; they are trained. Human trainers give repeated demonstrations, guiding the robot by the wrist. MIRAI picks up on the intended movement and quickly learns to solve the task without help. The key advantage of MIRAI skills is that they can handle variance and changes in the environment at execution time. They’re also much easier and cheaper to create than hand-engineered, computer-vision-based algorithms.
From a technical standpoint, how does it enable robots to learn new skills?
It’s a simple, external add-on to the robot’s control hardware that knows how to evaluate cameras and other sensors, fast. The additional controller also collects the data for learning. During the learning process, a human demonstrates movements by guiding the arm while MIRAI watches with the cameras and trains a deep neural network to do what the human does.
This allows MIRAI-driven robotic arms to perform movements that would be difficult or impossible to hand-code. MIRAI skills aren’t programs, they are more like collected intuitions on how a good movement looks. We humans don’t execute a sequence of explicit, discrete, named movement commands when we, say, screw in a light bulb. We know how it looks and feels to do it, moment to moment. That’s what MIRAI replicates for robots.
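As a rough illustration of learning "how a good movement looks" from demonstrations, here is a toy behavioral-cloning sketch: sensor observations recorded while a human guides the arm are paired with the motions the human produced, and a model is fit to imitate them. The linear least-squares "policy" below is an assumption standing in for a deep network; the feature and motion dimensions are made up for the example.

```python
import numpy as np

def fit_policy(observations, actions):
    """Fit a linear map from observations to demonstrated actions
    (a toy stand-in for training a deep network on demonstrations)."""
    W, *_ = np.linalg.lstsq(observations, actions, rcond=None)
    return W

def act(W, observation):
    """Imitate the demonstrator: predict a motion from current sensing."""
    return observation @ W

# Usage with synthetic "demonstrations":
rng = np.random.default_rng(0)
obs = rng.normal(size=(200, 8))       # e.g. features from cameras + force sensor
acts = obs @ rng.normal(size=(8, 6))  # wrist motions the human demonstrated
W = fit_policy(obs, acts)
```

The learned `W` is not a stored trajectory; it is a mapping from what the robot senses to how it should move, which is what lets the behavior generalize to situations that vary from the demonstrations.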
How is this different from robot learning from demonstration?
It is learning from demonstration. The question is, what is being learned? So far, in industrial robotics, when people said “learn from demonstration,” they meant “remember the exact same trajectory.” Simply record the movements and replay them. This is enough for tasks where nothing ever varies, and nothing moves unpredictably. Most tasks aren’t like that.
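The "record and replay" approach contrasted above fits in a few lines, which makes the difference concrete: the stored waypoints are executed verbatim, with no sensing at run time. The function names are illustrative.

```python
def record(demonstrated_poses):
    """Store the demonstrated waypoints exactly as given."""
    return list(demonstrated_poses)

def replay(trajectory, move_to):
    """Execute the stored poses one by one, blind to the environment."""
    for pose in trajectory:
        move_to(pose)
```

If a part shifts a few millimeters between demonstration and execution, this kind of replay fails, because nothing in the loop ever looks at the world.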
How is the neural network trained?
Touching a powered-on robot used to be a dangerous thing. Traditional industrial robots are immensely powerful and have no perception at all. Once told to make a movement, they will execute that exact movement, regardless of what is in their way. Thus, the only way to be safe around them was to power them down.
For some years now, robot makers have started to offer robotic arms that are slightly slower, a lot lighter, and more perceptive of forces encountered during movement. This allows them to stop before they do any harm to a human – and that means being close to them is an option now. And this is what enables the training paradigm that MIRAI uses – touching the robot and guiding it repeatedly. And from these demonstrations, the robot learns.
Who are your target customers?
Factories that do assembly work, for instance in electronics, white goods, appliances, tools. We either work with the factories directly or with their automation solution partners.
What are the benefits to end users of trainable robots?
Being able to automate assembly or test stations that previously couldn’t be automated – without writing a single line of code or having to know anything about AI.
What types of robotic applications work best for MIRAI?
Insertions, as well as all kinds of positioning tasks that have to deal with variance.
What are the limits to the complexity of tasks MIRAI can handle?
We generally recommend a “divide and conquer” approach: identify the parts of a solution where you really need sensor-based positioning, and do the rest with classic point-to-point teaching. The resulting skills typically move the robot only in small areas – where it gets fiddly. MIRAI is not designed to do any reasoning or make high-level decisions.
Why is ease of use vital to increasing the adoption of cobots?
Ease of use is vital for the adoption of pretty much everything, and collaborative robots are no exception – they already are the ease-of-use variant of traditional robotic arms. While good engineers can handle hard-to-use machinery, they will choose an easier-to-use alternative just like anybody else, provided it delivers the same reliability and performance.
What are some other ways cobots are becoming easier to use?
Pose teaching – configuring robot poses manually, then interpolating movements between these poses – was a major step forward. We now see a trend toward graphical, building-block-based user interfaces for programming larger sequences of movements. One thing that isn’t happening yet, but will at some point, is a move toward a unified robot programming language. The scripting languages used today are clunky, vendor-specific and horrible.
Is MIRAI robot-agnostic?
At its core, yes. It supports all Universal Robots models and some ABB robots. It’s fairly easy for us to add support for more robots, and we do this based on customer requests.