Mixed reality offers untrained users an easier, more precise way to control multiple robots, according to an NYU researcher, who also describes the challenges that remain as robots move into our everyday lives.
In conventional industrial automation, robots are programmed to perform very specific tasks, as on an automobile assembly line. More recently, machine learning has emerged to help robots and self-driving cars react more flexibly to their environments. For robots that interact directly with people, other control techniques such as voice recognition and virtual reality are options. What does “mixed reality” offer for human-machine interaction?
As robots spread to public spaces such as airports, retail stores, and homes, ease of use becomes critical. What if an average person could control a robot simply by touching buttons on an augmented or mixed reality smartphone app?
Dr. Vikram Kapila, a professor at the New York University (NYU) Tandon School of Engineering, is exploring how mobile mixed reality can enable users to operate multiple robots without prior training. His research combines 3D augmented-reality graphics for virtual objects with touchscreen gestures that let people intuitively direct robots to move objects precisely.
Kapila, a senior member of the Institute of Electrical and Electronics Engineers (IEEE), responded to Robotics Business Review’s questions about mixed reality and robotics controls.
How is mixed reality easier to use than other control methods? How can it reduce costs?
Kapila: Mobile mixed reality can be used to create human-robot interfaces on mobile devices that are intuitive and natural for human operators. Such interfaces allow operators to work in a shared environment with robots and to keep the robot in direct sight when needed, which improves their understanding of the robot’s configuration and its environment.
Instead of driving an industrial manipulator joint by joint with a complex, often non-intuitive teach pendant-like device, mixed reality can render human-robot interfaces that let the operator give the robot task-level direction.
Specifically, in a mixed-reality environment, the objects in the scene may be rendered as manipulable, augmented-reality components. As the human operator moves an object from its home to a goal location in the mixed-reality world, the physical robot performs the same task with the physical object in the real world.
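As a rough illustration of that loop, the sketch below maps the end point of a drag gesture to a floor-plane coordinate and issues a single task-level command. The homography `H`, the `send_goal` callback, and the pixel values are hypothetical stand-ins, not details of Kapila’s system:

```python
import numpy as np

def screen_to_floor(H, u, v):
    """Map a touch point (u, v) in screen pixels to floor-plane
    coordinates via an image-to-plane homography H."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]  # normalize homogeneous coordinates

def on_object_dropped(H, drop_px, object_id, send_goal):
    """When the operator drops a virtual object, convert the drop
    point to plane coordinates and issue one task-level command."""
    goal_xy = screen_to_floor(H, *drop_px)
    send_goal(object_id, goal_xy)  # placeholder for the robot API

# Demo with an identity homography; a real H would come from tracking.
on_object_dropped(np.eye(3), (320, 240), "object_1",
                  lambda oid, xy: print(oid, xy))
```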
Such an approach can reduce costs by replacing customized robot-control hardware with commercial-grade mobile devices. It can similarly reduce training costs for robot operators.
Finally, in educational settings, mixed-reality frameworks can provide intuitive and natural hands-on learning experiences in engineering labs.
What were the challenges in developing mixed reality to control multiple mobile robots? Your video mentioned computational load — is there an ideal number of inputs or reference robots?
Kapila: When using our mobile mixed-reality approach to control mobile robots, one needs to be able to select and locate a reference-coordinate frame in the world with respect to which other robots are to be located, navigated, driven, etc. This problem is not challenging if the camera capturing the scene is fixed in the world.
However, in our case, the camera of the mobile device itself is used to capture the mobile robots and objects in the world. In such a case, one can fix a subset of four robots in the team and use their corresponding bounding box to create a fixed plane and a reference frame for determining the location and pose of the individual robots and objects.
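In code, that frame construction might look like the following sketch: given the pixel centers of the four fixed robots and their known spacing on the floor, OpenCV’s `findHomography` yields a map from image coordinates to plane coordinates. All numeric values here are placeholders:

```python
import cv2
import numpy as np

# Pixel centers of the four fixed reference robots, as detected in the
# current camera frame (placeholder values).
image_pts = np.array([[120, 400], [510, 395], [530, 120], [140, 110]],
                     dtype=np.float32)

# Their known positions on the ground plane, in meters, in the
# reference frame they define (e.g., corners of a 2 m square).
plane_pts = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], dtype=np.float32)

# Homography from image coordinates to plane coordinates; with exactly
# four correspondences this is an exact fit.
H, _ = cv2.findHomography(image_pts, plane_pts)

# Any other robot or object detected at pixel (u, v) can now be located
# on the plane:
u, v = 300.0, 250.0
x, y, w = H @ np.array([u, v, 1.0])
print("plane position:", x / w, y / w)
```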
Alternatively, one can relax the assumption of four fixed robots by keeping any one robot fixed — while it is not to perform a task — and use the inertial measurements from the mobile device to render a reference frame relative to which one finds the location and pose of individual robots and objects.
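One minimal way to build such a gravity-referenced frame from the device’s inertial data is sketched below. It recovers only tilt, since heading about gravity is unobservable from the accelerometer alone, and sign conventions for the gravity reading vary by platform; this is an illustration, not the paper’s method:

```python
import numpy as np

def leveled_frame_from_gravity(accel):
    """Estimate device tilt from a (quasi-)static accelerometer reading,
    assumed here to be the gravity vector in device coordinates.
    Returns a rotation matrix whose rows are world axes expressed in
    device coordinates; yaw about gravity remains arbitrary."""
    g = np.asarray(accel, dtype=float)
    z_world = -g / np.linalg.norm(g)           # 'up' in device coordinates
    # Pick any axis orthogonal to 'up' to complete the frame.
    x_world = np.cross([0.0, 1.0, 0.0], z_world)
    if np.linalg.norm(x_world) < 1e-6:         # device axis parallel to up
        x_world = np.cross([1.0, 0.0, 0.0], z_world)
    x_world /= np.linalg.norm(x_world)
    y_world = np.cross(z_world, x_world)
    return np.vstack([x_world, y_world, z_world])

# Example: a slightly tilted device; gravity maps to straight down
# (about [0, 0, -9.71]) in the leveled frame.
R = leveled_frame_from_gravity([0.5, 0.0, -9.7])
print(R @ [0.5, 0.0, -9.7])
```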
As the operator moves around with the mobile device, it is possible to switch the non-moving reference robot to another robot in the scene. The main challenge in this approach is that only the objects and robots visible through the mobile device camera are manipulable at a given instant.
Of course, the operator can move around in the scene to gain control of a different subset of robots, if needed. Our current approach requires at least one mobile robot to serve as a reference, but the role of reference robot can be switched according to the task at hand.
One limitation of our approach arises when the reference robot itself must be assigned a task and no other robot is in view to take over as the reference.
Another limitation is that the plane occupied by the robots must be assumed to be horizontal, since the mobile device’s attitude is estimated relative to the direction of gravity.
Would it be possible to manipulate multiple robots simultaneously with this approach?
Kapila: Yes, this is already possible. When an operator interacts with Object 1 to move it from its home to its goal position, the algorithm selects one robot to perform the task.
Immediately after manipulating Object 1 on the mobile device, the operator interacts with Object 2 to move it from its home to goal position. This will cause the algorithm to select another robot to perform the corresponding task. Now two robots will be moving at the same time to move Objects 1 and 2.
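A toy allocator makes this behavior concrete: each new object command claims a free robot, so back-to-back commands put two robots in motion at once. The nearest-idle-robot policy below is an assumption for illustration, not necessarily the selection rule Kapila’s group uses:

```python
import math

def assign_robot(robots, object_pos):
    """Claim the nearest idle robot for a newly commanded object."""
    idle = [r for r in robots if r["state"] == "idle"]
    if not idle:
        return None  # all robots busy; queue or reject the command
    robot = min(idle, key=lambda r: math.dist(r["pos"], object_pos))
    robot["state"] = "busy"
    return robot

robots = [{"id": 1, "pos": (0.0, 0.0), "state": "idle"},
          {"id": 2, "pos": (3.0, 1.0), "state": "idle"}]

# Operator drags Object 1, then Object 2: each command grabs a free
# robot, so both tasks run concurrently.
print(assign_robot(robots, (0.5, 0.5))["id"])  # -> 1
print(assign_robot(robots, (2.5, 1.0))["id"])  # -> 2
```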
What would be needed to scale up this technology for a factory, warehouse, or hospital?
Kapila: We have done a pilot test with uninitiated users for a warehouse scenario. Here, instead of using multiple robots, we considered a single robot in the operator’s view. The operator commands the robot to go to an object, pick it up, drive it to a box, and deposit it in the box.
Using a single robot complicates the situation, since there is no longer a fixed robot to serve as a reference. So we had to fuse proprioceptive and image-based methods.
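The interview does not detail the fusion method; as a minimal illustration, a complementary filter can blend a smooth-but-drifting odometry (proprioceptive) estimate with a noisy-but-drift-free image-based estimate:

```python
def fuse_pose(odom_pose, vision_pose, alpha=0.9):
    """Complementary-filter blend of a drifting odometry estimate and a
    noisy vision-based estimate, element-wise on (x, y). A toy stand-in,
    not the actual fusion used in the pilot test."""
    return tuple(alpha * o + (1 - alpha) * v
                 for o, v in zip(odom_pose, vision_pose))

# Odometry says the robot is at (1.02, 0.48); the camera-based estimate
# says (0.95, 0.52). The fused value leans on odometry for smoothness
# while letting vision correct slow drift.
print(fuse_pose((1.02, 0.48), (0.95, 0.52)))
```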
Our prior mobile mixed-reality interfaces for swarm robotics used small robots in an unstructured environment. As we scale our work to factories, warehouses, or hospitals, we may need to consider the possibility that the operators and robots are not in a shared environment.
We may also need to conduct user studies and use what we learn to adapt our mobile mixed-reality interfaces so that they are responsive to users in different application domains.
What’s the next step for your mixed reality and robotics research?
Kapila: Currently, we are considering swarm robotics, where the distributed agents are not all simultaneously visible to the mobile device held by the operator.
For such a situation, to give the operator situational awareness of all the assets in the swarm, we are considering various image-based approaches wherein each agent carries its own camera and can see the fiducial markers of other agents.
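For the marker-detection piece, OpenCV’s ArUco module is a common choice; a hedged sketch follows (the exact API varies across OpenCV versions, and the file path and marker dictionary are placeholders):

```python
import cv2

# Each agent's onboard camera looks for fiducial markers carried by
# other agents; the marker ID identifies which teammate is in view.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary,
                                   cv2.aruco.DetectorParameters())

frame = cv2.imread("onboard_frame.png")  # placeholder image path
if frame is not None:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        for marker_id, quad in zip(ids.flatten(), corners):
            # quad holds the marker's four corner pixels; with camera
            # intrinsics, cv2.solvePnP could recover the relative pose
            # of the observed agent from these corners.
            center = quad[0].mean(axis=0)
            print(f"agent marker {marker_id} seen near pixel {center}")
```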