Unity is showcasing how its AI and machine learning capabilities could benefit industrial robotics. The new demo, called Object Pose Estimation, demonstrates how synthetic data can help robots learn rather than be programmed.
Training data is collected in Unity and used to train a deep neural network that predicts the pose of a cube. The trained model is then deployed in a simulated robot pick-and-place task. The Object Pose Estimation demo follows the release of Unity's URDF Importer, an open-source Unity package that imports a robot into a Unity scene from its URDF file and takes advantage of enhanced support for articulations in Unity for more realistic kinematic simulations, and of Unity's ROS-TCP-Connector, which reduces the latency of messages passed between ROS nodes and Unity, allowing the robot to react in near real time to its simulated environment.
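The pipeline described above (generate labeled synthetic data, train a model on it, then deploy the model to estimate a pose) can be illustrated with a deliberately simplified sketch. This is not Unity's actual code: the "rendered image" is reduced to a single noisy measurement, the cube's pose to a yaw angle, and the deep neural network to a least-squares fit, but the learn-rather-than-program structure is the same.

```python
import random

# Hypothetical stand-in for the demo's pipeline. In Unity, the "render"
# step produces labeled camera images; here each synthetic sample is one
# noisy measurement of the cube's yaw angle.
random.seed(42)

def render_synthetic_sample():
    """Generate one labeled synthetic observation: (measurement, true yaw)."""
    yaw = random.uniform(-3.14, 3.14)
    # The sensor model (slope 2.0, offset 0.5) is unknown to the learner.
    measurement = 2.0 * yaw + 0.5 + random.gauss(0.0, 0.02)
    return measurement, yaw

# 1. Collect synthetic training data (done inside Unity in the real demo).
data = [render_synthetic_sample() for _ in range(2000)]

# 2. "Train" a model: fit measurement -> yaw by least squares,
#    standing in for the deep neural network.
n = len(data)
mean_m = sum(m for m, _ in data) / n
mean_y = sum(y for _, y in data) / n
slope = (sum((m - mean_m) * (y - mean_y) for m, y in data)
         / sum((m - mean_m) ** 2 for m, _ in data))
intercept = mean_y - slope * mean_m

# 3. Deploy: estimate the pose of a fresh observation.
measurement, true_yaw = render_synthetic_sample()
estimated_yaw = slope * measurement + intercept
print(abs(estimated_yaw - true_yaw))  # residual error should be small
```

Because every sample carries its ground-truth label for free, the model can consume far more training data than could ever be hand-labeled, which is the core appeal of synthetic data for robotics.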
Unity said the demo builds on prior work by showing how its computer vision tools, including the recently released Perception Package, can be used to create synthetic, labeled training data for a deep learning model that predicts a cube's pose. The demo includes a tutorial on recreating the project, which can be extended by applying tailored randomizers to create more complex scenes.
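The randomizer idea behind that synthetic-data generation can be sketched as follows. The actual Perception Package randomizers are C# components configured inside Unity; the Python below is only an illustration of the concept, and all of its names and parameter ranges are hypothetical. Each "frame" samples scene parameters at random and records the ground-truth label alongside them.

```python
import random

# Illustrative sketch of domain randomization for synthetic data:
# vary scene parameters per frame and capture the label automatically.
random.seed(7)

def randomize_scene():
    """Sample one randomized scene configuration plus its ground-truth label."""
    scene = {
        "cube_position": (random.uniform(-0.5, 0.5),   # x (meters)
                          random.uniform(-0.5, 0.5),   # y
                          random.uniform(0.1, 0.4)),   # z
        "cube_yaw_deg": random.uniform(0.0, 360.0),
        "light_intensity": random.uniform(0.3, 1.5),
        "camera_jitter_deg": random.uniform(-5.0, 5.0),
    }
    # The label saved with each rendered frame: the cube's pose.
    label = {"position": scene["cube_position"],
             "yaw_deg": scene["cube_yaw_deg"]}
    return scene, label

# Generating many randomized frames yields a diverse, automatically
# labeled dataset; adding further randomizers (textures, distractor
# objects, backgrounds) creates more complex scenes.
dataset = [randomize_scene() for _ in range(1000)]
yaws = [label["yaw_deg"] for _, label in dataset]
print(min(yaws), max(yaws))
```

Randomizing parameters the model should be robust to (lighting, camera jitter) while labeling the parameters it should predict (the pose) is what lets a model trained purely in simulation generalize to varied scenes.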
“This is a powerful example of a system that learns instead of being programmed, and as it learns from the synthetic data, it is able to capture much more nuanced patterns than any programmer ever could,” said Dr. Danny Lange, senior VP of AI, Unity. “Layering our technologies together shows how we are crossing a line, and we are starting to deal with something that is truly AI, and in this case, demonstrating the efficiencies possible in training robots.”
The Object Pose Estimation demo comes on the heels of recent releases from Unity aimed at supporting the Robot Operating System (ROS), the popular open-source robotics framework.
“You can develop the control systems for an autonomous vehicle, for example, or for highly expensive robotic arms, without the risk of damaging equipment or dramatically increasing cost of industrial installations,” added Lange. “To be able to prove the intended applications in a high-fidelity virtual environment will save time and money for the many industries poised to be transformed by robotics combined with AI and Machine Learning.”