SAN JOSE, Calif. — Thanks to cloud robotics, both autonomous vehicles and robots should be able to learn from one another and become more intelligent collectively, said James Kuffner, chief technology officer at the Toyota Research Institute.
During his keynote address at RoboBusiness 2016 here this week, Kuffner outlined his past research on humanoid motion planning, from 1995 at Stanford University through his work at Google in 2009.
In January, Kuffner joined TRI, and Toyota announced a $1 billion investment in research facilities near MIT, Stanford University, and the University of Michigan.
“We’re developing technology for automotive safety and robotics,” Kuffner said. “We’re also looking at helping the aging societies of Japan and elsewhere.”
He described progress in artificial intelligence, sensors, manipulation, and cloud computing as following historical patterns and enabling safer, cheaper, and more reliable devices.
Cloud robotics to be more agile, autonomous
“In the next 50 years, we’ll see robot intelligence be useful beyond factory automation,” Kuffner said. “Autonomy is important in logistics, in the home, and for maintaining the quality of life for aging in place.”
“Search-based AI,” such as Google’s AlphaGo and route-planning algorithms for robots, are important steps toward autonomy, he said.
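The route-planning algorithms Kuffner mentions are classic search-based AI. A minimal sketch, not TRI's or Google's code, is A* search on a 2D occupancy grid:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* route planner on a 2D occupancy grid (0 = free, 1 = blocked).

    Returns a list of (row, col) cells from start to goal, or None if no
    route exists.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan-distance heuristic: admissible for 4-connected grids
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    # Priority queue of (estimated total cost, cost so far, cell, path)
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable
```

The same expand-cheapest-node-first idea scales from toy grids like this up to road networks and, with different state spaces, to motion planning for articulated robots.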
“Planning dynamic actions isn’t just for bipeds; it includes a compound motion such as a leap and jump for quadruped robots,” Kuffner said.
“The big question is, ‘When are we going to have an intelligent robot personal assistant?’” he asked. “Three things are holding that back: capability; cost, although sensor technology is rapidly progressing; and safety and reliability.”
Overcoming obstacles by moving to the cloud
“In teleoperation, humans provide a ‘remote brain’ for robots,” Kuffner explained. “We can now move that intelligence instead to the cloud.”
“Cloud robotics enables cheaper, lighter, and smarter robots,” he said. “My four kids have spilled drinks many times, and I wish I could have transferred the knowledge between their brains instead of waiting for each one to learn in turn.”
“Big data and machine learning will usher in a new era of advancing robot and vehicle autonomy,” Kuffner added. “Thanks to the cloud, small companies can access modern compute and data-storage resources.”
It’s not yet possible to include all the machine-learning capacity desired within a robot or an autonomous vehicle, but smartphones and the cloud “open possibilities,” he said.
“Smartphones turn over every 18 months, and they include everything you need for a robot — IMU [inertial measurement unit], GPS, 3G, Wi-Fi antennas, camera on a chip, and storage,” Kuffner said. “With ‘Build Your Own Cellbot,’ we made it easy to program and hack robots.”
“Who would have thought that you could stream video to phones?” he said. “Moore’s Law pales in comparison to improvements in wireless broadband speed. That opens possibilities for cloud robotics.”
According to Kuffner, the potential benefits of cloud robotics include:
- A shared knowledge database, useful for object recognition and mapping
- Offloading heavy computing tasks to the cloud and easier software updates
- A skills/behavior database, allowing for gathering and data mining of collective experiences, and a library of actions to help all robots, such as in speech and language translation
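The shared-knowledge benefit above can be sketched in a few lines. This is a hypothetical illustration (the names `SHARED_DB` and `cloud_recognize` are invented, and a real system would add networking, authentication, and latency handling), showing how one robot's learned result becomes instantly available to the whole fleet:

```python
# A hypothetical cloud-side knowledge base: recognition results pooled
# across every connected robot.
SHARED_DB = {}

def cloud_recognize(image_signature):
    """Stand-in for a heavy cloud-side recognition model.

    Checks the pooled knowledge base first; only runs the expensive model
    (simulated here by a lookup table) on a cache miss, then shares the
    result with the fleet.
    """
    if image_signature in SHARED_DB:
        return SHARED_DB[image_signature]  # another robot already learned it
    label = {"sig-mug": "coffee mug"}.get(image_signature, "unknown")
    SHARED_DB[image_signature] = label     # publish to the shared database
    return label

# Robot A encounters an object and offloads recognition to the cloud;
# Robot B then gets the same answer without recomputing it.
robot_a_result = cloud_recognize("sig-mug")
robot_b_result = cloud_recognize("sig-mug")
```

This is the pattern behind Kuffner's spilled-drinks remark: once one robot learns, every robot connected to the same database has learned.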
“There’s no reason why robots can’t understand all languages,” Kuffner said, showing a slide of Star Wars’ C-3PO.
Advances in image recognition, deep learning, and 3D models of objects can help with grasping and manipulation, he said.
“Human crowdsourcing can scale hard semantic and quality-control problems globally, so we can enable ‘robotsourcing’ with large-scale deployment of data-sharing robots,” Kuffner said.
He referred to Toyota’s announcement at January’s Consumer Electronics Show of its Mobility Teammate concept for distributed map generation.
“By sharing and maintaining data in real time, you can have an updated map of the road surface scalable to all vehicles,” Kuffner said.
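Distributed map generation of the kind Kuffner describes can be reduced to a merge rule: each vehicle uploads timestamped observations of road-surface tiles, and the shared map keeps the newest report per tile. A minimal sketch, with invented names and a grid-cell map model chosen for illustration:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    tile: tuple       # (lat_index, lon_index) grid cell of road surface
    timestamp: float  # seconds since epoch, set by the reporting vehicle
    surface: str      # e.g. "clear", "pothole", "construction"

def merge(shared_map, observations):
    """Fold vehicle observations into the shared map.

    The newest report for each tile wins, so the map stays current as the
    fleet drives: one car's fresh sighting overwrites stale data for all.
    """
    for obs in observations:
        current = shared_map.get(obs.tile)
        if current is None or obs.timestamp > current.timestamp:
            shared_map[obs.tile] = obs
    return shared_map
```

Keying on timestamps keeps the merge order-independent, so vehicles can report asynchronously and the fleet still converges on one up-to-date map.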
One attendee noted that many people in Japan are skeptical of cloud robotics.
“We have to persuade people all over the world that it will be safe for developing robotics,” Kuffner responded. “If we can demonstrate good capability, people will embrace it.”
“What happens if the network goes down?” asked another conference attendee.
“All safety-critical functions must be able to run locally, like balance for a walking robot,” Kuffner said. “If a smartphone doesn’t have a data connection, you can still make a call.”
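Kuffner's answer implies a standard hybrid-control pattern: cloud guidance is advisory and time-limited, while safety-critical behavior always runs on-board. A hypothetical sketch (the function names and the 0.5-second staleness budget are assumptions for illustration):

```python
import time

CLOUD_TIMEOUT = 0.5  # seconds: assumed freshness budget for cloud guidance

def control_step(cloud_plan, local_reflex, last_cloud_update):
    """Choose the action for one control cycle.

    Follows the cloud-provided plan only while its data is fresh; otherwise
    falls back to the locally computed, safety-critical behavior (balance
    for a walking robot, a safe stop for a vehicle), which never depends on
    the network.
    """
    if cloud_plan is not None and time.time() - last_cloud_update < CLOUD_TIMEOUT:
        return cloud_plan
    return local_reflex  # always available on-board

# With no recent cloud update, the robot stays on its local behavior.
action = control_step(cloud_plan="follow_route",
                      local_reflex="hold_balance",
                      last_cloud_update=0.0)
```

This mirrors Kuffner's smartphone analogy: losing the data connection degrades capability, not safety.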
“I remember one professor in graduate school who said, ‘There’s no way we’ll ever have finance on the Internet,’ but I’m very optimistic that we can solve security and privacy problems,” Kuffner said.
An ‘app store’ for cloud robotics
Thanks to the cloud, robots should be able to acquire skills more easily through sharing, Kuffner said.
“With an app store for robots, you can have expandable capabilities like a smartphone,” he explained. “This leads to a general-purpose robot.”
Kuffner cited the scene in The Matrix when Trinity said, “I need a pilot program for a … helicopter” and was able to download the skill.
“There’s a pendulum swing between distributed and embedded computing,” Kuffner explained. “We have a hybrid world of local vs. cloud, with Web-based applications.”
“Consumer robots are poised to explode,” he said. “Going forward, we’ll see broad diversification and rapid growth of new application areas.”
“We’re supporting the developer community with a $1 million gift to the Open Source Robotics Foundation for the Robot Operating System [ROS],” Kuffner said.
TRI readies HSR for elder care
“Toyota’s Human Support Robot is built to be flexible as a mobile manipulator, with omnidirectional movement,” Kuffner said. “It also has a ROS navigation stack and API [application programming interface].”
The Human Support Robot (HSR) also has a telescoping body and large workspace reach, and its arm is designed to be compliant for safety, like many current collaborative robots.
“Our team is working to make the robot safe and reliable,” Kuffner said. “Its expandable sensor suite includes an IMU sensor and a laser range sensor.”
The HSR’s manipulators include a suction gripper and a wide-angle claw. In the video Kuffner shared at RoboBusiness, the simple user interface had a red button/dial.
“It’s lightweight and small, capable of doing many tasks in the home, such as opening a door, picking up an object from the floor, and getting a drink,” he said.
HSR includes several power and data ports and currently requires magnetic tape to mark boundaries.
Toyota’s robot will be the standard for the July 2017 RoboCup in Nagoya, Japan. Competitors can rent one for $900 per month.
Service robots coming soon, thanks to sensors
“How soon will we have such robots?” asked Jeff Burnstein, president of the Robotic Industries Association and the Association for Advancing Automation. He recently testified to Congress about robotics.
“I’m an optimist — I think in the next five to 10 years,” Kuffner responded. “It’s already starting to happen. I’m really excited by sensor technology.”
“Back in the 1980s and ’90s, we had sonar, then GPS helped keep robots from getting lost, then lasers for indoor use,” he said. “For digital cameras, within 10 years, we’ve gone to having a camera on a chip; the only thing we couldn’t reduce is the lens.”
“No matter how good your logic and planning are, if you have poor perception, you’ll still have problems,” Kuffner said. “Machine learning used to be hard, but it’s now going open source. Now we just need good data.”
Open standards and partnerships
Toyota is betting on its support of open source programming and strategic partnerships to build its market and cloud robotics capabilities.
Kuffner mentioned the OpenRAVE open-source virtual environment for robotics and automation.
In May, Toyota Motor Corp. signed a memorandum of understanding with Uber to explore leasing and ride-sharing.
Toyota is also partnering with DEKA Research & Development, founded by Segway inventor Dean Kamen, on the iBot, a wheelchair capable of climbing stairs.
In August, TRI said it would invest $22 million over four years in University of Michigan research into AI, robotics, and autonomous vehicles.
The company also gave $100,000 to LightHouse Innovation Lab to develop accessible robotics and provide STEM (science, technology, engineering, and mathematics) career assistance for the blind.
“We are hiring!” Kuffner said.
Infrastructure key to cloud robotics, cars
All the major automakers are pursuing self-driving cars, but Kuffner said that cars should be able to communicate with one another and with roadside infrastructure to be safer.
“Toyota is working with the other automakers and the government to develop standards,” he said at the RoboBusiness Expo Theater stage. “Healthcare and privacy are also tricky issues.”
“New technologies will require partnerships between academia, business, and the government to devise policies,” Kuffner said. “For automated cars, the National Highway Traffic Safety Administration is trying to think ahead with its framework.”
Kuffner also told Robotics Business Review that Toyota expects to make automated safety features — if not fully self-driving cars — standard soon.