Autonomous vehicles are in the media spotlight, but the complexity of driving outdoors may delay a real commercial roll-out of self-driving cars for years. A company working in “stealth mode,” Canvas Technology, is already demonstrating “automotive-level” autonomous driving and goods delivery – indoors.
Canvas Technology is a team of engineers with robotics experience from Google, Toyota, and Kiva Systems (now Amazon Robotics). Their mission is to provide end-to-end autonomous material handling. They’re tackling the indoor challenge first: moving items through factories and warehouses before their vehicles hit the streets.
“We started inside factories and warehouses because there is significant demand for a better system, and it’s an easier problem to solve first,” said Jonathan McQueen, CEO of Canvas Technology. “The experience we gain indoors will pave the way for a successful launch of outdoor transportation, which our technology also enables.”
Market for goods delivery still growing
Factories and warehouses are increasing their use of automation in response to the demands of lean manufacturing practices and the ongoing acceleration of e-commerce.
The worldwide market for material handling systems will grow to $134.8 billion by 2020, predicts Global Industry Analysts Inc. Within that, the market for automated equipment could grow from $26.02 billion in 2015 to $44.68 billion in 2022, according to MarketsandMarkets.
Current material-handling solutions are limited in utility because their 2D sensors cannot cope with complex, dynamic environments, according to the company. Autonomous cars and trucks, on the other hand, use a suite of 3D sensors to provide the high-fidelity view necessary to operate intelligently and safely.
That level of perception is critical for indoor driving as well, said Canvas. Accurately capturing the diverse environments of factories and warehouses requires a high-fidelity 3D sensor.
To accomplish this, Canvas has developed its own proprietary camera solution. Paired with sophisticated algorithms, it powers systems that are more autonomous, accurate, flexible, and safe, the company said.
Developing vision-based autonomy
“We’ve developed state-of-the-art cameras to drive vision-based autonomy because it’s the best solution at the moment,” McQueen said. “But we can incorporate the best sensor technologies over time. If dense 3D lidars come down in cost, size, and power consumption, we can incorporate them.”
Nima Keivan, an engineer and co-founder of Canvas Technology, spent the past decade researching and developing computer vision algorithms and is one of a very few SLAM (simultaneous localization and mapping) experts worldwide.
“Facets of vision-based autonomy include rich perception abilities, knowing where you are, and context-sensitive behavior, especially in a dynamic environment,” he noted.
“Cameras provide advantages that other autonomous technologies don’t have,” Keivan told Robotics Business Review. “We get an extremely high-fidelity view of the world that allows us to better detect obstacles and people and to better understand where we are in the environment.”
“Our system sees the world in 3D, with a lot of context to recognize objects and people – things that are hard to disambiguate in lidar,” he added. “Cameras give us a huge advantage.”
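As a rough illustration of what camera-based obstacle detection involves (a minimal sketch under generic assumptions, not Canvas’ actual pipeline), the Python snippet below uses OpenCV’s standard stereo block matcher to turn a pair of camera images into an approximate depth map and flag anything inside a stop distance. The focal length, camera baseline, thresholds, and file names are all hypothetical values chosen for illustration.

```python
# Minimal sketch: rough obstacle detection from a stereo camera pair.
# Camera parameters and the stop-distance threshold are illustrative
# assumptions, not Canvas Technology's actual configuration.
import cv2
import numpy as np

FOCAL_PX = 700.0       # focal length in pixels (assumed)
BASELINE_M = 0.12      # spacing between the two cameras in meters (assumed)
STOP_DISTANCE_M = 1.5  # flag anything closer than this

def detect_close_obstacles(left_gray, right_gray):
    """Return True if a significant part of the view is within STOP_DISTANCE_M."""
    # Block matching gives disparity: how far each pixel shifts between views.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    valid = disparity > 0  # ignore pixels with no stereo match
    depth_m = np.zeros_like(disparity)
    depth_m[valid] = (FOCAL_PX * BASELINE_M) / disparity[valid]

    # Fraction of matched pixels that fall inside the stop zone.
    close_fraction = np.mean(depth_m[valid] < STOP_DISTANCE_M) if valid.any() else 0.0
    return close_fraction > 0.05  # more than 5% of the view is "too close"

if __name__ == "__main__":
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    print("Obstacle ahead:", detect_close_obstacles(left, right))
```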
Unlike other autonomous systems, Canvas’ systems are constantly mapping. That means they are continually watching the environment, can immediately sense when something has changed, and can act accordingly. Both temporary and permanent changes are detected.
If racks or walls are moved over the weekend, for instance, these changes would quickly be noticed and incorporated into the map – without human intervention.
Or, if a temporary disturbance occurs, such as a group of people taking a factory tour, they would be avoided automatically, with normal operation resuming once they’ve gone, Keivan said.
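One common way to keep a map live in this fashion (a minimal sketch under generic assumptions, not Canvas’ proprietary method) is a log-odds occupancy grid: cells that keep being observed as occupied, such as a relocated rack, converge to “occupied,” while cells blocked only briefly, such as by a passing tour group, decay back to “free” once later observations see them empty. The grid size and update weights below are illustrative.

```python
# Minimal sketch of continuous map maintenance with a log-odds occupancy grid.
# Repeated "occupied" observations (a moved rack) push a cell toward occupied;
# brief obstructions (a tour group) decay back to free as later sweeps see the
# space empty. Grid size and update weights are illustrative assumptions.
import numpy as np

GRID_SHAPE = (200, 200)  # 200 x 200 cells (assumed resolution)
HIT = 0.9                # log-odds added when a cell is seen occupied
MISS = -0.4              # log-odds subtracted when a cell is seen free
CLAMP = 5.0              # bound values so cells can still change later

class LiveOccupancyGrid:
    def __init__(self):
        self.log_odds = np.zeros(GRID_SHAPE, dtype=np.float32)

    def update(self, occupied_cells, free_cells):
        """Fold one sensor sweep into the map."""
        for r, c in occupied_cells:
            self.log_odds[r, c] = min(self.log_odds[r, c] + HIT, CLAMP)
        for r, c in free_cells:
            self.log_odds[r, c] = max(self.log_odds[r, c] + MISS, -CLAMP)

    def is_occupied(self, r, c, threshold=0.0):
        return self.log_odds[r, c] > threshold

# A cell blocked for a few frames drifts back to free once later sweeps see it
# empty; a relocated wall would stay occupied because every sweep confirms it.
grid = LiveOccupancyGrid()
for _ in range(3):   # brief, temporary obstruction
    grid.update(occupied_cells=[(50, 50)], free_cells=[])
for _ in range(10):  # later sweeps see the cell empty again
    grid.update(occupied_cells=[], free_cells=[(50, 50)])
print("Tour-group cell still occupied?", grid.is_occupied(50, 50))  # False
```

The clamping step is what keeps such a map responsive: because no cell’s belief is allowed to grow without bound, even a long-standing wall can be “unlearned” after it is moved over a weekend.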
Canvas Technology said its autonomous goods delivery system is already operating in pilots and will launch commercially soon, but it hasn’t released any details publicly yet.