Multiple cameras provide 3D vision. Source: Light
Unlike other mobile robots, autonomous vehicles need to see far ahead to react safely to traffic conditions at highway speeds. Light today launched its Clarity perception platform. The company said Clarity is able to see any 3D structures in the road from 10 cm to 1 km (3.9 in. to 0.62 mi.) away — three times the distance of current lidar sensors — using passive cameras.
Founded in 2013, Light said it combines breakthroughs in computational imaging with multi-camera calibration and advanced machine learning. Co-founder and CEO Dave Grannan is a serial entrepreneur who developed speech-recognition technology later acquired by Nuance Communications, and co-founder and Chief Technology Officer Rajiv Laroia helped develop the foundation of LTE 4G wireless communications. The Redwood City, Calif.-based company said its technology provides accurate depth at both near and far distances in real time.
Clarity uses multiple cameras for high-def 3D vision
Unlike conventional lidar or radar, which send out signals and define objects based on their reflections, or stereoscopic vision, which relies on two cameras that may be close together, Light’s Clarity uses multiple cameras and derives depth and distance information for each pixel.
“The processing power for recalibrating every frame 30 times per second has been missing from autonomy stacks since 2004. We founded Light to address the problem of calibration with computational imaging,” Grannan told The Robot Report. “Light uses multiple cameras to perceive the world and objects for human-like vision.”
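Light has not published its algorithms, but the basic idea of recovering per-pixel depth from calibrated cameras can be illustrated with classic two-view triangulation. The sketch below is a simplified, hypothetical example, not Light’s method; the function name, focal length, baseline, and disparity values are all made up for illustration. It shows how depth falls out of geometry alone, and why resolving very small disparities (which demands precise, continuous calibration) is what extends range.

```python
# Illustrative only: classic depth-from-disparity for a calibrated camera pair.
# Light's Clarity uses many cameras and per-frame recalibration; this sketch
# shows just the underlying geometric idea, with made-up parameter values.

import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth Z = f * B / d for a rectified camera pair.

    disparity_px    -- per-pixel horizontal shift between the two images (pixels)
    focal_length_px -- focal length expressed in pixels
    baseline_m      -- distance between the two camera centers (meters)
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    # Report infinite depth where no disparity could be measured.
    return np.where(disparity_px > 0,
                    focal_length_px * baseline_m / disparity_px,
                    np.inf)

# Hypothetical numbers: 1400 px focal length, 30 cm baseline.
disparities = np.array([140.0, 14.0, 1.4, 0.42])  # pixels
depths = depth_from_disparity(disparities, focal_length_px=1400.0, baseline_m=0.3)
print(depths)  # ~[3., 30., 300., 1000.] m: tiny disparities correspond to long range
```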
A test van used four cameras, which produced per-pixel depth superior to that of a 32-channel lidar costing $8,000 to $10,000, he said.
“The best lidar today has a range of 250 meters, not 1,000 meters. Lidars provide 72,000 points, and our system provides 1.8 million points,” Grannan said. “Our depth is continuous.”
Clarity is able to generate up to 95 million data points every second, 20 times greater than any perception system currently available, claimed Light. Its depth is also domain-independent, meaning Clarity does not need to be trained to recognize the specific objects it may encounter on the road in order to derive 3D structure.
“Normally, you have to fuse the data, overlaying the lidar and picture, plus machine learning,” Grannan said. “In our case, we are domain-agnostic. We leave object identification and classification to the next steps in machine vision. With Tesla and Mobileye, the machine learning in the stack would have to go through a neural network to identify an object.”
“We recognize that some of the algorithms have to be done on the hardware. Our solution includes our own dedicated silicon,” he added. “It is part of our roadmap to eventually build our own object identification capability.”

Clarity obtains depth information for each pixel. Source: Light
Light’s system also has potential for ADAS
Light said Clarity’s range and level of detail will contribute to safer advanced driver-assist systems (ADAS) and autonomous vehicles by enabling them to detect and react to potential obstacles more quickly. This is also useful for adaptive suspension systems, Grannan said.
For every 100 meters of added perception, a vehicle gains an additional four seconds of time to slow down, change lanes, or alert the driver to take over, which is important for hand-offs from autonomous systems and for heavier vehicles such as fully loaded Class 8 trucks. “A half-loaded truck doesn’t have the friction and needs more than 250 m to stop,” he noted.
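That four-second figure is consistent with simple kinematics: 100 m of extra range divided by a highway speed of roughly 25 m/s (about 90 km/h) is four seconds. The short sketch below uses assumed speeds rather than any numbers from Light to show how added sensing range converts into reaction time.

```python
# Back-of-the-envelope check of the "100 m buys about 4 s" claim.
# The speeds below are assumptions for illustration, not figures from Light.

added_range_m = 100.0  # extra perception range

for speed_kph in (90, 105, 120):  # typical highway speeds (assumed)
    speed_mps = speed_kph / 3.6
    extra_time_s = added_range_m / speed_mps
    print(f"{speed_kph} km/h: +{extra_time_s:.1f} s to brake, change lanes, or hand off")
# 90 km/h -> +4.0 s, 105 km/h -> +3.4 s, 120 km/h -> +3.0 s
```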
“Automakers have invested billions of dollars and decades of research to make safe, reliable ADAS and self-driving cars a reality. But so far, even the best perception systems on the market miss objects and obstructions in the road — some as big as a semi-truck,” Grannan stated. “Any system that powers a vehicle needs to be equipped with comprehensive, measured depth alongside the type of visual information that cameras provide, in order to make smart decisions that make driving safer.”
“There is nothing else like the Clarity platform with its combination of depth range, accuracy, and density per second. It enables a new generation of vehicles that can be made safer, without having to compromise on cost, quality, or reliability,” said Prashant Velagaleti, chief product officer of Light. “Rather than only minimizing the severity of a collision, having high-fidelity depth allows any vehicle powered by Clarity to make decisions that can avoid accidents, keeping occupants safe as well as comfortable.”
In addition, Clarity’s ability to spot open parking spaces more than 100 meters away could help drivers save time, fuel, and frustration, said Light.

Clarity provides continuous depth information. Source: Light
Clarity intended for sensor fusion, competitive on cost
Clarity is meant to supplement rather than replace lidar, radar, or ultrasonic sensors, Grannan said. “We’re not religious around the sensor suite,” he said. “Combining data sources makes sense for fault tolerance and redundancy, particularly as we get to Level 5 autonomy.”
How does Clarity handle inclement weather? “Our system works well in low light, and our algorithms work the same when we use near-IR [infrared],” replied Grannan. “To deal with rain and heavier fog, we’re starting to look at shortwave IR.”
The Clarity platform uses off-the-shelf cameras sourced from existing automotive supply chains, keeping costs low, he said. Light said it can take advantage of constant innovations in camera technology and algorithms.
“Our clear advantage is that we provide lidar-like accuracy for the cost of cameras,” Grannan said. “It costs tens of thousands of dollars for lidar, while our cameras and ASIC [application-specific integrated circuit] cost OEMs $250 to $260. Since ADAS like lane keeping already uses cameras, they’d need to calibrate them to our specifications but may not need to spend much more.”
“For a 360-degree view, it may cost only $1,000 to the automaker,” he said. “We’re in discussions with design groups, which see camera placement as a simpler problem than for lidar. In the front, two cameras could be placed in the A pillars [around the windshield] and one behind the rearview mirror. We’re also looking at the B and C pillars in the back or in the bumpers.”
Light is in communication with full-stack autonomous vehicle providers, as well as Tier 1 suppliers and the automotive OEMs themselves, Grannan said.
“The automakers say the tolerances for vibration and XYZ cameras are well within their capabilities,” he said. “There’s also applicability for affordable platforms for lower levels of driving assistance.
“We want to be in the next-generation Level 4 and Level 5 platforms that will be tested next year,” said Grannan. “Perception problems are not solved. There’s nothing that solves misidentification problems. … The industry needs to get out of small ring-fenced trials to scale to market. We have the safety to make people comfortable. If we get to regulations for perception, it would be a failure of the industry, which will probably come up with a framework on its own.”
“There’s a place for lidar, radar, and ultrasonic sensors, but there was a missing piece,” he said. “We’ve got a unique offering — three-dimensionality, with size, distance, and velocity — for safety in the self-driving stack.”
Light is hiring and expects to conduct a fundraising round later this year or early next, said Grannan.