For autonomous vehicles to be safe, they need to perceive as much of their environments as possible. Immervision is applying its expertise in image processing and optical design to provide lidar sensors with a wide-angle view for autonomous vehicles at SAE Level 3 and higher. The Montreal-based company said it has 15 years of experience and numerous Ph.D.s on staff working on new approaches to vision for vehicles, robots, and aerial drones.
Immervision said its lens design can increase the number of pixels collected by 30% across a 180-by-180-degree field of view. The company also claimed that its pixel density provides more reliable data for autonomous navigation and object detection.
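To put the 30% figure in concrete terms, here is a minimal back-of-the-envelope sketch in Python. The sensor resolution and the conventional-lens utilization factor are illustrative assumptions, not Immervision figures; only the 30% gain comes from the company's claim.

```python
# Hypothetical comparison of average angular pixel density for a
# conventional wide-angle lens vs. a freeform lens that collects
# ~30% more usable pixels over a 180-by-180-degree field of view.

def angular_pixel_density(total_pixels, fov_h_deg, fov_v_deg):
    """Average pixels per square degree across the field of view."""
    return total_pixels / (fov_h_deg * fov_v_deg)

sensor_pixels = 8_000_000                      # assumed 8 MP sensor
conventional_usable = 0.65 * sensor_pixels     # assumed: fisheye image circle wastes corners
freeform_usable = 1.30 * conventional_usable   # 30% more pixels collected (claimed)

d_conv = angular_pixel_density(conventional_usable, 180, 180)
d_free = angular_pixel_density(freeform_usable, 180, 180)

print(f"conventional: {d_conv:.1f} px/deg^2")
print(f"freeform:     {d_free:.1f} px/deg^2")
print(f"gain:         {d_free / d_conv - 1:.0%}")
```

Whatever the absolute sensor resolution, the ratio between the two densities is what matters for downstream object detection: 30% more pixels over the same solid angle means finer angular sampling everywhere in the scene.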
Immervision develops new lens design
“Our mission is to bring vision to machines,” said Patrice Roulet Fontani, co-founder and vice president of technology at Immervision. “One of my main motivations as an engineer was to solve vision impairment. Back in the late 1990s, computer vision engineers started playing with wide-angle lenses, but the lenses at that time had a lot of drawbacks.”
“We noticed in 2003 or 2004 that existing wide-angle lenses were symmetrical,” he told The Robot Report. “The image quality was decent at the sensor, but it had aberrations at the edges. We’ve invented a new freeform lens design to overcome these issues and create lenses more like human eyes. They enable a wide field of view and super-high resolution, even at the periphery.”
Immervision provided its freeform module to the automotive industry in 2009 to magnify the field of view for backup cameras, and in 2011, it applied the technology to smartphones.
“A few years ago, when lidar companies started to create systems other than conventional rotating or flash lidar sensors, we discovered that they had a 120-degree horizontal field of view while squeezing only 20 to 30 degrees into the vertical field of view,” Fontani said. “We had focused on RGB cameras up to 2016, but we realized that the design techniques we had developed over the past 18 years were applicable to lidar.”
Reducing the number of cameras, helping AI
“When we designed the first ‘eyes’ for a pipe-inspection robot in 2006, the lens was very big, but it replaced multiple cameras with one camera,” said Fontani. “More recently, we’re working on a project for flying drones, developing computer vision for low-light conditions.”
Immervision views its technology as wide-angle perception leading to computer vision. “Our mission is to prepare the highest-quality pixels for the rest of the pipeline,” Fontani said. “We’re not just focused on developing algorithms.”
“We published a conference paper last summer on how panamorphic technology with different magnification zones can increase the accuracy of a neural network in comparison with a traditional lens or sensor design,” he said. “Now we are also benchmarking different ways to prepare the lens and the image-processing algorithm to detect which pre-processing will deliver human-level vision, which is important for autonomous vehicles.”
Applications growing for freeform anamorphic lens
“We’re seeing more and more demand for wide-angle imaging,” he said. “We are multi-industry, with security, automotive, robotics, IoT [the Internet of Things], and aerospace applications.”
Last month, the U.S. Department of Defense’s Defense Innovation Unit (DIU) awarded Immervision’s InnovationLab a contract to develop low-light vision systems for small unmanned aerial systems (sUASes) for the Blue UAS Framework.
In September, the company announced JOYCE, a humanoid robot and development platform for computer vision.
“JOYCE is part of our commitment to contribute to science and provide a platform for academics, developers, and big corporations,” said Fontani. “We have a lot of ambitions for JOYCE, which we hope will help solve problems of perception. We also collaborate with Université Laval and fund research in optical design and computer vision.”