Among the challenges facing autonomous vehicle developers is the need to collect and process vast amounts of camera data quickly and efficiently. Last month, BlinkAI Technologies Inc. announced its RoadSight product, which is designed to improve camera performance in low-light conditions.
While some autonomous vehicle makers are using multiple cameras rather than more expensive lidar technology, that approach has raised safety concerns. The National Transportation Safety Board recently found that a contributing factor to the fatal Uber crash in 2018 was that the automated driving system did not recognize a jaywalking pedestrian in a low-light setting.
BlinkAI spun out of the MIT-Harvard Martinos Center for Biomedical Imaging and emerged from “stealth mode” over the past few months. The company said RoadSight uses machine learning algorithms to achieve up to 500% higher illumination than a conventional image signal processor (ISP) to improve roadside object detection and other computer vision tasks in challenging scenarios.
The technology is currently in trials with leading automotive manufacturers and Tier 1 suppliers for both autonomous and traditional vehicles, said BlinkAI. Bo Zhu, co-founder and chief technology officer of BlinkAI in Charlestown, Mass., discussed RoadSight with The Robot Report.
How do BlinkAI’s algorithms help self-driving cars see better?
Zhu: We’re seeing significant improvements to downstream perception for autonomous driving and facial detection. Feeding the algorithms high-quality data is particularly challenging in low-light conditions, and RoadSight puts a brain behind the sensor to enhance camera performance.
In low-light or other challenging situations, there are two main challenges for artificial intelligence. The first is getting the information you care about when there just isn’t enough signal to begin with. With an absence of signal and near-zero pixels, how can you predict anything?
The second challenge is that it becomes incredibly important to reduce the noise of the sensor for the signal you do have. These two factors feed upon each other. For example, thermal noise and other noise sources in electronics can corrupt a signal, which becomes more apparent in low-light scenarios.
It’s a noisy signal that doesn’t seem to have the information you care about, yet we humans extract it all the time; the input to our eyes is very noisy. Our eyes aren’t as sensitive to light as some smartphone cameras, but we can see better in the dark than a smartphone in video mode. It’s not because we’re picking up more light, but because our brains are able to process raw, noisy information better.
With our neural network and algorithms, RoadSight is now able to do that task.
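To make the low-light problem Zhu describes more concrete, here is a minimal sketch of a simplified sensor model, assuming Poisson-distributed photon arrivals plus Gaussian read noise. The photon counts and the two-electron read-noise figure are illustrative assumptions, not BlinkAI numbers.

```python
import math

def snr(mean_photons, read_noise_e=2.0):
    """Approximate per-pixel SNR for a simple sensor model:
    signal divided by the combined shot noise and read noise."""
    return mean_photons / math.sqrt(mean_photons + read_noise_e ** 2)

for photons in (10_000, 100, 10, 1):
    print(f"{photons:>6} photons -> SNR ~ {snr(photons):5.1f}")
```

In bright scenes, shot noise dominates and SNR grows with the square root of the signal; in near-darkness, the fixed read noise swamps the handful of photons collected, which is the regime where a conventional ISP struggles.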
What led to this approach?
Zhu: This tech came out of the medical imaging space. I did my postdoc at Harvard Medical School, where I led development of deep learning methods for image reconstruction of medical images, such as CT and MRI scans. We were dealing with raw data that’s noisy.
I saw huge opportunities in non-medical areas as well, where the regulatory issues are different from the medical space. Cameras are having a great impact on everyday life, especially as computer vision becomes more important to society through smartphones, autonomous vehicles, and driver-assistance safety systems.
How does BlinkAI measure the improvement over image signal processors?
Zhu: At what point do you get a good signal-to-noise ratio? One way to do a comparison is to increase the gain of the camera up to a point. We also handle noise amplification, de-noising the image so that it looks as if the camera were taking in a much brighter scene.
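As a rough illustration of that gain-versus-denoising comparison (not BlinkAI’s actual pipeline), the sketch below brightens a synthetic underexposed frame two ways: digital gain alone, and a plain Gaussian denoiser followed by the same gain. The scene, noise level, and 8x underexposure factor are made-up values for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

# Smooth synthetic "well-lit" scene, then an 8x-underexposed noisy capture of it.
x = np.linspace(0, 4 * np.pi, 256)
bright = 0.5 + 0.4 * np.outer(np.sin(x), np.cos(x))
dark = bright / 8.0 + rng.normal(0.0, 0.02, size=bright.shape)

gain_only = np.clip(dark * 8.0, 0.0, 1.0)                                  # ISP-style digital gain
denoise_gain = np.clip(gaussian_filter(dark, sigma=1.0) * 8.0, 0.0, 1.0)   # denoise, then gain

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB for images scaled to [0, 1]."""
    return 10.0 * np.log10(1.0 / np.mean((ref - img) ** 2))

print(f"gain only:      {psnr(bright, gain_only):.1f} dB")
print(f"denoise + gain: {psnr(bright, denoise_gain):.1f} dB")
```

A learned denoiser plays the role of the Gaussian filter here but is trained to preserve the edges and fine detail a simple blur would smear; the comparison Zhu describes is how much brighter a scene can be made to look before the amplified noise overwhelms it.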
What about oncoming headlights or rain? How does RoadSight deal with them?
Zhu: Those are a couple of applications about which we’re getting interest from automotive companies, with the front cameras adjusting to the environment. It’s a high-dynamic-range issue, and one of the features of our system is that we can provide HDR capability.
We’re also developing methods that can address rain and snow. The challenge is to do it efficiently on low-powered devices. Automotive Tier 1s and OEMs want more capable resources on a car than on a phone, which is really pushing the limit.
A real challenge is getting these onboard systems to market and deployed in cars. Can you do this well in extremely limited compute scenarios?
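For intuition about the HDR capability Zhu mentions, here is a generic, textbook-style multi-exposure merge; it is not how RoadSight works, and the exposure times and pixel values are invented for the example.

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Naive HDR merge: weight each frame by how far its pixels sit from the
    saturation and noise-floor extremes, then average the radiance estimates."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    acc = np.zeros_like(frames[0])
    weights = np.zeros_like(frames[0])
    for frame, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * frame - 1.0)   # trust mid-tones, distrust clipped pixels
        acc += w * (frame / t)                 # scale back to a common radiance estimate
        weights += w
    return acc / np.maximum(weights, 1e-6)

# A short exposure keeps oncoming headlights unsaturated,
# while a longer exposure lifts the dark roadside above the noise floor.
short = np.array([[0.05, 0.90], [0.02, 0.95]])   # 1/240 s frame, values in [0, 1]
long_ = np.array([[0.40, 1.00], [0.16, 1.00]])   # 1/30 s frame; bright pixels clip at 1.0
print(merge_exposures([short, long_], [1 / 240, 1 / 30]))
```

The merge keeps whichever measurement of each pixel is trustworthy, which is the basic trade a camera facing headlights and dark shoulders has to make.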
With BlinkAI developing processing behind the camera lens, so to speak, how do you manage design challenges?
Zhu: The need to see better in the dark, along with the space and cost limitations, underscores the importance of computational approaches such as ours.
Adding lots of sensors is expensive, and it’s not feasible for consumer cars. There are also styling guidelines, and we’ve heard strict maximums of 1/2 in. diagonal for cameras, so you’re limiting information capture even further.
Is there a desired resolution for the images?
Zhu: Right now, most cameras are 1 to 2 megapixels. The automotive Tier 1s and OEMs want higher resolutions in the future, 8 to 12 megapixels, but we’re still limited to small sensors and an even smaller area per pixel. We’re at the point where it’s hard from a physical perspective to get the information we need.
More intelligent computational approaches leverage certain patterns, efficiently using visual features found in the data, even when the information isn’t quite there, to reach higher confidence.
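A back-of-the-envelope calculation shows the physical squeeze Zhu is describing: cramming more pixels onto the same small sensor shrinks each pixel’s light-gathering area. The 5.8 x 4.3 mm active area below (roughly a 1/2.5-inch-class format) is an assumed figure for illustration, not one from the interview.

```python
import math

def pixel_pitch_um(width_mm, height_mm, megapixels):
    """Approximate pixel pitch in microns for a given active sensor area and
    resolution, assuming square pixels and no area lost to circuitry."""
    area_um2 = (width_mm * 1e3) * (height_mm * 1e3)
    return math.sqrt(area_um2 / (megapixels * 1e6))

w_mm, h_mm = 5.8, 4.3   # assumed active area, roughly a 1/2.5-inch-class sensor
for mp in (2, 8, 12):
    print(f"{mp:>2} MP -> ~{pixel_pitch_um(w_mm, h_mm, mp):.1f} um pixel pitch")
```

Going from 2 MP to 8 MP on the same silicon cuts the area per pixel by a factor of four, so each pixel collects a quarter of the photons it did before; that shortfall is what the computational recovery has to make up.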
With a smartphone, say the Google Pixel 4 or Apple’s Night Mode, you can get nice pictures of starry skies. People ask us how that overlaps with what we do. We are also extrapolating from a few photons or pixels, but the primary differentiator is that a smartphone can compensate by boosting exposure times to two to five seconds. You can’t do that in a car, where the photons must be captured in 1/30th or 1/60th of a second and processed immediately.
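The exposure-time gap Zhu cites can be put in numbers; the 3-second night-mode figure and 30 fps frame rate below are simply representative values within the ranges he gives.

```python
# Rough photon-budget comparison: a night-mode phone exposure versus one
# automotive video frame. Only the ratio matters, not the absolute values.
phone_exposure_s = 3.0        # within the 2-5 s night-mode range Zhu mentions
car_exposure_s = 1.0 / 30.0   # one frame at 30 fps (1/30 s)

ratio = phone_exposure_s / car_exposure_s
print(f"The phone integrates roughly {ratio:.0f}x more light for the same scene.")
```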
Where do you stand in the debate about lidar versus vision for autonomous vehicles?
Zhu: I tend toward a complementary approach. Different modalities of imaging are important if it’s feasible to have them all there, for safety, redundancy, and sensing bandwidth. Each is optimal for certain things. There’s some overlap, but it makes sense to have as much data as possible.
The practical challenges to having lidar involve cost and mechanical robustness. These are considerations that have to be made, and we do see that cameras are a fairly robust, reliable technology.
Can your computer vision approach improve the effective range of visual cameras?
Zhu: That has more to do with the particularities of a sensor. However, the interaction between the lens and the maximum resolution of the sensor is really our limit.
There are things like super-resolution that we’re investigating. Our technology has some ability to do that as an SNR [signal-to-noise ratio] and data-interpolation problem with machine learning.
For processing capacity, we are making sure that our computational burden is as small as it can be while providing the necessary input to vision algorithms. One of the main features of our product is how lightweight it is.
How far along are you with partners and clients? Given that OEMs are sensitive to pricing, do you have a price point?
Zhu: We are in private trials, and AI is disrupting the automotive market.
We’re talking with everyone in the imaging pipeline. For automotive, it seems a sweet spot for us would be the Tier 1s, the producers of the actual technology that goes to the OEMs. Because we’re a software solution and hardware-agnostic regarding compute platform, RoadSight can be pretty easily integrated into existing solutions.
BlinkAI raised $1.2 million in April. What are your growth plans?
Zhu: We have 10 people now, and we’re looking to grow. We’re also in the process of our next fundraising.