“Gentlemen, start your engines!” This motorsport call-to-arms has rung out at races from Le Mans to Indy for decades, but with future vehicles having neither an engine nor a human behind the wheel, the well-worn phrase is headed for the archives.
At the end of 2015, Denis Sverdlov, CEO of auto manufacturer Kinetik, announced a joint venture between Formula E and Kinetik to create the Roborace self-driving race series within a year. Roborace will feature 20 identical cars allocated to 10 teams. They will run on the same circuits as Formula E, except without drivers.
The cars won’t be remote-controlled, either; they’ll be fully autonomous, using the NVIDIA Drive PX 2 supercomputer to run the software. Because all cars will be mechanically identical, the winning team’s success will depend on having the best artificial intelligence (AI).
In creating the new series, Sverdlov hopes to showcase self-driving cars guided by AI and powered by electricity. While the race series itself will probably have only 20 entrants, Kinetik believes the day is not far off when self-driving cars will be the norm, improving both the environment and road safety.
Nevertheless, the biggest challenge self-driving cars will have to overcome on the road is reacting to the randomness of traffic flow and other drivers, and to the fact that no two driving situations are ever the same.
AI will outmaneuver human drivers
According to Danny Shapiro, senior director of automotive at NVIDIA, the latest autonomous technology is adept at handling this type of diverse environment. By using deep learning and sensor fusion, it’s possible to build a complete three-dimensional map of everything that’s going on around the vehicle to empower the car to make better decisions than a human driver ever could.
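To make the idea concrete, here is a deliberately simplified sketch of one fusion step: a camera detection (which knows what the object is) is merged with a radar return (which knows how far away it is) to place a labelled object in the space around the car. The names and numbers are illustrative only, not part of NVIDIA's pipeline.

```python
# Illustrative only: a toy "sensor fusion" step that merges a camera
# detection with a radar range/bearing reading into one labelled object
# position in the vehicle frame. All names here are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class CameraDetection:
    label: str          # e.g. "truck", from an image classifier
    bearing_deg: float  # angle off the vehicle's heading, from pixel position

@dataclass
class RadarReturn:
    range_m: float      # distance to the object
    bearing_deg: float

def fuse(cam: CameraDetection, radar: RadarReturn):
    """Combine the camera's label with the radar's range to get a
    labelled (x, y) position around the car, in metres."""
    theta = math.radians(radar.bearing_deg)
    x = radar.range_m * math.cos(theta)   # metres ahead of the car
    y = radar.range_m * math.sin(theta)   # metres to the side
    return cam.label, (round(x, 1), round(y, 1))

print(fuse(CameraDetection("truck", 4.8), RadarReturn(35.0, 5.0)))
# ('truck', (34.9, 3.1))
```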
However, this requires massive amounts of computing to interpret all the harvested data, because the sensors themselves are typically “dumb sensors” that merely capture information; before it can be acted on, that information has to be interpreted. For example, a video camera records 30 frames per second (fps), and each frame is an image made up of many thousands of pixels, each carrying several color values.
A massive amount of computation is needed to take those pixels and figure out, “Is that a truck?” or “Is that a stationary cyclist?” or “In which direction does the road curve?” It’s this type of computer vision, coupled with deep neural network processing, that self-driving cars require.
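As an illustration of what that interpretation looks like in code, the sketch below (assuming PyTorch and torchvision are installed; this is not NVIDIA's actual stack) pushes a single stand-in camera frame through a pretrained convolutional classifier and asks which of 1,000 object categories it most resembles.

```python
# A minimal sketch of "interpreting a frame": run one camera frame
# through a pretrained convolutional network and ask which of 1,000
# object classes it sees. Assumes PyTorch and torchvision are installed.
import torch
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()   # pretrained image classifier
preprocess = weights.transforms()          # resize / normalise the pixels

frame = torch.rand(3, 720, 1280)           # stand-in for one dashcam frame
with torch.no_grad():
    logits = model(preprocess(frame).unsqueeze(0))  # add a batch dimension
    class_id = logits.argmax(dim=1).item()

# Prints one of the 1,000 ImageNet labels (meaningless here, since the
# frame is random noise; with a real frame it might be "trailer truck").
print(weights.meta["categories"][class_id])
```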
Deep learning adds context to AI
A step toward true AI, deep learning is a family of machine-learning algorithms that attempt to model high-level abstractions in data using architectures built from multiple non-linear transformations. Deep learning architectures such as deep neural networks (DNNs), convolutional neural networks (CNNs), and deep belief networks are being applied in fields such as computer vision, automatic speech recognition, natural language processing, and music/audio signal recognition, where they have proven to be remarkably responsive and accurate.
NVIDIA’s DriveWorks is one such DNN that has been trained to understand how to drive. In the past, self-driving vehicles, such as those competing in the DARPA challenge, relied on manually coded algorithms to track a desired route and control the vehicle. Now, using DNNs, a car can navigate freeways, country roads, and gravel driveways, and drive in the rain, after only 3,000 miles of supervised driving.
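The end-to-end idea can be sketched in a few lines. The toy network below is a rough stand-in, not NVIDIA's actual DriveWorks code: it maps a camera frame directly to a steering angle and is nudged, one supervised step at a time, toward the angle a human driver actually chose.

```python
# A sketch of end-to-end "behavior cloning": a small convolutional
# network maps a camera frame directly to a steering angle, trained on
# frames recorded while a human drives. Hypothetical, simplified code.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(            # learn visual features
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(48, 1)              # one output: steering angle

    def forward(self, frames):                    # frames: [batch, 3, 66, 200]
        return self.head(self.features(frames).flatten(1))

model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One supervised step: frames paired with the human's recorded steering.
frames = torch.rand(8, 3, 66, 200)                # stand-in camera frames
human_angles = torch.rand(8, 1)                   # stand-in recorded angles
loss = nn.functional.mse_loss(model(frames), human_angles)
loss.backward()
optimizer.step()
```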
comma.ai, a startup created by iPhone hacker George Hotz, has built a self-driving car almost entirely around CNNs that learn how to drive. Engineers drive the car around, and the network learns from the human drivers what to do when it sees particular things in its field of view.
To help in this training, the car also carries a LIDAR that provides an accurate 3D scan of the environment, so the presence of cars and other road users can be detected far more reliably. When it is time to drive, the network does not receive the LIDAR data; it does, however, output where it thinks the other cars are, allowing the developers to test how well it is seeing things.
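That training pattern, in which LIDAR acts only as a teacher and the camera is the only input the network ever sees, can be sketched as follows (hypothetical code, not comma.ai's own):

```python
# A sketch of the pattern described above: during training, LIDAR gives
# "ground truth" positions of nearby cars; the network only ever sees
# the camera frame and is trained to predict those positions. At drive
# time the LIDAR data is absent. Hypothetical, simplified code.
import torch
import torch.nn as nn

camera_to_positions = nn.Sequential(    # camera frame -> (x, y) of up to 4 cars
    nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 8),                   # 4 cars x (x, y)
)
optimizer = torch.optim.Adam(camera_to_positions.parameters())

camera_frame = torch.rand(1, 3, 160, 320)   # what the network gets
lidar_positions = torch.rand(1, 8)          # supervision only, never an input

loss = nn.functional.mse_loss(camera_to_positions(camera_frame), lidar_positions)
loss.backward()
optimizer.step()

# At drive time: camera only. The predictions can still be compared
# against LIDAR offline to check how well the network "sees".
predicted = camera_to_positions(camera_frame)
```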
AI and dashcams for smart vehicles
While early self-driving car companies, including Google, use expensive LIDAR sensors to visually understand what’s going on around a vehicle, Palo Alto-based NAUTO uses the type of image sensors found in “prosumer” cameras. Combined with motion sensors and GPS, these cameras deliver accurate situational awareness at a significantly reduced cost.
Using AI to interpret the information streamed from relatively cheap dashcams, NAUTO’s systems can detect what’s happening on the road ahead of a driver and within the vehicle.
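As a rough illustration of how such low-cost fusion might work (this is not NAUTO's actual system, and the names and thresholds are invented), the snippet below combines a dashcam-derived gap to the vehicle ahead with GPS speed and accelerometer data to flag risky following:

```python
# Illustrative only: combine a dashcam-based distance estimate with
# GPS speed and accelerometer data to flag a risky tailgating situation.
# All class names and thresholds are made up for this sketch.
from dataclasses import dataclass

@dataclass
class FrameAnalysis:
    lead_vehicle_gap_m: float   # distance to the car ahead, from the dashcam AI

@dataclass
class MotionSample:
    speed_mps: float            # from GPS
    decel_mps2: float           # from the accelerometer

def risky_following(frame: FrameAnalysis, motion: MotionSample) -> bool:
    """Flag when the time gap to the car ahead drops below 1 second,
    or when the driver is braking hard behind a close vehicle."""
    if motion.speed_mps <= 0:
        return False
    time_gap_s = frame.lead_vehicle_gap_m / motion.speed_mps
    return time_gap_s < 1.0 or (motion.decel_mps2 > 4.0 and time_gap_s < 2.0)

print(risky_following(FrameAnalysis(12.0), MotionSample(speed_mps=25.0, decel_mps2=1.0)))
# True: 12 m at 25 m/s is a 0.48 s gap
```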
These new technologies not only make roadway object recognition possible, but they also improve the human-machine interface by opening up possibilities in facial and gesture recognition.
Regulators recognize AI as a licensed driver
To eliminate uncertainty about the regulatory intent to move this technology forward, U.S. vehicle safety regulators have declared that the AI piloting a self-driving Google car will be considered a legal driver under federal law.
In a recent letter sent to Google, the National Highway Traffic Safety Administration (NHTSA) confirmed that it “will interpret ‘driver’ in the context of Google’s described motor vehicle design as referring to the (self-driving system), and not to any of the vehicle occupants.”
The stage is set for AI to dominate our roads, and not only in racecars on closed circuits.
About the Author
Peter Els, an automotive engineer by profession, is a freelance writer who informs and entertains industry professionals and car enthusiasts alike. Check out more of Peter’s musings on cars at his Writing About Cars blog.