NVIDIA currently has the only packaged third-party autonomous car solution in market that is both complete (short of the car itself) and comparatively mature. I say “comparatively” because, until autonomous cars are on the road at scale, no solution can be mature. And we need at least a Level 4 car in production and on the road before we can get there. Regardless, currently there is no company ahead of NVIDIA for autonomous driving much like there is no firm ahead of IBM when it comes to the development of artificial intelligence (AI). With the latter, IBM determined the greatest risk to the success of AI is bias and they focused on and created a critical tool to detect it. NVIDIA in turn determined the biggest impediment to autonomous cars was safety as it would determine both regulatory and customer acceptance of the technology.
To address safety, NVIDIA developed a massive simulation platform that lets virtual autonomous cars travel thousands of test miles in minutes while facing increasingly unusual driving edge cases. The company’s focus on safety is showcased in a recent report called the Self-Driving Safety Report.
In that report the most interesting part, at least to me, is Pillar 4 on safety, titled “Best-In-Class Pervasive Safety Program.”
Let’s chat about that.
The Importance Of Safety
When something is new or different, it draws far more scrutiny. For instance, Tesla cars tend to survive crashes better than other vehicles, and its Autopilot system has been found to be generally safer than human drivers by a significant margin. However, when there is a crash—particularly one involving Autopilot—the coverage is national (how many Ford or GM crashes get that kind of attention?), making the technology look untrustworthy even though it is safer. That last link is from the New York Times; how many crashes that don’t involve celebrities does that paper pick up?
The same thing will happen with autonomous cars. Initially, they could be two magnitudes safer than a human driver and it still wouldn’t seem like enough. For instance, there have been a handful of fatal Tesla accidents, but there are roughly 40K fatal accidents out of 6M total accidents a year in the US. If each got the same coverage a Tesla crash gets, we’d have around 110 front-page car-death stories a day. And that is just deaths; actual accidents would be closer to 16K reports a day (and I rounded down).
Right now, it is believed that autonomous cars will reduce the frequency of accidents, particularly deadly ones, by at least two magnitudes. That would bring us down to roughly 160 accidents a day with around one death. But that is still one major event a day that would likely get national coverage at first, while the hundred-plus people a day who didn’t die would be forgotten. Even if we dropped that by two more magnitudes, there would still be several deaths a year, each likely getting national coverage.
Even elevators, at 26 deaths a year, would set the bar too low. The cars will need to be almost two magnitudes safer than elevators, and that may be impossible as long as they share the road with human drivers. This is one of the reasons I think that, once autonomous cars reach critical mass, human drivers will be forced off the road.
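The back-of-the-envelope arithmetic above can be checked in a few lines. The inputs are this article’s round numbers (40K road deaths and 6M accidents a year in the US, 26 elevator deaths a year), not official statistics:

```python
# Article's round figures, not official statistics.
FATAL_PER_YEAR = 40_000         # US road deaths per year
ACCIDENTS_PER_YEAR = 6_000_000  # US accidents per year
ELEVATOR_DEATHS_PER_YEAR = 26   # US elevator deaths per year

deaths_per_day = FATAL_PER_YEAR / 365         # ~110 front-page stories a day
accidents_per_day = ACCIDENTS_PER_YEAR / 365  # ~16,400 reports a day

# Two orders of magnitude ("two magnitudes") safer:
auto_deaths_per_day = deaths_per_day / 100        # ~1.1 per day
auto_accidents_per_day = accidents_per_day / 100  # ~164 per day

# Matching elevators would mean 26 deaths a year; the argument here is
# that the cars need to be almost two magnitudes safer than even that.
elevator_bar = ELEVATOR_DEATHS_PER_YEAR / 100     # ~0.26 deaths per year

print(round(deaths_per_day), round(accidents_per_day))
print(round(auto_deaths_per_day, 1), round(auto_accidents_per_day))
```

Even at the elevator bar divided by a hundred, a single bad year would still make national news, which is the point of the comparison.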
The Insanity of Ethics Testing
To give you an even better sense of this, today I was reading a story in the Washington Post about the ethical testing being done for autonomous cars. In one test, the car comes over a rise with failed brakes on a two-lane road; one lane holds a crashed school bus with kids in front of it, the other holds kittens. Which does the car hit? A variant keeps the kids in lane one but puts a crashed truck and its driver in lane two. Now, let’s stop. A human driver wouldn’t even have time to react in a situation like this and would likely just manage to complete the thought “crap, my brakes don’t work” before randomly hitting one obstacle or the other.
And how often does something like this happen anyway? Out of the 16K accidents a day, how many fit any of these scenarios? Effectively none. The odds of these things ever happening are extremely remote, and were they to happen to a human, the human wouldn’t have time to react. An electric autonomous car, by contrast, could hit the emergency brake, put the motors on maximum regeneration or even reverse them, and pull from the data of thousands of cars like it to determine the most likely path to avoid both obstacles. (And while a human would never train for this, simulations are set up for edge cases, and this one, given the coverage, would likely be in the database.)
In the end, autonomous cars may need a safety record that would be impossible without AI, which can think and act thousands of times faster than a human.
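To make that speed advantage concrete, here is a rough stopping-distance sketch. The 1.5-second human reaction time, the 50-millisecond machine latency, and the speed and deceleration values are illustrative assumptions, not NVIDIA figures:

```python
MPH_TO_MS = 0.44704  # miles per hour to meters per second

def stopping_distance(speed_mph: float, reaction_s: float,
                      decel: float = 8.0) -> float:
    """Distance covered while reacting plus braking distance, in meters.

    decel is an assumed hard-braking deceleration in m/s^2.
    """
    v = speed_mph * MPH_TO_MS
    return v * reaction_s + v**2 / (2 * decel)

human = stopping_distance(60, reaction_s=1.5)     # ~85 m
machine = stopping_distance(60, reaction_s=0.05)  # ~46 m

# At 60 mph, human reflexes alone add roughly 40 m of travel before
# braking even starts; the machine's latency adds only ~1.3 m.
print(f"human: {human:.0f} m, machine: {machine:.0f} m")
```

The braking physics is identical in both cases; the entire ~39-meter difference comes from reaction time, which is exactly where a machine is superhuman.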
That is what makes NVIDIA’s effort so critical. If these systems don’t perform at superhuman levels, the autonomous car effort could be set back years—if not decades. And, given we are trying to save the clear majority of 40K lives a year, that’s problematic.
Thus NVIDIA’s massive effort to address safety. The bar is almost impossibly high, and making sure that bar is met is the goal of this effort. NVIDIA currently benchmarks its work against the auto industry’s highest standards, but—as noted above—that won’t be enough.
The company goes beyond those benchmarks to advocate full redundancy, something we don’t require of human drivers (right now, if a driver is impaired behind the wheel, there is no requirement that another driver take over in real time, only laws saying an already impaired driver shouldn’t drive in the first place). For instance, if a human driver has a heart attack or passes out, there is no system that even easily allows a passenger to take the wheel. Failover, by contrast, is built into NVIDIA’s autonomous car requirements.
There are also required mechanisms to guard against system misuse—for instance, someone trying to use the automated car as a weapon or to set speeds at unsafe levels. And NVIDIA is actively working with both domestic and foreign governments to refine the laws and regulations that do, and will, govern this space.
One of the strongest overarching elements of this is NVIDIA’s simulation effort, which can test against extremely unusual situations like brake failure paired with a choice between hitting kittens and students. They could even run one where the car must choose between hitting a North Korean delegation (resulting in nuclear war) or a delegation from outer space (resulting in the elimination of humanity), which is probably only a tiny bit more remote.
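As a toy illustration of why simulation matters here: rare events that real-world miles almost never surface can be forced into every simulated run. The edge-case catalog and per-mile probabilities below are hypothetical, not drawn from NVIDIA’s actual platform:

```python
import random

# Hypothetical edge cases with assumed real-world per-mile odds.
EDGE_CASES = {
    "brake_failure_at_crest": 1e-7,
    "obstacle_in_both_lanes": 1e-8,
    "wrong_way_driver": 1e-6,
}

def expected_real_world_hits(miles: float) -> float:
    """Expected edge-case encounters from just driving real miles."""
    return miles * sum(EDGE_CASES.values())

def simulate(runs: int, rng: random.Random) -> dict:
    """A simulator can inject one chosen edge case into every run."""
    counts = {name: 0 for name in EDGE_CASES}
    for _ in range(runs):
        counts[rng.choice(sorted(EDGE_CASES))] += 1
    return counts

# 10,000 real miles: expected edge-case encounters well under one.
# 10,000 simulated runs: 10,000 guaranteed encounters.
print(expected_real_world_hits(10_000))
print(sum(simulate(10_000, random.Random(0)).values()))
```

The design point is that a simulator decouples how often a scenario is tested from how often it occurs in reality, which is the only practical way to accumulate experience on events this rare.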
Given NVIDIA’s technology roots and the risks that currently exist in this market, its efforts on combating cyber threats are also pronounced. The company recognizes that, despite all the safety measures, if these cars are hacked the result could be catastrophic. Its security assurance program is in line with what would be used to secure an enterprise or government today.
The bar that has been set for autonomous cars is almost impossibly high. The only way it is even remotely possible to achieve this kind of result is through AI. NVIDIA’s focus on safety and security is arguably market leading, but it is also critical to the success of the segment. It is virtually certain there will be autonomous vehicle accidents and deaths that will be blown out of proportion, even though almost all of them will more likely be caused by human rather than machine error. Assuring that machine error is as close to zero as possible will help assure the segment’s success.
In the end it is efforts like NVIDIA’s that may give us autonomous cars in a few years rather than a few decades.