We are already concerned about autonomous cars and trucks being hacked and effectively weaponized, either killing their passengers or being rammed into buildings or people. But we aren't talking enough about someone doing this intentionally. Having someone steal or hijack a large truck, load it with gasoline, explosives, or even fertilizer (recall the biggest domestic terrorist attack in the US), and use it the way you'd use a cruise missile could have equal or greater impact, because multiple trucks could be hijacked and aimed at multiple targets at once. That is a very real hostile use for this technology, and there is currently no good defense against it. We don't even have a great answer yet for someone simply hacking into an existing self-driving vehicle, because secure platforms like BlackBerry's QNX are not yet the rule in these implementations.
Let’s talk about what would need to be done to ensure hostile entities couldn’t weaponize autonomous cars and trucks and turn them into land-based cruise missiles.
Connected
Number one, the notion that all autonomous vehicles be connected to each other has to quickly become a requirement. The connectivity isn’t just there to avoid accidents; it creates a mass, redundant defense against a rogue truck or car. Care will need to be taken so the connectivity link can’t be compromised and one vehicle can’t contaminate others. More importantly, if any vehicle sees another that isn’t communicating, it can report the anomaly to a central service, and the non-communicating vehicle can be identified as a potential threat.
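To make the idea concrete, here is a minimal sketch of that neighbor-watch reporting logic. It assumes a hypothetical vehicle-to-vehicle heartbeat protocol; the class, timeout, and message fields are illustrative, not any real V2V standard.

```python
# Sketch only: a vehicle tracks heartbeats from nearby vehicles and reports
# any vehicle it can physically detect but that has gone radio-silent.
import time

HEARTBEAT_TIMEOUT_S = 2.0  # assumed interval after which a neighbor counts as silent

class NeighborWatch:
    def __init__(self, report_to_central):
        self.last_seen = {}                 # vehicle_id -> last heartbeat timestamp
        self.report_to_central = report_to_central

    def on_heartbeat(self, vehicle_id, timestamp):
        """Record that a nearby vehicle is still communicating."""
        self.last_seen[vehicle_id] = timestamp

    def on_sensor_contact(self, vehicle_id, position, timestamp):
        """A vehicle we can see on radar/lidar but not hear is an anomaly."""
        last = self.last_seen.get(vehicle_id)
        if last is None or timestamp - last > HEARTBEAT_TIMEOUT_S:
            self.report_to_central({
                "type": "non_communicating_vehicle",
                "vehicle_id": vehicle_id,
                "position": position,
                "observed_at": timestamp,
            })

# Example: watch = NeighborWatch(report_to_central=print)
```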
Centrally Controlled
Much like we have air traffic controllers who oversee all the planes in a region, we’ll need controllers, which can be automated, that do the same thing for regions of roadway. These would largely be set up to ensure accidents are avoided and to make sure any vehicle that is acting strangely is identified and neutralized long before it can reach its intended target. If rogue vehicles can be identified early and confined to remote areas of a town or city, their utility as a weapon drops dramatically, and there is a far better chance they won’t be used as a weapon in the first place.
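A rough sketch of how such an automated regional controller might escalate the anomaly reports described above follows. The escalation threshold and function names are assumptions for illustration, not a proposed standard.

```python
# Sketch: escalate a vehicle to "potential threat" once enough independent
# vehicles have reported it, then hand off to an active-defense dispatcher.
from collections import defaultdict

ESCALATION_THRESHOLD = 3  # assumed number of independent reports before acting

class RegionalController:
    def __init__(self, dispatch_response):
        self.reports = defaultdict(set)    # suspect_id -> set of reporter ids
        self.dispatch_response = dispatch_response

    def handle_report(self, reporter_id, suspect_id, position):
        self.reports[suspect_id].add(reporter_id)
        if len(self.reports[suspect_id]) >= ESCALATION_THRESHOLD:
            # Treat as a potential threat while it is still far from
            # likely targets, rather than waiting for proof of intent.
            self.dispatch_response(suspect_id, position)
```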
Active Defense
Much like we have police to stop a crime and apprehend criminals, we’ll need vehicles designed to surround and capture a rogue autonomous car and limit its ability to do harm. Ideally they would come in several forms: small, fast vehicles that could surround and isolate the rogue vehicle (blocking external signals) and use something like an EMP to render its electronics inoperable, followed by a larger armored conveyance that could capture the car or truck, vent any explosion upward, and carry it to a safe location where it could be scanned and disarmed. You might even have higher-speed drones that could respond more rapidly still, attach themselves to the vehicle, and use a variety of means to render it inoperable or misdirect it (by spoofing its GPS signal, for instance).
The small drones and vehicles would be on constant, randomized patrol with enough coverage to reach any potential threat in time to prevent an attack on a high-value target. They could also serve as additional eyes and ears for law enforcement watching for crimes in progress, giving them a dual use. The centralized AI would be programmed to look for crime patterns that appear designed to pull these responders away from the likely route of an automated attack.
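For the dispatch side of that picture, a trivial sketch of choosing which patrolling interceptor to send is shown below; it simply picks the nearest available unit and is purely illustrative.

```python
# Sketch: select the closest available interceptor drone to a flagged vehicle.
import math

def nearest_interceptor(threat_pos, interceptors):
    """interceptors: dict of unit_id -> (x, y) position; returns closest unit_id."""
    def dist(pos):
        return math.hypot(pos[0] - threat_pos[0], pos[1] - threat_pos[1])
    return min(interceptors, key=lambda unit_id: dist(interceptors[unit_id]))

# Example: nearest_interceptor((0, 0), {"d1": (5, 2), "d2": (1, 1)}) -> "d2"
```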
The Requirement of a Shared Route
Particularly for trucks, and much like planes must file a flight plan, vehicles would have to report their routes to the centralized control. Any route that passed near a high-value target would be flagged and checked, and any vehicle that moved away from its filed route would be identified as a potential threat and draw an immediate response. This would provide a necessary extra layer of security and potential early warning.
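A simplified sketch of that route-deviation check is below. It assumes a filed route is just a list of (x, y) waypoints and uses an arbitrary distance tolerance; a real system would match against actual road geometry.

```python
# Sketch: flag a vehicle whose reported position strays too far from its filed route.
import math

MAX_DEVIATION_M = 200.0  # assumed tolerance before a vehicle is flagged

def distance_to_route(position, route):
    """Smallest straight-line distance from the vehicle to any route waypoint."""
    return min(math.hypot(position[0] - wx, position[1] - wy) for wx, wy in route)

def check_vehicle(vehicle_id, position, filed_route, flag_threat):
    if distance_to_route(position, filed_route) > MAX_DEVIATION_M:
        flag_threat(vehicle_id, position)
```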
Protection Against Rootkits
Much like BlackBerry provides a layer of software below Android so its phones can uniquely protect against rootkits, there needs to be a layer of software below the AI that can report if the AI has been compromised or is behaving unusually. This can be done in a number of ways but, done right, it would make reprogramming or compromising the AI far more difficult, and it would provide the final layer in a program to ensure autonomous cars and trucks can’t be turned into weapons without a massive amount of work, forcing an attacker back to a more conventional delivery method.
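One simple way to picture that lower layer is as a watchdog that measures the deployed AI software and raises the alarm itself when the measurement changes. The sketch below illustrates the idea with a plain hash check; the file path, reference digest, and reporting hook are hypothetical, and a production system would rely on signed, hardware-backed attestation rather than a hard-coded value.

```python
# Sketch: a trusted layer beneath the driving AI measures the AI image and
# reports any mismatch against a known-good digest.
import hashlib

KNOWN_GOOD_DIGEST = "replace-with-signed-reference-digest"  # assumption for illustration

def measure(path):
    """Hash the deployed AI image so any tampering changes the measurement."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def attest(path, report_compromise):
    digest = measure(path)
    if digest != KNOWN_GOOD_DIGEST:
        # The watchdog layer, not the AI itself, raises the alarm.
        report_compromise({"component": path, "digest": digest})
    return digest
```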
Wrapping Up
When you give anything a brain, you create the potential for a smart weapon, one that can take itself accurately to a target and deliver a variety of ordnance. Were this done successfully, it wouldn’t just do a ton of damage; it could make autonomous vehicles illegal, because the one thing governments do poorly is mitigate anticipated problems, while the one thing they do really well is assign blame and overreact. This is why, after 9/11, we all have to jump through a massive number of hoops to get on planes, few of which actually had anything to do with preventing another similar attack and most of which almost put the airlines out of business.
Sadly, we have to worry about how these wonderful new tools could be misused, but if we actually do something to prevent that misuse, perhaps both this new industry and the people we care about will survive what is coming. Do I think this will happen? That will depend on how many of us point out how critical the need is.
Of course, my personal plan is not to work or live near any high-value targets, largely because of the confidence I have that the government will get this right.