
The Difference Between Tesla Autopilot and Future Self-Driving Cars: Intelligence

There has been a lot of coverage of the recent Tesla crash (and now a more recent one), including some unfortunately snarky tweets from Tesla CEO Elon Musk suggesting he may need to rethink his priorities (some folks should probably just stay off Twitter). This has cast a shadow on self-driving cars, which are still in testing and are far different from the Tesla Autopilot feature, which is more like an enhanced cruise control. Given the coverage self-driving technology has been getting, it is easy to see why some Tesla drivers have become confused, which reminds me of the old cruise control story.

In that story, a family rented a motor home and had cruise control explained to them by comparing it to an autopilot. Once on the freeway, the dad turned on cruise control and walked back into the motor home to fix a drink; that didn't end well. Then there was the early GPS story of a driver who tried to drive from Germany to England, not realizing there was no bridge and that the GPS was guiding him to a ferry, which wasn't at the dock in the middle of the night. He discovered his new car made a lousy submarine.

This all goes to show that problems like this have occurred before; what makes today different is that self-driving really is coming. Let's talk about the difference between Tesla Autopilot and true self-driving.

Tesla Autopilot

The Tesla cars (Model S and Model X) are arguably the most advanced cars on the road today. Fully electric, they are effectively rolling computers. They are near-constantly linked wirelessly to Tesla for remote monitoring and diagnostics, and should you have a problem, there is a pretty good chance Tesla will know before you do.

If they have a serious problem, they will typically guide you to the side of the road and let you out, whereas a gas car tends to fail more catastrophically thanks to the nature of internal combustion engines. Finally, they are built like tanks: the Model S sedan is reportedly the only car ever to have broken the crash-testing rig used to evaluate it.

Autopilot is basically a smarter cruise control. It integrates features like lane keeping, accident avoidance, and blind-spot monitoring into a system that seems like self-driving but isn't. Think of it like putting your car on rails, where the rails aren't particularly reliable and the train can change tracks almost anyplace. Like an engineer, you don't have to watch where the train is going every second and can safely enjoy the scenery; but, also like an engineer, you can't take a nap or watch a movie, because something could cross the track and you may need to take control.

The system uses cameras and sensors, each of which has range limitations, and compared to a true self-driving car it isn't very intelligent, so its decision-making capability is limited. This is why stopped objects are a risk at freeway speeds: by the time the system sees the obstruction, the car may be too close to stop.
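
To see why, it helps to run the numbers. The sketch below is a back-of-the-envelope Python calculation with assumed figures (the 0.5 s latency, 7 m/s² braking, and 60 m detection range are illustrative, not Tesla specifications): at freeway speed, total stopping distance can easily exceed the range at which such a system first registers a stationary obstacle.

```python
# Back-of-the-envelope stopping-distance check.
# All numbers here are illustrative assumptions, not published Tesla specs.

MPH_TO_MPS = 0.44704

def stopping_distance_m(speed_mps, reaction_s=0.5, decel_mps2=7.0):
    """Distance covered while the system reacts, plus braking distance."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

speed = 70 * MPH_TO_MPS      # 70 mph is about 31.3 m/s
needed = stopping_distance_m(speed)
sensor_range = 60.0          # assumed detection range for a stopped object

print(f"Needs {needed:.0f} m to stop; object first seen at {sensor_range:.0f} m")
print("Can stop in time:", needed <= sensor_range)
```

With these assumptions the car needs roughly 86 meters to stop but only sees the obstacle at 60 meters, which is exactly the failure mode described above.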

So Autopilot is basically a set of linked systems, available in other cars, hooked up to a relatively simple computer. That makes it more capable than cruise control, but not yet capable of driving while you nap.

Self-Driving

At the heart of a self-driving car is a very sophisticated computer. A few years back, the computers used in self-driving test cars filled the entire car. Currently the leading platform is NVIDIA's Drive PX, one of the most advanced deep-learning computers ever developed. It needs this power so that it doesn't just "see" an object; it can determine what that object is and then make a calculated decision about what to do.

For instance, Autopilot would likely only identify a child running into the road once the child got in front of the car, and it would see that child only as an object. A self-driving car would, through a variety of sensors, see an object approaching the road, determine that it was a child, and begin braking or swerving before the child actually crossed into the street. In many cases the car could see the child through some obstacles thanks to lidar and infrared cameras. So Autopilot can generally only respond to an obstacle generically, and only once it enters its limited field of view, while a self-driving car would determine what the obstacle is and take corrective action well before a human driver could. There are still limitations: even a self-driving car won't be infallible, but because it can see better than we can and respond more quickly, it should almost always perform better than a person.
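
To make the distinction concrete, here is a minimal sketch of the two decision styles. Everything in it, the `Track` class, the thresholds, and the action names, is a hypothetical illustration, not any vendor's actual perception stack: the reactive policy branches only on "is it in my path now," while the classifying policy uses what the object is and where it is headed.

```python
from dataclasses import dataclass

# Hypothetical illustration of reactive vs. classifying decision logic.
# Class names, fields, and thresholds are assumptions for this sketch.

@dataclass
class Track:
    kind: str               # classifier output, e.g. "child", "ball", "unknown"
    seconds_to_path: float  # predicted time until the object enters the lane
    in_path: bool           # is it directly in front of the car right now?

def reactive_policy(track: Track) -> str:
    # Autopilot-style: acts only once something is actually in its path.
    return "brake" if track.in_path else "continue"

def classifying_policy(track: Track) -> str:
    # Self-driving-style: acts on what the object is and where it is headed.
    if track.in_path:
        return "brake"
    if track.kind == "child" and track.seconds_to_path < 3.0:
        return "brake_and_prepare_to_swerve"  # pre-emptive, before lane entry
    return "slow_and_monitor"

child = Track(kind="child", seconds_to_path=1.5, in_path=False)
print(reactive_policy(child))     # "continue" -- waits until it may be too late
print(classifying_policy(child))  # "brake_and_prepare_to_swerve"
```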

Now let's run the common crashed-school-bus scenario against both systems. The scenario goes like this: a school bus has crashed and the students have exited the bus; the car must either hit the bus, killing its own driver, or hit the students, killing some of them. What does it do? With Autopilot, the car would likely hit whatever was in front of it, unable to tell the difference between a bus and students and seeing only an obstacle. A self-driving car would never over-drive its sensors, so it would either be driving slowly enough to stop or would see the problem before it became a crisis and stop in time (see the sketch below). With Autopilot the driver sets the car's speed; with self-driving the car does, and if the driver overrides the self-driving limits, that system is effectively compromised and any resulting accident is on the driver.
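
The "never over-drive your sensors" rule can be written as a simple inequality: travel no faster than the speed whose stopping distance fits inside what the sensors can currently see. Here is a minimal sketch with assumed reaction-time and braking figures (not real vehicle parameters):

```python
import math

# "Don't over-drive the sensors": cap speed so that stopping distance
# fits inside current sensor visibility. Numbers are illustrative assumptions.

def max_safe_speed_mps(visibility_m, reaction_s=0.5, decel_mps2=7.0):
    # Solve v*t + v^2 / (2a) <= d for v (positive root of the quadratic).
    a, t, d = decel_mps2, reaction_s, visibility_m
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)

for visibility in (200.0, 60.0, 20.0):  # open road, cluttered scene, fog
    v = max_safe_speed_mps(visibility)
    print(f"{visibility:5.0f} m visible -> cap speed at {v * 2.23694:4.0f} mph")
```

As visibility drops (a crash scene, fog), the speed cap drops with it, which is why the self-driving car in the scenario is never moving too fast to stop for what it can see.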

This is the difference between a system that thinks and one that simply reacts.

Wrapping Up

Autopilot is an enhanced cruise control, and as long as you keep that in mind you'll likely stay out of trouble. Self-driving technology should drive the car better than you do. It is a true thinking system, and what makes it safer is that, unless overridden, it won't do the stupid things we do all the time. It won't drive faster than is safe (in fog, for instance, it won't outdrive its own reaction time, and it will see better than you do), it won't become distracted, it won't get stoned or drunk, and it is designed to multitask in nanoseconds, so it can communicate and drive at once, something that for humans tends to be a serious distraction.

With Autopilot, if there is a problem, the car turns control back over to the driver, which is why the driver must remain alert. With self-driving, if there is a problem, the system deals with it or pulls to the side of the road and stops. A lot of us realized that if a supercomputer couldn't deal with something, handing it over to a driver who might be napping or watching a movie wouldn't end well. That's why the industry is thinking about eliminating driver controls in self-driving cars long term: with Tesla Autopilot the driver is part of the solution, but in a self-driving car the driver may be the least safe component.
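
Expressed as logic, the two fallback philosophies look like this. The states and mode names below are a hypothetical sketch of the behavior just described, not any manufacturer's actual design:

```python
from enum import Enum, auto

# Hypothetical sketch of the two fallback philosophies described above.
# States and mode names are illustrative assumptions.

class Fallback(Enum):
    HAND_TO_DRIVER = auto()  # driver-assist: alert the human, then disengage
    PULL_OVER = auto()       # self-driving: degrade gracefully, stop safely

def on_system_fault(mode: str) -> Fallback:
    if mode == "driver_assist":
        # Assumes an attentive human is available within seconds.
        return Fallback.HAND_TO_DRIVER
    # Full self-driving cannot assume a capable human is watching,
    # so it must resolve the fault itself: slow, signal, and stop off the road.
    return Fallback.PULL_OVER

print(on_system_fault("driver_assist"))  # Fallback.HAND_TO_DRIVER
print(on_system_fault("self_driving"))   # Fallback.PULL_OVER
```

The design choice follows directly from the assumption each system makes about the human: driver-assist assumes an alert driver exists; full self-driving assumes one may not.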

Given that dying in a car accident is currently the 4th most likely way you'll cease to be (getting shot is 9th), perhaps not driving at all, and moving vehicle accidents to the bottom of that list, would be a good thing.

2 thoughts on “The Difference Between Tesla Autopilot and Future Self-Driving Cars: Intelligence”

  1. I do think that in the near future, networked self-driving cars will
    more or less take over the roads. And while I like driving in the best
    of circumstances, I look forward to the auto-auto future because it is
    virtually guaranteed to be safer, more efficient, and less of a hassle. I wonder what car ownership will look like when it happens. I could see it moving, to some degree, to more of a shared/service model.

    “Okay Google, I want to be at the grocery store at 6 pm.”

(An auto-auto drops someone off in my neighborhood, skips over to my house, takes me to the store, and charges my Google or PayPal account.)

