April 19, 2016 / Michael Yaroshefsky

Dr. Miklos Kiss or: How I Learned to Stop Worrying and Love Autonomous Driving (Part 2)

While we often hear about the obvious obstacles to autonomous cars — technology, regulation, insurance — the biggest obstacle may well be us.

Behind the wheel of an autonomous (or “piloted”) car, you’ll quickly grow frustrated by other drivers who seem intent on killing you: people who need to fill that safe gap you’re keeping ahead, people veering into your lane while texting or talking on the phone.  The other drivers assume you’ll understand their state (cruising contentedly, aggressively trying to catch a flight, driving drunk) and goals (keep a safe distance, find every possible gap through traffic, try not to wreck).  This misalignment of assumptions is what caused the most recent accident involving a Google Self-driving car:

Our car had detected the approaching bus, but predicted that it would yield to us because we were ahead of it…. And we can imagine the bus driver assumed we were going to stay put. Unfortunately, all these assumptions led us to the same spot in the lane at the same time. This type of misunderstanding happens between human drivers on the road every day.

This is a classic example of the negotiation that’s a normal part of driving — we’re all trying to predict each other’s movements.

The artificial intelligence that powers these systems is incredibly sophisticated at solving deterministic problems (find the lane and keep the vehicle in it).  But it falls short of our ability to predict probabilistic outcomes and to navigate fluid social negotiations with other drivers on the road.  The key difference is that AI lacks an innate theory of mind, or a sense of other people’s intentions and goals, which is something even toddlers have a strong capacity for.

Imagine this scenario: an aggressive driver is approaching fast in the lane to your right. Your car sees the other driver rapidly approaching your right blind spot and handily flashes the blind spot monitor.  But it’s not yet smart enough to infer that the driver is about to cut you off using that small gap between you and the car he’s rapidly approaching at your 1 o’clock.

A human might anticipate this aggressive driver making the maneuver and slow down to create a safer gap (or speed up to close it).  An autonomous car will suddenly see the other driver less than 10 feet from your front bumper (at roughly 70mph, or 100 feet/second) and assume “car judgment day” has arrived — slamming on the brakes.  By the time your car uncovers its eyes and asks, “Is it over?” the aggressive driver has sped off, and there’s a train of cars behind you wondering why this jerk (you) just totally overreacted and slammed on his brakes.
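That human strategy, guessing the other driver’s intent and easing off early, can be written down as a toy heuristic. The sketch below is purely illustrative: every signal name and threshold is invented here, and no real autonomous-driving stack works on anything this simple.

```python
# Toy sketch of cut-in anticipation. All names and thresholds are
# invented for illustration; this is not any real AV system's logic.

def anticipate_cut_in(rear_closing_speed_fps, rear_gap_ft, front_gap_ft):
    """Guess whether the car approaching from behind-right is likely to
    dive into the gap ahead of us, and react early if so.

    rear_closing_speed_fps: how fast the other car is gaining on us (ft/s)
    rear_gap_ft: distance from the other car back to our rear quarter (ft)
    front_gap_ft: the safe gap we're keeping to the car ahead (ft)
    """
    if rear_closing_speed_fps <= 0:
        return "hold"  # not actually gaining on us

    # How soon the other car draws level with our blind spot.
    time_to_reach_us = rear_gap_ft / rear_closing_speed_fps

    # A fast closer plus a tempting gap ahead of us suggests a cut-in.
    likely_cut_in = time_to_reach_us < 3.0 and front_gap_ft > 30.0

    # The human-like response: ease off early to widen the gap, instead
    # of waiting for the cut-in and slamming the brakes at the last second.
    return "ease_off" if likely_cut_in else "hold"

print(anticipate_cut_in(20.0, 40.0, 50.0))   # fast closer, open gap -> ease_off
print(anticipate_cut_in(5.0, 100.0, 50.0))   # slow closer -> hold
```

The point of the sketch is the timing: by acting on a prediction (the other driver will probably take that gap) rather than a measurement (the other driver is now in that gap), the response happens seconds earlier and can be gentle instead of panicked.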

Situations like this are common, particularly at the “edge”: the rare, potentially dangerous corner cases.  Part of the solution is improving the cars’ ability to understand us.  But part of the solution is our recognition that autonomous cars both see — and perceive — the world in a totally different way.  To safely share the road with autonomous cars, we may have to slightly alter our own driving habits.

But how will we know which cars are being driven autonomously?  It might make sense during this interim phase to standardize an indicator showing when a car is being driven partly or wholly by AI.  Just like the “Student Driver” signs on driver’s-ed cars — but less dorky.  Perhaps this?


You knew the Hoff meant business when he drove around in KITT, his car that was also his witty sidekick.