In Pursuit of Autonomous Driving

In March 2018 an Uber vehicle, operating in autonomous mode with a safety driver at the wheel, struck and killed a pedestrian who was walking her bike across the road. It was a tragic incident, resulting in the loss of one life and, no doubt, severe psychological damage to another.

Car-related deaths are all too common, and the combination of speed, hefty machines and squishy humans means that we’ll inevitably have to tolerate some tears.

So what do autonomous vehicles have to offer? Perhaps two primary benefits: elimination of human-caused vehicle crashes; and elimination of the drudgery of driving. It’s hard to argue these are not goals worth striving for. It is easy to argue about the best way to get there.


Who's at fault? Investigations continue. (Image source: IEEE Spectrum)

The problem with developing autonomous vehicles in real-world conditions is that it’s always 90% done with 90% to go. We instinctively over-estimate the challenge of starting and under-estimate the challenge of finishing. This is especially so in applications of artificial intelligence. It is utterly trivial to give a machine the object recognition capabilities of a toddler, but excruciatingly difficult to give it the scene assessment capabilities of a five-year-old. Yet we are utterly amazed when a computer can tell a pineapple from a banana, while remaining quite nonchalant about our own ability to deduce that a pineapple is sharp to the touch and a banana smooth. We’re primed to be startled when a machine reacts to stimuli the way a human would, automatically assuming its reaction arises from an intellectual pathway similar to our own. Assessing the sophistication of software is fraught with difficulty. We’re not particularly amazed when a bulldozer opens a door by flattening a wall, but we have sharp existential reactions when a robot does it using a clumsy analogue of a human hand.

Our faulty self-assessment became particularly evident when initial reactions to the Uber crash were dominated by claims that "no human would have avoided that" - a reaction motivated primarily by the flawed assumption that humans routinely drive at 40 mph with only 1.5 seconds of forward visibility. In fact, there was far more opportunity to avoid the crash than the official video would suggest.

Which is why the dangerous response to the Uber incident is the natural one. It feels like it just needs a fix, a software patch, and autonomous vehicles will be here. 


So close, yet so far. (Image source: The Economist)

But this tendency to see a problem and solve a problem is flawed. The Agile project management technique of continuous delivery, feature by feature, is wonderful for developing webpages, but when Silicon Valley interacts with the physical world the cracks start appearing. What if the solution to these “flaws” is not to play whack-a-mole and wait for the next one? What if the problem is actually the assumption that technology companies should use commercial techniques to aim for autonomous cars? Sooner or later we need to decide whether we want programmers playing computer games on our roads.

Taking a Step Back

In the ultra-competitive race to reach the nirvana of a Level 5 autonomous vehicle, we’ve lost sight of what we hope to gain. Instead, we’ve seen bitter legal battles, outrageous claims and investments that would make small nations weep.

The introduction of seat-belts followed extensive laboratory testing and sought to raise the bar on occupant safety in vehicles - proving highly successful in that aim with very little collateral damage. The same can be said for successively more sophisticated developments in automotive technology - anti-lock braking, vehicle stability control, climate control, reversing cameras, radar-assisted cruise control, even parking assist. All have had demonstrably positive effects on automotive safety and comfort. And they have done so because they make a very specific distinction between those aspects that machines excel at and those aspects humans excel at. Climate control removes the drudgery of adjusting settings as conditions change, but leaves the human to decide what temperature they find comfortable. Anti-lock braking takes action on super-human timescales to improve braking performance, but allows the human to decide when to brake.

Driver assistance technologies have made a tremendous improvement to the safety and comfort of driving.

Autonomous driving, on the other hand, requires a human to remain inactive but alert - a feat we know humans have no hope of achieving. It requires machines to interpret the intentions of humans and take action that weighs up human factors like politeness, comfort, negotiation and self-preservation - all traits we know machines have never demonstrated a capability for.

Autonomous technologies have a place in the lab, or in other domains where they don’t interact with unsuspecting, uninformed humans, such as planes and trains.

Why jeopardise the progress of driver assistance technologies for the moonshot of full autonomy?

At what point should we come to terms with the fact that the relentless pursuit of commercial superiority is turning into an inhumane race to play God? That even the humans corporations were established to serve have become collateral in the race to control? Why should we wish for machines to be not just better machines - a trait they have demonstrated an extraordinary capacity to achieve - but better humans - a trait which not only shows no evidence of coming to fruition, but also seems to benefit so few?