Autopia

Jon Bentley

Description

Cars are one of the most significant human creations. They changed our cities. They changed our lives. They changed everything. But in the next thirty years, this technology will itself change enormously. If Google get their way, are we all going to be ferried around in tiny electric bubble-cars? Or will we watch robots race a bionic Lewis Hamilton? And what about the future of classic cars? In Autopia, presenter of The Gadget Show and former executive producer of Top Gear Jon Bentley celebrates motoring's rich heritage and meets the engineers (and coders) who are transforming cars forever. From mobile hotel rooms to electric battery technology; from hydrogen-powered cars to jetpacks, Autopia is the essential guide to the future of our greatest invention. Fully designed with illustrations and photographs, this will be the perfect Christmas gift for car and technology enthusiasts everywhere.

Year of publication: 2019




 

 

 

 

‘Entertaining, thought-provoking and well researched, Autopia is an essential read for anyone interested in the future of cars.’

Jason Dawe, former presenter of Top Gear and motoring journalist

 

 

‘A riveting and quirky look at the latest in car technology. Perfect for geeks, petrolheads and geeky petrolheads.’

Rick Edwards, presenter of !mpossible

 

 

 

Jon Bentley is a presenter on Channel 5’s The Gadget Show. He was the producer and executive producer of Top Gear for many years and has a bend named after him on the programme’s test track. He is a committed car enthusiast and has written for multiple car publications.

 

 

 

First published in hardback in Great Britain in 2019 by

Atlantic Books, an imprint of Atlantic Books Ltd.

Copyright © Jon Bentley, 2019

The moral right of Jon Bentley to be identified as the author of this work has been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of both the copyright owner and the above publisher of this book.

1 2 3 4 5 6 7 8 9

A CIP catalogue record for this book is available from the British Library.

Hardback ISBN: 978 1 78649 634 8

E-Book ISBN: 978 1 78649 636 2

Printed in Great Britain

Atlantic Books

An Imprint of Atlantic Books Ltd

Ormond House

26–27 Boswell Street

London

WC1N 3JZ

www.atlantic-books.co.uk

CONTENTS

Introduction: Why the Car Matters

1. Connected and Autonomous: the Rise of the Robot Cars

2. Sparking Innovation: Alternative Power and the Future of the Internal Combustion Engine

3. Designing the Future: How the Shape of Cars will Change

4. Speeding Ahead: the Future of Performance Cars and Motorsport

5. Hackers and Crash-Test Dummies: Safety and the Age of Automation

6. The Future of the Past: Classic Cars and Enthusiast Drivers

Conclusion: Is the Car Dead?

Image Credits

Index

INTRODUCTION

WHY THE CAR MATTERS

Since its invention nearly 130 years ago the car has been an extraordinary global success story, extending its influence over most of the planet and transforming vast tranches of human life. There are now about 1.3 billion of them scurrying round the world’s roads. By 2035 there’ll likely be more than 2 billion – a growth rate comfortably exceeding that of the human population. The car has revolutionised patterns of human settlement, reshaped the geography of cities and countryside and transformed the working lives of hundreds of millions of people.

Cars are so much more than a means of getting from A to B. From the beginning to the end of our lives, they’re a powerful psychological force. A few weeks after my seventeenth birthday, I passed my driving test and experienced that amazing sense of freedom that only the car can give. Suddenly I had the means to go wherever I wanted with minimal effort. Like countless drivers before and after me, I embraced an explosion of new opportunities for employment, travel and (yes) romance.

People may say that taking your driving test is no longer the universal rite of passage that it was back in the late 1970s, but I can vividly recall my seventeen-year-old daughter, on her return from her first fully independent drive, saying it was the most liberating thing she’d ever done. More soberingly, I remember the look of crestfallen disappointment on my late mother’s face when I told her she should give up driving on account of her all-consuming Alzheimer’s. That was the end of the road in more ways than one.

As objects go, cars have an incredible capacity to inspire loyalty and affection in their owners. They can rival our pet cat or dog because, like our favourite animal companions, they also have their own loveable quirks. There’s the noise of the engine, the sensation of acceleration, the rich variety of dynamic experiences through steering, cornering and handling characteristics plus the tactile sensations of the controls, the ways the doors open and shut, the weight and feel of throttle and clutch. All of these create the potential for a truly satisfying relationship between driver and machine.

Your choice of motor says a lot about you. For a nineties sales rep, embroiled in British company-car culture, fretting over whether the badge on the boot-lid was an L or a GL was practically part of the job description. Boy racers of the era agonised in meticulous detail over the specs of a VW Golf GTI versus a Peugeot 205 GTi. But the psychology isn’t solely an individual affair. There’s an intense social dimension to car ownership. Banger racers, Vintage Sports-Car Club members, muscle-car enthusiasts and Austin Allegro owners are all part of a developing network of driving tribes. People make friends all over the world through their ownership of a particular type of car.

The last fifty years have witnessed the golden age of the car, with the once-spluttering invention dominating the globe and shaping all our lives. But is this set to change? Along with its power to transform the world for the better, mass car ownership has also had a profoundly negative legacy. Doctors talk of how emergency departments took on a whole new level of gruesome in the 1960s. Previously quiet Saturday nights became bloodfests. Pollution and resource depletion became a concern shortly afterwards and, in recent years, the internal combustion engine has been pilloried for its contributions to climate change. A whole catalogue of lesser problems include ever more congestion, road rage, the loss of rural idylls to commuting and tourism and, let’s not forget, the increasing nightmare of finding anywhere to park.

Rather than representing freedom and opportunity, are cars going to become expensive anachronisms that take up too much of our money and time? Are they a burden in an age when urban populations are far more concentrated?

There is reason to hope that all is not lost. Most cars today still have four wheels and run on fossil fuels, but that doesn’t mean that we haven’t already seen incredible breakthroughs in speed, safety, efficiency and – of course – gadgetry. And, while the technology has remained fundamentally the same for decades, industry insiders insist that we are on the cusp of a revolution.

I’m writing this book to explore such innovations and see where they might lead in the years to come. From self-driving trucks to cars that can see round corners, I’ll examine the mind-blowing challenge of successfully designing artificial intelligence to take the place of human drivers. I’ll explore the options available for powering the cars of the future and question whether electric cars will ever get over their bugbears of range anxiety and sluggish charging. I’ll ask whether hydrogen will at last realise its potential and whether diesel really deserves to be demonised.

By talking to designers young and old, I’ll discover the ways in which future cars will retain their allure – and their speed. I’ll ask whether cars will continue to get safer or if increased technology will make the car into an unpredictable lethal weapon as hackers bypass their often shamefully lax security. If the car embarks on a radical new career of change, what will happen to all the existing ones – not least those classic cars lovingly cherished by millions of enthusiasts? Finally, I’ll consider some of the more unlikely future scenarios for personal transport and if the car will remain relevant in the face of fierce potential competition.

I’ve been fortunate to spend my whole working life engaged with my two childhood obsessions – cars and technology. I worked on the iconic TV series Top Gear for fifteen years, many of those as its producer. Red-letter days came thick and fast. I remember getting 190 mph out of a Porsche 959 on the autobahn near Stuttgart, entirely legally; driving my first 911 and my first Ferrari; and my first drives in a whole spectrum of iconic classics, like an XK120, a D-Type, and Aston Martin DB4 and DB6. And there was the time I drove Alan Clark up and down the Italian Alps in his Rolls-Royce Silver Ghost.

As well as my car passion I can barely recall a time when I wasn’t fascinated by gadgetry and technology. I used to build radios, mend TVs and practise old-fashioned analogue photography, including developing and printing, and I have dabbled with computers for as long as they’ve been around. In recent years I have presented The Gadget Show, the UK’s most popular consumer-technology programme.

Dr Ian Robertson, BMW’s global sales chief, said in June 2017 that ‘The car industry has done the same thing for over a hundred years. In the next five to seven years the car will change enormously. We’re at the tipping point in an industry worth $2 trillion a year.’ It’s an incredibly exciting time, but also a crucial one. Avid customers no longer queue round the block for a fresh model, like they did in the 1960s with a Ford Mustang. Instead, they are far more likely to camp outside an Apple store, waiting for the latest iPhone. If the car is to stay relevant, it needs to evolve.

‘Connected’, ‘autonomous’ and ‘zero emission’ are the industry buzzwords right now. According to a designer I spoke to from Jaguar Land Rover, every single one of the future cars in their studios is battery-powered and self-driving. But are our favourite brands on the right track? Or will our love of being behind the wheel win out? Will we really have the patience with battery-powered cars? Certainly it’s hard to see us giving up on cars as a whole but, in a world of rapid technological change, it feels like it’s time for the car industry to catch up. Let’s start with perhaps the most talked about and the most ambitious of the predicted changes: autonomy – or the car that drives itself.

ONE

CONNECTED AND AUTONOMOUS

the Rise of the Robot Cars

The bright-red Golf GTI weaved in and out of cones at the very limits of its grip. We took the sharp corner ahead and I was thrown to the left as the driver took a perfect racing line and began to accelerate out of the turn. I was being hurled round the test track at VW’s rather remote facility in Wolfsburg and feeling in awe of the test driver’s remarkable command of the course.

What made it more impressive was that the driver wasn’t even human. In fact, there was no visible driver at all. The steering wheel, accelerator and brake were all magically moving entirely of their own accord. This wasn’t even a Google car: it was 2006, when autonomous aspirations were yet to hit the mainstream. You can understand why I found the effect so stunning.

Car automation has a surprisingly long history. For decades, scientists have sought to slash the death toll on our roads by replacing the fallible human driver with a more capable technological alternative. Until recently such aspirations were confined to science fiction, their real-world potential thwarted by practicalities of technology and cost. But now, thanks to recent improvements in computer power, artificial intelligence, machine learning and sensor technologies, the impossible is becoming possible.

The driverless journey started with a radio-controlled car that hit the streets of New York in 1925. Inventor Francis Houdina fitted a brand-new Chandler with a radio receiver and ‘apparatus’ attached to the steering column. This turned in response to signals from a radio transmitter in a car following behind. According to a contemporary report in the New York Times, the car drove ‘as if a phantom hand were at the wheel’.

The initial unveiling didn’t go well. After making wildly uncertain progress down Broadway, the car narrowly missed a fire engine and crashed into a car full of news cameras recording the whole operation. Police instructed Houdina to abort the experiment. Even more bizarrely, the similarly named Harry Houdini became irritated by Houdina’s efforts and accused him of ‘using his name unlawfully in the conduct of business’. The famous magician broke into the Houdina Radio Control Co. and vandalised the place – a misdemeanour for which he was later summoned to court.

The automation journey stuttered on with ‘magic motorways’, which were first shown at General Motors’ ‘Futurama’ exhibit at the 1939 World’s Fair in New York. The brainchild of designer Norman Bel Geddes, the concept featured electromagnetic propulsion and guidance systems built into the road. Embedded circuits in the road were also behind experiments to guide cars by the American electronics company RCA, which started with model cars in 1953 and graduated to real ones in 1958. Sensors in the front bumpers picked up signals from a buried cable that provided information on roadworks and stalled cars ahead; the system would apply the brakes or change lanes as required. The company thought self-driving cars would be widespread on highways by 1975. The British government’s Road Research Laboratory (later the Transport Research Laboratory, or TRL) came up with a hands-free Citroën DS prototype a year or two later that worked in a similar way – and it too predicted that by the 1970s all motorways would feature a lane offering hands-free driving. Like many predictions that followed, these claims were wildly optimistic.

The autonomy spectrum

There are six levels of automation as defined by the Society of Automotive Engineers:

Level 0 No automation.

Level 1 The most basic level of automation, whereby just one function of the driving process is taken over. The car might have lane centring or adaptive cruise control but not both.

Level 2 In which multiple functions are controlled – both lane centring and adaptive cruise control, for example.

Level 3 So-called ‘conditional automation’, where the car can take control of safety-critical functions in certain situations – a low-speed traffic jam, for instance – but the driver must remain ready to take back control whenever the system requests it.

Level 4 Whereby cars are autonomous but only in controlled areas – say, a robotaxi operating on a housing estate. Level 4 cars do not need steering wheels or pedals. (Some wags have suggested that horses are a Level 4 autonomous vehicle.)

Level 5 The ‘fully autonomous’ stage. The car can take over completely and doesn’t require special lane markings or any other dedicated infrastructure; it really can self-drive anywhere, and the ‘driver’ can go to sleep or do anything they wish.
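For readers who like things spelled out in code, the spectrum above can be captured as a simple lookup – a sketch only, with the level names paraphrased from the SAE definitions:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The six SAE driving-automation levels, paraphrased."""
    NO_AUTOMATION = 0           # human does everything
    DRIVER_ASSISTANCE = 1       # one function automated: steering OR speed
    PARTIAL_AUTOMATION = 2      # several functions, driver supervises constantly
    CONDITIONAL_AUTOMATION = 3  # car drives itself in set cases, driver on standby
    HIGH_AUTOMATION = 4         # fully self-driving within a defined area
    FULL_AUTOMATION = 5         # self-driving anywhere, no human needed

def driver_must_supervise(level: SAELevel) -> bool:
    """Up to Level 2, the human must watch the road at all times."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_must_supervise(SAELevel.PARTIAL_AUTOMATION))   # True
print(driver_must_supervise(SAELevel.HIGH_AUTOMATION))      # False
```

The dividing line the industry cares about most is the one the function above encodes: from Level 3 upwards, responsibility starts shifting from the person to the machine.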

The first real stand-alone autonomous vehicle appeared in Japan in 1977, but it was far from being really roadworthy. Instead of buried electronics it relied on a huge computer that occupied most of the dashboard and the passenger footwell. Using information gleaned about its environment from inbuilt cameras, it could follow white lines on the tarmac – though only at a rather pedestrian 20 mph. Nevertheless, this was one of the first vehicles to move beyond level 0 on today’s autonomy spectrum, as defined by the American organisation SAE International, formerly known as the Society of Automotive Engineers.

German aerospace engineer Ernst Dickmanns upped the levels of speed and artificial intelligence with the help of a boxy Mercedes van. The VaMoRs prototype was tested successfully in 1986 and drove itself at 60 mph on the autobahn a year later. It led the pan-European research organisation EUREKA to launch the painfully named PROgraMme for European Traffic of Highest Efficiency and Unprecedented Safety, or PROMETHEUS project. With a significant injection of €749 million, researchers at the University of Munich developed camera technology, software and computer processing that culminated in two impressive robot vehicles: VaMP and VITA-2, both based on the Mercedes S-Class. In 1994, these piloted themselves accurately through traffic along a 600-mile stretch of highway near Paris at up to 80 mph. A year later, they clocked up 108 mph on a journey from Munich to Copenhagen that included a 98-mile stretch without human assistance.

Many manufacturers started developing limited autonomous features around this time, but they were strictly aimed at driver assistance and certainly couldn’t contend with the vast range of hazards we encounter all the time on the road. This would soon change when a new player entered the game: the US military. At the dawn of the twenty-first century, they sponsored the DARPA Grand Challenges, in which a $1 million prize was promised to the team of engineers whose vehicle could navigate itself fastest around a 150-mile obstacle course. Although no vehicles finished the inaugural event in 2004, it generated hype and helped spur innovation. Five vehicles finished the next year’s challenge, with a team from Stanford nabbing the $2 million prize.

The Stanford team caught the eye of a certain technology company called Google and the rest is history. In 2010, Google announced that it had been secretly developing and testing a self-driving car system with the aim of cutting the number of car crashes in half. The project, which would later be renamed Waymo, was headed by Sebastian Thrun, director of the Stanford Artificial Intelligence Laboratory, and its goal was to launch a vehicle commercially by 2020.

Six Toyota Priuses and an Audi TT comprised the initial test fleet. Equipped with sensors, cameras, lasers, a special radar and GPS technology, they were completely interactive with their environment rather than restricted to a prescribed test route. The system could detect hazards and identify objects like people, bicycles and other cars at distances of several hundred metres. A test driver was always in the car to take over if necessary.

Google’s involvement prompted an explosion of interest in the subject. Investment by established brands in the technology and automotive industries ballooned, along with a bevy of new start-ups. According to American think tank The Brookings Institution, $80 billion was spent on self-driving car attempts between 2014 and 2017. This may prove to be a giant capitalist mistake that’ll make the South Sea Bubble, tulip mania and the subprime mortgage meltdown seem positively rational by comparison.

As usual, the targets of when full-scale autonomy would really be achieved were often overly ambitious. It becomes easier to see why when you appreciate how these wonders of technology are actually supposed to work.

Sensing the road

This brave new world of genuinely intelligent cars requires a diverse array of hardware with which the car tries to gain an accurate perception of its environment.

The most expensive, spectacular and distinctive sensors on a self-driving car are LiDAR (Light Detection and Ranging) units, usually housed in a roof pod. These systems bounce low-powered invisible laser beams off objects to create extremely detailed and accurate 3D maps of their surroundings. Their field of view can be up to 360 degrees and, because LiDAR supplies its own illumination, it works in any lighting conditions.

Scientists have been using lasers to measure distances since the 1960s, when a team from the Massachusetts Institute of Technology (MIT) accurately logged the distance to the moon by measuring how long the light took to travel there and back. Its pioneering use in cars began with an experiment carried out in 2007 by an audio-equipment company called Velodyne. Five vehicles equipped with the company’s revolutionary new sensor successfully navigated a simulated urban environment.
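The ranging sum itself is trivial: halve the round-trip time of the light pulse and multiply by the speed of light. A few illustrative lines (the 2.56-second figure is the approximate Earth–moon round trip):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance implied by a light pulse's round-trip time (there and back)."""
    return C * round_trip_s / 2

# MIT-style lunar ranging: a ~2.56 s round trip puts the moon ~384,000 km away
print(round(tof_distance_m(2.56) / 1000))   # 383734 (km)

# A LiDAR echo arriving after one microsecond means a target ~150 m out
print(round(tof_distance_m(1e-6), 1))       # 149.9 (m)
```

The one-microsecond example is why the range limits discussed below matter: doubling the usable range only buys the car an extra microsecond of echo time, but demands a much stronger (and eye-safe) beam.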

In 2016, LiDAR could cost around $75,000 per car. As of 2019 this sum has fallen to around $7,500 for a top-of-the-range unit. That needs to fall further, and Ford is targeting approximately $500 as a future cost for the component. At present, most cars use one LiDAR unit, which creates a 360-degree map by either rotating the whole assembly of lasers or by using rapidly spinning mirrors. Many researchers think a key to lowering the cost will be solid-state designs with few or no moving parts, eliminating the need for such spinning mechanisms.

Mirrors could possibly be eliminated by so-called phased arrays, which use a row of laser emitters. If they all emit in sync the laser travels in a straight line, but by adjusting the timing of the signals the beam can shift from left to right. Flash LiDAR is another possibility. This operates more like a camera. A single laser beam is diffused to illuminate an entire scene in an instant. A grid of tiny sensors then captures the light bouncing back from various directions. It’s good because it captures the entire scene in one moment, but it currently results in more noise and less accurate measurement.
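The phased-array steering trick reduces to one line of trigonometry: the sine of the beam angle equals the speed of light times the per-emitter delay, divided by the emitter spacing. A sketch with invented numbers (the 1 mm spacing and picosecond stagger are purely illustrative):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def steering_angle_deg(delay_s: float, spacing_m: float) -> float:
    """Beam angle produced by a uniform per-emitter firing delay.

    sin(angle) = C * delay / spacing; zero delay fires straight ahead.
    """
    return math.degrees(math.asin(C * delay_s / spacing_m))

# Hypothetical array with emitters 1 mm apart:
print(steering_angle_deg(0.0, 1e-3))              # 0.0 -- straight ahead
print(round(steering_angle_deg(1.14e-12, 1e-3)))  # ~20 degrees from ~1 ps stagger
```

The appeal for car makers is visible in the numbers: sweeping the beam takes only picosecond-scale timing changes, with nothing mechanical to spin, wear out or go out of balance.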

Laser-powered eyes on the road. LiDAR sensors are getting smaller and cheaper.

There are other stumbling blocks. Once most cars on the road have LiDAR, the systems could start interfering with one another: they normally fire their lasers in a straight line and rely on a super-accurate clock, so they could easily be upset by lasers from other cars operating in the same range. Similarly, sceptics worry about the system’s ability to cope in awful weather. Lastly, to avoid eye damage the lasers are fairly weak, and currently limited in range to about 150 metres. For a car to accelerate and join a stream of fast-moving traffic, the laser range needs to be at least 300 metres. LiDAR manufacturers are working on increasing the laser frequency to allow stronger output with a beam that is further from the visible light range. As the systems improve, it is likely other shortcomings will be dealt with too. The technology already functions decently in snow and rain, and it is getting better at avoiding interference.

While LiDAR allows the car to ‘see’ over short distances, a different solution is needed for longer distances. This is where radar comes in. Many new cars already have radar sensors, used for adaptive cruise control, blind-spot protection and automatic emergency-braking systems. Their field of view is about 10 degrees and they’re relatively cheap at between £80 and £120 per sensor.

Traditionally radar’s main advantage is its ability to perceive distance and velocity. It can measure speed from a long way away and it’s a well-proven technology. Radar can even see round things: its wavelengths are relatively long, so there’s significant diffraction and forward reflection, meaning you can ‘see’ objects hidden behind other ones. There’s a video on YouTube, shot inside a moving car, that shows radar in action: the car’s automatic emergency-braking system suddenly activates and the brakes are applied, even though the view ahead shows nothing out of the ordinary. Half a second later the car in front rear-ends the car ahead of it. The radar had spotted that the (optically hidden) car two vehicles ahead had braked suddenly, and braked hard itself to avoid a crash.

Radar’s big disadvantage, and why it needs to be supplemented by other sensors, is that it can’t perceive detail. Everything’s just a blob. It’s no good at distinguishing between a pedestrian and a cyclist, even though it can tell whether they’re moving or stationary. A Waymo’s LiDAR, on the other hand, can not only tell the difference but can also tell which way the pedestrian or cyclist is facing.

Ultrasonic sensors are used to measure the position of objects very close to the vehicle. We’re accustomed to them in those bleeping parking sensors. They were invented in the 1970s, and the first volume-production car they appeared on was the 2003 Toyota Prius. Their range might be a mere 10 metres or so, but they are very cheap and provide essential extra information in low-speed manoeuvring and about adjacent traffic.

High-resolution video cameras are an important part of a self-driving car’s equipment. They are used to recognise things like traffic lights, road markings and street signs – objects that offer visual clues but no depth information. Cameras can also detect colour, which LiDAR can’t, and they’re better at discerning differences in texture. When in stereo they can also help calculate an object’s distance – although this effect diminishes the further away something is, which limits the technique’s usefulness in high-speed driving. They are relatively cheap at around £150 per car but they are greatly affected by prevailing light conditions and visibility. Infrared can help here to some extent.
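The stereo calculation is the classic pinhole-camera formula – depth equals focal length times baseline divided by disparity – and it shows neatly why accuracy collapses with distance: far-off objects produce tiny pixel shifts, so a one-pixel error swings the estimate wildly. A sketch with made-up camera numbers:

```python
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from a stereo camera pair: Z = f * B / d (pinhole model)."""
    if disparity_px <= 0:
        return float("inf")   # no measurable shift: effectively at infinity
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 1000 px focal length, cameras mounted 30 cm apart
print(stereo_depth_m(1000, 0.3, 30))  # 10.0 m
print(stereo_depth_m(1000, 0.3, 3))   # 100.0 m -- here a 1 px disparity error
                                      # shifts the estimate by tens of metres
```

At 10 metres a one-pixel mistake barely matters; at 100 metres it is the difference between 75 and 150 metres, which is why cameras alone aren’t trusted for high-speed ranging.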

You might think that GPS would also be critical for getting around. However, it is normally accurate to only a few metres and not consistently so, with the signal being easily interrupted by tall buildings and tunnels; its role in the autonomous car is therefore somewhat limited. It can, however, be useful in combination with other sensors. MIT, for example, has built a self-driving Prius that manages its way round back roads pretty well using just GPS, LiDAR and accelerometers.

The brain of the car

Of course, all these sensors would be useless without something to interpret their data. Processing the vast quantity of incoming information – and processing it sharpish – requires a very powerful computer with far more heft than the average PC. Even current cars like Waymos are thought to generate up to 150 gigabytes of data every 30 seconds. That’s enough to fill many laptop hard drives and is equivalent to 18 terabytes per hour. The cars also need to store these colossal quantities of information in case of later enquiries, crashes and disputes. This takes its toll on energy consumption, with the typical prototype needing 5,000 watts of power – about the same as forty typical desktop PCs. That’s enough to have a serious hit on fuel consumption or battery range. Imagine switching off the self-driving to conserve the battery and get you to your destination. That wouldn’t feel like progress.
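That throughput is easy to sanity-check (decimal units assumed throughout):

```python
gb_per_30s = 150                  # reported data generated every 30 seconds
windows_per_hour = 3600 // 30     # 120 half-minute windows in an hour
gb_per_hour = gb_per_30s * windows_per_hour
tb_per_hour = gb_per_hour / 1000  # decimal terabytes

print(tb_per_hour)      # 18.0 TB generated per hour of driving
print(tb_per_hour * 8)  # 144.0 TB over an eight-hour shift
```

A robotaxi working an eight-hour shift would churn out well over a hundred terabytes, which is why on-board storage and what to keep for the lawyers are engineering questions in their own right.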

There’s a lot of effort going into making processors that are more suited to the demands of the self-driving car. Google has developed new chips especially designed for self-driving tasks, called Tensor Processing Units. They fire up only those bits of the chip necessary for a given task, which allows more operations per second and better power efficiency. They’re suited to machine learning of many kinds because they can handle a lot of relatively low-power processing tasks at once.

Nvidia, best known for making graphics cards for computers, has come up with a chip called Xavier. It performs 30 trillion operations per second (about 500 times more than a decent laptop) while consuming just 30 watts of power. It packs an amazingly powerful and efficient punch, and it’s the most complex system-on-a-chip ever created, with an 8-core CPU, a 512-core GPU, a deep-learning accelerator, computer-vision accelerators and 8K video processors. ‘We’re bringing supercomputing from the data centre into the car,’ says the bloke leading their automotive work, Danny Shapiro. Still, there’s a long way to go until the systems are sufficiently energy- (and cost-) efficient for the mainstream. And you can have the most powerful computers in the world, but without the right software they are just expensive black boxes.

Artificial intelligence

If you thought the hardware was complicated, the software needed to make self-driving cars an everyday reality is hundreds of times more brain-defyingly baffling. This is possibly the greatest test of artificial intelligence the world has ever seen.

The idea is that the car takes the various bits of information from all the sensors and combines them to determine where it is, exactly what surrounds it and how those surroundings will change – and then plots a course of action through the space. This all has to be done within milliseconds and to unfailingly high levels of accuracy.

The car uses a technology called deep learning. I’m oversimplifying, but essentially the on-board computer turns all the information from the sensors into a vast matrix containing billions of bits of digital information. It then searches for known patterns within the data, which it can use to select the right behaviours; the layered pattern-matching machinery that does this is called a neural network. Once patterns are detected and behaviours decided upon, the decision is translated into physical form through the accelerator, brakes and steering, as well as other systems like the lights and even the horn.
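To give a flavour of the ‘matrix in, behaviour out’ shape of that computation, here is a toy two-layer network with random stand-in weights – nothing like a production system, but the arithmetic is the same in kind:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: four fused sensor readings in, three candidate behaviours out.
# Real weights would be learned from data; random values here just show the shape.
sensors = np.array([0.9, 0.1, 0.0, 0.4])
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

hidden = np.maximum(0.0, W1 @ sensors + b1)  # ReLU layer: crude pattern detectors
scores = W2 @ hidden + b2                    # one score per behaviour, say
                                             # 0 = brake, 1 = steer, 2 = accelerate
action = int(np.argmax(scores))              # pick the highest-scoring behaviour
print(action)
```

A real driving network has millions of weights and dozens of layers rather than two, but the pipeline – multiply, threshold, multiply, choose – is exactly this, repeated at enormous scale.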

Extra intelligence

Self-driving isn’t the only way in which cars will become brainier.

In a straight rip-off from mobile phones, facial recognition could soon be the way you unlock and start your car. Another phone feature coming to your car is a voice assistant. Siri, Alexa and the like will get much better at playing your music and answering life’s pressing questions. Electric-car users will increasingly be able to choose what noise their car makes – a sort of ring-tone selection for cars. BMW has already recruited Blade Runner 2049 composer Hans Zimmer to provide an attractive tone to warn pedestrians of its near-silent cars’ presence.

Your new motor will soon be clever enough to receive your deliveries wherever you are. Services will be able to unlock it and leave parcels or even your cleaned laundry inside.

Self-parking will come well before self-driving. Summoning your car a few metres by an app on your smartphone or letting it take over parallel-parking tasks will soon seem very old school. Instead your car will know in advance where available spaces are and navigate itself around a car park to park itself. Then you’ll be able to retrieve it automatically when required.

Sound management will get better, with improvements in noise cancellation and vocal enhancement tailored to different seats in the car so you can talk to fellow passengers more easily. Furthermore, increasing proportions of the interior will be plastered with screens so you can customise it with your favourite dashboard style or interior graphics. Augmented reality displays will help streamline GPS guidance and put useful instructions on your windscreen.

In short, the car operates by detecting patterns in data and responding to them. It’s the same sort of technology that’s helped to predict things ranging from earthquakes to heart disease and has developed the capacity to analyse cancer scans, identify eye disease and muscle damage – all more quickly and accurately than human doctors. It’s also proved very useful, if not creepily disturbing, in facial recognition. Episodes like the recording of biometric information about concert-goers when they snap themselves in selfie booths and people being mistakenly arrested for shoplifting merely because they share similar features with criminals have brought facial recognition into some disrepute, and show how deep learning can fall short.

Identifying patterns on the road is far more demanding in terms of the variety of things that need to be recognised and the short time available to do so. Creating software that then knows how to respond to these perceived environments is even more of a nightmare. The most obvious way the car is taught how to behave is through ‘behavioural cloning’ – accumulating data from how (decent) human drivers behave. Then the system can practise and learn, improving its own driving as it gets better at making decisions itself, while being watched by a real driver in case things go wrong – which they will.
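The idea of behavioural cloning can be sketched in a few lines of code. This is a deliberately toy illustration, not anything a real self-driving system uses: the lane offsets, steering values and function names are all invented, and the ‘learning’ here is just imitating the nearest recorded human decision.

```python
# Toy behavioural cloning: learn a steering policy by imitating
# recorded (situation, action) pairs from human drivers.
# All numbers and names below are invented for illustration.

# Demonstrations: lane offset in metres -> steering command the human gave
demonstrations = [(-0.5, 0.3), (-0.2, 0.1), (0.0, 0.0), (0.2, -0.1), (0.5, -0.3)]

def cloned_policy(lane_offset):
    """Act like the human did in the most similar recorded situation."""
    nearest = min(demonstrations, key=lambda d: abs(d[0] - lane_offset))
    return nearest[1]

# Drifting right (offset 0.4 m) -> steer left, as the humans did
print(cloned_policy(0.4))   # -0.3
```

A real system replaces this lookup with a neural network trained on millions of such pairs, but the principle is the same: copy what good drivers did in similar situations.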

At its most basic, a system might recognise an open road from vast experience of seeing pictures of a clear road ahead, translating that into an action to accelerate up to a predetermined speed limit. At the more challenging end of the scale, it has to cope with a busy, previously un-tackled junction, packed with other cars, trucks, pedestrians and cyclists. It has to predict how they’re all going to behave and respond accordingly.

Sebastian Thrun thinks accurate perception is the most difficult challenge. In the early days of Google’s autonomous-vehicle project, he recalls, ‘our perception module could not distinguish a plastic bag from a flying child’. As already indicated, it’s getting better. At a Google conference in 2018, Waymo showed examples of a pedestrian carrying a door-sized plank of wood, a construction worker poking halfway out of a manhole cover and even people in inflatable dinosaur costumes. In each case the pedestrian’s profile was obscured but the Waymo car correctly identified them as pedestrians.

There are issues with any deep-learning system. In effect, every situation it faces will need to have been experienced before in some way; otherwise it won’t know how to react. One problem is termed ‘overfitting’. The system can draw correlations between totally irrelevant attributes. One could imagine trying to predict the score of a die from the time of day or even its colour. An artificial-intelligence program will always try to construct a hypothesis as to why the scores are occurring based on whatever factors it has at its disposal. The problem gets worse as more factors are considered. Imagine I turn left – the system may intuit that I did so because I’m 200 metres from a cyclist, when I happen to have done similar several times before simply because I was in this part of town at this time of day. This is all a gross oversimplification, but it helps to illustrate the scale of the AI challenges in relation to driving when there are so many variables.
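The die example above can be made concrete with a small sketch. This is an invented demonstration, not production code: a model that simply memorises which roll happened at which hour scores perfectly on the data it has seen, yet is no better than chance on fresh rolls, because time of day carries no real information.

```python
import random
random.seed(0)

# Overfitting sketch (invented data): try to 'predict' a die roll from
# the irrelevant time of day. Memorising the training data gives a
# perfect training score but generalises no better than guessing.
train = [(hour, random.randint(1, 6)) for hour in range(24)]
model = dict(train)  # one 'rule' per hour: pure memorisation

train_accuracy = sum(model[h] == roll for h, roll in train) / len(train)

test = [(hour, random.randint(1, 6)) for hour in range(24)]
test_accuracy = sum(model[h] == roll for h, roll in test) / len(test)

print(train_accuracy)  # 1.0 -- perfect on what it memorised
print(test_accuracy)   # roughly 1/6 -- chance level on new rolls
```

The gap between the two accuracies is the hallmark of overfitting: the ‘hypothesis’ fits the noise in the data it was shown, not anything real.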

‘Underfitting’ is the opposite. The AI system can’t always capture the correlations we want it to. For example, it might not know where the edge of the road is, or perhaps it can’t deduce from the laser and camera data that a pedestrian or cyclist is ahead. The usual way to avoid underfitting is to feed the system more data, or to give it more experience of how its learned correlations match up with the real world. Self-driving car companies have thousands of people manually tagging images with useful information to help avoid underfitting, supplementing neural networks with known real-world data.
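Underfitting can be sketched just as simply. In this invented example, distance to an obstacle clearly determines whether to brake, but a model too simple to use its input at all misses the pattern entirely, while one with just enough capacity captures it.

```python
# Underfitting sketch (invented data): (distance in metres, should_brake)
data = [(2, 1), (5, 1), (8, 1), (20, 0), (30, 0)]

def constant_model(distance):
    return 0  # too simple: predicts 'never brake' whatever the input

def threshold_model(distance):
    return 1 if distance < 10 else 0  # has capacity to use the input

def accuracy(model):
    return sum(model(d) == label for d, label in data) / len(data)

print(accuracy(constant_model))   # 0.4 -- misses every braking case
print(accuracy(threshold_model))  # 1.0 -- captures the real pattern
```

The constant model underfits: no amount of tuning will help it, because it cannot express the correlation at all. That is why the fix is more data, richer inputs or a more capable model.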

‘Generalisation’ is another problem. If we know what a mouse looks like and what a gerbil looks like, we humans can interpret a hamster as something between the two – another mammalian rodent. Artificial intelligence finds this difficult. It struggles to take what it already ‘knows’ and come up with something different that’s sensible. It either doesn’t recognise the new object at all or it creates constantly varying descriptions. This is why chatbots really aren’t very good at speaking yet. They don’t give you the impression they understand anything; they’re just repeating things they’ve heard before where they gauge it to be appropriate.

Watching AI at work. The Nvidia chips generate multicoloured oblongs or other confidence-inspiring highlights as they identify familiar shapes like cars, people and bicycles.

Self-driving car software is what AI experts call a ‘black box’ system. You know what the inputs are. You know what the outputs are. But how the system derives the outputs from the inputs is a mystery. We don’t really understand how the algorithms work or how cars ‘think’. Nvidia has attempted to visualise this in a self-driving context by highlighting the parts of the image from a car’s sensors that are involved in decision-making. Reassuringly the results show that the chips are focusing on the edges of roads, lane markings and parked cars – exactly the things that human drivers would be attending to.
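One common way to peer into such a black box is occlusion: blank out one part of the input at a time and see how much the output changes. The following is a toy version of that idea, with an invented three-feature ‘model’ standing in for a real network – Nvidia’s actual visualisation technique is far more sophisticated, but the principle of measuring which inputs the decision depends on is similar.

```python
# Toy occlusion-style attribution (all data invented): blank out one
# input at a time and measure how much the model's output changes.
# Inputs the model relies on score high; irrelevant ones score zero.

def model(features):
    # Pretend 'steering relevance' score: depends on lane markings
    # and the parked car, ignores the sky entirely.
    lane, car, sky = features
    return 2.0 * lane + 1.0 * car + 0.0 * sky

baseline = model([1.0, 1.0, 1.0])
names = ["lane markings", "parked car", "sky"]

importances = {}
for i, name in enumerate(names):
    occluded = [1.0, 1.0, 1.0]
    occluded[i] = 0.0  # blank out this part of the 'image'
    importances[name] = abs(baseline - model(occluded))
    print(name, importances[name])
```

Here the sky scores zero importance while the lane markings score highest – the same reassuring sort of result the Nvidia visualisations show, where the highlighted regions turn out to be road edges, lane markings and parked cars.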