Machines Behaving Badly

Toby Walsh

Description

Can we build moral machines? Artificial intelligence is an essential part of our lives – for better or worse. It can be used to influence what we buy, who gets shortlisted for a job and even how we vote. Without AI, medical technology wouldn't have come so far, we'd still be getting lost in our GPS-free cars, and smartphones wouldn't be so, well, smart. But as we continue to build more intelligent and autonomous machines, what impact will this have on humanity and the planet? Professor Toby Walsh, a world-leading researcher in the field of artificial intelligence, explores the ethical considerations and unexpected consequences AI poses. Can AI be racist? Can robots have rights? What happens if a self-driving car kills someone? What limitations should we put on the use of facial recognition? Machines Behaving Badly is a thought-provoking look at the increasing human reliance on robotics and the decisions that need to be made now to ensure the future of AI is a force for good, not evil.




 

 

 

Published 2022 by arrangement with Black Inc.

First published in Australia and New Zealand by La Trobe University Press, 2022

FLINT is an imprint of The History Press

97 St George’s Place, Cheltenham,

Gloucestershire, GL50 3QB

www.flintbooks.co.uk

© Toby Walsh, 2022

The right of Toby Walsh to be identified as the Author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without the permission in writing from the Publishers.

British Library Cataloguing in Publication Data.

A catalogue record for this book is available from the British Library.

ISBN 978 1 8039 9084 2

Cover design by Tristan Main

Text design and typesetting by Tristan Main

Cover illustrations by Lemonsoup14 / Shutterstock, Mykola / Adobe Stock

Printed and bound in Great Britain by TJ Books Limited, Padstow, Cornwall.

eBook converted by Geethik Technologies

To A and B, my A to Z

But remember, please, the Law by which we live,

We are not built to comprehend a lie,

We can neither love nor pity nor forgive.

If you make a slip in handling us you die!

We are greater than the Peoples or the Kings—

Be humble, as you crawl beneath our rods!—

Our touch can alter all created things,

We are everything on earth—except The Gods!

Though our smoke may hide the Heavens from your eyes,

It will vanish and the stars will shine again,

Because, for all our power and weight and size,

We are nothing more than children of your brain!

FROM ‘THE SECRET OF THE MACHINES’, BY RUDYARD KIPLING

CONTENTS

AI

Strange intruders

Warning signs

Breaking bad

The People

The geeks taking over

The sea of dudes

The godfathers of AI

The crazy Valley

The shadow of Ayn Rand

Techno-libertarians

Transhumanists

Wishful thoughts

The Tenderloin

Project Maven

The Companies

The new titans

Nothing ventured

Super-intelligence

The climate emergency

Bad behaviour

Corporate values

Google’s principles

IBM’s thinking

Rethinking the corporation

Autonomy

A new challenge

The rubber hits the road

The upside

The downside

High stakes

How self-driving cars drive

Magnificent machines

Trolley problems

Moral machines

Killer robots

Laws banning LAWS

The rules of war

Humans v. Machines

Life 1.0

The demon in the machine

Emotions

Pain and suffering

Robot rights

Sophia the puppet

Human weaknesses

Ethical Rules

The last invention

Fictional rules

Responsible robots

The academy speaks

Europe leads

The ethics bandwagon

Human, not robot rights

This isn’t the first time

Medical lessons

Powerful concerns

Fairness

Mutant algorithms

Predictive policing

Sentencing

Prediction errors

The Partnership

Alexa is racist

Alexa is sexist

Your computer boss

Insuring fairness

Algorithmic fairness

The future

Privacy

The history of privacy

Privacy and technology

Predicting the future

Intrusive platforms

Face recognition

The ‘gaydar’ machine

Trees in the forest

Analogue privacy

A private future

The Planet

Green AI

On the edge

Big Oil

Climate action

AI for good

The Way Ahead

Moral machines

Trusting AI

Transparency

Technical fixes

Regulatory fixes

Educational fixes

The gift of the machines

Epilogue: The Child of Our Brains

31 December 2061

About the Author

Acknowledgements

Notes

AI

You surely know what artificial intelligence is. After all, Hollywood has given you plenty of examples.

Artificial intelligence is the terrifying T-800 robot played by Arnold Schwarzenegger in the Terminator movies. It is Ava, the female humanoid robot in Ex Machina that deceives humans to enable it to escape from captivity. It is the Tyrell Corporation Nexus-6 replicant robot in Blade Runner, trying to save itself from being ‘retired’ by Harrison Ford.

My personal favourite is HAL 9000, the sentient computer in 2001: A Space Odyssey. HAL talks, plays chess, runs the space station – and has murderous intent. HAL voices one of the most famous lines ever said by a computer: ‘I’m sorry, Dave. I’m afraid I can’t do that.’

Why is it that the AI is always trying to kill us?

In reality, artificial intelligence is none of these conscious robots. We cannot yet build machines that match the intelligence of a two-year-old. We can, however, program computers to do narrow, focused tasks that humans need some sort of intelligence to solve. And that has profound consequences.

If artificial intelligence is not the stuff of Hollywood movies, then what is it? Oddly enough, AI is already part of our lives. However, much of it is somewhat hidden from sight.

Every time you ask Siri a question, you are using artificial intelligence. It is speech recognition software that converts your speech into a natural language question. Then natural language processing algorithms convert this question into a search query. Then search algorithms answer this query. And then ranking algorithms predict the most ‘useful’ search results.

If you’re lucky enough to own a Tesla, you can sit in the driver’s seat, not driving, while the car drives itself autonomously along the highway. It uses a whole host of AI algorithms that sense the road and environment, plan a course of action and drive the car to where you want to go. The AI is smart enough that, in these limited circumstances, you can trust it with your life.

Artificial intelligence is also the machine-learning algorithms that predict which criminals will reoffend, who will default on their loans, and whom to shortlist for a job. AI is touching everything from the start of life, predicting which fertilised eggs to implant, to the very end, powering chatbots that spookily bring back those who have died.

For those of us working in the field, the fact that AI often falls out of sight in this way is gratifying evidence of its success. Ultimately, AI will be a pervasive and critical technology, like electricity, that invisibly permeates all aspects of our lives.

Almost every device today uses electricity. It is an essential and largely unseen component of our homes, our cars, our farms, our factories and our shops. It brings energy and data to almost everything we do. If electricity disappeared, the world would quickly grind to a halt. In a similar way, AI will shortly become an indispensable and mostly invisible component of our lives. It is already providing the smartness in our smartphones. And soon it will be powering the intelligence in our self-flying cars, smart cities, and intelligent offices and factories.

A common misconception is that AI is a single thing. Just like our intelligence is a collection of different skills, AI today is a collection of different technologies, such as machine learning, natural language processing and speech recognition. Because many of the recent advances in AI have been in the area of machine learning, artificial intelligence is often mistakenly conflated with it. However, just as humans do more than simply learn how to solve tasks, AI is about more than just machine learning.

We are almost certainly at the peak of inflated expectations in the hype cycle around AI. And we will likely descend shortly into a trough of disillusionment as reality fails to match expectations. If you added up everything written in the newspapers about the progress being made, or believed the many optimistic surveys, you might suspect that computers will soon be matching or even surpassing humans in intelligence.

The reality is that while we have made good progress in getting machines to solve narrow problems, we have made almost no progress on building more general intelligence that can tackle a wide range of problems. It would be impossible to list all the narrow applications in which AI is now being used, but I will mention a few to illustrate the wide variety. AI is currently being used to:

• detect malware

• predict hospital admissions

• check legal contracts for errors

• prevent money laundering

• identify birds from their song

• predict gene function

• discover new materials

• mark essays

• identify the best crops to plant, and

• (controversially) predict crime and schedule police patrols.

Indeed, you might think it would be easier to list the areas where AI is not being used – except that it’s almost impossible to think of any such area. Anyway, what this makes clear is that AI shows significant promise for transforming our society.

The potential advantages of AI encompass almost every sector, and include agriculture, banking, construction, defence, education, entertainment, finance, government, healthcare, housing, insurance, justice, law, manufacturing, mining, politics, retail and transportation.

The benefits of AI are not purely economic. Artificial intelligence also offers many opportunities for us to improve our societal and environmental wellbeing. It can, for example, be used to make buildings and transportation more efficient, help conserve the planet’s limited resources, provide vision to those who cannot see, and tackle many of the wicked problems facing the world, like the climate emergency.

Alongside these benefits, AI also presents significant risks. These include the displacement of jobs, an increase in inequality within and between countries, the transformation of war, the corrosion of political discourse, and the erosion of privacy and other human rights. Indeed, we are already seeing worrying trends in many of these areas.

STRANGE INTRUDERS

One of the challenges of any new technology is its unexpected consequences. As the social critic Neil Postman put it in 1992, we ‘gaze on technology as a lover does on his beloved, seeing it as without blemish and entertaining no apprehension for the future’.1 Artificial intelligence is no exception. Many – and I count myself among them – look lovingly upon its immense potential. It has been called by some our ‘final invention’. And the unexpected consequences of AI may be the most consequential of any in human history.

In a 1998 speech titled ‘Five Things We Need to Know about Technological Change’, Postman summarised many of the issues that should concern you today about AI as it takes on ever more important roles in your life.2 His words ring even truer now than they did almost 25 years ago. His first advice:

Technology giveth and technology taketh away. This means that for every advantage a new technology offers, there is always a corresponding disadvantage. The disadvantage may exceed in importance the advantage, or the advantage may well be worth the cost . . . the advantages and disadvantages of new technologies are never distributed evenly among the population. This means that every new technology benefits some and harms others.

He warned:

That is why we must be cautious about technological innovation. The consequences of technological change are always vast, often unpredictable and largely irreversible. That is also why we must be suspicious of capitalists. Capitalists are by definition not only personal risk takers but, more to the point, cultural risk takers. The most creative and daring of them hope to exploit new technologies to the fullest, and do not much care what traditions are overthrown in the process or whether or not a culture is prepared to function without such traditions. Capitalists are, in a word, radicals.

And he offered a suggestion:

The best way to view technology is as a strange intruder, to remember that technology is not part of God’s plan but a product of human creativity and hubris, and that its capacity for good or evil rests entirely on human awareness of what it does for us and to us.

He concluded his speech with a recommendation:

In the past, we experienced technological change in the manner of sleep-walkers. Our unspoken slogan has been ‘technology über alles’, and we have been willing to shape our lives to fit the requirements of technology, not the requirements of culture. This is a form of stupidity, especially in an age of vast technological change. We need to proceed with our eyes wide open so that we may use technology rather than be used by it.

The goal of this book is to open your eyes to this strange intruder, to get you to think about the unintended consequences of AI.

History provides us with plenty of troubling examples of the unintended consequences of new technologies. When Thomas Savery patented the first steam-powered pump in 1698, no one was worrying about global warming. Steam engines powered the Industrial Revolution, which ultimately lifted millions out of poverty. But we are now seeing the unintended consequences of all that the steam engine begat today, both literally and metaphorically. The climate is changing, and millions are starting to suffer.

In 1969, when the first Boeing 747 took to the air, the age of affordable air travel began. It seems to have been largely forgotten, but the world at that time was in the midst of a deadly pandemic. This was caused by a strain of the influenza virus known as ‘the Hong Kong flu’. It would kill over a million people. No one, however, was concerned that the 747 was going to make things worse. But by making the world smaller, the 747 almost certainly made the current COVID-19 global pandemic much deadlier.

Can we ever hope, then, to predict the unintended consequences of AI?

WARNING SIGNS

Artificial intelligence offers immense potential to improve our wellbeing, but equally AI could be detrimental to the planet. So far, we have been very poor at heeding any warning signs. Let me give just one example.

In 1959, a data science firm called the Simulmatics Corporation was founded, with the goal of using algorithms and large data sets to target voters and consumers. The company’s first mission was to win back the White House for the Democratic Party and install John F. Kennedy as president. The company used election returns and public-opinion surveys going back to 1952 to construct a vast database that sorted voters into 480 different categories. The company then built a computer simulation of the 1960 election in which they tested how voters would respond to candidates taking different positions.

The simulations highlighted the need to win the Black vote, and that required taking a strong position on civil rights. When Martin Luther King Jr was arrested in the middle of the campaign, JFK famously called King’s wife to reassure her, while his brother, Robert F. Kennedy, called a judge the next day to help secure King’s release. These actions undoubtedly helped the Democratic candidate win many Black votes.

The computer simulations also revealed that JFK needed to address the issue of his Catholicism and the prevailing prejudices against this. JFK followed this advice and talked openly about his religious beliefs. He would become the first (and, until Joe Biden, the only) Catholic president of the United States.

On the back of this success, Simulmatics went public in 1961, promising investors it would ‘engage principally in estimating probable human behavior by the use of computer technology’. This was a disturbing promise. By 1970 the company was bankrupt; it would remain largely forgotten until quite recently.3

You’ve probably noticed that the story of Simulmatics sounds eerily similar to that of Cambridge Analytica before its own bankruptcy in 2018. Here was another company mining human data to manipulate US elections. Perhaps more disturbing still is that this problem had been predicted at the very dawn of computing, by Norbert Wiener in his classic and influential text The Human Use of Human Beings: Cybernetics and Society.4

Wiener saw past the optimism of Alan Turing and others to identify a real danger posed by the recently invented computer. In the penultimate chapter of his book, he writes:

[M]achines . . . may be used by a human being or a block of human beings to increase their control over the rest of the race or that political leaders may attempt to control their populations by means not of machines themselves but through political techniques as narrow and indifferent to human possibility as if they had, in fact, been conceived mechanically.

The chapter then ends with a warning: ‘The hour is very late, and the choice of good and evil knocks at our door.’

Despite these warnings, we walked straight into this political minefield in 2016, first with the Brexit referendum in the United Kingdom and then with the election of Donald Trump in the United States. Machines are now routinely treating humans mechanically and controlling populations politically. Wiener’s prophecies have come true.

BREAKING BAD

It’s not as if the technology companies have been hiding their intentions. Let’s return to the Cambridge Analytica scandal. Much of the public concern was about how Facebook helped Cambridge Analytica harvest people’s private information without their consent. And this was, of course, bad behaviour all round.

But there’s a less discussed side to the Cambridge Analytica story, which is that this stolen information was then used to manipulate how people vote. In fact, Facebook had employees working full-time in the Cambridge Analytica offices in Tucson, Arizona, helping it micro-target political adverts. Cambridge Analytica was one of Facebook’s best customers during the 2016 elections.5

It’s hard to understand, then, why Facebook CEO Mark Zuckerberg sounded so surprised when he testified to Congress in April 2018 about what had happened.6 Facebook had been a very active player in manipulating the vote. And manipulating voters has been bad behaviour for thousands of years, ever since the ancient Greeks. We don’t need any new ethics to decide this.

What’s worse is that Facebook had been doing this for many years. Facebook published case studies from as far back as 2010 describing elections where they had been actively changing the outcome. They boasted that ‘using Facebook as a market research tool and as a platform for ad saturation can be used to change public opinion in any political campaign’.

You can’t be clearer than this. Facebook can be used to change public opinion in any political campaign. These damaging claims remain online on Facebook’s official Government, Politics and Advocacy pages today.7

These examples highlight a fundamental ethical problem, a dangerous truth somewhat overlooked by advertisers and political pollsters. Human minds can be easily hacked. And AI tools like machine learning put this problem on steroids. We can collect data on a population and change people’s views at scale and at speed, and for very little cost.

When this sort of thing was done to sell washing powder, it didn’t matter so much. We were always going to buy some washing powder, and whether advertising persuaded us to buy OMO or Daz wasn’t really a big deal. But now it’s being done to determine who becomes president of the United States. Or whether Britain exits the European Union. It matters a great deal.

This book sets out to explore these and other ethical problems which artificial intelligence is posing. It asks many questions. Can we build machines that behave ethically? What other ethical challenges does AI create? And what lies in store for humanity as we build ever more amazing and intelligent machines?

THE PEOPLE

THE GEEKS TAKING OVER

To understand why ethical concerns around artificial intelligence are rampant today, it may help to know a little about the people who are building AI. It is perhaps not widely recognised how small this group actually is. The number of people with a PhD in AI – making them the people who truly understand this rather complex technology – is measured in the tens of thousands.1 There may never before have been a planet-wide revolution driven by such a small pool of people.

What this small group is building is partly a reflection of who they are. And this group is far from representative of the wider society in which that AI is being used. This has created, and will continue to create, fundamental problems, many of which are of an ethical nature.

Let me begin with an observation. It’s a rather uncomfortable one for someone who has devoted his adult life to trying to build artificial intelligence, and who spent much of his childhood dreaming of it too. There’s no easy way to put this. The field of AI attracts some odd people. And I should probably count myself as one of them.

Back in pre-pandemic times, AI researchers like me would fly to the farthest corners of the world. I never understood how a round Earth could have ‘farthest corners’ . . . Did we inherit them from flat Earth times? Anyway, we would go to conferences in these faraway places to hear about the latest advances in the field.2 AI is studied and developed on all the continents of the globe, and as a consequence AI conferences are also held just about everywhere you can think.3

On many of these trips, my wife would sit next to me at an airport and point out one of my colleagues in the distance. ‘That must be one of yours,’ she would say, indicating a geeky-looking person. She was invariably correct: the distinctive person in the distance would be one of my colleagues.

But the oddness of AI researchers is more than skin-deep. There’s a particular mindset held by those in the field. In artificial intelligence, we build models of the world. These models are much simpler and better behaved than the real one. And we become masters of these artificial universes. We get to control the inputs and the outputs. And everything in between. The computer does precisely and only what we tell it to do.

The day I began building artificial models like this, more than 30 years ago, I was seduced. I remember well my first AI program: it found proofs of simple mathematical statements. It was written in an exotic programming language called Prolog, which was favoured by AI researchers at that time.

I gave my AI program the task of proving a theorem that, I imagined, was well beyond its capability. There are some beautiful theorems by Alan Turing, Kurt Gödel and others that show that no computer program, however complex and sophisticated, can prove all mathematical statements. But my AI program didn’t come close to testing these fundamental limits.

I asked my program to prove a simple mathematical statement: the Law of the Excluded Middle. This is the law that every proposition is either true or false. In symbols, ‘P or not P’. Either 2^82,589,933 − 1 is prime or it isn’t.4 Either the stock market will crash next year or it won’t. Either the Moon is made of cheese or it isn’t. This is a mathematical truth that can be traced back through Leibniz to Aristotle, over two millennia ago.

I almost fell off my chair in amazement when my AI program spat out a proof. It is not the most complex proof ever found by a computer program, by a long margin. But this is a proof that defeats many undergraduate students who are learning logic for the first time. And I was the creator of this program. A program that was the master of this mathematical universe. Admittedly, it was a very simple universe – but thoughts about mastering even a simple universe are dangerous.
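Checking a tautology like the Law of the Excluded Middle can be done mechanically by enumerating every truth assignment. This is a minimal sketch in Python, not the author's Prolog prover; the function names are illustrative:

```python
from itertools import product

def is_tautology(formula, variables):
    """True if the formula holds under every assignment of truth values."""
    return all(
        formula(dict(zip(variables, values)))
        for values in product([True, False], repeat=len(variables))
    )

# The Law of the Excluded Middle: P or not P
excluded_middle = lambda env: env["P"] or not env["P"]
print(is_tautology(excluded_middle, ["P"]))  # True

# A contradiction, by contrast, fails under every assignment
print(is_tautology(lambda env: env["P"] and not env["P"], ["P"]))  # False
```

Truth-table enumeration works here because the formula has finitely many propositional variables; a resolution-style prover like the one described in the text searches for a proof symbolically instead.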

The real world doesn’t bend to the simple rules of our artificial universes. We’re a long way from having computer programs that can take over many facets of human decision-making. Indeed, it’s not at all clear if computers will ever match humans in all their abilities: their cognitive, emotional and social intelligence, their creativity, and their adaptability. But the field of AI is full of people who would like life to be a simple artificial universe that our computers could solve. And for many years I was one of them.

THE SEA OF DUDES

One especially problematic feature of the group building these artificial universes has been dubbed the ‘sea of dudes’ problem. This phrase was coined in 2016 by Margaret Mitchell, then an AI researcher at Microsoft Research, who in 2021 was fired from Google in controversial circumstances. The phrase highlights the fact that very few AI researchers are women.

Stanford’s AI index, which tracks progress in AI, reports that the number of women graduating with a PhD in AI in the United States has remained stable at around 20 per cent for the last decade. The figure is similar in many other countries, and the numbers are not much better at the undergraduate level. This is despite many ongoing efforts to increase diversity.

Actually, Margaret Mitchell might have more accurately described it as a ‘sea of white dudes’ problem. Not only are four-fifths of AI researchers male, they are also mostly white males.5 Black, Hispanic and other groups are poorly represented within AI, both in academia and in industry.

There is little data on the extent of the racial problem in AI, which itself is a problem. However, it is a very visible problem. Timnit Gebru is an AI and ethics researcher who was fired in controversial circumstances by Google in late 2020. As a PhD student, she co-founded Black in AI after counting just six Black AI researchers out of the 8500 researchers attending NIPS, the largest AI conference in 2016.

Even the name of that conference, NIPS, hints at the issues. In 2018, the NIPS conference rebranded itself NeurIPS to distance itself from the sexist and racial associations of its previous acronym. Other nails in the coffin of the conference’s old acronym included the 2017 pre-conference’s ‘counter-culture’ event, TITS, along with the conference T-shirts carrying the dreadful slogan ‘My NIPS are NP-hard’. To understand this geeky joke, you have to know that ‘NP-hard’ is a technical term for a computationally challenging problem. But it doesn’t take a geeky background to understand the sexism of the slogan.

Anima Anandkumar, a California Institute of Technology (Caltech) professor and director of machine-learning research at Nvidia, led the #ProtestNIPS campaign. Sadly, she reported that she was trolled and harassed on social media by a number of senior male AI researchers for calling for change. Nevertheless, pleasingly and appropriately, the name change went ahead.

Racial, gender and other imbalances are undoubtedly harmful to progress in developing AI, especially in ensuring that AI does not disadvantage some of these groups. There will be questions not asked and problems not addressed because of the lack of diversity in the room. There is plentiful evidence that diverse groups build better products. Let me give two simple examples to illustrate this claim.

When the Apple Watch was first released in 2015, the application programming interface (API) used to build health apps didn’t track any aspect of a woman’s menstrual cycle. The mostly male Apple developers appear not to have thought it important enough to include. Yet you cannot properly understand a woman’s health without taking account of her menstrual cycle. Since 2019, the API has corrected this oversight.

A second example: Joy Buolamwini, an AI researcher at the Massachusetts Institute of Technology (MIT) has uncovered serious racial and gender biases in the facial-recognition software being used by companies such as Amazon and IBM. This software frequently fails to identify the faces of people from disadvantaged groups, especially those of darker-skinned women. Buolamwini eventually had to resort to wearing a white mask for the face-detecting software to detect her face.

THE GODFATHERS OF AI

Alongside the ‘sea of dudes’, another problem is the phrase ‘the godfathers of AI’. This refers to Yoshua Bengio, Geoffrey Hinton and Yann LeCun, a famous trio of machine-learning researchers who won the 2018 Turing Award, the Nobel Prize of computing, for their pioneering research in the subfield of deep learning.

There is much wrong with the idea that Bengio, Hinton and LeCun are the ‘godfathers’ of AI. First, it supposes that AI is just deep learning. This ignores all the other successful ideas developed in AI that are already transforming your life.

The next time you use Google Maps, for instance, please pause to thank Peter Hart, Nils Nilsson and Bertram Raphael for their 1968 route-finding algorithm.6 This algorithm was originally used to direct Shakey, the first fully autonomous robot, who, as the name suggests, tended to shake a little too much. It has since been repurposed to guide us humans around a map. It’s somewhat ironic that one of the most common uses of AI today is to guide not robots but humans. Alan Turing would doubtless be amused.
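The 1968 route-finding algorithm, now known as A*, always expands the route with the lowest sum of cost so far plus a heuristic estimate of the cost remaining. A compact sketch on a toy grid, purely illustrative and not the Shakey implementation:

```python
import heapq

def a_star(start, goal, neighbours, heuristic):
    """A* search: pop nodes in order of cost-so-far + heuristic-to-goal."""
    frontier = [(heuristic(start, goal), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step in neighbours(node):
            priority = cost + step + heuristic(nxt, goal)
            heapq.heappush(frontier, (priority, cost + step, nxt, path + [nxt]))
    return None  # no route exists

# A 5x5 grid where each move costs 1, with the Manhattan distance
# (an admissible heuristic: it never overestimates the remaining cost)
def grid_neighbours(node):
    x, y = node
    return [((x + dx, y + dy), 1)
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
path = a_star((0, 0), (4, 4), grid_neighbours, manhattan)
print(len(path) - 1)  # 8: the shortest route takes eight unit steps
```

The heuristic is what distinguishes A* from blind search: because the Manhattan distance never overestimates, the first time the goal is popped the route found is guaranteed to be the shortest.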

And the next time you read an email, please pause to thank the Reverend Thomas Bayes. Back in the eighteenth century, Bayes discovered what is now known as Bayes’ rule for statistical inference. Bayes’ rule has found numerous applications in machine learning, from spam filters to detecting nuclear weapons tests. Without the Reverend’s insights, you would be drowning in junk emails.
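Bayes' rule says P(spam | word) = P(word | spam) × P(spam) / P(word). A toy spam filter built directly on it, with invented probabilities chosen purely for illustration:

```python
# Invented illustrative numbers: 40% of mail is spam, and the word
# 'prize' appears in 70% of spam but only 5% of legitimate mail.
p_spam = 0.4
p_word_given_spam = 0.7
p_word_given_ham = 0.05

# Total probability of seeing the word at all
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Bayes' rule: invert the conditional to get P(spam | word)
p_spam_given_word = p_word_given_spam * p_spam / p_word

print(round(p_spam_given_word, 3))  # 0.903
```

A single suspicious word lifts the probability of spam from the 40 per cent prior to about 90 per cent; real filters multiply evidence from many words in just this way.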

We should also not forget the many other people outside of deep learning who laid the intellectual foundations of the field of artificial intelligence. This list starts with Alan Turing, whom Time named as one of the 100 most important people of the twentieth century.7 In 1000 years’ time, if the human race has not caused its own extinction, I suspect Turing might be considered the most important person of the twentieth century. He was a founder not just of the field of artificial intelligence but of the whole of computing. If there is one person who should be called a godfather of AI, it is Alan Turing.

But even if you limit yourself to deep learning, which admittedly has had some spectacular successes in recent years, there are many other people who deserve credit. Back propagation, the core algorithm used to update weights in deep learning, was popularised by just one of this trio, Geoffrey Hinton. However, it was based on work he did with David Rumelhart and Ronald Williams in the mid-1980s.8 Many others also deserve credit for back propagation, including Henry Kelley in 1960, Arthur Bryson in 1961, Stuart Dreyfus in 1962 and Paul Werbos in 1974.
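At its core, back propagation is just the chain rule of calculus applied layer by layer: run the network forwards, then propagate the error gradient backwards through each weight. A single-neuron sketch with illustrative numbers, checked against a finite-difference estimate:

```python
import math

# One sigmoid neuron: y = sigmoid(w*x + b), squared-error loss against target t
w, b, x, t = 0.5, 0.1, 1.0, 1.0
sigmoid = lambda z: 1 / (1 + math.exp(-z))

# Forward pass
z = w * x + b
y = sigmoid(z)
loss = (y - t) ** 2

# Backward pass: chain rule, dloss/dw = dloss/dy * dy/dz * dz/dw
dloss_dy = 2 * (y - t)
dy_dz = y * (1 - y)          # derivative of the sigmoid
grad_w = dloss_dy * dy_dz * x
grad_b = dloss_dy * dy_dz

# Sanity check: the analytic gradient should match a numerical estimate
eps = 1e-6
numeric = ((sigmoid((w + eps) * x + b) - t) ** 2
           - (sigmoid((w - eps) * x + b) - t) ** 2) / (2 * eps)
print(abs(grad_w - numeric) < 1e-8)  # True
```

Deep learning repeats exactly this calculation through millions of weights across many layers, which is why the insight had to be rediscovered several times before hardware caught up with it.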

Even this ignores many other people who made important intellectual contributions to deep learning, including Jürgen Schmidhuber, who developed Long Short-Term Memory (LSTM), which is at the heart of many deep networks doing speech recognition, and is used in Apple’s Siri, Amazon’s Alexa and Google’s Voice Search; my friend Rina Dechter, who actually coined the term ‘deep learning’; Andrew Ng, who imaginatively repurposed GPUs from processing graphics to tackle the computational challenge of training large deep networks;9 and Fei-Fei Li, who was behind ImageNet, the large data set of images that has driven many advances in this area.

Putting aside all these academic concerns, there remains a fundamental problem with the term ‘godfathers of AI’. It supposes artificial intelligence has godfathers and not godmothers. This slights the many women who have made important contributions to the field, including:

• Ada Lovelace, the first computer programmer and someone who, back in the nineteenth century, pondered whether computers would be creative

• Kathleen McNulty, Frances Bilas, Betty Jean Jennings, Ruth Lichterman, Elizabeth Snyder and Marlyn Wescoff, who were originally human ‘computers’, but went on to be the programming team of ENIAC, the first electronic general-purpose digital computer

• Grace Hopper, who invented one of the first high-level programming languages and discovered the first ever computer bug10

• Karen Spärck Jones, who did pioneering work in natural language processing that helped build the modern search engine, and

• Margaret Boden, who developed the world’s first academic program in cognitive science, and explored the ideas on AI and creativity first discussed by Ada Lovelace.

The term ‘godfathers of AI’ also disregards the many women, young and old, who are making important contributions to AI today. This includes amazing researchers like Cynthia Breazeal, Carla Brodley, Joy Buolamwini, Diane Cook, Corinna Cortes, Kate Crawford, Rina Dechter, Marie desJardins, Edith Elkind, Timnit Gebru, Lise Getoor, Yolanda Gil, Maria Gini, Carla Gomes, Kristen Grauman, Barbara Grosz, Barbara Hayes-Roth, Marti Hearst, Leslie Kaelbling, Daphne Koller, Sarit Kraus, Fei-Fei Li, Deborah McGuinness, Sheila McIlraith, Pattie Maes, Maja Matarić, Joelle Pineau, Martha Pollack, Doina Precup, Pearl Pu, Daniela Rus, Cordelia Schmid, Dawn Song, Katia Sycara, Manuela Veloso and Meredith Whittaker, to name just a few.

I very much hope, therefore, that we follow Trotsky’s advice and consign the phrase ‘godfathers of AI’ to the dustbin of history.11 If we need to talk about the people responsible for some of the early breakthroughs, there are better phrases, such as the ‘AI pioneers’.

THE CRAZY VALLEY

Artificial intelligence is, of course, being built around the world. I have friends working on AI everywhere, from Adelaide to Zimbabwe. But one special hothouse is Silicon Valley. The Valley is close to Stanford University, where the late John McCarthy, the person who named the field, set up shop in the 1960s and laid many of the foundation stones of AI.12

The Valley is home to the largest concentration of venture capitalists on the planet. The United States is responsible for about two-thirds of all venture capital funding worldwide, and half of this goes into the Valley. In other words, venture capital funding can be broken into three roughly equally sized parts: Silicon Valley (which has now spread out into the larger Bay Area), the rest of America, and the rest of the world. Each of these three parts is worth around $25 billion per year.13 To put that into perspective, each slice of this venture capital pie is about equal to the gross domestic product of a small European nation like Estonia.

This concentration of venture funding has meant that much of the AI that has entered our lives came out of Silicon Valley. And much of that has been funded by a small number of venture capital firms based on Sand Hill Road. This unassuming road runs through Palo Alto and Menlo Park in Silicon Valley. Real estate here is more expensive than almost anywhere else in the world, often exceeding that in Manhattan or London’s West End.

Many of the most successful venture capital firms on the planet are based on Sand Hill Road, including Andreessen Horowitz and Kleiner Perkins. Andreessen Horowitz was an early investor in well-known companies such as Facebook, Groupon, Airbnb, Foursquare and Stripe, while Kleiner Perkins was an early investor in Amazon, Google, Netscape and Twitter.

Anyone who has spent time in the Valley knows it is a very odd place. The coffee shops are full of optimistic 20-year-olds with laptops, working for no pay, discussing their plans to create ventures such as the ‘Uber for dog walking’. They hope to touch the lives of billions. Given that there are estimated to be fewer than 200 million pet dogs on the planet, it’s not clear to me how Uber Dogs will touch a billion people, but that’s not stopping them.14

I often joke that there’s a strange Kool-Aid that folks in the Valley drink. But it really seems that way. The ethos of the place is that you haven’t succeeded if you haven’t failed. Entrepreneurs wear their failures with pride – these experiences have primed them, they’ll tell you, for success the next time around.

And there have been some spectacular failures. Dotcom flops like Webvan, which burnt through half a billion dollars. Or the UK clothing retailer boo.com, which spent $135 million in just 18 months before going bankrupt. Or Munchery, a food delivery website that you’ve probably never heard of before today – it went through over $100 million before closing shop.

No idea seems too stupid to fund. Guess which one of the following companies I made up. The company with a messaging app that lets you send just one word: ‘Yo.’ The company that charges you $27 every month to send you $20 in quarters, so you’ll have change to do your washing. The company that sends you natural snow in the post. Or the company building a mind-reading headset for your dog, which doesn’t actually work.

Okay, I’ll admit it – I was messing with you. None of these companies was made up. All were funded by venture capital. And, not surprisingly, all eventually went bust.

THE SHADOW OF AYN RAND

A long shadow cast over many in the Valley is that of one of its darlings, the philosopher Ayn Rand. Her novel Atlas Shrugged was on the New York Times’ Bestseller List for 21 weeks after its publication in 1957. And sales of her book have increased almost every year since, annually hitting over 300,000 copies. In 1991, the Book of the Month Club and the Library of Congress asked readers to name the most influential book in their lives. Atlas Shrugged came in second. First place went to the Bible.

Many readers of Atlas Shrugged, especially in the tech community, relate to the philosophy described in this cult dystopian book. Rand called this philosophy ‘objectivism’. It rejected most previous philosophical ideas in favour of the single-minded pursuit of individual happiness. Somewhat immodestly, given the rich and long history of philosophy, the author would only recommend the three As: Aristotle, Aquinas and Ayn Rand.15 From Aristotle, she borrowed an emphasis on logical reasoning. While she rejected all religion on the grounds of its conflicts with rationality, she recognised Thomas Aquinas for lightening the Dark Ages by his promotion of the works of Aristotle. And from her own life, she focused on the struggle between the individual and the state that played out from her birth in Saint Petersburg and her emigration from Russia at the age of 21 to the naked capitalism of New York City, where she settled.

Rand’s objectivism elevated rational thought above all else. According to her, our moral purpose is to follow our individual self-interest. We have direct contact with reality via our perception of the world. And we gain knowledge with which to seek out this happiness either from such perception or by reasoning about what we learn from such perception.

Rand considered the only social system consistent with objectivism to be laissez-faire capitalism. She opposed all other social systems, be they socialism, monarchism, fascism or, unsurprisingly given her birthplace, communism. For Rand, the role of the state was to protect individual rights so that individuals could go about their moral duty of pursuing happiness. Predictably, then, many libertarians have found considerable comfort in Atlas Shrugged.

But objectivism doesn’t just provide a guide to live one’s life. It could also be viewed as an instruction book for building an artificial intelligence. Rand wrote in Atlas Shrugged:

Man’s mind is his basic tool of survival . . . To remain alive he must act and before he can act he must know the nature and purpose of his action. He cannot obtain his food without knowledge of food and of the way to obtain it. He cannot dig a ditch – or build a cyclotron – without a knowledge of his aim and the means to achieve it. To remain alive, he must think.

Putting aside the quotation’s dated sexism, much the same could be said of an artificial intelligence. The basic tool of survival for a robot is its ability to reason about the world. A robot has direct contact with the reality of the world via its perception of that world. Its sole purpose is to maximise its reward function. And it does so by reasoning rationally about those percepts.
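The analogy with a rational agent can be made concrete. The standard textbook agent repeats a perceive–reason–act loop in pursuit of reward. The skeleton below is purely illustrative – the names are placeholders, not any particular robotics library:

```python
def run_agent(environment, policy, reward_fn, steps=10):
    """Skeleton of a rational agent's perceive-reason-act loop."""
    total_reward = 0.0
    for _ in range(steps):
        percept = environment.observe()             # direct contact with reality
        action = policy(percept)                    # reason about the percept
        environment.act(action)                     # act upon the world
        total_reward += reward_fn(percept, action)  # the agent's sole purpose
    return total_reward
```

Each line maps onto a clause of Rand’s creed: perceive the world, reason about what you perceive, and act in your own interest – here, the maximisation of `total_reward`.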

It is perhaps unsurprising, then, that Rand’s work appeals to many AI researchers. She laid out a philosophy that described not just their lives, but the inner workings of the machines they are trying to build. What could be more seductive? As a consequence, Ayn Rand has become Silicon Valley’s favourite philosopher queen. And many start-ups and children in the Valley are named after the people and institutions in her books.

TECHNO-LIBERTARIANS

Moving beyond objectivism, we come to libertarianism, and that special form of libertarianism found in Silicon Valley, techno-libertarianism. This philosophical movement grew out of the hacker culture that emerged in places like the AI Lab at MIT, and other AI hotbeds like the computer science departments at Carnegie Mellon University and the University of California at Berkeley.

Techno-libertarians wish to minimise regulation, censorship and anything else that gets in the way of a ‘free’ technological future. Here, ‘free’ means without restrictions, not without cost. The best solution for a techno-libertarian is a free market built with some fancy new technology like the blockchain, where behaving rationally is every individual’s best course of action.

John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace