Faking It - Toby Walsh - E-Book


Description

'Refreshingly clear-eyed … Faking It is an insightful and intelligent book that's a must for those looking for facts about AI hype.' – Books+Publishing

'AI will be as big a game-changer as the smart phone and the personal computer – or bigger! This book will help you navigate the revolution.' – Dr Karl Kruszelnicki

Artificial intelligence is, as the name suggests, artificial and fundamentally different to human intelligence. Yet often the goal of AI is to fake human intelligence. This deceit has been there from the very beginning. We've been trying to fake it since Alan Turing answered the question 'Can machines think?' by proposing that machines pretend to be humans. Now we are starting to build AI that truly deceives us. Powerful AIs such as ChatGPT can convince us they are intelligent and blur the distinction between what is real and what is simulated. In reality, they lack true understanding, sentience and common sense. But this doesn't mean they can't change the world. Can AI systems ever be creative? Can they be moral? What can we do to ensure they are not harmful? In this fun and fascinating book, Professor Toby Walsh explores all the ways AI fakes it, and what this means for humanity – now and in the future.

Page count: 305




FAKING IT

Also by Toby Walsh

It’s Alive! Artificial Intelligence from the Logic Piano to Killer Robots

2062: The World That AI Made

Machines Behaving Badly: The Morality of AI

Cover image: Rembrandt, Herman Doomer (ca. 1595–1650), 1640,

The Met, H. O. Havemeyer Collection, Bequest of Mrs. H. O. Havemeyer, 1929

Author photo: AI-generated portrait courtesy of Pindar Van Arman

 

 

 

Published 2023 by arrangement with Black Inc.

First published in Australia and New Zealand by La Trobe University Press, 2023

FLINT is an imprint of The History Press

97 St George’s Place, Cheltenham,

Gloucestershire, GL50 3QB

www.flintbooks.co.uk

© Toby Walsh, 2023

The right of Toby Walsh to be identified as the Author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without the permission in writing from the Publishers.

British Library Cataloguing in Publication Data.

A catalogue record for this book is available from the British Library.

ISBN 978 1 80399 460 4

Cover design by Beau Lowenstern, based on a concept by Toby Walsh

Text design and typesetting by Beau Lowenstern

Printed and bound in Great Britain by TJ Books Limited, Padstow, Cornwall.

eBook converted by Geethik Technologies


 

 

‘As a field, artificial intelligence has always been on the border of respectability, and therefore on the border of crackpottery.’

—Drew McDermott, 1976

CONTENTS

Preface

1. What’s in a Name?

2. AI Hype

3. Faking Intelligence

4. Faking People

5. Faking Creativity

6. Deception

7. The Artificial in AI

8. Beyond Intelligence

9. Fake Companies

10. Defaking AI

Image Credits

Thanks

Notes

About the Author

PREFACE

This book is out of date.

Artificial intelligence is advancing at an ever-increasing rate. Therefore, by the time you read this, I can guarantee there will be new applications of AI troubling us in novel ways.

Of course, other technologies have challenged us in the past. But one of the unique characteristics of artificial intelligence is the speed and scale at which it is being adopted.

I suspect it is no coincidence that ChatGPT, the AI bot that captured many people’s imaginations, was the fastest-growing app ever. It was in the hands of a million users after the first week, and 100 million by the end of the second month. It is now in the hands of over a billion people, with access available through Bing, Skype and Snapchat. The Snapchat app even has an AI avatar called My AI. Did you realise you needed your own AI?

I am confident, however, that the issues raised in this book will not be out of date. Indeed, I am sure they will be even more pressing. And this book will be even more useful, as a guide and a warning. We will, for example, be ever more deceived by fake AI and AI fakes.

It is time, then, for concerned citizens to understand – and to act.

1.

WHAT’S IN A NAME?

We all make mistakes. Some of them are spectacular.

In 1999, Larry Page and Sergey Brin offered to sell the Google search engine to the CEO of excite.com for a modest US$1 million. Even reducing the price to US$750,000 didn’t entice him to buy. Today, Google – or rather the leviathan whimsically named Alphabet, into which Google has morphed – is worth north of US$1 trillion, despite the market downturn in technology stocks. That’s over a million times the 1999 asking price. It’s fair to say that turning down Page and Brin’s offer was a spectacularly costly mistake.

Scientists also make mistakes. We can be, and indeed often are, very wrong. We’re only human, after all. But the beauty of science is that it is self-correcting. Mistakes will be identified and corrected. Indeed, the history of science is a long procession of mistakes being corrected. Don’t forget that we once thought a cannonball fell faster than a feather. That the Sun orbited the Earth. And that the Earth itself was flat. All of this was wrong.

For a long time, I thought that one of the biggest mistakes we had made in my particular area of science – artificial intelligence – was calling it ‘artificial intelligence’. As I’ll explain in more detail, AI is a spectacularly poor choice for a name! It has, for example, been a source of considerable misunderstanding, even ridicule. Why would anyone call a serious scientific field something as ridiculous as artificial intelligence?

‘Artificial’ means made by humans, as opposed to occurring naturally. But it also means a copy, an imitation or a sham. And it is this second meaning that, I shall argue, is especially relevant to AI. Artificial intelligence today is often about faking human intelligence. And this fakery isn’t a modern phenomenon. It can be traced back to the very beginning of the field. It is one of AI’s original sins (and we will meet another in the next chapter).

Four decades ago, at the start of my research career, if I told someone I worked in AI, they often assumed I meant artificial insemination. And in the rare situation that they knew about my type of AI – artificial intelligence, not artificial insemination – they might have joked about the robots taking over and then nervously shifted the conversation back to the weather.*

The problems with the name ‘artificial intelligence’ don’t end with the word ‘artificial’. The other word, ‘intelligence’, is also problematic. Science has had great difficulty understanding human intelligence. We don’t, for example, have a very good scientific definition of intelligence itself. It’s not IQ – that’s merely what IQ tests measure. There are many cultural assumptions built into IQ tests that mean they are not actually a measure of intelligence.

What, then, is intelligence? Loosely speaking, intelligence is the ability to extract information, learn from experience, adapt to the environment, understand, and reason about the world. So what, you should ask, is artificial intelligence? Most AI isn’t embodied and situated in the world like our human intelligence, adapting and learning from the environment. How, then, can the intelligence of machines be identified and measured when it is so fundamentally different to human intelligence?

AI researchers do not completely agree on how to define artificial intelligence. But most of us will agree that it is about trying to get computers to do tasks that, when humans do them, we say require intelligence. These include perceiving the world, reasoning about the world and learning from the world.

I frequently get asked to define ‘artificial intelligence’. You can’t believe how depressing it is to begin a media interview by having to define what it is I do. Fortunately, artificial intelligence has become much better known recently, and as a result, when I’m being interviewed these days about some advance in AI, I don’t always have to explain what AI is and what I do.

Much to my own surprise, too, I’ve come to believe that the name artificial intelligence is not a mistake, but a rather good description. That’s because one of the key things about artificial intelligence, I now realise, is that it is artificial – that it is about imitating human intelligence. Or, as the title of this book puts it, it is about faking it.

This book, then, is about the artificiality of artificial intelligence. I will argue that, in many respects, this inauthenticity is in fact a strength. By abstracting intelligence, we can hand over many tasks to machines. But, more problematically, the phoniness of AI is also a great weakness, and something that should be of concern to everyone.

If you want to understand artificial intelligence, you will have to put aside many of your preconceptions. You will need to forget all those fanciful ideas that Hollywood might have given you, especially the bits about humanoid robots. Artificial intelligence isn’t going to be like it is in the movies. You aren’t, I’m afraid, going to have a robot butler anytime soon. And I’m not too worried that robots are going to destroy humanity anytime soon either. We are pretty good at doing that to ourselves. Movies are, and will remain, fantasy worlds.

You will also need to put aside some fanciful ideas that you’ve picked up from being human. You know what intelligence is from being intelligent yourself. But artificial intelligence today isn’t like your human intelligence, and it’s not obvious that it ever will be very much like human intelligence. For one thing, AI isn’t going to have all of your human weaknesses. It isn’t going to think as slowly as you do. Nor is it going to be as forgetful. And it might not be hindered by your emotions, like anxiety and fear, or by your subconscious biases.

Artificial intelligence is going to be very different. We can already see this in the limited intelligence we have given to machines today. Computers have strengths and weaknesses that are very different to those of humans. And this book is all about that too.

The Turk

To understand artificial intelligence today, it’s important to know something about its history. And that history contains some revealing and troubling stories.

Tellingly, faking it in AI started long before we began trying to build intelligent computers. The remarkable Alan Turing published the first scientific paper about AI in 1950 – it was titled ‘Computing Machinery and Intelligence’.1 But it may surprise you that Turing’s paper didn’t actually use the words ‘artificial intelligence’ anywhere in its text.

However, that’s to be expected. Turing’s paper was published six years before one of the other founders of the field, John McCarthy, coined the term. ‘I had to call it something,’ he wrote later, ‘so I called it “Artificial Intelligence”, and I had a vague feeling that I’d heard the phrase before, but in all these years I have never been able to track it down.’2 McCarthy introduced the name to describe the topic of a seminal conference held at Dartmouth College in 1956. This meeting brought together many of the pioneers in artificial intelligence for the first time, and laid out a bold and visionary research agenda for the field.3

As I said previously, AI is about getting computers to do tasks that humans require intelligence to do: the tetralogy of perceiving, reasoning, acting and learning. It requires intelligence to perceive the state of the world, to reason about those percepts, to act based on that perception and reasoning, and then to learn from this cycle of perception, reasoning and action. Before the 1950s, there weren’t any computers around on which to experiment, so it was pretty hard to do any meaningful AI research. But that didn’t stop people from faking AI in the centuries leading up to the invention of the computer.

One of the more famous fakes was a chess-playing automaton constructed in the late eighteenth century known as the Mechanical Turk.* This was an impressive device that toured Europe and the Americas from its debut in 1770 at the summer residence of the royal Habsburg family in Vienna, until its unfortunate destruction in a museum fire in Philadelphia in 1854.

Seated cross-legged behind a chest one metre long and half a metre wide and high, the Turk was a life-sized android robot. He had a black beard and grey eyes. He was dressed in Ottoman robes and wore a large turban, and in his left hand was a long pipe. The Turk’s right hand extended to the top of the chest on which the chess board sat. During play, the Turk would pick up and move pieces on the board. There were two doors at the front of the chest, which opened to reveal the intricate clockwork machinery that was powering the chess player. What a magical device!

The Turk won the majority of the games it played during its 84 years in the public eye leading up to its fiery end. It played and defeated many famous challengers, including Napoleon Bonaparte, Frederick the Great and Benjamin Franklin. Early on, the Turk always took the first move, which, as chess players know, carries a slight advantage. But later the Turk would sometimes let its human opponent start, even taking on an additional pawn handicap. Take that, humanity!

The Turk could perform a number of other chess tricks, such as tracing out a knight’s tour from any square of the chessboard.* The knight’s tour is a mathematical puzzle in which you must land the knight on every square on the chessboard exactly once, using only the knight’s moves. Somewhat ironically, not knowing its faked history, I have often given my students the homework task of writing an AI program to solve the knight’s tour. For the Turk’s chess playing, as well as the knight’s tours it traced, was all an elaborate hoax. The Turk was a fake. It wasn’t an artificially intelligent automaton. There was a person concealed inside the chest, who was able to move the chess pieces.
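Incidentally, the knight’s tour homework mentioned above can be solved with a short backtracking search. Below is a minimal sketch in Python; the 5×5 board and the corner starting square are illustrative choices (plain backtracking is fast at this size, while the full 8×8 board is usually tackled with heuristics such as Warnsdorff’s rule):

```python
# Backtracking search for a knight's tour: visit every square exactly once.
# Board size is illustrative; 5x5 keeps plain backtracking quick.
N = 5
MOVES = [(2, 1), (1, 2), (-1, 2), (-2, 1),
         (-2, -1), (-1, -2), (1, -2), (2, -1)]

def knights_tour(row, col, visited=None, path=None):
    """Return a list of (row, col) squares forming a tour, or None."""
    if visited is None:
        visited, path = set(), []
    visited.add((row, col))
    path.append((row, col))
    if len(path) == N * N:          # every square visited: tour complete
        return path
    for dr, dc in MOVES:
        r, c = row + dr, col + dc
        if 0 <= r < N and 0 <= c < N and (r, c) not in visited:
            if knights_tour(r, c, visited, path):
                return path
    visited.remove((row, col))      # dead end: undo the move and backtrack
    path.pop()
    return None

tour = knights_tour(0, 0)           # start from a corner square
```

Each recursive call tries every knight move from the current square, undoing the move if it leads to a dead end; the search stops as soon as a path visiting all 25 squares is found.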

The Turk’s fame led to a succession of other fake chess-playing machines. There was Ajeeb, an Egyptian chess-playing ‘automaton’ made in 1868, which was exhibited at Crystal Palace, on Coney Island and around Europe. It too concealed a person who would make the moves. And then there was Mephisto, a devil-like chess-playing ‘automaton’ made in 1876. This was the first machine to win a human chess tournament. Except Mephisto, like Ajeeb and the Mechanical Turk, was also a fake. A person was directing the moves Mephisto made from another room using an electro-mechanical connection.

The Wizard of Oz

In due course, people stopped building fake chess-playing computers and started building the real thing. Indeed, in the early years of artificial intelligence, getting a computer to play chess was considered one of the natural goals of AI. Surely, the thinking went, significant intelligence is required to play chess well? Getting a computer to play chess was thought to be a good testing ground for AI.

The very first chess-playing program was written in 1948 by Alan Turing and one of his friends at King’s College, the economist and mathematician David Champernowne.* Despite only looking two moves ahead, Turing and Champernowne’s chess-playing program was too complex for computers of the day. This didn’t dissuade Turing – he instead faked it. He simulated the calculations of the program by hand with paper and pen. It must have been rather painful to play a game against the program, as it would take Turing over half an hour to calculate its next move.

It seems fitting that the very first run of what was perhaps the very first AI program was faked. But it wasn’t the last time this would occur. Indeed, having a person pretend to be a computer doing something smart is such an ingrained part of artificial intelligence that it has been given a name. It’s called a ‘Wizard of Oz’ experiment. If it’s been a long time since you saw the film, let me remind you that Toto the dog eventually pulls back the curtain to reveal that the Wizard is a fake.

AI researchers often begin the development of some new AI by first faking it: they will get a person to pretend to be the computer. It’s a good way to see how an AI might work before you’ve actually worked out how to do it.

In the 1970s, researchers at Johns Hopkins and the Xerox Palo Alto Research Center pioneered the use of ‘Wizard of Oz’ experiments to collect data on what language computers needed to understand when people interacted with a computer system. That’s pretty benign, but in more recent times the intent has often been somewhat more deceitful.

Let me give you two examples.

Expensify is a software company founded in 2008 that, as its name might suggest, uses AI to help people manage their expenses. Who likes managing their expenses? AI can automate many such tedious tasks. In 2017, however, it was uncovered that Expensify’s ‘SmartScan’ technology, which was ‘automatically’ processing receipts, was not using artificial intelligence. It was actually rather poorly paid humans who were doing the data transcription.4

The second example is a company by the name of CloudSight, which provides cloud-based software for identifying objects in images. In a 2015 press release, CloudSight promised to give developers ‘the gift of sight’ with their CamFind app. They claimed that the app used deep learning in real time to ‘go deeply into identifying, say, the exact make and model of a car or breed of dog – not just a classification. What sets us apart is that we always provide an answer with a varying degree of detail. It’s not just an exact answer or no answer at all.’5

But what their press release didn’t explain was that the deep-learning model didn’t work all that well. It was mostly low-paid workers in the Philippines who had to type out, at speed, identifications of the objects in the images.

This won’t be the last time a tech start-up fakes it till it makes it. In 2019, the UK venture capital firm MMC Ventures reported that 40 per cent of the 2830 European start-ups in its survey that purported to use AI did not actually appear to use any.6 Presumably, sprinkling the magic words ‘artificial intelligence’ over your company’s products is good for business?

A new kind of intelligence

Back to Alan Turing and his chess-playing program. By 1997, computers were a lot bigger and faster than those Turing had been trying to use. Computer chess programs were also a lot more sophisticated. And so it was that Garry Kasparov, the reigning world chess champion and arguably one of the best chess players to have ever lived, sat across the board from IBM’s Deep Blue computer program on the 35th floor of the Hilton Hotel in midtown New York. Who could play the better chess: man or machine?

It was a historic match that would go down in the annals of AI history. Kasparov had played an earlier version of Deep Blue the previous year and won 4–2. Now IBM was looking for revenge. In the six-game rematch in 1997, scores were level after the first five games. Human and computer had one win each, and the other three games were, as is often the case at the top level of the game, drawn. So it all came down to the nail-biting sixth and final game. And Deep Blue won, taking both the match and the US$700,000 prize.*

Kasparov wrote admiringly of his opponent, describing the very artificial intelligence he was playing against: ‘I could feel – I could smell – a new kind of intelligence across the table. While I played through the rest of the game as best I could, I was lost; it played beautiful, flawless chess the rest of the way and won easily.’7

I too have felt that sense of wonder, awe and artificiality with the AIs that I’ve built. They are nothing like human intelligence. And they continue to surprise us. I’ll come back to this idea later in the book.

Today, computers are much, much better than humans at playing chess. In August 2009, a chess program running on a mobile phone – a middle-of-the-range mobile phone that was running the much derided Microsoft Windows Mobile operating system – beat several grandmasters to win the Mercosur Cup in Buenos Aires, Argentina. The best chess engine available today, Stockfish 13, has an astronomical Elo rating of 3546 points.** The current world champion, Magnus Carlsen, has had a peak Elo rating of 2882 points, the highest ever held by a human chess player. In a best-of-five match against Stockfish 13, Magnus Carlsen’s chance of winning would be only one in a billion.
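The size of that rating gap can be made concrete. Under the standard Elo formula, a rating difference converts into an expected score per game; here is a quick sketch in Python. (Note that the expected score counts draws as half a point, so the probability of actually winning games, and hence a best-of-five match, is far smaller still; a one-in-a-billion match figure presumably comes from modelling wins and draws separately.)

```python
# Standard Elo expected-score formula: a player's expected score against
# an opponent is E = 1 / (1 + 10 ** ((opponent - rating) / 400)).
def expected_score(rating: float, opponent: float) -> float:
    return 1.0 / (1.0 + 10.0 ** ((opponent - rating) / 400.0))

carlsen, stockfish = 2882, 3546   # peak human rating vs Stockfish 13
e = expected_score(carlsen, stockfish)
# e is roughly 0.021: about two hundredths of a point per game
```

Evenly matched players each expect 0.5 points per game; a 664-point deficit leaves the human expecting only about 0.02.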

So it’s pretty much game over for humanity, at least when it comes to chess. Or backgammon. Or Go, poker, Scrabble or almost every other game you can name. Computers can wipe the board with us.

Funnily enough, the only part of chess that humans can do better than computers today is picking up the pieces. We still can’t write an AI program that can get a robot to walk up to a chessboard it has never seen before and pick up a pawn as effortlessly as a human can. But when it comes to working out on which square to put that pawn down, humans aren’t in the same league as computers anymore.

Fake robots and humans

Robots are often depicted as the very embodiment of artificial intelligence. That’s not surprising. For a robot to act intelligently in the world, it needs AI. It needs to sense, reason, act and learn from an ever-changing world.

Now, not all robots have AI. Some simply follow the same instructions repeatedly. These are the sort of robots you often find in factories, and usually they’re in cages to protect humans from their repetitive, pre-programmed movements. But when robots are out in the real world, away from a controlled environment like the factory floor, they need some artificial intelligence.

The word ‘robot’ was introduced by the Czech writer Karel Čapek in his 1920 play R.U.R., with the acronym standing for Rossumovi Univerzální Roboti, or Rossum’s Universal Robots. The play features several topical ideas a century ahead of their time, such as the replacement of human labour by robots, the decline in human birth rates, and robot armies that threaten the existence of the human race.

Seven years after Čapek’s play came out, a robot played a central role in one of the very first feature-length science-fiction movies, the marvellous Metropolis. The plot of Fritz Lang’s masterpiece revolves around Maschinenmensch (literally ‘machine-human’), a robot double for the human character Maria. This plot device has been used in many other films in which robots pretend to be human, from the replicants in Blade Runner to the very intelligent Ava in Ex Machina.

But fake robots are not just a staple of science-fiction movies. Unfortunately, they’re part of the real world and they are being used to fool humans today. Perhaps the most egregious example is Sophia, a humanoid robot developed by Hanson Robotics. Sophia has the dubious distinction of being the first robot to receive citizenship of any country. In October 2017, Sophia was made a citizen of Saudi Arabia. It was an unconsciously ironic PR stunt for a nation that denies many basic human rights to its women citizens to give greater rights to a robot than to half its population.

To understand the fakery behind Sophia, you probably need to understand David Hanson Jr, the man behind its creation. He’s the founder and CEO of Hanson Robotics, and he has an interesting background. He started out with a Bachelor of Fine Arts in film, then worked for Disney as an ‘Imagineer’, creating sculptures and animatronic figures for their theme parks, before getting his PhD in aesthetic studies.

Sophia is very lifelike, even for a humanoid robot. She has human-like skin, eyebrows, eyelashes, lips that are painted with red lipstick, and grey eyes that follow you around. Sophia has an expressive face that can smile, laugh and frown. But she’s almost entirely a fake. There’s very little AI under the hood. Her conversations and gestures are mostly carefully scripted.

Yann LeCun, chief AI scientist at Meta, responded to a flattering story about Sophia on the industry news website Tech Insider with a withering tweet on 5 January 2018:

This is to AI as prestidigitation is to real magic.

Perhaps we should call this ‘Cargo Cult AI’ or ‘Potemkin AI’ or ‘Wizard-of-Oz AI’.

In other words, it’s complete bullsh*t (pardon my French).

Tech Insider: you are complicit in this scam.

Yet David Hanson has unashamedly fuelled the hype around Sophia. On The Tonight Show in April 2017, he told host Jimmy Fallon that ‘she is basically alive’. To quote Yann LeCun, this is complete bullshit.

There’s nothing ‘alive’ about Sophia. She’s more like a glorified puppet than any sort of sophisticated AI. A conversational agent such as Siri or Alexa contains far more advanced artificial intelligence than Sophia. I once tried to hire Sophia for a day, hoping to have her open a big AI conference. I was shocked by the US$30,000 price tag. But I wasn’t surprised by the booking form, which laid out how carefully scripted her conversations are.

When initial coin offerings (ICOs) were all the rage in late 2017, Hanson co-founded SingularityNET, a decentralised marketplace for AI algorithms, and launched the AGI coin. The names chosen for this venture promise the mythical singularity when technological growth becomes uncontrollable, and we achieve artificial general intelligence (AGI), where machines match and then exceed human intelligence. But the reality is much more prosaic: SingularityNET is a simple marketplace for some rather dumb AI algorithms.

There are some fundamental technical problems with such a marketplace. For instance, 70 years of AI research have failed to generate a uniform interface for AI algorithms – what you might call an API for AI – in order for the market to operate. Nevertheless, the ICO raised over US$36 million in just 60 seconds.8 The AGI coins were initially priced at over $1 each. Two years later, you could buy them for a little over one cent.

Even more recently, when non-fungible tokens (NFTs) became fashionable, Hanson Robotics announced a collection of NFT-based digital artworks supposedly created by Sophia; it raised over US$1 million.9 As you might have concluded by now, there’s the unmistakable smell of snake oil about much that Sophia touches.

Even Elon Musk has indulged in some AI robot fakery. In August 2021, at the Tesla AI Day, Elon announced the Tesla Bot, a humanoid robot being built using Tesla’s full self-driving computer. The robot had been designed to do ‘dangerous, repetitive, boring tasks’, and Elon provided an example, suggesting the robot could ‘go to the store and get groceries’. To avoid any robot takeover, the Tesla Bot is designed to be slow and weak so a person can easily outrun and overpower it. Bizarrely, the announcement about the yet-to-be-built Tesla Bot featured a person dressed up in a white full-length bodysuit pretending to be a Tesla Bot. I doubt anyone was fooled.10

AI alchemy

The problems with artificial intelligence go much deeper than a few tricksters and charlatans peddling fake robots, however. The very foundations of the field rest on quicksand.

Of course, intelligence, whether human or artificial, is not easy to understand. William James, an influential professor at Harvard University who is often called the ‘father of American psychology’, wrote:

When, then, we talk of ‘psychology as a natural science’, we must not assume that that means a sort of psychology that stands at last on solid ground. It means just the reverse; it means a psychology particularly fragile, and into which the waters of metaphysical criticism leak at every joint, a psychology all of whose elementary assumptions and data must be reconsidered in wider connections and translated into other terms . . . not the first glimpse of clear insight exists. A string of raw facts; a little gossip and wrangle about opinions; a little classification and generalization on the mere descriptive level; a strong prejudice that we have states of mind, and that our brain conditions them: but not a single law in the sense in which physics shows us laws, not a single proposition from which any consequence can causally be deduced . . . This is no science, it is only the hope of a science . . . 11

James wrote this over 130 years ago, back in 1892. There are many, such as science journalist Alex Berezow, who would argue that psychology today still isn’t science.*

James’ comments are a good description of artificial intelligence as we understand it today. A few facts, a lot of gossip and opinions, some strong prejudices, but little in the way of universal laws or logical deduction. There is remarkably little science in AI. It would be better to describe much of it as hope of a science.

Not surprisingly, then, many have compared the field of AI to medieval alchemy. Rather than attempting to turn base metals into gold, the ambition of artificial intelligence is to turn simple computation into intelligence. Eric Horvitz, chief scientific officer of Microsoft Research and a past president of the Association for the Advancement of Artificial Intelligence, told The New York Times in 2017: ‘Right now, what we are doing is not a science but a kind of alchemy.’12 I checked with Eric and he stands by this observation today. He remains ‘intrigued, curious, and optimistic that there are deeper insights and principles to be uncovered’.

All is, however, not lost. Alchemy might not be the worst starting place from which to build artificial intelligence. Terry Winograd, who wrote one of the first and most influential AI programs for processing natural language 50 years ago, has argued as such:

It is perhaps too early to compare the state of artificial intelligence to that of modern biochemistry. In some ways, it is more akin to that of medieval alchemy. We are at the stage of pouring together different combinations of substances and seeing what happens, not yet having developed satisfactory theories. This analogy was proposed by [Hubert] Dreyfus (1965) as a condemnation of artificial intelligence, but its aptness need not imply his negative evaluation. Some work can be criticized on the grounds of being enslaved to (and making too many claims about) the goal of creating gold (intelligence) from base materials (computers). But nevertheless, it was the practical experience and curiosity of the alchemists which provided the wealth of data from which a scientific theory of chemistry could be developed.13

It would be reasonable to conclude, therefore, that the foundations of artificial intelligence today are truly artificial, in the sense that they are fake and lacking substance. And that much of the artificial intelligence we build is itself artificial – and thus very different to human intelligence. To top it off, artificial intelligence is often being put to artificial ends, such as faking human intelligence. That’s a lot of artificiality to consider.

The goal of this book is to draw back the curtain and reveal the reality behind all this artificiality. These machines that imitate our human intelligence are set to play increasingly important roles in our lives. They will take on the dirty, the dull, the difficult and the dangerous, which is a good thing. Indeed, it is hard to imagine a part of our lives that they won’t touch.

And artificial can be good. Autonomous cars, for example, are being developed in artificial simulations as well as on real roads. Indeed, autonomous cars drive far more kilometres today in simulators than they do in the real world, and this is helping to increase their safety.

Simulators provide scale, reproducibility and controllability. They can run much faster than real time. Millions of kilometres can be driven overnight, while humans sleep. Simulators can repeat accident situations precisely, until the AI algorithms learn how to respond in the safest way possible. And they can create situations that might be hard to find or dangerous to test in the real world. What happens when a car is driving towards the setting sun, there is rainwater on the road and a garbage truck in front, and a child dressed in dark clothes dashes out from behind a parked car? It would be irresponsible to test this in the real world, but we can test it repeatedly in a simulator.

But alongside these benefits of the artificial in artificial intelligence, there are some very real risks. It’s not just that machines will be stealing ever more of our attention with all this fakery. Our attention is a precious asset, and they are already stealing too much of it. No, the risks are potentially much more damaging than this.

All this fakery threatens to blur the distinction between what is real and what is artificial. It might even throw into question the very essence of what is human and what is not. The stakes, therefore, are as high as they could be. Our very humanity is on the line.

The book will cover both fake AI and AI fakes. We’ll explore, for example, AI applications where the artificial intelligence is actually much less impressive than it appears. But we’ll also consider AI applications where the artificial intelligence is designed to deceive you.

First, we’ll take a closer look at the fake AI problem, exploring the hype and false claims made about artificial intelligence (Chapter 2). Then we’ll move on to AI fakes, and consider how AI, from its very beginnings, has tried to imitate human intelligence (Chapter 3), to fake real people (Chapter 4) and to emulate human creativity (Chapter 5). We’ll then discuss how AI is often deliberately designed to deceive us (Chapter 6), even though artificial intelligence is very different to human intelligence (Chapter 7), and is neither sentient nor conscious (Chapter 8). Finally, we’ll explore the role that the technology companies developing AI are playing in all this fakery (Chapter 9), and what we might do to limit the harms (Chapter 10).

Let’s begin.

____________________________________________________

*   Artificial intelligence isn’t the only discipline with a name problem. Take cybernetics, one of AI’s closest intellectual cousins. There’s a beautiful letter from Esther Potter, a director of the Library of Congress, to Dr Norbert Wiener, author of the seminal text Cybernetics, appealing for help in trying to classify his book. ‘We have read and reread reviews and explanations of the content of your book and have even tried to understand the content of the book itself,’ she wrote, ‘only to become more uncertain as to what field it falls into. I am appealing to you as the one person who should be able to tell us . . . If we were not somewhat desperate about this particular problem, I should hesitate to bother you with it.’ (See https://tinyurl.com/WhatIsCybernetics.) Cybernetics has been variously described as the study of ‘control and communication in the animal and the machine’ (Wiener), ‘systems of any nature which are capable of receiving, storing, and processing information so as to use it for control’ (Andrey Kolmogorov), ‘the art of creating equilibrium in a world of constraints and possibilities’ (Ernst von Glasersfeld) and, in a beautiful meta-definition, ‘a way of thinking about ways of thinking (of which it is one)’ (Larry Richards).

*   Amazon’s ‘Mechanical Turk’ is a crowdsourcing website (www.mturk.com), named after the fake chess-playing automaton, that contracts remotely located ‘crowdworkers’ to perform tasks that computers currently cannot do. For example, it is used to prepare and label data for machine-learning algorithms.

*   The knight’s tour that the Mechanical Turk was believed to have used is a ‘closed’ knight’s tour, meaning one that comes back on itself. The tour can therefore be started and ended from any square of the chessboard. Brute force alone cannot find such a knight’s tour. There are more than 10⁵¹ possible tours of the chessboard that a knight can make, and most of them don’t visit every square only once. Trying out all possible tours is therefore beyond even the fastest computers today; it requires some insight and ingenuity to find a tour. For instance, a good heuristic is for the knight to move next to the most constrained square, the one from which the knight will have the fewest onward moves. It’s best to visit this square now, as it will likely only become more constrained if we wait.
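The ‘most constrained square’ heuristic described in this footnote is known as Warnsdorff’s rule. A minimal sketch in Python (the function names are mine, and the simple tie-breaking means the greedy search can occasionally hit a dead end):

```python
# Warnsdorff's rule: always jump to the reachable square that itself has
# the fewest onward moves. The eight legal knight moves:
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
         (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def onward_moves(square, visited):
    """Unvisited squares a knight on `square` can jump to."""
    x, y = square
    return [(x + dx, y + dy) for dx, dy in MOVES
            if 0 <= x + dx < 8 and 0 <= y + dy < 8
            and (x + dx, y + dy) not in visited]

def knights_tour(start=(0, 0)):
    """Try to build a 64-square tour greedily; return None if stuck."""
    tour, visited = [start], {start}
    while len(tour) < 64:
        options = onward_moves(tour[-1], visited)
        if not options:
            return None  # the greedy choice led to a dead end
        # Move to the most constrained square: fewest onward moves.
        best = min(options, key=lambda s: len(onward_moves(s, visited)))
        tour.append(best)
        visited.add(best)
    return tour
```

Unlike brute force, which would have to sift through an astronomical number of candidate tours, this greedy rule does only 63 moves’ worth of work when it succeeds.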

The knight’s tour is a cousin of another famous problem, in which you are challenged to take an afternoon walk that traverses the seven bridges of the city of Königsberg exactly once. This problem was proved impossible in 1736 by Leonhard Euler, one of the greatest mathematicians ever to have lived. In solving this problem, Euler laid the foundations for topology, the branch of mathematics focused on abstract mathematical shapes such as knots and the never-ending Möbius strip.

*   This first ever chess-playing program was called Turochamp, a portmanteau of Turing and Champernowne’s abbreviated surnames.

*   If you’re worried about Garry Kasparov’s pride, he was able to console himself with the US$400,000 prize given to the runner-up. He had also won US$400,000 the year before, when he beat an earlier and less powerful version of Deep Blue in the first of their two matches. IBM refused Kasparov’s request for a third match and dismantled Deep Blue, ensuring that he could never take back the crown.

**   The Elo rating system is a method for calculating the relative skill levels of players in games such as chess. It is named after its creator, Arpad Elo, a Hungarian-American physics professor. Two players with equal ratings who play against each other are expected to score an equal number of wins.
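Elo’s system rests on a simple formula for a player’s expected score. A minimal sketch in Python (the 400-point divisor is the standard chess scale; the function names and the K-factor of 20 are my own illustrative choices):

```python
def expected_score(rating_a, rating_b):
    """Expected score for player A against B (win = 1, draw = 0.5, loss = 0)."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating, expected, actual, k=20):
    """After a game, a rating moves in proportion to the surprise."""
    return rating + k * (actual - expected)
```

Two equally rated players each have an expected score of 0.5, which is why they are expected to score an equal number of wins against each other.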

*   Alex Berezow describes himself as ‘a veteran science writer, public speaker and debunker of junk science’. In July 2012, in an op-ed in the Los Angeles Times