Artificial Intelligence

Jack Copeland
Description

Presupposing no familiarity with the technical concepts of either philosophy or computing, this clear introduction reviews the progress made in AI since the inception of the field in 1956. Copeland goes on to analyze what those working in AI must achieve before they can claim to have built a thinking machine and appraises their prospects of succeeding.

There are clear introductions to connectionism and to the language of thought hypothesis which weave together material from philosophy, artificial intelligence and neuroscience. John Searle's attacks on AI and cognitive science are countered and close attention is given to foundational issues, including the nature of computation, Turing Machines, the Church-Turing Thesis and the difference between classical symbol processing and parallel distributed processing. The book also explores the possibility of machines having free will and consciousness and concludes with a discussion of the sense in which the human brain may be a computer.

Page count: 709

Publication year: 2015




Table of Contents

Cover

Title

Copyright

Dedication

List of figures

Acknowledgements

Introduction

In outline

1: The beginnings of Artificial Intelligence: a historical sketch

1.1 The arrival of the computer

1.2 The Logic Theorist

1.3 The Dartmouth Conference

1.4 Alan Turing and the philosophy of AI

2: Some dazzling exhibits

2.1 Inside the machine

2.2 Parry the paranoid program

2.3 Eliza the psychotherapist

2.4 Shrdlu the robot

2.5 Hacker the program-writing program

2.6 Programs that play games

2.7 The General Problem Solver

2.8 Sam and the Frump

2.9 Expert systems

3: Can a machine think?

3.1 Is consciousness necessary for thought?

3.2 The Turing Test

3.3 Has the Test been passed already?

3.4 Four objections to the Turing Test

3.5 Assessing the Test

3.6 Decision time

4: The symbol system hypothesis

4.1 Symbol manipulation

4.2 Binary symbols

4.3 Programs as symbols

4.4 A full-scale program

4.5 The definition of a computer

4.6 The hypothesis

4.7 Multiple realizability

5: A hard look at the facts

5.1 The evidence for the hypothesis

5.2 Getting the evidence into perspective

5.3 Hype

5.4 Programming common sense

5.5 Data versus know-how

5.6 The CYC project

5.7 The complexity barrier

6: The curious case of the Chinese room

6.1 The Chinese room argument

6.2 What’s wrong with the argument?

6.3 Deciding about understanding

6.4 Turing machines and the biological objection to AI

7: Freedom

7.1 Turbo Sam makes a choice

7.2 Is freedom of the will an illusion?

7.3 Two kinds of freedom

7.4 Kleptomania and other compulsions

7.5 Libertarianism

7.6 Predictivism and chaos

7.7 The inevitable

8: Consciousness

8.1 Neglect and disarray

8.2 The fuzzy baseline

8.3 Consciousness as a type of internal monitoring

8.4 The ineffable FEEL of it all

8.5 Into the heart of the mystery

8.6 What is it like to be a bat?

8.7 What Mary doesn’t know

8.8 Drawing the threads together

9: Are we computers?

9.1 The strong symbol system hypothesis

9.2 Hardware versus wetware

9.3 Goodbye, von Neumann

9.4 Putting meaning into meat

9.5 Believing what you don’t believe

9.6 Productivity and systematicity

9.7 Evaluating the arguments

9.8 The meaning of ‘computer’

10: AI’s fresh start: parallel distributed processing

10.1 The basic ideas

10.2 English lessons

10.3 Escape from a nightmare?

10.4 The contrast with computers

10.5 Comparisons with wetware

10.6 Searle’s Chinese gym

10.7 The Church–Turing thesis

10.8 Are our cognitive processes algorithmically calculable?

10.9 Simulating networks by computer

10.10 The battle for the brain

10.11 Concluding remarks

Epilogue

Notes

Bibliography

Index

End User License Agreement

List of Illustrations

2: Some dazzling exhibits

Figure 2.1 Shrdlu’s world of coloured blocks. (Based on figure 3 of Winograd, T. Understanding Natural Language.)

Figure 2.2 Simple block-moving tasks.

Figure 2.3 The tower of contrition.

3: Can a machine think?

Figure 3.1 Can you read this? (From figure 15 in Hofstadter, D. Gödel, Escher, Bach: An Eternal Golden Braid.)

Figure 3.2 The Turing Test.

4: The symbol system hypothesis

Figure 4.1 A register.

Figure 4.2 Recursion.

Figure 4.3 Think of each binary digit as pointing to a box.

Figure 4.4 Each box contains a base 10 number.

Figure 4.5 How the compiled program is stored in memory.

5: A hard look at the facts

Figure 5.1 Shadows are a problem for visual recognition software. (From figures 2.1 and 2.2 of Winston, P.H. (ed.) The Psychology of Computer Vision.)

6: The curious case of the Chinese room

Figure 6.1 Turing machine input.

Figure 6.2 Turing machine input.

Figure 6.3 Turing machine output.

Figure 6.4 Turing machine output.

8: Consciousness

Figure 8.1 Each hemisphere ‘sees’ only one half of the screen.

Figure 8.2 The two hemispheres pick different objects when the split-brain subject is asked which pictured items he associates with what he can see on the screen. (From figure 42 in Gazzaniga, M.S. and LeDoux, J.E. The Integrated Mind.)

9: Are we computers?

Figure 9.1 Human neuron.

10: AI’s fresh start: parallel distributed processing

Figure 10.1 The neuron fires a discharge along its output fibres when the weighted sum of its inputs exceeds its threshold.

Figure 10.2 Connections between the layers of artificial neurons in a PDP network.

Figure 10.3 Inverting an input pattern.

Figure 10.4 A few of the variations on the letter ‘A’ to be found in the Letraset catalogue.

Figure 10.5 The curse of context.

Figure 10.6 The level of output depends on the amount of input the unit is receiving from other units.


Artificial Intelligence

A Philosophical Introduction

Jack Copeland

Copyright © B. J. Copeland, 1993

The right of B. J. Copeland to be identified as author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

First published 1993

Reprinted 1994, 1995, 1997 (twice), 1998

Blackwell Publishers Ltd

108 Cowley Road

Oxford OX4 1JF, UK

Blackwell Publishers Inc

350 Main Street

Malden, Massachusetts 02148, USA

All rights reserved. Except for the quotation of short passages for the purposes of criticism and review, no part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the publisher.

Except in the United States of America, this book is sold subject to the condition that it shall not, by way of trade or otherwise, be lent, re-sold, hired out, or otherwise circulated without the publisher’s prior consent in any form of binding or cover other than that in which it is published and without a similar condition including this condition being imposed on the subsequent purchaser.

British Library Cataloguing in Publication Data

A CIP catalogue record for this book is available from the British Library

Library of Congress Cataloging in Publication Data

Copeland, Brian Jack

Artificial intelligence: a philosophical introduction / B. J. Copeland

p. cm.

Includes bibliographical references and index.

ISBN 0-631-18385-X (pbk. : alk. paper)

1. Artificial intelligence — Philosophy. I. Title.

Q335.C583 1993 92–44278

006.3—dc20 CIP

For Jean and Reg

List of figures

Figure 2.1 Shrdlu’s world of coloured blocks.

Figure 2.2 Simple block-moving tasks.

Figure 2.3 The tower of contrition.

Figure 3.1 Can you read this?

Figure 3.2 The Turing Test.

Figure 4.1 A register.

Figure 4.2 Recursion.

Figure 4.3 Think of each binary digit as pointing to a box.

Figure 4.4 Each box contains a base 10 number.

Figure 4.5 How the compiled program is stored in memory.

Figure 5.1 Shadows are a problem for visual recognition software.

Figure 6.1 Turing machine input.

Figure 6.2 Turing machine input.

Figure 6.3 Turing machine output.

Figure 6.4 Turing machine output.

Figure 8.1 Each hemisphere ‘sees’ only one half of the screen.

Figure 8.2 The two hemispheres pick different objects when the split-brain subject is asked which pictured items he associates with what he can see on the screen.

Figure 9.1 Human neuron.

Figure 10.1 The neuron fires a discharge along its output fibres when the weighted sum of its inputs exceeds its threshold.

Figure 10.2 Connections between the layers of artificial neurons in a PDP network.

Figure 10.3 Inverting an input pattern.

Figure 10.4 A few of the variations on the letter ‘A’ to be found in the Letraset catalogue.

Figure 10.5 The curse of context.

Figure 10.6 The level of output depends on the amount of input the unit is receiving from other units.

Acknowledgements

Many people have helped me with this book in many different ways, large and small. I thank you all. David Anderson, John Andreae, Derek Browne, Peter Carruthers, Stephan Chambers, John Cottingham, Hubert Dreyfus, Flip Ketch, Carmel Kokshoorn, Justin Leiber, David Lewis, Michael Lipton, Bill Lycan, Michael Maclean, Pamela McCorduck, James McGahey, Jack Messenger, Alison Mudditt, Gill Rhodes, Michael Smallman, Steve Smith, Kerry Stewart, Bob Stoothoff, Stephen Viles. Particular thanks to Jenny Arkell for commenting extensively on early drafts and suggesting many improvements; to Ann Witbrock for the computer-generated illustrations; and above all to Diane Proudfoot for support, criticism, help and encouragement.

Figure 2.1 is based on figure 3 of Terry Winograd, Understanding Natural Language, with kind permission of Academic Press and Terry Winograd. Figure 3.1 is redrawn from figure 15 of Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid, with kind permission of Basic Books Inc. (copyright © 1979 by Basic Books, Inc.). Figure 5.1 is redrawn from figures 2.1 and 2.2 of P.H. Winston (ed.), The Psychology of Computer Vision, with kind permission of McGraw-Hill, Inc. Figure 8.2 is redrawn from figure 42 of M.S. Gazzaniga and J.E. LeDoux, The Integrated Mind, with kind permission of Plenum and Michael Gazzaniga. Figure 10.4 is a reproduction of parts of the Letraset catalogue and appears by the kind permission of Esselte Letraset Ltd.

Introduction

Artificial Intelligence is the science of making machines do things that would require intelligence if done by men.

Marvin Minsky, founder of the MIT Artificial Intelligence Laboratory1

Not long ago I watched a TV interview with Edward Fredkin, a specialist in electronic engineering and manager of the MIT Artificial Intelligence Laboratory. Fredkin is an earnest man with a serious, authoritative manner. What he had to say was startling.2

There are three great events in history. One, the creation of the universe. Two, the appearance of life. The third one, which I think is equal in importance, is the appearance of artificial intelligence. This is a form of life that is very different, and that has possibilities for intellectual achievement that are hard for us to imagine. These machines will evolve: some intelligent computers will design others, and they’ll get to be smarter and smarter. The question is, where will that leave us? It is fairly difficult to imagine how you can have a machine that’s millions of times smarter than the smartest person and yet is really our slave, doing what we want. They may condescend to talk to us, they may play games that we like to play, and in some sense, they might keep us as pets.3

Has Professor Fredkin missed his vocation as a science fiction writer? Or is this a realistic view of the future, a sober prediction from a man who is better placed than most to understand the implications of current developments in AI research? Are computers that think really a technological possibility? Indeed, does it even make sense to speak of a machine thinking – or is a thinking computer a conceptual absurdity, like a grinning electron or a married bachelor?

For the philosophically curious, Fredkin’s words raise a tumult of intriguing questions. Is thought a biological phenomenon, and thus as far beyond the compass of a silicon-and-metal machine as photosynthesis, lactation, and every other biology-dependent process? Or are thinking and perceiving more like flying – something that both living creatures and metallic artefacts can do? Could a computer ever display more intelligence than the humans who program it? Could a computer act of its own free will? Could a computer possibly be conscious? And then there is the most intriguing question of them all – is it conceivable that research in psychology and neurophysiology will eventually show that we ourselves are just soft, cuddly computers? These are some of the issues that this book addresses. I hope you will enjoy exploring them with me.

In outline

Chapter 1 describes the origins and early years of artificial intelligence. Chapter 2 reviews some notable AI programs. Chapter 3 asks the crucial question: Is it possible for a machine to think? I argue that the answer is Yes. This is not yet to say that computers are capable of thinking, though – for it has yet to be asked whether computers are the right sort of machine to think. Chapter 4 spells out how computers work. I start with the basics, so readers who know nothing at all about computing need not worry that they will be left behind. Chapter 5 then examines the evidence that machines of this sort have the potential for thought. I suggest that at the present stage the evidence is ambiguous. Chapter 6 critically examines Professor John Searle’s arguments for his influential claim that computers, by their nature, lack genuine understanding of their input and output. According to Searle, AI systems will never duplicate intelligence; simulations of intelligence are all that AI can hope to produce. Chapter 7 looks at the issue of free will. If the future does bring thinking robots, will these artefacts be capable of making free choices? Common prejudice has it that a machine with free will is a contradiction in terms. I argue that this is not so. Chapter 8 is an enquiry into the nature of consciousness. What is consciousness, and how does it come about? Could a machine possess it? Chapter 9 examines the theory that the human brain itself is literally a computer – a theory that has considerable influence in psychology at the present time. Chapter 10 describes a new approach to AI known as parallel distributed processing (PDP). Advocates of this approach have distanced themselves from the project of programming computers to think and are engaged in building machines of a different sort, loosely modelled on human neural tissue. Finally, in the Epilogue, I return to a theme of previous chapters. One result of the book’s investigations into machinery and thought is to make the possibility that we ourselves are machines of a sort not only more credible but also less uncomfortable.

This book is a text for classes in the philosophy of mind and cognitive science. However, it is written in plain English and no acquaintance with the technical concepts of either philosophy or computing is presupposed. The book is designed to be understood by everyone who is curious to learn about artificial intelligence and its philosophy – from university students to computer hobbyists to humanists interested in what the study of computers may have to tell us about our own nature.

Welcome to the book.

1The beginnings of Artificial Intelligence: a historical sketch

To begin at the very beginning …

1.1 The arrival of the computer

The story of the invention of the digital computer is a fascinating one. Folklore has it that the computer originated in the United States, but this is not true. Britain, the USA and Germany developed the computer independently at almost exactly the same time. In terms of who got there first, it is Germany that carries off the cup – or more precisely a lone German, Konrad Zuse. He had the world’s first general-purpose programmable computer up and running by the end of 1941.1 It was a case of third time lucky: two earlier machines that he built in the unlikely setting of his parents’ living room did not quite work. Although Zuse was first past the post, few were aware of his achievement at the time, and Allied restrictions on electronic engineering in post-war Germany put paid to any further development. Neither Zuse nor his ideas played any significant role in the commercial development of the computer.2

The British story begins at Bletchley Park, Buckinghamshire, a top secret wartime establishment which was devoted to breaking the Wehrmacht’s codes. With a staff of brilliant mathematicians and engineers, it was almost inevitable that Bletchley Park should produce something revolutionary – and when it came it was the Colossus, an electronic3 computer for deciphering coded messages.4 The first Colossus was installed and working by December 1943, a full two years after Zuse obscurely made history in Berlin. (Some commentators quibble over whether the Colossus was a true computer.5 It was designed for just one specific task, cracking codes, and beyond that could do little or nothing. Enthusiasts at Bletchley had tried to coax the Colossus into doing long multiplication but, tantalizingly, this proved to be minutely beyond the frontier of the machine’s capabilities.6 Zuse’s computer, on the other hand, could be set up to perform any desired calculating task (provided, of course, the task was not so large as to exhaust the machine’s storage capacities). Zuse’s was a general-purpose computer, while the Colossus was a special-purpose computer.)

After the war the Bletchley group broke up and the action moved north to Manchester. It was here that F.C. Williams, Tom Kilburn and their team built the Manchester University Mark I general-purpose computer. The first program ran in June 1948.7 By April 1949 the small prototype had been expanded into a much larger machine.8 On the other side of the Atlantic progress ran just slightly slower. The first comparable American machine (called the BINAC) was tested in August 1949.9

A Manchester firm, Ferranti Limited, contracted to build a production version of the Manchester Mark I. These machines were the world’s first commercially manufactured electronic stored-program computers. In all, nine were sold. The first was installed in February 1951, just two months before the appearance of the world’s second commercially-available machine, the American UNIVAC.10

The Ferranti Mark I has the additional distinction of being the first non-human on the planet to write love letters.

Darling Sweetheart,

You are my avid fellow feeling.

My affection curiously clings

to your passionate wish. My

liking yearns to your heart. You

are my wistful sympathy: my

tender liking.

Yours beautifully,

Manchester University Computer11

The American side of the story begins with a machine known as the ENIAC. (The initials stand for Electronic Numerical Integrator and Computer. It is an exceptionless law that the acronyms used to name computers and computer programs never mean anything interesting.) The ENIAC was built at the University of Pennsylvania by John Mauchly and J. Presper Eckert, and first went into operation in November 1945 (nearly two years after the Colossus).

The ENIAC was a programmer’s nightmare – it had to be rewired by hand for each new task. This was a mammoth operation involving thousands of plugs and switches. It would generally take the operators two days to set the machine up for a fresh job.12 This primitive monkeying about with cables was all that the ENIAC had to offer by way of a programming facility. The Manchester Mark I was much less user-hostile. The Mark I was the world’s first fully electronic stored-program computer.13 Setting it up to perform a new job involved simply feeding in a punched paper tape. The machine would copy the programmer’s instructions from the tape and store them in its own memory. Eckert and Mauchly had realised as early as 1943 that it would be beneficial to be able to store the ENIAC’s operating instructions inside it, but the military wanted the ENIAC operational as soon as possible and so exploration of the stored-program concept had to wait.14

After the ENIAC, Eckert and Mauchly went on to build the BINAC, which was a stored-program machine. They then built the UNIVAC, the first market place offering of the nascent American computer industry. Thereafter the US quickly came to dominate the commercial production of computers. However, history has not been kind to Eckert and Mauchly. In 1972, a prolonged patents struggle between the Honeywell Corporation and Sperry-Univac ended with the judicial decision that ‘Eckert and Mauchly did not themselves first invent the automatic electronic digital computer, but instead derived that subject matter from one Dr John Vincent Atanasoff’.15 Atanasoff was an American college professor who very nearly succeeded in building a special-purpose electronic computer during the period 1936 to 1942.16 Unfortunately he never managed to get his machine properly operational, largely because of malfunctions in a cumbersome piece of equipment for storing information on punched cards. Mauchly paid a visit to Atanasoff’s laboratory in 1941, and in the opinion of the judge it was Atanasoff’s ground-breaking work that had led Mauchly and Eckert to the ENIAC. The judicial ruling notwithstanding, a dispute still rages over the extent to which Atanasoff’s ideas influenced Mauchly and Eckert.

This was not the first time that events had dealt Eckert and Mauchly a bitter blow. The months subsequent to the ENIAC becoming operational ought to have been their time of triumph, but in reality they found themselves upstaged by one of their colleagues – a certain John von Neumann. A gifted mathematician and logician, von Neumann has been described as an ‘unearthly genius’.17 Von Neumann heard of the ENIAC during a chance meeting on a railway station. He was working on the Manhattan Project at Los Alamos at the time, where he applied his great genius to such sinister problems as the calculation of the exact height at which an atomic bomb must explode in order to inflict maximum destruction. He was quick to see the implications of a machine like the ENIAC (‘shell, bomb, and rocket work … progress in the field of propellants and high explosives … aerodynamic and shock wave problems …’).18 He offered to act as consultant to the Eckert-Mauchly project, and rapidly established himself as national spokesman on the new computer technology. Von Neumann was a pillar of the scientific establishment and his patronage did wonders for the prestige of the ENIAC project, but as a result of his lectures and publications the computer came to be more closely associated with his own name than with the names of the people who had brought it into the world.19 Von Neumann had a saying that only a man born in Budapest can enter a revolving door behind you and come out in front.20 He himself, of course, was such a man, and it was Eckert and Mauchly who were left behind.

Von Neumann went on to make huge contributions to computer design. He enunciated the fundamental architectural principles to which subsequent generations of computers have adhered. For this reason, standard modern computers are known generically as von Neumann machines. We shall hear more of von Neumann in later chapters.

The name of John von Neumann is linked with the birth of artificial intelligence in another way. In 1956 the most influential of the early efforts at AI programming ran on the JOHNNIAC, a sophisticated computer designed by him.21 The program is known affectionately as the Logic Theorist. It was the work of Allen Newell, Cliff Shaw, and Herbert Simon, three of the great frontiersmen of AI.22

1.2 The Logic Theorist

Logic is a central preoccupation of AI research. The ability to reason logically is, obviously, a fundamental component of human intelligence; if computers are to achieve the status of artificial intelligences, they too must be given the ability to search logically for a problem’s solution. Newell, Shaw and Simon pioneered the study of how this might be done.

Their initial plan was to write a program that could work out its own proofs for theorems of school geometry. Success eluded them, though, largely because it proved unexpectedly difficult to represent geometrical diagrams in a form that the JOHNNIAC could handle.23 Undeterred, they turned to the related project of programming a computer to search in a logical way for – incestuously – proofs of the theorems of pure logic itself. This time they struck gold, and in the spring of 1956 the Logic Theorist proved its first theorem.

Pure logic consists of theorems such as these:

Given that either X or Y is true, and given further that Y is in fact false, it follows that X is true.

From the information that if X is true then Y is true, it follows that if Y is false, X is false too. (It’s true! Think about it!)
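Both schemas can be checked mechanically by running through every assignment of truth values – the same brute certainty that makes logic a natural target for a program. The following sketch is my own illustration (not the Logic Theorist’s method, which searched for proofs rather than truth tables); it verifies the two theorems exhaustively:

```python
from itertools import product

def implies(a, b):
    # Material conditional: "if a then b" is false only when a is true and b is false.
    return (not a) or b

for x, y in product([True, False], repeat=2):
    # Theorem 1 (disjunctive syllogism): given "X or Y" and "not Y", X follows.
    if (x or y) and not y:
        assert x
    # Theorem 2 (contraposition): "if X then Y" entails "if not Y then not X".
    assert implies(implies(x, y), implies(not y, not x))

print("Both theorems hold under all truth assignments.")
```

Four assignments suffice here; the Logic Theorist’s achievement was to find genuine proofs, not merely to confirm truth by enumeration.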

The central areas of pure logic were codified and systematized in the early years of this century by the philosophers Bertrand Russell and Alfred North Whitehead. The Logic Theorist was pitted against chapter 2 of their ground-breaking book Principia Mathematica, and the program succeeded in proving thirty-eight of the first fifty-two theorems presented there. Here, for the first time, was a program that did not simply crunch numbers, but teased out proofs of abstract statements. In a microscopic way, the Logic Theorist could reason.

In the case of one theorem, the proof devised by the Logic Theorist was rather more elegant than the one that Russell and Whitehead gave. As Shaw says, ‘That added a spark to the whole business’.24 The three men wrote a short article describing the new proof and, alongside their own names, they delightedly listed the Logic Theorist as a joint contributor. This was the first academic paper in history to be co-authored by a machine – but sadly the editor of The Journal of Symbolic Logic declined to publish it.25

The Logic Theorist is often described unequivocally as ‘the first’ AI program. However, this is a myth – a part of AI’s not always accurate folklore. The Logic Theorist was certainly the most fecund of the early attempts at AI, but it was predated by a number of programs designed to play chess and other board games (then, as now, board games were seen as an important test-bed for machine intelligence). Pre-eminent among these forerunners was a checkers (or draughts) program that incorporated a learning mechanism. The program rapidly picked up the skills of the game and was soon able to beat its creator, Arthur Samuel. (This triumph of program over programmer is discussed in section 2.6 of the next chapter.) Samuel had his program up and running in the early 1950s, and he demonstrated it on American TV several years before the Logic Theorist was written.26

We will look more closely at the work of Newell, Simon, and Shaw in chapters 2 and 5.

1.3 The Dartmouth Conference

The field of artificial intelligence was given its name by John McCarthy, one of the legendary heroes of the computer revolution. Among his famous exploits are the invention of LISP (the computer language used for the vast majority of AI programs) and the creation of the first timesharing system (the arrangement that enables a computer to attend concurrently to the demands of a large number of users). In 1956, McCarthy organised the conference that AI researchers regard as marking the birth of their subject.27 McCarthy wanted to bring together all the people he knew who had an interest in computer intelligence. Many had not met before, and he invited them all to a summer think-in, a two-month-long opportunity to stimulate each other’s ideas. He chose Dartmouth College, New Hampshire as the venue and entitled his conference The Dartmouth Summer Research Project on Artificial Intelligence. The last two words stuck.

In many ways the conference was not a success. Most of the people McCarthy invited found two solid months of brainstorming an unenticing prospect. Short visits were common, and people came and went haphazardly. ‘It really meant we couldn’t have regular meetings,’ McCarthy lamented. ‘It was a great disappointment to me … [N]or was there, as far as I could see, any real exchange of ideas.’28 Moreover, lots of feathers were well and truly ruffled when Newell and Simon – two virtually unknown characters who had been invited as something of an afterthought – arrived fresh from the JOHNNIAC with printouts of the first few trials of the Logic Theorist. Nobody enjoys being outshone and the people at McCarthy’s conference were no exception.

Despite all this, the Dartmouth Conference did have a catalytic effect. What had previously been a scattering of individual enthusiasts working in relative isolation was suddenly a scientific community with its own research goals and a strong sense of self-identity. AI had made its debut. In the years following Dartmouth, artificial intelligence laboratories were established at a number of universities – notably at Carnegie Mellon, under Newell and Simon; at Stanford, under McCarthy; at MIT, under Marvin Minsky (a co-organiser of the Dartmouth Conference); and at Edinburgh, under Donald Michie (a leading figure of the British AI scene).

1.4 Alan Turing and the philosophy of AI

Almost paradoxically, the philosophy of AI made its debut several years before AI itself. The founding father of this branch of philosophy was Alan Turing, a British logician and mathematician. Turing is one of the most original minds this century has produced. Six years before the Logic Theorist first ran on the JOHNNIAC (and while he was supervising the programming of the Manchester Mark I) he published an article entitled ‘Computing Machinery and Intelligence’ in the august philosophical journal Mind.29 It began ‘I propose to consider the question “Can machines think?” ’. Turing subjected the question to a careful philosophical discussion, during which he catalogued and refuted nine objections to the claim that machinery can think. In a forthright statement of his own position he declared that by the end of the century ‘the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted’. This article inaugurated the philosophy of AI. Turing’s influential views will be examined in detail in the following chapters.30

When Turing wrote his controversial article there were just four electronic computers in existence (the Manchester Mark I and the Cambridge EDSAC in Britain, the ENIAC and the BINAC in America). The press had already dubbed them ‘electronic brains’ and the idea that they might be the prototypes of thinking machines had begun to take root in the public imagination. In learned circles, however, the tendency was to regard this idea as empty sensationalism – a bit of a joke. ‘When we hear it said that wireless valves think, we may despair of language’ remarked one of Turing’s colleagues, Sir Geoffrey Jefferson, in a public address delivered a few months before Turing wrote his article (Jefferson was Professor of Neurosurgery at Manchester).31 For Jefferson, and many others, the issue of whether it is possible for a computer to think merited nothing better than an offhand and ill-thought-out dismissal. Turing’s careful article set new standards for the debate.

It was typical of Turing to be writing on the philosophy of AI so far in advance of the Dartmouth Conference – he often was several years ahead of everybody else. In 1936, five years before Eckert and Mauchly took up their soldering irons, he wrote (almost by accident) a definitive paper on the logical foundations of computer design.32 His real concern in the article was with an abstract problem in mathematical logic, and in the course of solving it he managed to invent, in concept, the stored-program general-purpose computer. This article remains one of the supreme classics of computing theory. The abstract computers that Turing invented are nowadays known simply as Turing machines. They are described in chapter 6.

During the war Turing worked as a codebreaker at Bletchley Park, and the designers of the Colossus were undoubtedly familiar with the Turing machine concept.33 Curiously, though, Turing himself took little or no part in building the Colossus. He declined an invitation to join the project, and was in fact away on a visit to the United States during the period when the crucial technological advances were made.34 Had Turing lent his vision to the project, the first general-purpose electronic computer might well have been built at Bletchley. (Turing did play a key role in the actual codebreaking.35 To quote one of the Bletchley team: ‘I won’t say that what Turing did made us win the war, but I daresay we might have lost it without him.’36)

Turing’s accidental discovery of an imaginary computer eventually came to dominate his professional life, for during the post-war years he became passionately involved in the development of real machines. His wartime experience with electronics had shown him that the paper ‘machine’ of his 1936 article could be turned into a reality. In 1945 he joined the National Physical Laboratory and drew up designs for an electronic general-purpose computer called the ACE.37 Characteristically, his machine was too fast and too complex to be built at the time. It was ten years before Turing’s ACE became commercially available, and even then in a form that was but a shadow of his original ambitious design. Turing himself quit the project in disgust in 1948 and went to Manchester to pioneer the science of computer programming.38

In June 1954 Alan Turing killed himself by eating an apple drenched with cyanide. Some time previously he had been convicted of homosexuality in a criminal court and sentenced to a period of hormone ‘treatment’ – a degrading maltreatment intended to destroy his libido. When Turing died, computer science lost one of its seminal thinkers, Britain lost a scientist to whom it is considerably indebted, and artificial intelligence lost its first major prophet.

2 Some dazzling exhibits

Artificial intelligence has come a long way since its inception in 1956. This chapter takes you on a tour around the AI laboratories of North America and introduces you to some of their spectacular creations. Along the way I point out features of philosophical interest and in various ways set the scene for the more probing discussions that unfold in subsequent chapters. I also touch on some ethical issues. One aim of the tour is to coax readers of a sceptical bent into agreeing that the idea of a ‘thinking computer’ deserves to be taken seriously. This is not an idea that can be dismissed out of hand; the next few pages may read like science fiction, but the programs I describe are in fact quite real.

2.1 Inside the machine

Before we start – what is a computer program? In case you’ve never seen one of these mysterious objects, here is a modest specimen. It is written in the programming language BASIC. (The numbers on the left are line numbers. Programmers number the lines of programs for ease of reference. The numbers go up in steps of 100 so that there is no need to renumber the entire program if extra lines are inserted at a later stage.)
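The specimen itself appears in the full edition. Purely by way of illustration (this is not Copeland's original program), a modest BASIC specimen of the kind described might look like this, with line numbers rising in steps of 100:

```basic
100 REM Convert a temperature from Fahrenheit to Celsius
200 PRINT "TEMPERATURE IN FAHRENHEIT";
300 INPUT F
400 LET C = (F - 32) * 5 / 9
500 PRINT "THAT IS "; C; " DEGREES CELSIUS"
600 END
```

Because the numbering jumps by 100 at each line, a programmer who later wanted, say, an error check between lines 300 and 400 could insert it as line 350 without renumbering anything else.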

Continue reading in the full edition!
