Making Sense of AI

Anthony Elliott
Description

Industrial robots, self-driving cars, customer-service chatbots and Google's algorithmic predictions have brought the topic of artificial intelligence into public debate. Why is AI the source of such intense controversy and what are its economic, political, social and cultural consequences? Tracing the changing fortunes of artificial intelligence, Elliott develops a systematic account of how automated intelligent machines impact different spheres and aspects of public and private life. Among the issues discussed are the automation of workforces, surveillance capitalism, warfare and lethal autonomous weapons, the spread of racist robots and the automation of social inequalities. Elliott also considers the decisive role of AI in confronting global risks and social futures, including global pandemics such as COVID-19, and how smart algorithms are impacting the search for energy security and combating climate change. Making Sense of AI provides a judiciously comprehensive account of artificial intelligence for those with little or no previous knowledge of the topic. It will be an invaluable book both for students in the social sciences and humanities and for general readers.


Page count: 413

Publication year: 2021




Table of Contents

Cover

Title Page

Copyright Page

Preface

1 The Origins of Artificial Intelligence

What is Artificial Intelligence?

Frontiers of AI: Global Transformations, Everyday Life

Complex Systems, Intelligent Automation and Surveillance

Notes

2 Making Sense of AI

Two Theoretical Perspectives: Sceptics and Transformationalists

Sceptics

Transformationalists

The Perspectives Compared

Integrating the Insights

Notes

3 Global Innovation and National Strategies

World Leaders: USA, China and Globalization

The USA

China’s AI Ambitions

The World Leaders Compared

The EU and European Developments

Finland

Poland

The UK

Outliers: UAE, Japan and Australia

Notes

4 The Institutional Dimensions of AI

Complex Adaptive Systems and AI

The Increasing Scale of AI

Path-Dependent Connections: New and Old Technologies

The Globalization of AI Technologies and Industries

The Diffusion of AI in Institutional and Everyday Life

AI and Complexity

The Penetration of AI into Lifestyle Change and the Self

AI, Surveillance and the Transformation of Power

Human–Machine Interfaces and Coactive Interactions

Complex Systems

Human–Machine Interfaces

Interfaces and the Changing Location of Social Actors

Notes

5 Automation and the Fate of Employment

Robots Replacing Jobs: AI, Automation, Employment

Automated Professions, Robot Managers

Globalization, Globots and Remote Intelligence

Empowerment: Education, Reskilling, Retraining

Notes

6 Social Inequalities Since AI

Automating Social Inequalities

Ghosts in the Machine: Racist Robots

AI and Gender Troubles

Digital Inequalities: Chatbots and Social Exclusion

Notes

7 Algorithmic Surveillance

The Digital Revolution and Panoptic Surveillance

After Super-Panopticon: Surveillance Capitalism

Military Power: Drones, Killer Robots and Lethal Automated Weapons

Notes

8 The Futures of AI

The Future Now: COVID-19 and Global AI

Automated Societies: Networked Artificial Life

The Year 2045: The Technological Singularity

AI Climate Futures

Algorithmic Power and Trust

Notes

Further Reading

Index

End User License Agreement



Making Sense of AI

Our Algorithmic World

Anthony Elliott

polity

Copyright Page

Copyright © Anthony Elliott 2022

The right of Anthony Elliott to be identified as Author of this Work has been asserted in accordance with the UK Copyright, Designs and Patents Act 1988.

First published in 2022 by Polity Press

Polity Press

65 Bridge Street

Cambridge CB2 1UR, UK

Polity Press

101 Station Landing

Suite 300

Medford, MA 02155, USA

All rights reserved. Except for the quotation of short passages for the purpose of criticism and review, no part of this publication may be reproduced, stored in a retrieval system or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the publisher.

ISBN-13: 978-1-5095-4889-7

ISBN-13: 978-1-5095-4890-3 (pb)

A catalogue record for this book is available from the British Library.

Library of Congress Cataloging-in-Publication Data

Names: Elliott, Anthony, 1964- author.

Title: Making sense of AI : our algorithmic world / Anthony Elliott.

Description: Medford, MA : Polity Press, 2021. | Includes bibliographical references and index. | Summary: "An expert and essential introduction to AI in the modern world"-- Provided by publisher.

Identifiers: LCCN 2021014427 (print) | LCCN 2021014428 (ebook) | ISBN 9781509548897 (hardback) | ISBN 9781509548903 (paperback) | ISBN 9781509548910 (epub) | ISBN 9781509550845 (pdf)

Subjects: LCSH: Artificial intelligence--Social aspects. | Change. | Civilization, Modern--21st century.

Classification: LCC Q335 .E375 2021 (print) | LCC Q335 (ebook) | DDC 006.3--dc23

LC record available at https://lccn.loc.gov/2021014427

LC ebook record available at https://lccn.loc.gov/2021014428

Typeset by Fakenham Prepress Solutions, Fakenham, Norfolk NR21 8NL

The publisher has used its best endeavours to ensure that the URLs for external websites referred to in this book are correct and active at the time of going to press. However, the publisher has no responsibility for the websites and can make no guarantee that a site will remain live or that the content is or will remain appropriate.

Every effort has been made to trace all copyright holders, but if any have been overlooked the publisher will be pleased to include any necessary credits in any subsequent reprint or edition.

For further information on Polity, visit our website: politybooks.com

Preface

This book develops central debates and issues first set out in my previous work, The Culture of AI (2019). That book documented the spread of the AI revolution as consisting of massive changes in the here-and-now of everyday life. Building upon those ideas, I focus here on how this transformation also involves the systematic phenomenon of advanced automation across modern institutions, which is profoundly impacting contemporary societies in many significant ways. Drawing technology, economy and society together in a reflective configuration, I seek throughout this book to develop an analysis of the complex AI systems which ‘rewrite’ people’s lives. Both the complex systems associated with AI and the distinctive ‘human–machine interfaces’ it produces, I argue, bring into existence automated intelligent agents powerfully transforming both public and private life.

Some research reported in this book was supported by the Australian Research Council grants ‘Industry 4.0 Ecosystems: A Comparative Analysis of Work–Life Transformation’ (DP180101816) and ‘Enhanced Humans, Robotics and the Future of Work’ (DP160100979). Other research not explicitly detailed, but upon which I draw implicitly in the argumentation of the book, includes my recent European Commission Erasmus+ grants ‘Discourses on European Union I4.0 Innovation’ (611183-EPP-1-2019-1-AU-EPPJMO-PROJECT) and Jean Monnet Network ‘Cooperative, Connected and Automated Mobility’ (599662-EPP-1-2018-1-AU-EPPJMO-NETWORK). Many thanks to the funding agencies which have supported this research. Huge thanks to my wonderful colleagues at the Jean Monnet Centre of Excellence at the University of South Australia, especially Louis Everuss and Eric Hsu. Ross Boyd assisted with the preparation of the manuscript, and was marvellously helpful in making many suggestions that I was able to directly incorporate into the text. At Keio University in Japan, where I regularly visit as part of the Super-Global Program, my thanks as ever to Atsushi Sawai. At University College Dublin, where I also regularly visit, my thanks to Iarfhlaith Watson and Patricia Maguire.

I am very grateful for discussions on various themes with many colleagues who have helped me, directly or indirectly, in the development of my thinking on AI. These include Tony Giddens, Nigel Thrift, Helga Nowotny, Massimo Durante, Vincent Müller, Toby Walsh, Masataka Katagiri, Ralf Blomqvist, Rina Yamamoto, Takeshi Deguchi, Ingrid Biese, Bo-Magnus Salenius, Hideki Endo, Robert J. Holton, Thomas Birtchnell, Charles Lemert, Peter Beilharz, Sven Kesselring, John Cash, Nick Stevenson, Anthony Moran, Caoimhe Elliott, Oscar Elliott, Mike Innes, Kriss McKie, Fiore Inglese, Niamh Elliott, Oliver Toth, Nigel Relph and Gerhard Boomgaarden. John Thompson, my editor at Polity, offered substantive comments that helped transform the book, and it is wonderful to be working with him again. Many thanks also to Julia Davies at Polity. I should like to thank Fiona Sewell for her careful copy-editing. Finally, Nicola Geraghty heard everything in this book first and half-raw, and her support as always made all the difference.

Anthony Elliott

Adelaide, 2021

1 The Origins of Artificial Intelligence

In this chapter, I shall not attempt to develop anything like a comprehensive account of the development or current state of artificial intelligence (AI). Since I want to situate my discussion in this chapter and the next in the context of changing relations between society and technology, I will concentrate mainly, although not wholly, on tracing AI through a range of common uses, divergent histories, economic interests and power structures. AI, at once a specialist field and global industry, is often presented as immutable or inevitable. But AI is plural and pluralizing, woven of a whole tissue of different cultural conversations, social practices and technological assemblages. To say this does not mean ignoring the technical knowledge which underpins AI, or placing the whole weight of emphasis upon the social, cultural and political dimensions of the digital revolution. But it is vital, I shall argue, to see that other forms of power, different stocks of knowledge and other ideologies lurk inside the discourse of AI – all of which have unintended consequences and impact upon social development in the current period. In the opening section of the chapter, I outline some general notions connected with the development of AI, which will help construct key underlying themes of this book as a whole. My focus is on unravelling the many different definitions of AI. In the second section, I situate AI in the broad context of both globalization and everyday life. Notwithstanding the dominance of technical thinking which privileges a ‘black box model’ of inputs and outputs, my argument is that the rise of automated intelligent machines should be studied as expressing or incorporating forms of sociality, stocks of cultural knowledge, and unequal power relations that provide a focal point for the investigation of AI.

What is Artificial Intelligence?

In the case of artificial intelligence, it is widely, though erroneously, assumed that its history can and ought to be mapped, measured and retold by recourse and recourse only to AI studies – and that if any of this history falls outside of the purview of the disciplines of engineering, computer science or mathematics, it might justifiably be ignored or assigned perhaps only a footnote within the canonical bent of AI studies. Such an approach, were it attempted here, would aim at reproducing the rather narrow range of interests of much in the AI field – for example, definitional problems or squabbles concerning the ‘facts of the technology’.1 What, precisely, is machine learning? How did machine learning arise? What are artificial neural networks? What are the key historical milestones in AI? What are the interconnections between AI, robotics, computer vision and speech recognition? What is natural language processing? Such definitional matters and historical facts about artificial intelligence have been admirably well rehearsed by properly schooled computer scientists and experienced engineers the world over, and detailed discussions are available to the reader elsewhere.2

As signalled in its title, this book is a study in making sense of AI, not of AI sense-making. This is not about the technical dimensions or scientific innovations of AI, but about AI in its broader social, cultural, economic, environmental and political dimensions. I am seeking to do something which no other author has attempted. While the existing literature tends to be focused on isolated scientific pioneers in the retelling of the history of AI, the present chapter concerns itself more with cultural shifts and conceptual currents. Something of the same ambition permeates the book as a whole. While much of the existing literature tends to concentrate on specific domains in relation to issues such as work and employment, racism and sexism, or surveillance and ethics, I have sought to register something of the wealth of intricate interconnections between such domains – all the way from lifestyle change and social inequalities to warfare and global pandemics such as COVID-19. In fact, I spend the bulk of my time in this book examining these multidimensional interrelationships to make up for the fact that such interconnections are not usually discussed at all in the field of AI studies. It is, in particular, the close affinity and interaction between AI technologies and complex digital systems, phenomena that in our own time are growing in impact and significance as well as in the opportunities and risks they portend, that I approach – carefully and systematically – in the chapters that follow throughout this book. Finally, while the existing literature tends to be focused on the tech sector in one country or AI industries in specific regions, I have sought to develop a global perspective and offer comparative insights. A general social theory of the interconnections between AI, complex digital systems and the coactive interactions of human–machine interfaces remains yet to be written. But in developing the synthetic approach I outline here, my hope is that this book contributes to making sense of the increasingly diverse blend of humans and machines in the field of automated intelligent agents, and to frame all this theoretically and sociologically with reflections on the dynamics of AI in general and its place in social life.

There is more than one way in which the story of AI can be told. The term ‘artificial intelligence’, as we will examine in this chapter, consists of many different conceptual strands, divergent histories and competing economic interests. One way to situate this wealth of meaning is to return to 1956, the year the term ‘artificial intelligence’ was coined. This occurred at an academic event in the USA, the Dartmouth Summer Research Project, where researchers proposed ‘to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves’.3 The Dartmouth Conference was led by the American mathematician John McCarthy, along with Marvin Minsky of Harvard, Claude Shannon of Bell Telephone Laboratories and Nathaniel Rochester of IBM. Why the conference organizers chose to put the adjective ‘artificial’ in front of ‘intelligence’ is not evident from the proposal for funding to the Rockefeller Foundation. What is clear from this famous six-week event at Dartmouth, however, is that AI was conceived as encompassing a remarkably broad range of topics – from the processing of language by computers to the simulation of human intelligence through mathematics. Simulation – a kind of copying of the natural, transferred to the realm of the artificial – was what mattered. Or, at least, this is what McCarthy and his colleagues believed, designating AI as the field in which to try to achieve the simulation of advanced human cognitive performance in particular, and the replication of the higher functions of the human brain in general.

There has been a great deal of ink spilt on seeking to reconstruct what the Dartmouth Conference organizers were hoping to accomplish, but what I wish to emphasize here is the astounding inventiveness of McCarthy and his colleagues, especially their focus on squeezing then untrained and untested variants of scientific strategies and intellectual hunches anew into the terrain of intelligence designated as artificial. Every culture lives by the creation and propagation of new meanings, and it is perhaps not surprising – at least from a sociological standpoint – that the Dartmouth organizers should have favoured the term ‘artificial’ at a time in which American society was held in thrall to all things new and shiny. The era of 1950s America was of the ‘new is better’, manufactured as opposed to natural, shiny-obsessed sort. It was arguably the dawning of ‘the artificial era’: the epoch of technological conquest and ever more sophisticated machines, designated for overcoming problems of nature. Construction of various categories and objects of the artificial was among the most acute cultural obsessions. Nature was the obvious outcast. Nature, as a phenomenon external to society, had in a certain sense come to an ‘end’ – the result of the domination of culture over nature. And, thanks to the dream of infinity of experiences to be delivered by artificial intelligence, human nature was not something just to be discarded; its augmentation through technology would be an advance, a shift to the next frontier. This was the social and historical context in which AI was ‘officially’ launched at Dartmouth. A world brimming with hope and optimism, with socially regulated redistributions away from all things natural and towards the artificial. In a curious twist, however, jump forward some sixty or seventy years and it is arguably the case that, in today’s world, the term ‘artificial intelligence’ might not have been selected at all. The terrain of the natural, the organic, the innate and the indigenous is much more ubiquitous and relentlessly advanced as a vital resource for cultural life today, and indeed things ‘artificial’ are often viewed with suspicion. The construction of the ‘artificial’ is no longer the paramount measure of socially conditioned approval and success.

Where does all of this leave AI? The field has advanced rapidly since the 1950s, but it is salutary to reflect on the recent intellectual history of artificial intelligence because that very history suggests it is not advisable to try to compress its wealth of meanings into a general definition. AI is not a monolithic theory. To demonstrate this, let’s consider some definitions of AI – selected more or less at random – currently in circulation:

the creation of machines or computer programs capable of activity that would be called intelligent if exhibited by human beings;

a complex combination of accelerating improvements in computer technology, robotics, machine learning and big data to generate autonomous systems that rival or exceed human capabilities;

technologically driven forms of thought that make generalizations in a timely fashion based on limited data;

the project of automated production of meanings, signs and values in socio-technical life, such as the ability to reason, generalize, or learn from past experience;

the study and design of ‘intelligent agents’: any machine that perceives its environment, takes action that maximizes its goal, and optimizes learning and pattern recognition;

the capability of machines and automated systems to imitate intelligent human behaviour;

the mimicking of biological intelligence to facilitate the software application or intelligent machine to act with varying degrees of autonomy.

There are several points worth highlighting about this list. First, some of these formulations define artificial intelligence in relationship to human intelligence, but it must be noted that there is no single agreed definition, much less an adequate measurement, of human intelligence. AI technologies can already process our email for spam, recommend what films we might like to watch and scan crowds for particular faces, but these accomplishments do not signify comparison with human capabilities. It might, of course, be possible to make comparisons of AI with rudimentary numeric measurements of human intelligence such as IQ, but it is surely not hard to show what is wrong with such a case. There is a difference between the numeric measurement of intelligence and native human intelligence. Cognitive processes of reasoning may indeed provide a yardstick for assessing progress in AI, but there are also other forms of intelligence. How people intuit each other’s emotions, how people live with uncertainty and ambivalence, or how people gracefully fail others and themselves in the wider world: these are all indicators of intelligence not easily captured by this list of definitions.

Second, we may note that some of these formulations of AI seem to raise more questions than they can reasonably hope to answer. On several of these definitions, there is a direct equation between machine intelligence and human intelligence, but it is not clear whether this addresses only instrumental forms of (mathematical) reasoning or emotional intelligence. What of affect, passion and desire? Is intelligence the same as consciousness? Can non-human objects have intelligence? What happens to the body in equating machine and human intelligence? The human body is arguably the most palpable way in which we experience the world; it is the flesh and blood of human intelligence. The same is not true of machines with faces, and it is fair to say that all of the formulations on this list displace the complexity of the human body. These definitions are, in short, remorselessly abstract, indifferent to different forms of intelligence as well as detached from the whole human business of emotion, affect and interpersonal bonds.

Third, we can note that some of these formulations are sanguine, others ambiguously so, and some altogether over-estimate the capabilities of AI today and in the near future. An interesting feature of many of these formulations is that they tend to flatten AI into a monolithic entity. Today, AI can be a virtual personal assistant, a self-driving car, a robot, a smart lift or a drone. But it is not obvious that many of these formulations can easily cope with these gradations or differentiations of machine intelligence. A smart elevator using AI to manage the flow of demand in an office building based on data collected from daily usage, for example, is essentially goal-orientated and single in technological objective. It is an example of weak or narrow AI, where machine intelligence can only do what it is programmed to do, based on a very limited range of contexts and parameters. Examples of narrow AI range from Google Search to facial recognition software to Apple’s Siri, and these are all quite basic kinds of automated machine intelligence. They have been programmed to perform a single task well yet cannot switch to perform other types of tasks – or, at least, not without considerable further labour performed by engineers and computer scientists. On the other hand, there are more sophisticated forms of AI. Deep AI, or what is termed artificial general intelligence, is an advanced form of self-learning machine intelligence seeking to replicate human intelligence. Unlike narrow AI technologies, deep AI combines insights from different fields of activity, performs multiple tasks of intelligence and displays considerable flexibility and dexterity. Deep AI entails the harnessing of massive computational processing power – for instance, the Summit supercomputer, which, in performing 200 million billion calculations per second, is among the fastest computers in the world – to machine learning algorithms. Arguably one of the best operational examples of deep AI is IBM’s Watson, a system which combines supercomputing with deep learning algorithms: such algorithms are designed to optimize their performance against specified data-processing criteria (such as speech or facial recognition, or medical diagnosis) through self-adjusting the thresholds of what is relevant or irrelevant in the data under analysis. Another AI variant is that of superintelligence, which doesn’t exist yet, but is forecast by many specialists to involve a fully fledged machine intelligence which outstrips human intelligence in every domain, including both cognitive reasoning and social skills. Superintelligence has long been the preserve of Hollywood science fiction, and the personalized AI system of Samantha in the film Her is a signal example. (We will turn to consider technological advances related to superintelligence in more detail in Chapter 8.)

One of the problems of current debate is that there is a lot of hype, a lot of misconceptions and too many overblown claims about AI. One way of reading AI against the grain is to avoid the specialist definitions circulating in the field and talk about resistances, disorders and the historical past instead. It is always useful to get a sense of how a specialist discourse is approached by those outside of its representative institutions, and similarly it helps to look at the prehistory of an emergent technology. This line between the ‘official’ and the ‘unofficial’ version of AI is not always easy to cross, but I want to focus briefly on considering aspects of the prehistory of AI – in order to better grasp the constitution of the whole discourse of AI. That is to say, I want to focus on the function of ideas within and around AI – including the aspirations, objectives and dreams of technologists – in order to better situate today’s technological realities as well as its manifold distortions. In other words, my aim here is to return AI to its own displaced history.

An objection to the glossy image presented by various tech companies that AI has only recently arrived, and arrived fully formed, is that machine intelligence and mechanical automatons are, in fact, historical through and through. Those advocating the technological hype of our times may not wish to be embroiled in trawling through the histories and counter-histories of various technologies, but expanding the historical boundaries of the discourse of AI by bringing back into consideration those developments banished to the background and left out of the official narrative is essential to combating the idea that AI is a straightforward, linear story which runs roughly from the 1956 Dartmouth Conference to the present day. The developments that unite an otherwise disparate and apparently unconnected series of topics in the emergence of AI require us to go back to the eighth century BC, where automatons and robots crop up in Greek myths such as that of Talos of Crete.4 Or you have to go back to the medieval Islamic world of Mesopotamia, where the Muslim polymath Ismail Ibn al-Razzaz al-Jazari invented automatic gates and automated doors driven by hydropower, whilst simultaneously penning his programmatic text, The Book of Knowledge of Ingenious Mechanical Devices.5 An alternative historical starting point might be the ancient philosophy of Aristotle, who wrote of artificial slaves in his foundational Politics.6

Fast forward to the early modern period in Europe, where the landscape of automatons is still largely about dreaming but also where conflicts between human and machine intelligence become amenable to, and await, resolution. Early modern European thought in cooperation with scientific reason found its way towards such conflict resolution under the twin banners of calculation and mechanics. The French philosopher, mathematician and scientist René Descartes compared the bodies of animals to complex machines. In the political thought of Thomas Hobbes, a mechanical theory of cognition stood for the human territory over which reason extended. In the practice of French mathematician and inventor Blaise Pascal, arithmetical calculations stood for the feasibility and ultimate triumph of the theory of probability – as this prodigious physicist and Catholic theologian worked obsessively to build mechanical prototypes and calculating machines. Fast forward again some centuries and we find writers and artists alike viewing a society leaning solely on human attributes or natural impulses with considerable suspicion. Throughout the modern era, from Mary Shelley’s Frankenstein to Karel Čapek’s Rossum’s Universal Robots, reality was to be shaped, thought about and interpreted with reference to automatons, cyborgs and androids. At the dawn of the twentieth century, the dream of automated machines was brought finally and firmly inside the territory where empirical testing is done, most notably with a tide-predicting mechanical computer – commonly known as Old Brass Brains – developed by E. G. Fisher and Rollin Harris.7 The world had, at long last, shifted away from the ‘natural order of things’ towards something altogether more magical: the ‘artificial order of mechanical brains’.

For most people today, AI is equated with Google, Amazon or Uber, not ancient philosophy or mechanical brains. However, there remain earlier, historical prefigurations of AI which still resonate with our current images and cultural conversations about automated intelligent machines. One such pivot point comes from the UK in the early 1950s, when the English polymath Alan Turing – sometimes labelled the grandfather of AI – raised the key question ‘can machines think?’8 Turing, who had been involved as a mathematician in important enemy code breaking during World War II, raised the prospect that automated machines represent a continuation of thinking by other means. Thinking in the hands of Turing becomes a kind of conversation, a question-and-answer session between human and machine. Turing’s theory of machines thinking was based on a British cocktail party game, known as ‘the imitation game’, in which a person was sent into another room of the house and guests had to try to guess their assumed identity. In Turing’s reworking of this game, a judge would sit on one side of a wall and, on the other side of the wall, there would be a human and a computer. In this game, the judge would chat to mysterious interlocutors on the other side of the wall, and the aim was to try to trick the judge into thinking that the answers coming from the computational agent were, in fact, coming from the flesh-and-blood agent. This experiment became known as the Turing Test.

There has been, then, a wide and widening gamut of automated technological advances, symptomatic of the shift from thinking machines that may equal the intelligence of humans to thinking machines that may exceed the intelligence of humans, but all of which have been and remain highly contested. Whether automated intelligent machines are likely to surpass human intelligence not only in practical applications but in a more general sense figures prominently among the major issues of our times and our lives in these times. Notwithstanding the notoriously overoptimistic claims of various AI researchers and futurists, there has been a pervasive sense of crisis, shared by scientists, philosophers and theorists of technology alike, in greater or smaller measure, that the feverish ambition to establish whether AI could ever really be smarter than humans has resulted in a new structure of feeling where humanity is ‘living at the crossroads’. There have been, it should be noted, some very vocal and often devastating critiques of AI developed in this connection. The philosopher Hubert Dreyfus was an important early critic. In his book What Computers Can’t Do, Dreyfus argued that the equation of machine with human intelligence in AI was fundamentally flawed. To the question of whether we might eventually regard computers as ‘more intelligent’ than humans, Dreyfus answered that the structure of the human mind (both its conscious and unconscious architectures) could not be reduced to the mathematical precepts which guide AI. Computers, as Dreyfus put it, altogether lack the human ability to understand context or grasp situated meaning. Essentially reliant on a simple set of mathematical rules, AI is unable, Dreyfus argued, to grasp the ‘systems of reference’ of which it is a part.

Another, arguably more damaging, critique of the equation of human and machine intelligence was developed by the American philosopher John Searle. Searle was strongly influenced by the philosophical departures of Ludwig Wittgenstein, especially Wittgenstein’s demonstration that what gives ordinary language its precision is its use in context. When people meet and mingle, they draw on contextual settings to define the nature of what is said. This effortful contextual work of putting meaning together, practised and rehearsed daily by humans, is not something for which AI can substitute, however. To demonstrate this, Searle devised what he famously termed the ‘Chinese Room Argument’. As he explains:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.9

The upshot of Searle’s arguments is clear. Machine and human intelligence might mirror each other in chiasmic juxtaposition, but AI is not able to capture the human ability of constantly connecting words, phrases and talk within practical contexts of action. Meaning and reference are, in short, not reducible to a form of information processing. It was Wittgenstein who pointed out that a dog may know its name, but not in the same way that its master does. Searle demonstrates this is similarly true for computers. It is this human ability to understand context, situation and purpose within modalities of day-to-day experience that Searle, powerfully and provocatively, asserts in the face of comparisons between human and machine intelligence.

Frontiers of AI: Global Transformations, Everyday Life

Another way of reading AI against the grain – contesting the ‘official’ narrative of artificial intelligence – is to rethink its relation to economy, society and unequal relations of power. These are all key domains in which the discourse of AI can and must be situated. I have argued in the preceding section that what the idea of an intelligence rendered ‘artificial’ signifies is, among other things, the transformation and transcendence of human capabilities from natural, inborn and inherited determinations of the biological and biographical realms. AI consists in the project of transforming human knowledge into machine intelligence – and charging social actors with the task of integrating, incorporating and invoking such newly minted artificial automations into the living of everyday life. Such manufacturing of automated intelligent machines, however, works not only upon an internal register – the field of individual life, individualization and the development of human intelligence – but also outwards – across societies, economies and power politics. AI-powered software programs are today downloaded to multiple locations across the planet – at once stored, operationalized and modified. Contrasting the human brain, limited by cranial volume and metabolism, with the extraterritorial reach of AI, Susan Schneider argues that automated machine intelligence ‘could extend its reach across the Internet and even set up a galaxy-wide “computronium” – a massive supercomputer that utilizes all the matter within a galaxy for its computations. In the long run, there is simply no contest. AI will be far more capable and durable than we are.’10

So, AI is also all about galaxy-wide movement and especially the automated global movement of software, symbols, simulations, ideas, information and intelligent agents. AI-powered information societies involve a relentless automation of economic, social and political life. This point is an important one to register, as many commentators invoke the spectre of globalization to capture the economic transformations of manufacturing, industry and enterprise as a consequence of AI technology and its deployment in offshore business models. Certainly, a great deal of academic and policy thinking has emphasized how the global digital economy has become ‘borderless’, with many frontiers now automated and regulated through the operations of intelligent machines. The rise of AI is intricately interwoven with globalization, it is often said. This is surely the case, though it is vital to see that globalization links together people, intelligent machines and automation in complex, contradictory and uneven ways. The claim that AI is both condition and consequence of globalization therefore has to be properly contextualized.

Many studies have cast globalization solely as an economic phenomenon. From this angle, globalization consists of the ever-increasing integration of economic activity and financial markets across borders. Some analyses have emphasized that globalization is the driver of economic neoliberalism, privatization, deregulation, speculative finance and the crystallization of multinational corporations operating across the borderless flows of the global economy.11 It is obvious that such an image of globalization is well geared to rendering AI as simply an upshot of the corporate activities of IBM, Amazon, Google, Microsoft and Alibaba. Other writers have argued that globalization is synonymous with Americanization. AI here is viewed as a set of effects brought about by powerful actors, academic research institutes and industry labs, administrative entities and political forces promoting the Americanization of the world. Much AI research, as we will examine throughout this book, has indeed been funded by the American government, especially the US Department of Defense. Consider, for example, the extensive role of the Defense Advanced Research Projects Agency (DARPA), which during the 1960s poured millions of dollars into the establishment of AI labs at MIT, Carnegie Mellon University and Stanford University along with commercial AI laboratories including SRI International. As I discuss in some detail in chapter 3, the influence of the US Department of Defense upon the digital revolution was hugely consequential and brought in its train a global extension of emergent markets for artificial intelligence.

And so we come back to the big issue of who exactly commissioned the major AI projects that were launched in the 1950s and 1960s. Who was paying for the key AI research breakthroughs? What forms of power were these early commissions advancing and reinforcing? Obviously there were many divergent interests, although the history of the funding cycles around AI clearly suggests that nation-states (especially the United States and, to a much more limited extent, the United Kingdom) along with the biggest multinational companies were the principal actors. Beyond nation-states and corporations, however, another dimension of AI concerns the world military order. Understanding the connections between the techno-industrialization of war, automated techniques of military organization and the flow of AI technologies is very important to grasping the globalizing of AI. I seek to highlight these issues in terms of an institutional account of what I shall call algorithmic modernity, developed with reference to the operations of advanced capitalism, lifestyle change, social inequalities and surveillance, throughout the book as a whole. For the moment, however, it is notable that many of the early successes, as well as some fairly dramatic failures, in AI can be traced to overlaps between military power and the development of automated intelligent machines.

Some argue, rightly in my view, that the rise of AI sprang directly from challenges that the West faced in relation to Soviet communism and the outcomes of the Cold War. Certainly, the general imperative of establishing military dominance in world politics meant that, during the Cold War, the US military sought to automate the translation of documents from Russian and other languages into English. This situation led to considerable state investment in machine translation research. During this initial period of increased defence funding in AI research, a cluster of economic, political and military changes occurred around the late 1950s and early 1960s that were of essential significance to the building of better intelligent machines and advanced AI systems. First, Soviet communism delivered a major shock to the American psyche with the launch of Sputnik, the first artificial earth satellite, in 1957. Beyond this dramatic shock, further reverberations were felt throughout the West in the same year when the Soviet Union launched Sputnik 2, a spacecraft that put Laika the dog into orbit. The idea of a space future successfully colonized by Soviet-bloc countries spurred the USA into dramatically increasing spending – military and otherwise – on science, technology and research. Second, new research funding in AI – from machine translation to speech-recognition projects – was launched in America by agencies including the CIA, the National Science Foundation and the Department of Defense. This increasingly defence-driven system of research innovation resulted in a marked acceleration of advances in automation as well as other breakthroughs in machine intelligence.

Third, during this period of state-led AI research investment in the 1960s, various socio-technical and cultural shifts took place as regards the promise, power and prestige of automated machine intelligence. The establishment of the Advanced Research Projects Agency (ARPA) in 1958 represented, for example, a gigantic effort to ensure that America regained the lead in the space race. Beyond the space race, however, this entity ushered into existence other world-transforming contributions too, most notably breakthroughs in advanced computing and automated system architectures led by J. C. R. Licklider. A psychologist with a passion for mathematics and mechanical engineering, Licklider served at the Pentagon and sought to expand ARPA (and subsequently DARPA, with the D added in 1972) beyond its narrow military confines by supporting multiple AI research projects and associated breakthroughs in advanced computing. As a chief networker among networked researchers and technologists, Licklider authorized support for many projects, including the work of John McCarthy, as well as projects at Carnegie Mellon University, SRI International and the RAND Corporation. His major legacy was to develop a computer network linking these colleagues and research projects together, initially pursued through Project MAC – the development of multi-access computing. This, in turn, culminated in the establishment of ARPANET – a computational network which was, in effect, the forerunner of the Internet and the World Wide Web. But it was ideas as well as inventions for which Licklider deserves a prominent place in the history of artificial intelligence. The digital transformation envisaged by Licklider was captured most vividly in his 1960 paper, ‘Man-Computer Symbiosis’. This was a dramatic advance beyond Turing’s notion that machines might one day think. Licklider’s vision, by contrast, was all about intuitive interactive computing, the interface of human and machine.
In his compelling intellectual history The Dream Machine, M. Mitchell Waldrop argues that Licklider

was unique in bringing to the field a deep appreciation for human beings: our capacity to perceive, to adapt, to make choices, and to devise completely new ways of tackling apparently intractable problems. As an experimental psychologist, he found these abilities every bit as subtle and as worthy of respect as a computer’s ability to execute an algorithm. And that was why to him, the real challenge would always lie in adapting computers to the humans who used them, thereby exploiting the strengths of each.12

In this speaking up for interactivity, technological interfaces, decentralization and connectivity, Licklider can in many ways be said to have shaped AI as we know it today.

Complex Systems, Intelligent Automation and Surveillance

One sometimes hears the opinion that the industry of AI – the tech giants from Silicon Valley to Shenzhen – is inhospitable to critique. AI as a global enterprise has been, over a long period, the sworn enemy of critical thought about what it may control, whilst altogether blocking off engagement with questions of how new technologies might be controlled by other economic powers and political forces. While hospitable to engagement from consumer society, AI industry leaders have been remarkably silent on questions of control, power and exploitation. In retrospect, we can say that AI – both within industry and beyond – has often been presented as a neutral object. Against such trends towards diffusion or neutralization, the critical question remains this: what might it mean to read power and control back into the discourse of AI? The notion that AI is associated with globalization is familiar enough. Science, technology and automated intelligent machines more generally play a fundamental role in the globalizing of AI. However, I seek throughout this book to reframe this issue in terms of an institutional account of AI, developed in terms of interdependent complex systems. The overall direction of AI is to create automated settings of action which are ordered in terms of complex systems at once robust and fragile. This is an important, although nuanced, point – and requires further elaboration. Many commentators emphasize the exponential dynamics of change in contemporary society as a result of AI, but this is often misleading because AI can also contribute to the stabilization of socio-technical systems for long stretches of time. Rather, the point is that AI facilitates persistent structures and durable systems on the one hand, and the break-up, breakdown or disappearance of complex systems on the other hand.
Understanding how AI intersects with complex systems which are dynamic, processual and unpredictable is of key importance for grasping the ways in which automated intelligent machines also function as a field of force, a realm of conflict and coercion in which power and control are produced, reproduced and transformed.

Some central notions from complexity theory are developed in this book, especially in chapter 4. In seeking to demonstrate the power interests realized in and through artificial intelligence, it is necessary to characterize the complex systems of AI. Over the course of the twentieth century and into the twenty-first century, a number of interdependent complex systems served to create a major field of AI, spun off from economic, bureaucratic, industrial and military forces, and each typically providing major resources for the advancement of AI in the contemporary world. The interdependent complex systems, as I discuss at length in chapter 4, include:

the scale, scope and extensity of AI in terms of research and innovation, industry and enterprise, as well as technologies and consumer products;

the intricate interplay of ‘new’ and ‘old’ technologies, and of the role of established technologies persisting or transforming within many modes of more recent AI and automated intelligent machines;

the globalization of AI and the centrality of AI technologies and industries in high-tech digital cities;

the growing diffusion of AI in modern institutions and everyday life;

the trend towards complexity, at once technological and social;

the intrusion of AI technologies into lifestyle change, personal life and the self;

the transformation of power as a result of AI technologies of surveillance.

The complex systems in which AI is enmeshed in the contemporary world are at once economic, social, political, material and technological. These interconnected complex systems, as I seek to show, should not be reduced to separate ‘factors’ or ‘processes’. There are no automated intelligent machines without complex systems. As a result, AI is a field characterized by transformation, unpredictability, innovation and reversal. The interdependent complex systems of AI are continually adapting, evolving and self-organizing.

In the early decades of the twenty-first century, there have been two major debates about technology and the general conditions of society and world order. One concerns a possible ‘autonomization’ of society, and perhaps even of culture and politics. The other concerns broad, massive changes in technological systems, sometimes labelled the coming AI revolution. AI is often presented as an alternative to existing society, which is represented by some critics as politically limited or by other critics as fundamentally flawed. The new, complex systems underpinning the stunning technological advances of AI are often pictured as a utopian pathway to a better world and a more equitable society. Advances in AI, especially powerful predictive algorithms, promise an ever-greater digitalized measure of the world. According to some critics, AI is nothing if not mathematical precision. If we return to complexity theory, however, things are not so clear-cut. Utopian forecasts which emphasize precision or control (of people, of systems, of societies) fail to take into account that such interventions – even the so-called exquisitely precise technological interventions of AI – can generate unanticipated, unintended and opposite, or almost opposite, impacts. One reason for this is the force field of tiny but potentially major changes often described as ‘the butterfly effect’. In 1972, Edward Lorenz posed the question: ‘Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?’ Lorenz had been studying computer modelling of weather predictions, and he discovered that certain systems – not only meteorological systems, but traffic systems and transport systems – are intrinsically unstable and unpredictable.
Notwithstanding the gigantic transformations and combinations of new technology today, some critics invoke the butterfly effect thesis – of highly improbable and unexpected events – to argue that AI technologies, no matter how powerful and advanced, will always fall short of their predictive mark. James Gleick, in Chaos: Making a New Science, argues that AI is unable to secure the goal of precision control – or, we might add, controlled precision – because the smallest variations in measurement may dramatically disrupt the results.
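Lorenz’s discovery – extreme sensitivity to initial conditions – is easy to see at first hand. The short simulation below is a minimal sketch of my own (using simple Euler integration and the standard textbook parameter values σ = 10, ρ = 28, β = 8/3, not Lorenz’s original weather model): it runs his famous three-variable convection system twice, with starting points differing by one part in a billion, and tracks how far apart the two trajectories drift.

```python
# A minimal sketch of the 'butterfly effect' in Lorenz's three-variable
# convection system, using simple Euler integration and the standard
# parameter values (sigma=10, rho=28, beta=8/3). Two trajectories that
# begin a billionth apart soon bear no relation to one another.

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations by one Euler step."""
    x, y, z = state
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

def max_separation(x0_a, x0_b, steps=10000):
    """Integrate two trajectories side by side and return the largest
    Euclidean distance between them observed during the run."""
    a = (x0_a, 1.0, 1.0)
    b = (x0_b, 1.0, 1.0)
    widest = 0.0
    for _ in range(steps):
        a, b = lorenz_step(a), lorenz_step(b)
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        widest = max(widest, gap)
    return widest

# An initial difference of one part in a billion...
gap = max_separation(1.0, 1.0 + 1e-9)
# ...grows until the two runs are separated by the full width of the attractor.
print(f"largest separation observed: {gap:.2f}")
```

Shrinking the initial perturbation further merely delays the divergence; it does not prevent it – which is precisely the point about measurement and prediction: no finite precision of initial measurement can secure long-run predictive control.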

It has been argued previously that separating right from wrong predictions of the future is a task that not even computational analysis can solve and one that, if undertaken, is bound to fail. Our complex world, as well as our opaque lives and social interactions, is far more labyrinthine, and even chaotic, than the mathematical precision of AI allows. This does not mean, however, that all predictive algorithms circulate in a self-referential, sealed-off technical domain; from the fact that AI can’t explain, or even reveal, the complexity that shapes social events and global trends, it does not follow that automated intelligent machines do not influence global complexity or the engendering of catastrophic change. Perhaps instead of talking about the long-dreamt-of controlled precision, or precise control, of AI, it would be more in keeping with the conditions of current global systems to speak of algorithmic cascades