Critical Theory of AI

Simon Lindgren
Description

We live in an age of artificial intelligence. Machines think and act in ever more complex ways, making suggestions and decisions on our behalf. While AI might be seen as practical and profitable, issues of data surveillance, algorithmic control, and sexist and racist bias persist. In this rapidly changing landscape, social analysis of AI risks getting scaled down to issues of 'ethics', 'responsibility', and 'fairness'. While these are important issues, they must be addressed not from an 'AI first' perspective, but more thoroughly in terms of power and contention. Approaching artificial intelligence from the often overlooked perspective of critical social theory, this book provides a much-needed intervention on how both old and new theories conceptualize the social consequences of AI. Questions are posed about the ideologies driving AI, the mythologies surrounding AI, and the complex relationship between AI and power. Simon Lindgren provides a way of defining AI as an object of social and political critique, and guides the reader through a set of contentious areas where AI and politics intersect. In relation to these topics, critical theories are drawn upon, both as an argument for and an illustration of how AI can be critiqued. Given the opportunities and challenges of AI, this book is a must-read for students and scholars in the humanities, social sciences, and STEM disciplines.




CONTENTS

Cover

Title Page

Copyright

1 AI and critical theory

Interview with the vampire

Operation manual

AI: an empty signifier

An end-to-end sociology of AI

Super machines

Overview of the book

2 AI assemblage

AI as social architecture

Making a monster

AI anatomy

Layers of AI

Context, action, construction

Algorithmic assemblages and beyond

3 Ideology behind AI

Power is everywhere

AI libertarianism

AI and technocracy

AI solutionism

4 Ideology within AI

AI as a vehicle for ideology

Models are wrong

AI is ideology

5 Social machines

AI and the social

Unboxing AI

AI is humans

Myths of full automation

Behavioural surplus

6 AI at work

Replacing humans?

The black holes of AI capitalism

AI and platform labour

Management by algorithm

Patron Fantasma

Heteromation, fauxtomation and automation’s last mile

7 AI subjects

Garbage in, garbage out

AI as representation

Binary and beyond

AI panoptism

8 AI in the loop

Bias bias

* washing

Towards AI criticism

References

Index

End User License Agreement

List of Illustrations

Chapter 2

Figure 1

AI Assemblage. Sketch with dimensions numbered (I - XIX), in their order of appe…



Critical Theory of AI

Simon Lindgren

polity

Copyright © Simon Lindgren 2024

The right of Simon Lindgren to be identified as Author of this Work has been asserted in accordance with the UK Copyright, Designs and Patents Act 1988.

First published in 2024 by Polity Press

Polity Press, 65 Bridge Street, Cambridge CB2 1UR, UK

Polity Press, 111 River Street, Hoboken, NJ 07030, USA

All rights reserved. Except for the quotation of short passages for the purpose of criticism and review, no part of this publication may be reproduced, stored in a retrieval system or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the publisher.

ISBN-13: 978-1-5095-5578-9

A catalogue record for this book is available from the British Library.

Library of Congress Control Number: 2023934719

The publisher has used its best endeavours to ensure that the URLs for external websites referred to in this book are correct and active at the time of going to press. However, the publisher has no responsibility for the websites and can make no guarantee that a site will remain live or that the content is or will remain appropriate.

Every effort has been made to trace all copyright holders, but if any have been overlooked the publisher will be pleased to include any necessary credits in any subsequent reprint or edition.

For further information on Polity, visit our website: politybooks.com

1 AI and critical theory

Machines produce only machines

Jean Baudrillard, The Intelligence of Evil (2005, p. 80)

Interview with the vampire

It is easy to be impressed, sometimes even awestruck, by some of the recent advances in artificial intelligence and machine learning. At the point in time when I am writing this book, in early 2023, there is an intense buzz around a set of AI-driven tools in the area of so-called Generative AI. This is a type of artificial intelligence that is focused on generating content that is similar to existing data, but which also aspires to be unique in its own right. Generative AI can, for example, create art, realistic images, and text. This is achieved through a variety of currently popular AI techniques, including machine learning and deep learning, where the things generated are always based on the data and algorithms that were used to train the models. The content is generated as the system is provided with a prompt for what to spit out. ‘Draw a picture of a sloth in a bathtub, holding a laptop, and do this in the style of Andy Warhol.’ Or ‘Tell me a story about a squirrel that got lost in a data centre.’ The fascinating and uncanny results of such prompts invite further experimentation, all the while the models learn from the inputs with which they are bombarded. I am prompting ChatGPT – a state-of-the-art model for natural language processing by OpenAI – to:

Write the introduction to a book about critical theory and AI.

The cursor flashes for a second, and then text – letter by letter, word by word – populates the results box. I read:

In recent years, the field of artificial intelligence has advanced rapidly, leading to significant developments in areas such as machine learning and natural language processing. As AI technologies become more sophisticated and widespread, it is increasingly important to understand their potential impacts and limitations. Critical theory of AI is a field of study that examines the ways in which AI technologies are shaped by, and in turn shape, society and culture. This book provides an introduction to the critical theory of AI, exploring the historical and philosophical roots of the field and its key concepts and debates. Through a series of chapters, the book considers the social, ethical, and political implications of AI technologies, and offers a critical perspective on the ways in which they are being developed and deployed.

Wow. I basically could not have said this better myself. I get a momentary sense of vertigo: Will there even be a need for scholars who write books in the future? Do we need to keep thinking and reflecting? Can we contribute anything that a machine cannot? I have since realized, however, that the reason why I felt that I could not have written a better introduction myself is that it was already me who wrote it – not the machine. Or, rather, at least, scholars such as myself have written, posted, and published wordings such as these already in the places from where GPT (Generative Pre-trained Transformer) gets its training data. It is just mimicking us. If we think new things, future models will generate new outputs, because they will have learned from new human thinking, not because they were thinking by themselves.
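For readers curious about the mechanics, an exchange like the one above can be reproduced programmatically. What follows is a minimal sketch, not the setup used for this book: it assumes the openai Python package (v1-style client) and an API key in the environment, and the model name is purely illustrative.

from openai import OpenAI

# Minimal sketch (assumptions: the openai package with its v1 client,
# and an OPENAI_API_KEY environment variable; model name illustrative).
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Write the introduction to a book about critical theory and AI.",
    }],
)

# The reply is a statistical continuation learned from human-written
# training data, not retrieved knowledge.
print(response.choices[0].message.content)

Each run yields different text, which underlines the point: the model generates plausible continuations of human writing rather than fixed answers.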

GPT, when I ask it, explains that because its goal is to ‘generate human-like text’, it has been ‘trained on a large amount of written and spoken language’. Looking up the original technical paper on GPT-3 (Brown et al., 2020), I learn that it was initially trained on the CommonCrawl corpus, consisting of petabytes of data collected through continuous web crawling since 2008 (CommonCrawl, 2022), alongside a ‘dataset of millions of webpages called WebText’ (Radford et al., 2019), ‘two internet-based books corpora’, and English-language Wikipedia (Brown et al., 2020, p. 4). All of this text, which it learned from, was human-created. Clearly, then, ChatGPT impressed me with advanced maths, leveraging knowledge both of the structure of book blurbs and of what a critical perspective on AI may entail. It achieved its goal of generating human-like texts. In the words of Baudrillard (2005, p. 80), ‘machines produce only machines’. The creators of GPT write that ‘large pretrained language models are not grounded in other domains of experience, such as […] real-world physical interaction, and thus lack a large amount of context about the world’ (Brown et al., 2020, p. 8). Just to be sure, I ask ChatGPT:

Can you analyse AI in terms of critical theory?

It responds:

Yes, AI can be analyzed from the perspective of critical theory, which is a framework for evaluating social, political, and economic systems and their potential for change. Critical theory looks at the ways in which power is distributed and used in society, and how this affects different groups of people. In the context of AI, critical theory can be used to examine the ways in which AI systems are developed and deployed, and how they may impact different groups of people. This can include looking at issues of bias and discrimination in AI, as well as the potential consequences of AI for job displacement and economic inequality. Overall, using critical theory to analyze AI can provide valuable insights into the potential social and economic implications of this technology.

Realizing that it interpreted the question in general terms, I tighten up my prompt:

Sure, but can you perform such analyses?

The cursor in the response box blinks for a while. I feel a spur of excitement run through my body. I think I know the response, but what if? What will it respond? Again, a rush of that abysmal feeling: Is this the end of social science writing? What’s the point if machines will now be able to do it? Will the machines – like vampires – suck the lifeblood out of our profession? Now comes GPT’s reply:

As a large language model trained by OpenAI, I am not capable of conducting original research or analysis. My function is to provide information and answer questions to the best of my ability based on the data and information I have been trained on. I can provide general information and explanations on a wide range of topics, but I am not able to perform original research or analysis.

Phew. Let’s go then.

Operation manual

This book is about how to approach the transformative socio-political phenomenon of AI – broadly conceptualized as artificial intelligence alongside interconnected technologies and phenomena such as datafication, machine learning, and algorithmic automation – with its societal and cultural contexts and consequences, from the perspective of critical theory. As you will see in the next chapter, I see AI in terms of assemblage – that is, as a very broad and societally entangled phenomenon. AI itself consists of several different technologies, and AI is positioned in a network of humans, politics, economy, culture, meaning-making, and power.

This view means that this is actually not a book about artificial intelligence. By this I mean that it is not about the computer science incarnation of AI, as it is developed in labs for particular applications and implementations. That is a different field of research and expertise, which operates with a much less fluid understanding of what AI is and means. This book is, however, about ‘AI’ as a sociopolitical reality. Importantly, it is not a book about fantasies, dreams or fears about what AI may become. It is about currently existing technology and its ramifications for people, politics, and power.

But let’s not make any mistakes. We do indeed live in an accelerating age of machines where processes of computerization and datafication are key drivers (Couldry and Hepp, 2017). There is, in fact, also increased talk about the equally exciting and scary prospect of the development, or emergence, of a form of ‘machine superintelligence’ (Bostrom, 2016), or ‘Life 3.0’ (Tegmark, 2017). It might not be all fantasies. AI scholars such as Nick Bostrom and Max Tegmark have reinvigorated discussions about the possibility of an ‘intelligence explosion’, as envisioned already in the 1960s by statistician Irving John Good:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any [hu]man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of [hu]man[s] would be left far behind. Thus the first ultraintelligent machine is the last invention that [hu]man[s] need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously. (Good, 1965, p. 33)

Views certainly differ on whether and when such a development may happen. Tegmark (2017, p. 31) discusses a range of prognoses, from those guessing that we will see AI surpass human-level intelligence in just a few decades, through those who believe that it will take at least a century, to those who think it will never happen at all. This book, once again, is not about the probability, technicalities, and eventualities of such developments in AI. It is about the present reality of AI and related technologies, and particularly about how its current and actual incarnations and applications play into, and affect, society, politics, and culture.

AI, at its present stage of refinement, is being rapidly developed and implemented all over society. Even though it carries the aura of futurism and science fiction, it is clear, as tech philosopher Mark Coeckelbergh (2020, p. 4) writes, that ‘AI is already happening today and it is pervasive, often invisibly embedded in our day-to-day tools’. And while some forms of AI have the potential to contribute to society in positive ways, the insight that it also may be used, and come to operate, in ways that threaten society and humanity is also part of our collective consciousness.

There is certainly a conspicuous awareness across the technologically oriented fields of AI research of the need to pose critical questions about the uses of AI and its future. Indeed, scholarship focusing on AI Ethics and Responsible AI is thriving (Liu and Zheng, 2022; Vieweg, 2021). A common goal in such literature is ‘the development of intelligent systems according to fundamental human principles and values’ (Dignum, 2019, p. 6). In some cases, this means aligning with perspectives from areas such as feminism (J. Gray and Witt, 2021) and racial/intersectional studies (Korn, 2021).

Still, crucial parts of AI ethics research are carried out in close proximity to the processes where AI is developed, which means that there is a risk that the more critically aware perspectives end up playing second fiddle to the heavy technological and economic drives to simply push on at whatever cost. Some literature, for example, argues that AI should be ethical for the instrumental reason that ‘unfair decisions made by the systems [are] ultimately impacting the reputation of the product’, and that ‘AI done correctly […] is not just good for the user; it is good business too’ (Agarwal and Mishra, 2021, p. 3). There is certainly a divide between, on the one side, research in more technologically oriented fields such as Ethical AI and Responsible AI and, on the other side, critical research in the humanities and social sciences where AI is approached from a significantly different perspective. As put by digital humanities scholars Benjamin Roberts and Caroline Bassett:

Critical studies of artificial intelligence pick up where normative models of ‘responsible AI’ end. The latter seeks to build acceptance for AI decision making by incorporating a form of ‘best practice’ into AI techniques and machine learning models to ensure ‘equity’ of some order (racial, gender, less often sexuality or disability, hardly ever class) through the avoidance of bias and [through] ‘transparency’ in their operation. Responsible AI is, then, primarily a technical solution to technosocial problems. […] As such it fails to address the concerns about artificial intelligence that are bound up with wider social challenges, debates and anxieties, anxieties that are not simply technical in nature. (Roberts and Bassett, 2023)

AI research, however, must by necessity be multidisciplinary (Dignum, 2020), so there is an obvious need to bridge this divide. While there may still exist a clear polarization of perspectives in many respects, there is also in fact an increased openness and overlap between Ethical/Responsible AI scholarship and sociopolitically informed critical perspectives. In general, there is a growing interest in analysing our age of AI also in ways that are more clearly rooted in long-standing critical theories of power, politics, and technology, and which allow for a more far-reaching challenge of downsides and injustices related to AI, datafication, and automation (Dignum, 2022; Gunkel, 2012; Noble, 2018; O’Neil, 2016; Pasquale, 2020; Waelen, 2022).

But at the same time AI is also big business, and there are limits to the openness. This was apparent for example in the affair surrounding Google’s ousting of AI ethics researcher Timnit Gebru in late 2020. Having co-authored a seminal paper on how facial recognition technology was discriminating against women and people of colour (Buolamwini and Gebru, 2018), and being a founder of the Black in AI initiative, Gebru was tasked by Google to study social and ethical consequences of AI, and particularly how the technology could maintain social injustices.

Being a vocal advocate for diversity and an outspoken critic of bias in AI systems, Gebru was in the end forced – under contentious circumstances – to leave the company after having co-authored a paper on the risks of large language models (such as GPT), pointing out their costs in environmental and financial terms, their inherent risk of leading to dangerous biases, and the possibility that such models can be used as tools for deception (Hao, 2020). Gebru’s banishment, followed by the firing in early 2021 of her colleague Margaret Mitchell, another scientist on the Google AI ethics team who had voiced concerns that the company was now censoring research (Osborne, 2021), sent shockwaves through the AI community. Questions were raised about the role of critical perspectives in Ethical AI research, and the limits imposed by the industry as to what kinds of criticism were welcomed or even allowed.

This is the context in which this book is written. My argument is that we need to push social science analyses of AI much further into critical territory than is the case today. With this book, I want to contribute to an ongoing discussion about what critical theory can contribute to analyses of AI – what questions it may raise, and what concepts it can provide to discuss them. But let’s back up a bit first and reflect on the very definition of ‘AI’.

AI: an empty signifier

Clearly, artificial intelligence is everywhere today. Cute little robot units vacuum our homes and mow our lawns, while learning, as machines, to improve their performance on a daily basis. Indeed, machine learning – developing methods for training non-human agents to interpret patterns in data, to be able to reason and make decisions without the supervision of humans – is on the rise.
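To make that definition concrete, here is a deliberately minimal sketch of supervised machine learning, using scikit-learn and its bundled iris dataset (both choices are mine, for illustration only): a model is fitted to labelled examples and then classifies cases it has never seen, without further human supervision.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Learn patterns from labelled examples ...
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ... then decide about unseen cases without a human in the loop.
print("accuracy on unseen data:", model.score(X_test, y_test))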

Beyond robot vacuums, mowers, and the like, so-called deep learning – which leverages novel computing power in mimicking the neural networks of human brains – has accelerated this development. The previously mentioned GPT-3 was said around the time of its 2020 launch to be the most powerful neural network ever created. GPT-4 will, according to some reports, be 500 times bigger (Romero, 2021), and there are other competitors too (Smith et al., 2022).

As society has become increasingly marked by datafication – through digital platforms’ and services’ amassing of unprecedented amounts of information about our social and psychological lives – we have more raw material than ever to feed into the learning machines. Consequently, these smart machines can be employed in various processes of intelligent automation as a multitude of algorithms for sorting and deciding stuff proliferate in a number of areas.

The contexts in which machines may be used today to carry out tasks with very little human intervention are innumerable. They include the personalized shopping recommendations that we get online, which are based on intelligent agents leveraging algorithms in analysing patterns in our browsing histories, as well as different kinds of targeting of messages in marketing and political campaigning. Additionally, there are applications ranging from customer service chatbots to robots for industrial manufacturing that enable businesses to rationalize their operations.
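The logic behind the first of these examples – personalized recommendation – can be sketched in a few lines. The following toy example (invented data and my own simplification; real systems are far more elaborate) represents browsing histories as vectors, finds users with similar patterns, and ranks the items those similar users viewed:

import numpy as np

# Toy user-item matrix: rows are users, columns are items (1 = viewed).
interactions = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
], dtype=float)

def recommend(user, k=2):
    # Cosine similarity between this user's browsing pattern and the others'.
    norms = np.linalg.norm(interactions, axis=1)
    sims = interactions @ interactions[user] / (norms * norms[user] + 1e-9)
    sims[user] = 0.0                           # ignore the user themselves
    scores = sims @ interactions               # weight others' items by similarity
    scores[interactions[user] == 1] = -np.inf  # drop already-seen items
    return np.argsort(scores)[::-1][:k]

print(recommend(0))  # items predicted to interest user 0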

Intelligent automation is increasingly called upon in various processes of assessment or decision-making, such as detecting diseases in x-ray images or other health data, the grading of school assignments, recruitment of new employees, social welfare decisions and other public services, as well as for logins or passport controls through facial recognition or other biometrics. AI-enabled automation is also used to create ever more realistic and human-like non-player characters in computer games, as well as for real-time object detection during sports television broadcasts. The list goes on and on, and clearly these technologies can be of great help in many cases, while their expanding use in several sensitive areas also calls for serious caution. This is both because the models are sometimes not as good as claimed, and because they are sometimes outright harmful.
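The first of those caveats – models not being as good as claimed – can be illustrated with simple arithmetic. In a screening task where positive cases are rare, a ‘model’ that never detects anything still reports impressive accuracy (the numbers below are invented):

import numpy as np

# Invented screening scenario: 95 healthy cases, 5 cases of disease.
y_true = np.array([0] * 95 + [1] * 5)

# A useless 'model' that always predicts 'healthy'.
y_pred = np.zeros_like(y_true)

accuracy = (y_pred == y_true).mean()        # 0.95 - sounds impressive
recall = (y_pred[y_true == 1] == 1).mean()  # 0.0  - detects no disease at all
print(f"accuracy: {accuracy:.0%}, recall: {recall:.0%}")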

In this book, I take AI to refer to this whole complex of developments: artificial intelligence, machine learning, neural networks, automation, algorithms, datafication, and so on. I also include in my definition the social, political, and economic contexts that surround these technologies. This is because I am of the view that tech must be understood in relation to political economy, and that it is not an autonomous force. I am aware that such a definition is not satisfactory for everyone. It is, for example, problematic in technological terms to claim that ‘automation’ is AI, that ‘AI imaginaries’ are AI, or that ‘datafication’ is AI. All of them are, however, crucially interrelated. This book has a social and political focus, where the object of study is not primarily technological.

There is reason, in other words, to differentiate between definitions that are straightforwardly technological on the one hand, and definitions that are less stable and more contentious on the other. A book about electronic music, for example, could define its subject matter in quite concrete and direct terms, as a set of musical genres that use electronic or electromechanical instruments – analogue or digital – as tools for expression. This is a technological definition of electronic music. Another possible way of defining electronic music, depending on the profile of the imagined book, could be in terms of its cultural impact, its sometimes-futuristic imaginaries, and its relationship throughout its history to other, more mainstream, musical genres. In contrast to a technological definition, this would be a sociopolitically flavoured definition of electronic music. Neither of these two definitions is wrong, but they focus on different things.

We can imagine the same difference in relation to other, less directly technological phenomena. What is freedom? Just as in the above example, we can define it in, what seems to be, quite unproblematic terms. The Oxford Dictionary of English, as provided through the native Dictionary app on my macOS computer, defines freedom as ‘the power or right to act, speak, or think as one wants’, and as ‘the power of self-determination attributed to the will’, and as ‘the state of not being imprisoned or enslaved’. These are all reasonable and workable definitions, but if we add a more sociopolitical sensibility to discussions of freedom, we will clearly see tensions and frictions among different views: Freedom for whom? Under what conditions? Free from what? Free to do what? Freedom at what cost?

The same goes for AI. It indeed has a straightforward technological definition. In an encyclopaedia of AI, it is stated that:

Artificial intelligence is a fast-growing branch of computer science, a discipline famous for its interdisciplinarity and free-spirited interest in next-generation challenges of automation. Within universities, it encompasses the study of intelligent agents, autonomous programs that sense and actively respond to data collected in an environment. The virtual assistants Alexa and Siri are examples of intelligent agents. Agents may be embedded in smart robots, such as driverless cars and unmanned aerial vehicles. More broadly, artificial intelligence describes real or imagined efforts to simulate cognition and creativity. The term distinguishes machine and code from the natural intelligence of animals and people. (Frana and Klein, 2021, xi)

This, then, is a clearly technological definition of AI. It says that AI is a specialization in computer science that is concerned, in practice, with automation through intelligent agents and, more broadly, with using computers to simulate human intelligence. AI, in this sense, has a fairly long and well-documented backstory. Its modern history is said to have started in the summer of 1956 when a group of researchers arranged a summer workshop at Dartmouth College. The research field of AI then moved through a series of ups and downs – commonly described as AI winters of setbacks and disappointments (mid-1970s to the early 1980s, and late 1980s to the early 1990s), and AI summers of hope and optimism in between (Bostrom, 2016, pp. 5–9). As put by Vesa and Tienari (2020, p. 1):

Artificial intelligence (AI) seems to be a topic of masterful grand returns. It hibernates into AI winters, only to receive a thorough thawing once the promise of a novel and radical technological breakthrough emerges. In this respect, we live in interesting times as AI has yet again awoken from hibernation.

Today, the hype is in full force. A wave of literature, as well as actual technological progress, in recent years has painted a fairly dramatic vision of what AI may be able to achieve. The new optimism, philosopher Nick Bostrom explains, largely has to do with progress that has been made in the particular area of machine learning. This is also related to the increased power of computers and the dawning of neural networks, which connect computer processors together in a way similar to neurons in a human brain. The rapid development in machine learning has, in turn, contributed to a renaissance for artificial intelligence.

Computational neuroscientist Terrence Sejnowski (2018, p. 3) writes of how today’s learning algorithms are ‘refineries’ that extract information from the abundance of data which is ‘the new oil’. Consequently, various applications of AI that solve specific problems or that prove useful in specific domains have been both practically and commercially successful in recent years (Bostrom, 2016, p. 18). Examples are decision-making algorithms, self-driving vehicles, virtual assistants, and computer vision for interpreting x-ray images. As mentioned before, we are now at the point where some experts even claim that AI that can match or surpass human-level intelligence – so-called strong AI or artificial general intelligence (AGI) – is within reach, even though prognoses differ wildly on how long this may take. Critical analysis of the kind proposed in this book is, however, not as interested in future prospects as in actual social and political circumstances in the here and now. As data journalist and AI researcher Meredith Broussard (2018, p. 32) writes:

General AI is the Hollywood kind of AI. General AI is anything to do with sentient robots (who may or may not want to take over the world), consciousness inside computers, eternal life, or machines that ‘think’ like humans. Narrow AI is different: it’s a mathematical method for prediction. There’s a lot of confusion between the two, even among people who make technological systems. Again, general AI is what some people want, and narrow AI is what we have.
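Broussard’s point – that narrow AI is at bottom ‘a mathematical method for prediction’ – can be made concrete in a handful of lines. A minimal sketch with invented numbers: fit a line to observed pairs, then predict an unseen value. However much larger the models get, this is the kind of operation being performed.

import numpy as np

# Invented observations: at its core, narrow AI is curve fitting.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])

a, b = np.polyfit(x, y, deg=1)  # least-squares fit of y = a*x + b
print(f"predicted y at x = 5: {a * 5 + b:.2f}")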

So, while the vision of General AI has not been realized, we see more and more examples of Narrow AI emerging in the shape of ‘systems that achieve human-level (or even superhuman) performance on specifically defined tasks’ (Leong, 2021, p. 160). A truly critical perspective on AI demands that we move past the technological definitions, and towards the sociopolitical ones. It is important to note here the shift of focus that is made: AI in technological terms is what it is and exists in its own right. What we are doing here is taking one step to the side and one step back, zooming out to see AI as a phenomenon in its societal and cultural context. One way of conceiving of such a manoeuvre is in terms of a deployment of what sociologist C. Wright Mills wrote about as the sociological imagination. Whether the object of study is electronic music, freedom, or artificial intelligence, as in our examples above, mobilizing this imagination will help reveal immense complexities (see Chapter 2) around the notions in question. According to Mills (1959, pp. 5–6):

The sociological imagination enables its possessor to understand the larger historical scene in terms of its meaning for the inner life and the external career of a variety of individuals. It enables [… us] to take into account how individuals, in the welter of their daily experience, often become falsely conscious of their social positions. [The sociological imagination] is the capacity to shift from one perspective to another – from the political to the psychological; from examination of a single family to comparative assessment of the national budgets of the world; from the theological school to the military establishment; from considerations of an oil industry to studies of contemporary poetry [or from a language model to intersectional politics]. It is the capacity to range from the most impersonal and remote transformations to the most intimate features of the human self – and to see the relations between the two.

Paraphrasing Mills, then, to sociologically imagine AI is to see it in relation to the larger historical context; to assess how it may relate both to people’s inner life and to their external conditions and life circumstances. It has to do with seeing how AI relates to people’s everyday experiences and social positions. It is also about shifting the perspective on AI (from a technological one) to an approach that considers a complexity of contextual circumstances on a variety of levels in society and culture. This is clearly no easy task, but it is one for which we turn to the tradition of critical theory in this book.

This means seeing AI as a kind of empty signifier – ‘a privileged element that gathers up a range of differential elements, and binds them together into a discursive formation’ (Wrangel, 2015). In other words, acknowledging that AI can mean many different things, not only in technological terms but, more importantly, as regards what political meanings are symbolically connected to it, and what consequences this has in terms of both ideology and social practice. As explained by political theorist Ernesto Laclau (1996, p. 35): ‘Society generates a whole vocabulary of empty signifiers whose temporary signifieds are the result of a political competition.’ Seeing AI as such a signifier means realizing that what it ends up being – technologically, politically, practically, socially – is not prescribed by nature, but the result of sociopolitical processes.

An end-to-end sociology of AI

This book is set in the broader context of the surfacing of a field that has been dubbed Critical AI Studies (Lindgren, 2023; Roberge and Castelle, 2021a). Scholars in this emerging area have written about how, repeatedly, ‘critical reflections on artificial intelligence emerging from the social sciences have had to fight for their legitimacy’ (Roberge and Castelle, 2021b, p. 2). In an introduction to the field, sociologists Jonathan Roberge and Michael Castelle point out that while critical sociological reflection is indeed needed around AI and machine learning, ‘missteps and missed opportunities […] have punctuated the relation between machine learning and the social sciences’ (2021b, p. 12). They point, for example, to the need to revisit previous work by the likes of Brian Bloomfield, who wrote in the 1980s about ‘the culture of AI’ that the critical, sociopolitical analysis of AI must not be restricted to the mere ‘effect of “intelligent” computers on individuals or society’, but rather that this research should seek to ‘tackle the social milieu and tradition behind the groups who are the originators and disseminators of the ideas and ways of thinking that characterize AI’ (Bloomfield, 1987, p. 59).

Similarly, scholars in the area of science and technology studies (STS) have emphasized the need for a sociology of machines (Woolgar, 1985), with a focus on how knowledge is constructed in and around AI (Forsythe, 1993). Sociologist Steve Woolgar outlined, in 1985, a sociology of machine intelligence that should be careful not to ‘merely adopt the discourse of AI’. Rather than analysing AI in a way that is guided and delimited by that technological field’s own language and definitions, we should ‘take as topic the dichotomies and distinctions which characterize and sustain this discourse’. Instead of doing research within the culture of AI, then, we should work to ‘develop a sociological approach which takes as its focus the human/mechanical language community; […] What circumstances generate these public accounts of the importance of AI […]?’ (Woolgar, 1985, p. 567).

In addition to such a focus on how knowledge production happens around AI, the field of Critical AI Studies must house an ‘end-to-end sociology of contemporary ML/AI’ (Roberge and Castelle, 2021b, p. 3) that will:

address machine learning and artificial intelligence from differing historical, theoretical, and political perspectives from their epistemic genesis to sociotechnical implementations to social impact.

This means that AI must be seen from the theoretical perspective of the social shaping of technology (Pinch and Bijker, 1984; Williams and Edge, 1996). This emphasizes how the design and implementation of technologies, such as AI, is always historically, socially, and culturally dependent. Technology is envisioned, developed, talked about, and gets its consequences through processes that are entangled with a variety of social considerations and contingencies (MacKenzie and Wajcman, 1985). By consequence, technologies are political. They embody power and social relations. Historian of technology Melvin Kranzberg (1986, p. 546) argued that:

Technology’s interaction with the social ecology is such that technical developments frequently have environmental, social, and human consequences that go far beyond the immediate purposes of the technical devices and practices themselves, and the same technology can have quite different results when introduced into different contexts or under different circumstances.

This underlines the important point that the effects of technology are not universal. One and the same technology will produce different results in different contexts, and under different circumstances – for different individuals and groups of people. Furthermore, AI is today becoming increasingly construed as an all-purpose or general-purpose technology (Brynjolfsson and McAfee, 2017, p. 4; Crawford, 2021, p. 214). Just like electric light or the telephone, AI is becoming so ubiquitous – ‘contributing to the evolution of the activities and relations of so many distinct sites of practice – that it requires considerable effort to understand their effects on society, assuming that such a global notion of “effects” even makes sense’ (Agre, 1997, p. 131).

But in spite of all the complexities of politics and power, the consequences of AI are still predominantly studied by the same scientists who are engaged in creating the AI agents themselves (Rahwan et al., 2019). This leads to a strong focus on research that is designed to ensure that AI fulfils intended functions. AI is seen as having to be adequate, efficient, responsible, and so on. And even though it could be argued that social scientists, and also humanities scholars, are taking part in AI research to a somewhat growing degree, the research agenda is still set through posing questions based in the AI technologies per se, rather than in their social and political contexts. Roberge and Castelle (2021b, p. 4) argue that seeing AI as being co-produced at the intersection of the social and the technical, instead of as a predominantly technological phenomenon, ‘requires an epistemic step that ML practitioners have not fully accepted themselves’. This is the scholarly setting to which this book wants to contribute.

There is much complexity, not only regarding what AI is and will become, but also when it comes to the multitude of potential and relevant ways to analyse it, drawing on different perspectives and various forms of academic expertise. It is important to note here that we are by no means looking for any one perspective to rule them all. Rather, we must work together across the academic community so that a number of different approaches can be brought to bear on AI. It is quite fruitful to see AI as a ‘boundary object’; it is ‘plastic enough’ for its definition to bend in a number of different directions, yet ‘robust enough to maintain a common identity across sites’ (Star and Griesemer, 1989, p. 393). It is a kind of phenomenon, the study of which requires bringing together several different viewpoints and actors and managing the tension between them (Star and Griesemer, 1989, p. 389). This book, then, brings one such viewpoint, that of critical theory, to the table.

Super machines

When it comes to critical theory, there is a narrow definition and a broader definition. This book relies on the broader one. Narrowly defined, the concept of ‘critical theory’ has been used within the social sciences and humanities to refer to the form of social and cultural analysis that was founded within and around the so-called Frankfurt School. During the interwar period in the 1920s and 1930s, the Institute for Social Research in Frankfurt, with scholars such as Max Horkheimer, Theodor Adorno, Erich Fromm, and Herbert Marcuse, formulated an interdisciplinary research agenda focused on a critique of the alienation and reification of modern life. As explained by political scientist Stephen Bronner:

They investigated the ways in which thinking was being reduced to mechanical notions of what is operative and profitable, ethical reflection was tending to vanish, and aesthetic enjoyment was becoming more standardized. Critical theorists noted with alarm how interpreting modern society was becoming ever more difficult. Alienation and reification were thus analyzed in terms of how they imperilled the exercise of subjectivity, robbed the world of meaning and purpose, and turned the individual into a cog in the machine. (Bronner, 2017, pp. 4–5)

While not always highlighted, the original critical theory of the Frankfurt School indeed has quite a lot to say about technology, and some of those ideas can be carried into the age of AI (Delanty and Harris, 2021, p. 88). In its study of capitalist society, the Frankfurt School was interested in analysing power and politics in relation to the introduction of new technologies or machinery. On the one hand, technology bears the promise of rationalization, and the emancipation of workers, as well as the potential that its more creative expressions could ‘revolutionize human sensibilities’ (Jeffries, 2016, p. 176).

This was expressed, for example, in the writings of Walter Benjamin (1999, p. 17), which discuss technology as ‘the spark that ignites the powder of nature’. On the other hand, technological rationality and automation can pave the way for ‘exploitation, repression, and alienation’ (Bronner, 2017, p. 1; Wogu et al., 2017). Adorno (1978, p. 118) argued that the technical possibilities of his time meant that mechanical processes developed in ways that made them increasingly autonomous – as is now the case in AI. Furthermore, he wrote that, according to the logic of capitalism, once investments have been made in ‘super-machines’, they must be used whether or not we actually need what they produce:

The fascinated eagerness to consume the latest process of the day not only leads to indifference towards the matter transmitted by the process, but encourages stationary rubbish and calculated idiocy. (Adorno, 1978, p. 118)

Read that Adorno quote once again and think of AI in the 2020s. The technologies may change, but the importance of being critical, rather than fascinated, eager and indifferent to consequences, remains. The Frankfurt theorists did not ‘bemoan modern technology as such’ but emphasized the need to ‘reassert goals’ (Delanty and Harris, 2021, p. 91).

The Frankfurt scholars saw technology both as an enabler of ‘new cultural realities’, and as a machinery of instrumental rationalization and dehumanization (Delanty and Harris, 2021, p. 90). While the latter view was no doubt dominant, it is important to note that the early critical theorists were critical not of technology as such, but of how it was used. Among some of their contemporaries, such as political philosopher Hannah Arendt, technology was seen as something that should be held back by humans. The threat, otherwise, is a situation where it is ‘no longer the body’s movement that determines the implement’s movement but the machine’s movement which enforces the movements of the body’ (Arendt, 1958, p. 164). For Arendt, there was a risk that rather than technological tools to be used by humans, we would have machinery under which humanity could become ‘helpless slaves [and] thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is’ (Arendt, 1958, p. 21).

What we bring with us into this book from the first iteration of critical theory is its emphasis on analysing technology in relation to society, economy, ideology, and power relations. This brings us further to the broader definition of critical theory, which includes a wide range of other approaches and concepts beyond the Frankfurt School. In the wake of the narrowly defined critical theory – which refers to ‘several generations of German philosophers and social theorists in the Western European Marxist tradition [following] the Frankfurt School’ – a series of other critical theories have emerged since the mid twentieth century (Bohman et al., 2021). Such theories are unified by their direct or indirect adherence to Horkheimer’s (1972, p. 246) definition of critical theory as ‘radically analyzing present social conditions’, providing a ‘critique of the economy’ (broadly conceived), to put the finger on processes of power so that the research can be a ‘liberating […] influence’ with an emancipatory agenda ‘to create a world which satisfies the needs and powers’ of humans. Bohman et al. (2021) explain that:

Because such theories aim to explain and transform all the circumstances that enslave human beings, many ‘critical theories’ in the broader sense have been developed. They have emerged in connection with the many social movements that identify varied dimensions of the domination of human beings in modern societies. In both the broad and the narrow senses, however, a critical theory provides the descriptive and normative bases for social inquiry aimed at decreasing domination and increasing freedom in all their forms.

These critical theories include a range of perspectives and schools such as, but not limited to, cultural studies (Hall, 1997b; Williams, 1961), semiology (Barthes, 1970; Saussure, 1966), poststructuralism (Derrida, 1976; Foucault, 1972–2000), post-Marxism (Althusser, 1984; Gramsci, 1971), feminism (Haraway, 1985; Kristeva, 1980), postmodernism (Baudrillard, 1994; Lyotard, 1984), queer theory (Butler, 1990; Halberstam, 2005), race and postcolonialism (Crenshaw, 1995; Said, 1978; Spivak, 1996), and critical theory of technology (Chun, 2011; Feenberg, 1991; Stiegler, 2019; Wajcman, 1991). While different books that purport to give overviews of the field of critical theory differ somewhat as regards inclusions and omissions, these mentioned perspectives tend to persist (Buchanan, 2010; Easthope and McGowan, 2004; Felluga, 2015; Tyson, 2006; see for example Wake and Malpas, 2013).

Not all of these perspectives will be covered in this book. Whether one opts for the narrower, Frankfurt School, definition, or for the broader definition including a range of other perspectives, the main focus is on a certain kind of critical perspective more generally. Critical theorist of communication Christian Fuchs (2022, pp. 17–18) contends that ‘[o]ne could say that all contemporary academic thought is critical because it questions opinions of other scholars’, and that ‘[a]ll contemporary political communication is in a specific way critical because it consists of speech acts that normally question political opinions and practices of certain actors’. The focus of the critical in critical theory, however, is on ‘the analysis and questioning of power structures in order to overcome them and create a better society’ (Fuchs, 2022, p. 10). Critical theory, in this rendition, does not refer to the practice of being critical of anyone or anything specific, but is a critique with a focus on ‘the questioning of power, domination and exploitation’ (Fuchs, 2022, p. 17).

Fuchs suggests defining critical theory, not by referring to particular schools or approaches, but focusing on the level of content. He then cites six dimensions of what constitutes a critical theory: ‘Critical ethics; Critique of domination and exploitation; Dialectical reason; Ideology critique; Critique of the political economy; Struggles and political practice’ (Fuchs, 2022, p. 20). Drawing on a similar strategy, this book defines critical theory as the analysis of power relations in society and breaks this down into:

First, a critique of ideology