Machine learning algorithms are widely presumed to herald a world in which the crippling burdens of anxiety can be left behind. The digital revolution promises a brave new world where individuals, communities and organizations can at last take control of the future – anticipating, designing and commanding the future, possibly even with mathematical exactitude. Yet, paradoxically, algorithms have unleashed widespread fears and forebodings about the impact of digital technologies. Whether it’s worries about unemployment, distress about social media’s harmful effects on teenagers, or the fear of intrusive digital surveillance, we live in an age of turbo-charged anxiety where the prophecies of algorithms are increasingly enmeshed with fundamental disruption and anxieties about the future.
In this book, Anthony Elliott examines how machine learning algorithms are not only transforming global institutions but also rewriting our personal lives. He tells this story through a wide-ranging analysis which takes in ChatGPT, Amazon, the Metaverse, Martin Ford, Netflix, Uber, Bernard Stiegler, Squid Game, Kate Crawford, LaMDA, Byung-Chul Han, autonomous drones, Jean Baudrillard and the automation of warfare.
Questioning why people often assume that they need to adopt new technologies in order to lead fulfilling lives, Elliott argues that people may be as much entranced as inspired by their outsourcing of personal decision-making to smart machines.
Cover
Title Page
Copyright Page
Dedication
Preface and Acknowledgements
Notes
1 Algorithmic Ascendency, Ambient Anxiety
Society as Automated Algorithmic Management
Automaticity Goes Global
Ambient Anxiety in the Algorithmic World
Algorithmic Modernity: Outsourcing Agency to Smart Machines
Critical Notes: Theoretical Coordinates and Conceptual Terminology
Automation, Machine-learning Algorithms and AI: Some Terminological Distinctions
Notes
2 Automation after Amazon
‘You’re Not Dead’: Living with Automation
The Algorithmic Secession
Digital Despair
Sequestration and Stress
The Open Wounds of Automated Uselessness
Coda
Notes
3 Netflix’s Nihilism
The Rise of Netflix: Algorithmic Recommendation Technology
Cultural Consumption: Between Algorithmic and Human Agency
Magic and the Contrived Omnipotence of Algorithms
Notes
4 The Lethal Ecstasy of Algorithmic Violence
Squid Game’s Lethal Ecstasy
Deadly Elimination: The Trade-offs of Eros and Thanatos
Squid Game’s Digitally Engineered Self-destruction
Computational, Automated Battles
Battles of Automated Computational Machines
Notes
5 The Metastasis of the Metaverse
The Metaverse and Digital Transformation
Metaverse as Voraciously Devouring the World
Selfhood and Experience in the Metaverse
Notes
6 Machine Intelligence and Its Discontents
The LaMDA Debate: Two Contrasting Perspectives
Questioning Intelligence in the Age of Smart Machines
Coda: Images of ChatGPT
Notes
7 On Agency after Smart Machines
From Augmentation to Atrophy
The Composition of Insecurity
Fear in the Algorithmic World
Fear of Data Sweeps
Dataphobia: Fears of Missing Out
Hacking Fears
Fears and Chances: Breaking the Cycle?
Notes
Index
End User License Agreement
Anthony Elliott
polity
Copyright © Anthony Elliott 2024
The right of Anthony Elliott to be identified as Author of this Work has been asserted in accordance with the UK Copyright, Designs and Patents Act 1988.
First published in 2024 by Polity Press
Polity Press
65 Bridge Street
Cambridge CB2 1UR, UK
Polity Press
111 River Street
Hoboken, NJ 07030, USA
All rights reserved. Except for the quotation of short passages for the purpose of criticism and review, no part of this publication may be reproduced, stored in a retrieval system or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the publisher.
ISBN-13: 978-1-5095-5542-0
ISBN-13: 978-1-5095-5543-7 (pb)
A catalogue record for this book is available from the British Library.
Library of Congress Control Number: 2024930230
Typeset by Fakenham Prepress Solutions, Fakenham, Norfolk NR21 8NL
The publisher has used its best endeavours to ensure that the URLs for external websites referred to in this book are correct and active at the time of going to press. However, the publisher has no responsibility for the websites and can make no guarantee that a site will remain live or that the content is or will remain appropriate.
Every effort has been made to trace all copyright holders, but if any have been overlooked the publisher will be pleased to include any necessary credits in any subsequent reprint or edition.
For further information on Polity, visit our website: politybooks.com
To the memory of
Dorothy Jean Elliott
(1930–2022)
Some years ago I wrote several volumes about the cultural and institutional parameters of artificial intelligence. These were published as The Culture of AI (2019), Making Sense of AI (2021) and Algorithmic Intimacy (2022). This trilogy was not intended as a further contribution to the soaring technical literature on AI. Rather, the volumes sought to develop the contours of a critique of contemporary economy, society and politics in the wake of the digital revolution. Artificial intelligence has emerged as one of the most transformative forces in the world today – not only in terms of the scale of its impact on institutional life, but also in its reshaping of processes of self-formation and personal life. Algorithmic Intimacy developed a sociological portrait of how predictive machine-learning algorithms are becoming deeply lodged inside us – discreetly reshaping our personal behaviour – and I had initially meant this book, Algorithms of Anxiety, to be a companion volume about fear.
The individual person, wrote Fyodor Dostoevsky, ‘is tormented by no greater anxiety than to find quickly someone to whom he can hand over that gift of freedom with which the ill-fated creature is born’.1 Recalibrating Dostoevsky’s insight, my conjecture is that – in our own time – fear-inducing anxiety of which persons wish to rid themselves is increasingly passed over, or more accurately outsourced, to smart machines. To be sure, most fearsome is the ubiquity of fears encircling automated intelligent machines: it is as if threatening risks and terrifying anxieties may leak out of any nook or cranny of our newly minted technological workplaces and homes, or issue from engagement with our digital devices. Fears that AI is a job killer. Menacing trepidations about digital surveillance and the erosion of privacy. Escalating worries that algorithms accentuate social divisions and stoke racial and gender inequalities. Surging personal vulnerabilities in the face of social media culture. Mind-chafing and body-numbing panic about the militarization of drones and killer robots. Then there is that most terrifying fear that AI jeopardizes not only our workplaces, homes and families but threatens the very existence of humanity itself.
The intricate interconnections between algorithms and anxieties are the central theme of this book. To a significant extent, modern society enthusiastically valorises artificial intelligence: it is seen as a ‘breakthrough science’, a powerful new source of economic growth transformative of our lives in these times. But still an inherent ambivalence remains: fearful frames of mind attach to the daily struggle of connecting our online and offline worlds; feelings of insecurity continually erupt in the face of ever-escalating ‘information overload’; and self-propelling fears spread as an ‘unintended consequence’ of people’s outsourcing of their personal decision-making to automated machine intelligence. As I worked on this volume, I realized how much predictive algorithms do not lessen our disquiet but instead exacerbate the anxieties they appear to repair.
So this book is, among other things, an inquiry into the possibilities and perils of predictive analytics viewed from within the boundaries of human practice and emotional life. I situate machine-learning algorithms in the context of today’s cultures of anxiety as a starting point for exploring wider social problems.
I owe much to so many people who provided support, comments and criticism during the writing of this book. I’m grateful for discussions with Tony Giddens, Helga Nowotny, Nigel Thrift, Massimo Durante, Bob Holton, Masataka Katagiri, Hideki Endo and Rina Yamamoto about the social science of technology. I owe a special debt of gratitude to Ross Boyd, for his extremely valuable background work on predictive analytics and research assistance with the book as a whole. Dariusz Brzezinski at the Polish Academy of Sciences, Takeshi Deguchi at the University of Tokyo and Iarfhlaith Watson at University College Dublin arranged lectures where I presented parts of this book; I thank colleagues and students at each of these places for their comments.
I have been fortunate in having the following people as an ‘invisible college’ to make virtually everything possible: Ralf Blomqvist, Anthony Moran, Carmel Meiklejohn, Oliver Toth, Kriss McKie, Fiore Inglese, Nick Stevenson, Gerhard Boomgaarden, Nigel Relph, Roman Batko, Atsushi Sawai, Ross Boyd, Louis Everuss, John Cash, Bo-Magnus Salenius, Mike Innes, Kamil Filipek, Deborah Maxwell and Eric Hsu. Many thanks also to Tim Clark for his exquisite copy-editing skills.
My thanks, finally, to Nicola Geraghty, and to Caoimhe, Oscar and Niamh, who heard many of the arguments I develop in the book half-raw and saved me from many blunders. This book is dedicated to a person who for nearly sixty years supported my endeavours, and on whom I always depended.
Anthony Elliott
Adelaide, 2023
1 Fyodor Dostoevsky, The Brothers Karamazov, trans. Constance Garnett, New York: Modern Library, 1996, p. 282.
The power of predictive algorithms is widely presumed to herald a world in which the crippling burdens of anxiety will be left behind and where individuals, communities and organizations can at last take control of the future, possibly with mathematical exactitude. Yet, paradoxically, algorithmic societies of the twenty-first century have unleashed gripping fears and disabling anxieties. Whether it’s anxiety about social media’s harmful effects, technological unemployment or profoundly intrusive digital surveillance, we live in an age of turbo-charged anxiety where the prophecies of algorithms are increasingly enmeshed with fundamental disruption and detachment.
The growing sense of algorithmic anxiety which has today taken hold and multiplied – ranging from our personal worries over privacy to political attempts to legislate collective protection in the face of big tech and the harms of social media – is rooted in the fear that we are being refused possible freedoms.1 One of the claims I advance in this book is that, by elucidating the many complex AI-enabled digital systems invisible from the vantage points of routine daily life, social theory can help us to see that automated machine intelligence is prone to functioning as interdiction; as prescription masquerading as recommendation; as prohibition disguised as calculation.
Managed by Bots, an eye-opening study by Cansu Safak and James Farrar, casts fresh light on the fate of gig economy workers who are increasingly subjected to the surveillance of automated algorithmic processes. The authors, commissioned by the non-governmental organization Worker Info Exchange, insist that today’s ‘algorithmic management tools, with the addition of intensifying surveillance practices, continuously scrutinizing workers for potential fraud or wrongdoing, are resulting in a deeply exploitative working environment’.2 Automated algorithmic management promised to deliver efficiency and transparency on a scale not previously realized. It hasn’t, however, and will not, say Safak and Farrar. Delving into the experiences of gig workers at companies such as Uber, Deliveroo, Amazon Flex and Bolt, the authors find a persistent lack of fairness and transparency in automated management systems across areas such as recruitment, work allocation, performance management and dismissals. ‘Gig platforms’, they write, ‘assert control over workers by maintaining an informational asymmetry’.3 From facial recognition checks on drivers’ identities, to the matching of drivers with customers, to the summary dismissal of employees, the structure and workings of predictive algorithms are, at root, oblique, incomplete, contingent. Frustrated gig economy workers, argue Safak and Farrar, can find little redress in the face of today’s algorithmic techniques for automated management systems.
Managed by Bots was a report undertaken against a global backcloth of policy-makers, regulators and courts seeking to reform the ways in which the gig economy’s current business model operates. Increased scrutiny of these platforms’ algorithms had been especially evident across Europe. In Italy, Deliveroo was fined for discriminating against workers because its algorithmic rating system failed to differentiate between unproductive employees on the one hand, and workers who were ill or otherwise unavailable on the other. Spain had around the same time passed legislation that sought to tighten regulations governing AI, granting employment status to gig workers. And the UK Supreme Court, in a decision that attracted global headlines, ruled that Uber drivers were entitled to workers’ rights with access to the minimum wage and paid holidays. But despite these regulative efforts to bring big tech into line with some limited worker protections – adapting and adjusting the ways and means of advanced automation to the requirements of long-established basic employment practices – governments and the legal system struggled to keep up with the advances of AI. The lightning-fast innovations of market-leading technology continued to release the business of algorithmic management from any regulative strings attached. Safak and Farrar sombrely note that, as a consequence of this outstripping of regulation by tech, ‘We are seeing an inordinate number of automated dismissals across the entire gig industry, many of which we believe to be unlawful.’4
In present-day automated society, no one is presumed to face discipline unless it has been administered by the algorithmic techniques of control as a specific remedy for misbehaviour. In their case study titled ‘Algorithmic Control’, Safak and Farrar recount the plight of gig worker Alexandru, a thirty-eight-year-old Uber driver from London who had fashioned a flawless five-star customer service rating over the course of some 7,000 trips:
Uber routinely sends drivers messages when they are flagged by its fraud detection systems to warn them that they may lose their job if they continue whatever behaviour is triggering the system. The messages contain a non-exhaustive list of the potential triggers, but do not provide a reason specific to the driver that is being accused of fraud. When Alexandru received the second and final one of these messages, knowing another flag would result in dismissal, he decided to call the driver support team to get further details on why he was triggering the anti-fraud system and what he could do to avoid it. Through the call, Alexandru and the support agent discussed a variety of situations that may have caused his trips to appear irregular, revealing the limited ability support teams have in deciphering the indications made by the system.5
The computational problem of how to understand the scene of fraud detection – to comprehend, to decipher, to see what has happened – is brought low, ultimately, by the opacity of algorithms. Pinpointing and isolating a deemed ‘cause’ or ‘factor’ for disciplining remains bound up in the vast multiplicity of parameters and hidden layers of machine-learning algorithms. A worker or employee may raise questions, and there appears no shortage of digital means to reach out to ‘support teams’, but in the search to determine the extent of the seriousness of a fault, error, misdemeanour or crime there is ‘no outside of the algorithm’. As it happens, the aperture of predictive algorithms in organizational life is such that the weights and outputs of machine intelligence remain largely hidden from view. It is in this uncertain, unforgiving and indeterminate digital space that wrongdoing is rewritten through the presumption of guilt.
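This double-bind can be made concrete in miniature. What follows is a deliberately crude, hypothetical sketch in Python – the trip features, the three weights and the threshold are all invented, and no real platform’s fraud-detection system is remotely this simple – of how a model’s internal parameters can be collapsed, at the worker-facing surface, into a single unexplained verdict:

```python
# A toy, hypothetical flagging pipeline. The features, weights and
# threshold below are invented for illustration; they stand in for the
# vast multiplicity of parameters and hidden layers discussed above.
from dataclasses import dataclass

@dataclass
class Trip:
    duration_ratio: float      # actual trip time relative to predicted time
    route_deviation_km: float  # distance off the suggested route
    gps_gap_seconds: int       # seconds of missing GPS signal

def fraud_score(trip: Trip) -> float:
    # A production model has thousands of learned parameters; three
    # made-up weights are enough to make the structural point.
    return (0.6 * trip.duration_ratio
            + 0.3 * trip.route_deviation_km
            + 0.1 * trip.gps_gap_seconds / 60)

def account_status(trips: list[Trip], threshold: float = 1.5) -> str:
    # Everything above this line stays inside the system. The worker
    # receives only the string below: no score, no weights, no reason,
    # not even which trip triggered the flag.
    if any(fraud_score(t) > threshold for t in trips):
        return "Warning: irregular activity detected on your account."
    return "Account in good standing."

print(account_status([Trip(1.8, 2.5, 300), Trip(1.0, 0.2, 0)]))
```

Even in this caricature, ‘the reason’ cannot be recovered from the worker’s side: the mapping from inputs to verdict exists only inside the system, which is precisely the informational asymmetry Safak and Farrar describe.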
The regime of technologism, in which disciplining is explained away by the presumption of guilt, tends to have an individualizing effect on the employee by making their plight one which is isolating and isolated. Customer support centres, web links and other digital tools are supplied organizationally, together with ‘good practice guidelines’ for internal complaint processes. But the responsibility for proceeding and seeing through the workplace complaint procedure lies squarely on the shoulders of the worker issued with the message that they are in violation of company policy. Company supervisors, line managers, operation directors all disperse. It is instead a matter of self-directing, self-scrutinizing and self-exploratory submission by employees before the ‘support agents’ supplied by the human resource departments of companies. Madhumita Murgia, writing in the Financial Times, followed up Uber driver Alexandru’s battle with automated algorithmic management, emphasizing the myriad ways in which gig economy companies fail to explain the operation of their algorithms to employees:
Alexandru claimed that while he was warned for fraud, such as deliberately extending the time or distance of a trip, he was not given any explanation of what had happened, and was unable to find out. ‘It’s like you find a warning left on your desk by your boss, but they are inaccessible, and if you knock on their door, they say, just stop whatever you’re doing wrong, or you will be fired, but they won’t tell you what it is. You feel targeted, discriminated against’, he said. When he called Uber for support, no one could help him. ‘They kept saying, “The system can’t be wrong, what have you done?”’6
If the ‘system can’t be wrong’, the supposedly logical conclusion is that the fault lies with the employee. But this, in turn, shifts the attention of workers away from the bona fide cause of the problem – so displacing rather than defying, condensing rather than confronting, the limits and dead-ends of automated machine intelligence. Instead of leading to individual questioning and public inspection of algorithmic techniques of management and control, the fear engendered by guilt whips up the employee into a storm of self-asserting efforts in order to grapple with the key organizational question: ‘What have you done?’
The whole business of automated algorithmic management emerges as Kafkaesque. The world of machine-learning algorithms anticipates our future propensities with mathematical exactitude, promising the delivery of finely calibrated socio-economic order, yet its decrees are both coercive and inscrutable. Kafka’s The Trial is the classic literary portrait of this double-bind.7 Kafka’s description of Law in The Trial captures the radical ambiguity of a rationality without reason. The novel’s hero, Joseph K., stands accused by unidentified agents from an unknown organization of committing an unnamed crime. He seeks throughout the novel to learn why he stands accused, yet is unable to do so because the Law has arbitrarily decreed it to be so. What follows is a host of inanities: K. remains ‘free’ despite being ‘under arrest’ and, undaunted, visits any authority that might supply clues as to his crime; he tries in vain to access ‘control authorities’ and courts in dubious locations; throughout, the crime of which he is accused remains unspecified. At various points in the novel, it is made clear that K. lives in a society with a legal constitution and under enforceable law. But Kafka tells us there is no justification before the Law. The novel reveals a world in which innocent individuals, once accused, are deemed guilty.
If this is what the fate of Law is about, it is certainly arguable that the authority of automated machine intelligence consists in nothing beyond the sheer performative automation of its own domination. I am suggesting that Kafka’s The Trial cannot be consigned simply to the realm of fiction; it is indicative of a disturbing new reality. Kafka’s analysis of power and guilt, when situated in a world of AI where extensive surveillance collects information on every conceivable aspect of people’s lives, is fertile ground for grasping how machine intelligence forces human subjects into dissembling, disavowal and never-ending self-justification before a Law of ferocious arbitrariness. The Trial was most certainly Kafka’s conclusive caution about totalitarian power. A similar arbitrariness in power relations is arguably discernible in the algorithmic era. Uber driver Alexandru’s good will and determination to gain access to an authority that might furnish him with an explanation of what he had allegedly done wrong is one signal example of this. Try and try as he did, no official explanation or legitimation of his position was forthcoming. Against the backdrop of rhetoric about the transparent algorithm, it was seemingly impossible for Alexandru to tell whether his fate had been decided in advance or whether this was a random error. In a world where predictive algorithms are accorded the status of objective certainty and definiteness to guard against uncertainty, corporate power exerts authority only by being resoundingly tautological (‘The system can’t be wrong’). Because Alexandru was accused by Uber, he became guilty (‘what have you done?’).8
‘The court’, wrote Kafka in The Trial, ‘wants nothing from you. It receives you when you come and dismisses you when you go.’9 This vicious cycle might equally describe automated algorithmic life. In the same way that the deafening silence of the court renders the defendant as her or his own judge, so too a similar arbitrariness belongs to the technoscientific terrain of machine-learning algorithms in general. Even though its advocates proclaim some sort of novel mathematical order in the world, automated algorithmic life remains ambiguous, ambivalent, fitful. On the one hand, the sheer technologism of advanced automation (management, surveillance, policing) has the proselytizing potentiality of all pure hegemony, retrospectively casting the human subject as subordinate to the absolute authority of machine intelligence. The reverse lining of this algorithmic power which precedes and pre-empts the individual self, however, is a mystifying indeterminacy which means it’s all but impossible to know whether courses of social action are rigorously predetermined or wildly random. Alexandru’s reflection that power is ‘inaccessible’ in the gig economy captures this double-bind well. As he sums up the pure violence of management by bots: ‘You feel targeted, discriminated against.’
While Alexandru’s comments are dramatic and alarming in their own way, this is of course far from the entire story in terms of understanding the sweep, reach and consequences of algorithmic power today. For one thing, there are vital differences between the socio-economic authority increasingly vested in digital technologies at a global level on the one side, and the absolute authority of the Law as portrayed in Kafka’s The Trial on the other. The omnipotence accorded to the Law in general throughout The Trial is, perhaps, why Kafka casts all authority as impervious to reasoned argument. Joseph K.’s fate was not just the absence of a fair trial. The end of the novel sees the hero brutally executed, screaming ‘Like a dog!’ Thankfully, such tragic torments of pure violence are not asserted or produced in the institutionalized realm of automated management – or, at least, not in the fashion discerned by Kafka. For Alexandru did, as it happens, eventually receive an apology from Uber. The company apologized for ‘flagging’ him in error. The issue was simply a computer mistake. But this, like many other corporate apologies today, was cast in the administered and administering language of computational detection, with its supposedly objective implication that any deviation from the automated pattern, or any individual trespass from algorithmic management, will result in termination of contract.
Another point worth bearing in mind is that the digitalization of power may have evolved very differently in the corporate world over recent years from the way the automation of power has played out in the broader public sphere and in other sectors of cultural life. Both the stunning opportunities and the massive risks of automated algorithmic systems are unfolding in very complex and highly uneven ways across the globe (which I tried to sketch in The Culture of AI and Making Sense of AI10), the deciphering of these consequences being one of the paramount tasks of social science. That said, we can still glean a great deal about the generic socio-technical changes now taking place from the report Managed by Bots. We can see, I think, that Alexandru’s story is about the increasing automation of all workplaces, and algorithmic life more generally, in a social world undergoing profound digital transformations. Personal life has become bound up with automaticity, which in turn generates new demands and novel anxieties. More and more, people are caught up in a complex series of automated interactions that have to be constantly assessed, or routinely monitored, in terms of their impacts upon, and consequent implications for, other parts of life.
Alexandru, in fact, makes just this point, claiming that the torments of management by bots are becoming increasingly bound up with other parts of economy and society. As he explains:
Everybody feels safe, they have a nice job and this won’t affect them, just those poor Uber drivers. But it isn’t just about us. The lower classes, gig workers, self-employed, we are the first to be affected because we are the most vulnerable. But what’s next? What if AI decides if an ambulance is sent to your house or not, would you feel safe without speaking with a human operator? This affects all our lives.11
What is happening in the gig economy, so we are told here, is happening in other parts of the economy as well. This isn’t just about gig workers in a single country. The impacts of automated algorithmic management are happening throughout most advanced societies.
It is understandable that much of the economic debate about the impact of automated management systems has been focused on the gig economy, and specifically how AI is changing the future of work through the automation of assignments. As I shall argue throughout this book, we live in a world in which automation, apps and algorithms creep into more and more aspects of our daily lives, creating new patterns of complexity and anxiety. Focusing scholarly attention largely on contractors, freelancers and gig workers (however important that is) runs the risk, however, of losing sight of other key transformations that are of equal or perhaps even greater significance as regards automated algorithmic life. Consider, again, Alexandru’s hunches about the ramifications of the gig economy for other socio-economic developments. This is arguably an important point and one that should be placed in a broader context. Ethnographer Alex Rosenblat has powerfully shown how Uber has created a new template for employment based on algorithms, one that she says is generalizable to the wider economy:
As a technology company in the ridehail business, Uber has an employment model that is changing the nature of work. The company promised to leverage its technology to provide mass entrepreneurship to independent workers. At Uber, algorithms manage how much drivers are paid, where and when they work, and the eligibility requirements for their employment. But the power of algorithmic management is obscured from view, hidden within the black box of the app’s design. While speaking with hundreds of drivers, culling thousands of forum posts online, and working together with scholars across disciplines to suss out the implications of what I’ve observed, I’ve found that the technology practices Uber implements (such as algorithms) significantly shape and control how drivers behave at work.12
What Rosenblat refers to as ‘the algorithmic power of management’ might seem reducible to a formulaic understanding, such as the computational ordering and automated calculation of data underpinning the company’s bundling together of customers, drivers, trips, charges and the like. Yet her argument is more subtle than this. Seeking to move beyond textbook definitions of an algorithm, Rosenblat draws attention to Uber’s algorithmic logic, specifically how the company’s machine-learning systems adapt and change in response to its disaggregated workforce.
Machine-learning algorithms have been crucial to the collection, analysis, manipulation and sale of data extracted from both customers and contingent labour at Uber. But for many of the large tech companies like Uber, what matters is not so much the step-by-step computational programming but rather the adaptive technical systems which can accumulate and use data to signify in relation to other complex systems – all bound up within the domain of information capital. In other words, it is the open-endedness of Uber’s algorithms that lies at the core of the company’s data power, connecting the forcefield of advanced automation with the many other corporate practices and technological assemblages of the platform. As Rosenblat teases out these corporate algorithmic logics:
Uber distances itself from the role of employer. Uber bills drivers as free and independent entrepreneurs but, through automated, algorithmic managers, obscures the control it leverages over how drivers behave at work. Because technology is ‘connective’, Uber identifies the work and services it provides as a type of sharing in the sharing economy, a message that effectively devalues and feminizes paid work. Issues like missing wages are attributed to technical language, such as ‘glitches’. The market logic of price discrimination is reframed as an innovation of artificial intelligence. Over and over again, we see how the language of technology is used rhetorically to advance the argument that what we think is one thing is, in fact, another. Uberland is driven not just by the mechanics of technology but also by the substantive sway of technological persuasion in American culture.13
This is, in other words, the ideology of technology setting the vocabulary of Uber’s grand narrative in a way that underwrites nothing but automated algorithmic management.
Automated society positions its appeal on the pledge to meet ‘human needs’ and satisfy ‘consumer desires’ through the sweeping reach of predictive algorithms. In a society of advanced automation, predictive analytics and smart algorithms are both entry and exit points for the market’s recalibration of ‘genuine needs’ and ‘possible desires’. The impact of predictive algorithms upon us as individuals and the repercussions that advanced automation has on us as a society implies much more than a cultural preoccupation with the coercive influence of data power, the dangers of algorithmic bias or the scale of private data sold to large corporations in behavioural futures markets. It involves (in addition) an ambient anxiety which has now acquired global dimensions, a batch of prismatic yet closely interconnected fears and forebodings, disillusionments and doubts, repetitive and repressive assumptions about the ways the world and its institutions are investing people, things, places and events with a kind of automated logic as well as routinized forms of computational regulation.14 And so, life in automated societies is recast as an infinite succession of experiments and anxieties.
To indicate this variety of cultural anxiety, let me note – more or less at random – some of the headlines making global news about algorithms in recent years:
Facebook whistleblower Frances Haugen, testifying before the US Congress in 2021, stated that the social media giant’s algorithms ‘harm children, stoke division and weaken our democracy’.15
Haugen’s testimony followed her leaking of tens of thousands of internal company documents revealing that Facebook had failed to remedy numerous consumer issues, instead ‘prioritizing profit over people’. She also revealed that internal Facebook research showed that Instagram is ‘toxic to teens’, particularly the mental health of teenage girls.
In 2023, the Screen Actors Guild–American Federation of Television and Radio Artists took its members out on strike, calling for better wages and regulations on the use of AI by studios. The ‘terrifying’ subtext of the artists’ strike was generative AI, widely viewed as reshaping Hollywood as its algorithms and computer-generated imagery were increasingly being deployed by studios to render artists redundant. The strike, in turn, led several established American novelists – including John Grisham, Jodi Picoult and Jonathan Franzen – to bring litigation against OpenAI, the creator of ChatGPT, for copyright violation through the ‘feeding’ of its program with their books.
At a UN conference addressing the rise of militarized Terminator-style ‘slaughterbots’, experts warned of the alarming consequences of AI robots programmed to destroy citizens, communities and cities. Seeking to promote a ban on Lethal Autonomous Weapons Systems under the terms of the UN’s Convention on Certain Conventional Weapons, conference delegates argued that AI is now advancing so exponentially that governments can no longer adequately regulate, or legislate against, arising dangers, risks and other catastrophic mistakes. As conference delegate James Dawes commented: ‘It is a world where the sort of unavoidable algorithmic errors that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.’16
A 2023 Royal Commission in Australia determined that an automated government scheme, known as ‘Robodebt’, incorrectly demanded welfare recipients pay back benefits. People received letters saying they owed thousands of dollars in debt, based on an inaccurate algorithm. According to the Commission, the scheme caused victims to feel like criminals and directly led to suicides.
Leading electric car manufacturer Tesla blamed ‘faulty algorithms’ for the exaggeration of the driving range of its cars. Tesla was fined $2.2 million by the South Korean government in 2023 for the ‘rigging’ of vehicle range-estimates software. Several reports noted that customers had incurred significant distress in bringing these complaints to the attention of the manufacturer. Tesla, according to a report in Forbes, ‘went as far as to create an entire Las Vegas-based “diversion team” devoted to quieting customer complaints about the inaccurate range, a group that was told by managers it saved Tesla $1,000 for every appointment it cancelled’.17
Australian comedian Hannah Gadsby condemned Netflix as an ‘amoral algorithm cult’ amid an intense media controversy about the transgender community.
Let us note some striking features of these otherwise disparate instances of algorithmic anxiety. Four features in particular stand out.
To begin with, it is striking how far machine estimations of human preferences, interests and desires actually reach in terms of both their sweep and depth. Social media addiction, mental health, militarization, sexuality, regulation and governance, excessive data consumption and automated bingeing: what do these have in common? Some have suggested such developments are the result of a massive breach between ethics and the digitalization of society. That surely isn’t convincing, however. Most of the examples listed carry important ethical consequences to be sure, but the spread of algorithmic anxiety reaches much further. Not just ethics then, but also identity, intimacy, ideology and institutional life are at issue. Anxiety over algorithmic calculations and automated recommendations – the degree to which machine intelligence is surreptitiously influencing us, steering our preferences, performing calculations that we haven’t requested or don’t require – attaches to both ethics and the everyday, to intimacies and institutions, to politics and the personal.
This is how Kyle Chayka, in his eye-opening article in The New Yorker, ‘The Age of Algorithmic Anxiety’, reconstructs lifestyles besieged by recommendations generated through computational systems. We live today in a world, says Chayka, in which algorithmic recommendation systems seem ‘more in control of our choices than we are’. From food-delivery apps like Uber Eats and DoorDash that predict tasty dishes based on your ordering history, to text and email message systems which supply predictive formulations for what you might communicate to others, what matters today, in both professional and personal life, is a form of control stemming from predictive analytics which claims to know what is likely to happen in the future. ‘It can feel as though every app’, writes Chayka, ‘is trying to guess what you want before your brain has time to come up with its own answer, like an obnoxious party guest who finishes your sentences as you speak them. We are constantly negotiating with the pesky figure of the algorithm, unsure how we would have behaved if we’d been left to our own devices. No wonder we are made anxious.’18
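To fix ideas, here is a minimal sketch of the kind of preference-guessing Chayka describes – a toy item-based recommender in Python, with invented dishes and hand-written similarity lists standing in for the learned models of any actual food-delivery app:

```python
# A toy item-based recommender: it 'guesses what you want' purely from
# what you have ordered before. All dish names and similarity lists are
# invented for illustration.
from collections import Counter

order_history = ["pad thai", "green curry", "pad thai", "ramen"]

# Hand-written similarities standing in for learned co-occurrence data.
similar_items = {
    "pad thai": ["drunken noodles", "pad see ew"],
    "green curry": ["massaman curry"],
    "ramen": ["udon"],
}

def recommend(history: list[str], k: int = 3) -> list[str]:
    """Score each candidate by how often its 'anchor' dish was ordered."""
    scores: Counter[str] = Counter()
    for dish, count in Counter(history).items():
        for candidate in similar_items.get(dish, []):
            scores[candidate] += count
    return [dish for dish, _ in scores.most_common(k)]

print(recommend(order_history))
# ['drunken noodles', 'pad see ew', 'massaman curry']
```

The point of the toy is the distribution of initiative: the history has already done the ‘choosing’ before the user is consulted – the sentence-finishing party guest in a few lines of code.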
Anxiety and fear of social control are not simply common feelings, but far-reaching social realities of our time, forebodings firmly anchored in algorithmic societies. In view of the kind of ‘algorithmic anxiety’ that oozes throughout automated life, the individual increasingly comes to understand their own self-awareness, self-confidence and personal worries in terms of the logics of predictive analytics and the performativity of big data. Chayka conducted an online survey about algorithmic anxiety, presenting a preliminary catalogue of the fears, forebodings and worries that influence overall social behaviour. Whilst the survey was only provisional in design, the aspects of digital life it reported on concerned societal pressures to conform to algorithmic norms. Many complained, writes Chayka, ‘that algorithmic recommendations seem to crudely simplify their tastes, offering “worse versions of things I like that have certain superficial qualities” … One wrote that the problem had become so pervasive that they’d “stopped caring,” but only because they didn’t want to live with anxiety’.
Yet anxiety, it transpires, might not be so easy to sidestep. Anxiety is a vital internal information system for coping with dangers and risks of all types. Certainly, the intrusion of anxiety into routine forms of social activity can be experienced as a major kind of dislocation which, in turn, demands various mechanisms of adjustment for the reproduction of social order.19 In our own time, terms such as ‘algorithmic anxiety’, ‘digital panic’ and ‘automation fear’ are repeatedly used by authors to capture something about our disrupted relationship with automated recommendation technology. But such anxiety is not simply about technology. Anxiety generated in the course of interaction with non-human objects, I claim, is intricately interwoven with a more basic set of worries and fears about our contact and connections with people. How new are these emergent algorithmic anxieties? How do they cross and tangle with interpersonal anxieties, fears and forebodings? What are the risks and opportunities? In what ways are these risks and opportunities shaped by emergent technologies? These are issues I shall seek to confront throughout this book.
Second, the affliction of algorithmic anxiety involves much more than merely concerns that the power exerted by predictive algorithms is warping our experience of time and conception of the future. To speak of ‘algorithmic anxiety’ is to underscore the profound digital dislocations of speed, dynamism, momentum and acceleration. Predictive analytics come couched in computational probabilities that promise to transcend our ignorance, at once elevating speed and degrading continuity. Algorithms help us to cope with the world of continuous change but, at the same time, promote in their speed a life of continuous change. Traditional forms of sense-making and inherited scripts governing the conduct of life appear to be no match for an all-enveloping world of algorithms, machine learning and high-power computing. This is the point where anxiety tips the individual towards the now commonly held assumption that ‘algorithms know us better than we know ourselves’.
All of this, let me emphasize, is propelled by the speed of data-driven automated decision-making.20 Predictive algorithms appear seductive in their capacity to map out trajectories for future behaviour and to do so instantly, immediately, without waiting. For many people, the sense that we are living in a world that is constantly speeding up, a society that is continually ‘pressed for time’ as Judy Wajcman describes it,21 is increasingly widespread. More and more people feel hurried, harried and rushed. There is simply not enough time in the day to get things done, let alone find the time to do the things that one might wish to do. Time-saving digital technologies were the promised route out of this dilemma, but have turned out only to demand more time from people: the time required to research the latest technological gadgets or download new software; the time devoted to posting status updates; the never-ending intervals of clicking ‘like’, ‘retweet’, ‘accept’ or ‘delete’. This is where the seductive power of predictive algorithms comes into the picture, promising to help us cope with the overburdened time demands of this runaway world and to give us back some control over the future. More often than not, however, this voyage into automation is an act of self-effacement and disabling anxiety. It transpires that the speed of predictive analytics leaves people with the feeling that they have been cast adrift, or are simply overwhelmed by a computational system that is dynamically closed upon itself. As Bernard Stiegler insightfully comments, the ‘network works at 200 million kilometres a second while your own body works at 50 metres a second. So the coefficient of difference is that the network is 4 million times faster than your own body. So you are taken by speed.’22
Third, threaded through remote-controlled, semi-automated and fully automated processes and interactions that don’t seem to splice together into a meaningful let alone coordinated pattern, the individual becomes reduced instead, as Stiegler puts it, to a world where ‘calculation prevails over every other criteria of decision-making and where algorithmic and mechanical becoming is concretized and materialized as logical automation and automatism’. Frustrated and dismayed daily by data overload, the individual increasingly takes flight into the collusive solace of digital automation. People find a shelter for ‘data fatigue’ in tools like Buffer or HootSuite, which automate social media posts, or Roboform, which automatically fills in forms online, or in shopping subscription services like Trunk Club, Stitch Fix or Bombfell, which automatically purchase clothes selected ‘just for you’. The average person makes 35,000 decisions a day, notes Jason Patel, who argues that automation promises the hope of redemption since it delivers an effective means to outsource the mundane details of life. ‘Automation’, Patel writes, in a high-tone sermon style aimed at ‘keeping the American dream alive’, ‘means streamlining processes, limiting distractions, and saving time and effort. It means clearing your mind of clutter and spending your day on work matters.’23 And yet, given the destructive effect of the algorithmic age on knowledge of how to act and how to live, this promise of freedom can only be deceptive so far as the creative repertoire of life-stories is concerned. Stiegler’s argument is that computational capitalism’s ‘automatization of existences’ replaces all forms of knowledge with the behavioural prescriptions produced by predictive algorithms. But what, exactly, does that mean for individuality?
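The grammar of such tools is ‘decide once, then never again’. Here is a minimal sketch of that Buffer-style pattern using only Python’s standard sched module – the post texts and timings are invented, and no real service’s API is involved:

```python
# A toy 'set and forget' scheduler in the spirit of tools like Buffer:
# the when-to-post decisions are made once, up front, then handed over
# to the machine. Post texts and delays are invented for illustration.
import sched
import time

queue = sched.scheduler(time.time, time.sleep)

def post(text: str) -> None:
    print(f"[{time.strftime('%H:%M:%S')}] posted: {text}")

# Queue three posts at fixed offsets (seconds here; hours in real use).
for delay, text in [(1, "morning update"), (2, "lunch link"), (3, "evening recap")]:
    queue.enter(delay, 1, post, argument=(text,))

queue.run()  # the human steps out; the remaining 'decisions' just replay
```

Once queue.run() is called the individual is, exactly as this passage suggests, removed from the mechanized process: nothing that happens afterwards requires, or admits, further input.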
In a nutshell, automation apps, devices and tools are supposed to be the solution not the problem. But if we stop to explore the logics of those of us who have been cajoled to mix automation cocktails and taught to narrate life-stories of machine-driven futures, the practice of ordinary automated life reveals a very different picture. People turn to digital systems and processes to automatically accomplish tasks when there are things they don’t wish to voluntarily undertake, or commitments they cannot ‘find’ time to complete. The ideal horizon of automation is the removal of the individual from the mechanized process. The art of automation is focused on the prediction of the behaviour of complex systems, which is meant to buttress the power of agency and yet, paradoxically, closes off options to the individual. Significantly, these dismaying automations generate the illusion of time having stopped. In outsourcing tasks to smart machines, it is as if people are distancing themselves from their own agency, keeping decision-making, choices, options and alternatives at bay. Routine tasks and mundane matters, to be sure, require no protracted consideration or deliberation and so can proceed without personal input. But automated languages of the self are a spurious form of mastery. The singular ‘identity core’ which can be discerned in societies of automation is that predictive algorithms carry an omnipotent power of agency that people have attributed to them. The problems of identity are magically sidestepped in a deterministic world where predictive algorithms are invested with the power to foretell the future. Contrary to appearances, however, there can be no final resolution of identity troubles in face of the surreptitious violence of over-simplification realized through emergent technologies. That violence remains poised at the volatile interface between humans and machines.
The fourth and final feature in the instances of algorithmic anxiety I highlight here concerns the markedly different ways of thinking about automation and conceiving responses to the incursions of data power and informational capital. What becomes evident in reviewing these headline examples is that two distinct domains, the private and the public, both vital to the essential mediation of interpersonal interaction and civil society, converge on the current automation discourse. Yet these intersecting lines between public and private life notably elude integration, each tending to remain trapped in its own self-referential assumptions and practices, with little prospect for cross-referencing or alternative forms of conceptualization. Thanks to the algorithmic techniques of platform capitalism (the excess, evasion and escape now at the disposal of large tech companies), societal self-understanding of the cultural anxieties unleashed by the digital revolution is oftentimes held in check, displaced and so disabled simply by the opacity of predictive algorithms and the non-commitment and evasion of corporate entities associated with specific forms of power based on informational capital. The private worries circulated in media coverage of, say, dating apps or quantified lives are rarely referred to broader public debates regarding digital surveillance, fake news or cybersecurity. On the contrary, such worries all too often remain trapped in the privatized language of individual troubles, do-it-yourself strategies for evading the fast-sprouting accumulation of data risks, or therapeutic castings of individualized mental health. In all of this, there appears little prospect for substantial immersion in the messy complexities of the digital world in order to understand how digital technologies are reshaping public and private life.
The title given by Byung-Chul Han to his philosophically captivating study, Non-things, insightfully captures the embryonic suite of problems which today besets social theory in the algorithmic age. Society has morphed, argues Han, from the age of solid objects and secure dependencies to the age of virtual digitalization and the rise of non-objects. Contemporary society exists in and through networked webs of information, not objects. According to Han, this ‘informationalization of our world’ turns social life into data and cultural exchanges into information. Information, however, lacks temporal stability and spatial fixity. ‘Informationalization’ now means something very different from what it meant in even the recent past, when many critics enthusiastically welcomed the digital revolution. Information now decentres our lives, dematerializes the social landscape, disembodies the individual subject and ultimately eats away at the substantiality of the world. As Han develops this line of analysis:
Things are increasingly receding into the background of our attention. The present hyperinflation and proliferation of things are precisely a sign of an increasing indifference towards them. We are obsessed not with things but with information and data. We now consume more information than things. We are literally becoming intoxicated with communication. Libidinal energy is redirected from things to non-things. The result is infomania.24
The upshot of society’s redirection of cultural energies from objects to non-objects is a mania for information. On this view, people are reconstituted as infomaniacs. It is as if the digital age makes data fetishists of us all.
The degradation of experience and sociality effected by ‘infomania’ portends hard times for the individual self, or at least for that form of subjectivity wrapped around the Enlightenment twinning of personal agency and human autonomy.25 The ephemerality of information in the digital age corrodes experience, memory and perception, with the individual subject rendered a cipher of predictive analytics. ‘In a world controlled by algorithms’, writes Han, ‘the human being gradually loses the power to act, loses autonomy. The human being confronts the world that resists efforts at comprehension. He or she obeys algorithmic decisions, which lack transparency.’26 Nowadays what counts is short-term immediacy, as predictive algorithms outstrip reflective agency and effectiveness replaces truth. The age of non-things is not entirely empty of sociability or community, but no one enveloped by information can remain calm, or hold on to long-term priorities and commitments, for long. Infomania promotes only fleeting forms of attention. The uncertainty into which it has cast the multitude dependent on predictive algorithms feeds off the excitement of novelty and a self-perpetuating frenzy of topicality. This, in turn, gives rise to new modes of disengagement, basic distrust, evasion – as witnessed in the spread of ‘fake news’ and ‘conspiracy theories’, both of which reflect the utter precariousness and contingency that is intrinsic in information. As Han points out: ‘Information’s fleetingness alone can account for the fact that information destabilizes life. It constantly attracts our attention. The tsunami of information agitates our cognitive system.’27
I want to suggest that, whatever else Han is saying here, he’s making us think about that strange haunting of the individual in conditions of advanced digitalization. Infomania makes possible tech intoxication – or so it seems, Han intimates – and yet