Across the world, AI is used as a tool for political manipulation and totalitarian repression. Stories about AI are often stories of polarization, discrimination, surveillance, and oppression. Is democracy in danger? And can we do anything about it?
In this compelling and balanced book, Mark Coeckelbergh reveals the key risks posed by AI for democracy. He argues that AI, as currently used and developed, undermines fundamental principles on which liberal democracies are founded, such as freedom and equality. How can we make democracy more resilient in the face of AI? And, more positively, what can AI do for democracy? Coeckelbergh advocates not only for more democratic technologies, but also for new political institutions and a renewal of education to ensure that AI promotes, rather than hinders, the common good for the twenty-first century.
Why AI Undermines Democracy and What to Do About It is illuminating reading for anyone who is concerned about the fate of democracy.
Page count: 234
Year of publication: 2024
Cover
Title Page
Copyright
Preface
Notes
Acknowledgments
1. Introduction
About this book
Notes
2. A Not So Democratic History
Politics and cybernetics
Histories of technology and democracy
Notes
3. What AI, What Democracy?
Artificial intelligence (AI)
Democracy
Notes
4. How AI Undermines the Basic Principles of Democracy
Foundational principles of democracy: liberty, equality, fraternity, rule of law, and tolerance
How AI undermines the foundations of democracy
Notes
5. How AI Erodes Knowledge and Trust
The problem of totalitarianism revisited: lessons from Hannah Arendt
Some challenges regarding AI, knowledge, and democracy
Notes
6. Strengthening Democracy and Democratizing AI
Strengthening and creating new democratic institutions
Regulation and oversight
Democratic AI
Notes
7. AI for Democracy and a New Renaissance
AI for democracy
A new Renaissance
Digital humanism
Conclusion
Notes
8. The Common Good and Communication
Common good and the commons
Communication and community
Common experience and building a common world
Notes
Executive Summary for Policy Makers
References
Index
End User License Agreement
MARK COECKELBERGH
polity
Copyright © Mark Coeckelbergh 2024
The right of Mark Coeckelbergh to be identified as Author of this Work has been asserted in accordance with the UK Copyright, Designs and Patents Act 1988.
First published in 2024 by Polity Press
Polity Press, 65 Bridge Street, Cambridge CB2 1UR, UK
Polity Press, 111 River Street, Hoboken, NJ 07030, USA
All rights reserved. Except for the quotation of short passages for the purpose of criticism and review, no part of this publication may be reproduced, stored in a retrieval system or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the publisher.
ISBN-13: 978-1-5095-6094-3
A catalogue record for this book is available from the British Library.
Library of Congress Control Number: 2023941290
The publisher has used its best endeavours to ensure that the URLs for external websites referred to in this book are correct and active at the time of going to press. However, the publisher has no responsibility for the websites and can make no guarantee that a site will remain live or that the content is or will remain appropriate.
Every effort has been made to trace all copyright holders, but if any have been overlooked the publisher will be pleased to include any necessary credits in any subsequent reprint or edition.
For further information on Polity, visit our website: politybooks.com
These are busy times for anyone concerned with the ethics and politics of artificial intelligence (AI). As I add the finishing touches to this manuscript, there has been the rise of large language models for text generation (since November 2022); wild claims have been made about the supposedly catastrophic impact of AI on the world by tech CEOs and the “experts” who inspire them1; and in June 2023 the European Parliament passed the AI Act, claimed to be the world’s first comprehensive AI regulation.2 What is going on? Is AI a threat to the world? And what will be the impact of the legislation? AI ethics specialists are in demand. Public debates about AI mushroom everywhere. AI has become the object of (geo)political struggle. Tech CEOs such as Sam Altman manage to dominate the public discussion. They lobby publicly and are received by the White House and by European heads of state.3 Citizens and governments are told that we have to develop AI fast, otherwise others will overtake us. We are told how and how much AI should be regulated. We are told that AI might have catastrophic effects and that we should pause its development.4 Things are moving fast, and the chorus of those who hype or doom-think AI grows, often boosted by the media.
In the midst of this turmoil, which is often created with specific political aims, the voices of those who calmly and systematically analyze the impact of AI on society are often not heard. But doing that work is important if we want to seize the opportunities this technology offers and use it in a responsible way. Focusing on AI’s influence on democracy, this book discusses the politics of AI by putting it in a wider intellectual context and offers a vision of how to move forward. It is about AI, of course, but it is also and mainly about democracy. About how vulnerable and resilient our current forms of democracy are and can be in the light of powerful technological and anti-democratic forces. And especially about what kind of democracy we want. Ultimately, that question is about what kind of society we want and what kind of world we want to leave to the next generations. Do we accept the current situation, or do we try to make sure that advanced technologies help us to work towards the common good? This – not short-term corporate interests or science fiction fantasies about AI destroying civilization – is what should be at stake when we discuss our common technological future.
Kyoto, 21 June 2023
1. See, for example, https://www.bbc.com/news/uk-65746524
2. https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
3. In May 2023, OpenAI’s CEO Sam Altman toured European regulators, threatening to stop operating in Europe if the EU regulated AI too much (and not on his terms). https://techcrunch.com/2023/05/25/sam-altman-european-tour/
4. Published in March 2023, the Open Letter from the Future of Life Institute (also signed by Elon Musk) claims that advanced AI could change the history of life on earth and get out of control, arguing that we should therefore pause the training of powerful AI systems for at least six months. https://futureoflife.org/open-letter/pause-giant-ai-experiments/
I thank anonymous reviewers for their comments, which helped me to revise and update the initial draft of this manuscript, and especially my editors Mary Savigar and Stephanie Homer for their continuous support during this process. I also thank Zachary Storms for helping with the formatting, and I am grateful to all the people who during the journey of writing this book have suggested literature, have been conversation partners, and have offered their invaluable friendship and collegiality.
A specter is haunting today’s political world, and it’s an ugly and dangerous one: authoritarianism. Everywhere democracies are under threat, including in the West. Whereas the late 1990s still saw a wave of democratization, today there are many anti-democratic tendencies, which sometimes result in a slide towards authoritarianism. A 2021 report shows that dictatorships are on the rise worldwide. Polarization is worsening, there is a dramatic increase in threats to freedom of expression, and 70 percent of the world population now lives in an autocracy, compared to 49 percent in 2011 (Democracy Report 2022). Western democracies are not immune to this trend. Some speak of a new world order, with powerful players trying to destroy the international order set up after World War II and the United States falling prey to polarization and “decay” (Erlanger 2022). There are also autocratization tendencies in Europe in the form of a lurch to the (far) right, for example in Hungary, Poland, and Serbia. In September 2022, a right-wing nationalist Swedish party gained more than 20 percent of the vote. The United Kingdom became politically and financially unstable after populists and later ultra-liberal conservatives rose to power. In the same year, far-right populists won the elections in Italy, and their leader Giorgia Meloni became prime minister. As is well known from history, anti-democratic politicians can come to power through democratic elections and subsequently undermine or even abolish democracy. In some contexts, this is an imminent danger today.
Digital technologies such as artificial intelligence (AI) offer many benefits and opportunities to society. But they also seem to play a role in the erosion of democracies and in the rise and maintenance of authoritarian and totalitarian regimes. Social media are blamed for helping to destabilize democracies by destroying truth and increasing polarization. AI fares no better. Today, stories about AI are often stories of manipulation, polarization, discrimination, surveillance, power, and repression.
Even if authoritarianism might not be immediately on the horizon, the risks for democracy seem very real. Governments and international organizations are concerned. The US Biden administration recently warned of the dangers AI poses to democracy, complaining that there are limits to what the White House can do to regulate the technology.1 A European Commission website ran the headline “democracy in peril” in the light of a report on risks posed by current digital technologies, such as false information, manipulation, surveillance, and the increased power of the commercial entities on which we depend in this area and which set the agenda for our digital future.2 And earlier, the United Nations High Commissioner for Human Rights warned of the impact of AI on human rights, the rule of law, and democracy.3 In other words, AI has come to be seen as a problem, and it is increasingly recognized that it is a problem for democracy.
Consider the Cambridge Analytica case (Cadwalladr and Graham-Harrison 2018), which involved voter manipulation based on the analysis of big data. Data from millions of Facebook users were collected without their consent and used for targeted political advertising in order to support political campaigns in the United Kingdom and the United States. And the use of AI in combination with social media has been said to drive political polarization and to propagate divisions in society, which can then be exploited by groups striving for power (Smith 2019) – groups that are not necessarily democratic. The rise of the far-right QAnon movement in the United States, which culminated in a violent insurrection at the Capitol, is a case in point. It seems that we risk being locked in our own bubbles and echo chambers, besieged by algorithms that try to influence us and drive us apart.
The program ChatGPT, a large language model that has recently become both very popular and ethically controversial, has also been linked to undermining democracy. Some worry that AI could get out of control and take over political decision making.4 This may seem rather far-fetched and at least a matter for the distant future. But there is also the near-future concern that AI could nevertheless be used to influence political decision making. For a start, it could be a powerful lobbying instrument. For example, it could automatically compose many op-eds and letters to the editor, submit numerous comments on social media posts, and help to target politicians and other relevant actors – all at great speed and worldwide. This could significantly influence policy making (Sanders and Schneier 2023). It could also be used to spread propaganda – thus influencing elections.
Yet AI is not only used to gain power; it also increasingly plays a role in existing governance institutions. Here, too, AI has been shown in a bad light. Consider the automated welfare surveillance system used by the Dutch government, which a court halted on the grounds that it violated human rights and breached people’s privacy: did the use of this system amount to “spying on the poor” (Henley and Booth 2020)? In Austria there was controversy about the algorithmic profiling of job seekers by the public employment service AMS, which was accused of unjustly discriminating against some categories of job seekers (Allhutter et al. 2020). The use of AI in court decision making has also been criticized as biased. In the United States, the COMPAS algorithm, used by probation and parole officers to judge the risk of recidivism, has been said to discriminate against black defendants: a report claimed that “black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism” (Larsen et al. 2016).
In the meantime, AI has also become popular with autocratic governments. Western media have reported that China has been using AI for surveillance and repression. According to the New York Times, its citizens are under constant surveillance: phones are tracked, purchases are monitored, chats are censored. Predictive policing is used to predict crime, but also to crack down on ethnic minorities and migrant workers (Mozur, Xiao, and Liu 2022). Human Rights Watch claims that the Chinese government collects vast amounts of personal information and uses algorithms to flag people who are seen as potentially threatening. The organization says that this has led to restrictions on freedom of expression and freedom of movement. Some people are sent to political education camps (China’s Algorithms of Repression 2019).
But the use of AI surveillance technology is not restricted to China or even to authoritarian regimes. Developed and supplied by China, but also by countries such as the United States, France, Germany, Israel, and Japan, it is proliferating around the world, including in democracies. According to Carnegie’s AI Global Surveillance (AIGS) index, more than half of the advanced democracies in the world use AI surveillance systems (Feldstein 2019a). Even when such technologies are used in political systems that call themselves democratic, there is always the risk that they will be used for repressive purposes. In 2020, Brazil’s far-right president Bolsonaro was accused of “techno-authoritarianism” for creating an extensive data collection and surveillance infrastructure, in particular the Citizen’s Basic Register, which brings together citizens’ data ranging from health records to biometric information (Kemeny 2020). And even in Europe and the United States, the Covid-19 pandemic has been used to mobilize AI and other digital technologies in the service of stricter law enforcement and control of the population. For example, in 2020 Minnesota used contact tracing to track protestors engaged in demonstrations in the wake of the police killing of George Floyd (Meek 2020), and AI has been widely used to assist far-reaching population mobility control measures.
As we seem to be on course for the emergence of new forms of authoritarianism and totalitarianism aided by digital technologies, it is high time we thought about AI and democracy and repeated Hannah Arendt’s question in The Origins of Totalitarianism (2017 [1951]) after World War II, but this time in the context of digital technologies. Notwithstanding differences in specific historical contexts, is democracy in danger, and are conditions for the rise of totalitarianism emerging again today? In what way do digital technologies contribute to these conditions? Does AI undermine democracy and lead to new forms of authoritarianism and totalitarianism – let’s call them “digital authoritarianism” and “digital totalitarianism”5 – and, if so, how does that work and what can we do about it? And more generally, and beyond the question regarding totalitarianism, what is the impact of AI on democracy? Is AI good for democracy, and if not, what can be done about it? How can we make sure that AI supports democracy?
This book argues that AI as it is currently developed and used undermines the fundamental principles and knowledge basis on which our democracies are built and does not contribute to the common good. After putting the question in a historical context and analyzing it, guided by political-philosophical theories of democracy, it offers a guide to some key risks that AI poses for democracy. It shows that AI is not politically neutral but currently shapes our political systems in ways that threaten democracy and support anti-democratic tendencies by undermining democratic principles, by eroding the knowledge and trust needed for democracy, and by fostering the good of the few at the expense of that of the many.
But the book also provides a way out. It argues for fully acknowledging the political character of AI and points to the need for public deliberation and leadership in steering both our political institutions and the technology in a more democratic direction. AI needs to be rendered less damaging to democracy and should preferably work for democracy. This requires (re)shaping the technology at the stage of development and integrating it in ways that support, rather than undermine, the present political system. Yet the book argues that such a change in political leadership and democratic technology development can only be successful if there is an adequate political-epistemic basis. Such a basis can be built by nurturing a culture and education inspired by the Renaissance, humanism, republicanism, and the Enlightenment, and, more generally, by creating shared knowledge and experience. It is argued that, ultimately, democracy, and AI for democracy, depends on the making and promotion of the common good, on communication, and on building a (more) common world. AI and digital technologies can and should support this project rather than hinder it.
Let me give you a quick road map of the book:
Clearly, AI poses serious risks for democracy, and these risks need to be better understood. After a historical perspective on the relation between technology and democracy (chapter 2), which shows that new technologies have often led to more centralization but also emphasizes that there is no determinism when it comes to the influence of technology on politics, chapter 3 analyzes the two main concepts in question: what do we mean by AI and – important for this inquiry – what do we mean by democracy? Linking discussions about AI to political philosophy and political theory is needed since the concept of democracy is by no means clear and uncontroversial. As to my own position, I distance myself from definitions of democracy in terms of voting and point to richer conceptions: I argue that deliberative, participative, and republican ideals of democracy should guide us in discussing the problem of AI and democracy.
The next chapters then focus on how AI as it is currently used and developed endangers democracy: I show how AI impacts fundamental liberal-democratic and republican principles such as freedom, equality, fraternity, rule of law, and tolerance in ways that undermine these principles – thus endangering liberal democracy and potentially leading to authoritarianism and totalitarianism (chapter 4). Following in the footsteps of Hannah Arendt and linked to contemporary work on the ethics and politics of digital technologies, I also show (in chapter 5) how its use risks undermining the knowledge and trust basis of democracy through the creation of power asymmetries, manipulation, the erosion of the distinction between what is real and fake, and the creation of epistemic bubbles. In the end, AI may even destroy trust between citizens and threaten our self-image as autonomous political subjects, which we have held so dear since the Enlightenment.
But there is no need for doom-thinking. This book is neither pessimistic nor anti-technology, and it rejects technodeterministic views that see socio-technological developments as autonomous and beyond human reach: there is still time and room to intervene and improve things. After an analysis of the problems, I consider how we can fix this and explore constructive approaches. How can we make democracy more resilient in the light of AI? And, more positively, what can AI do for democracy? In the last chapters – call it my “technodemocracy” or “democratic AI” manifesto – I argue that we need not only to change our political institutions and regulate AI in order to make democracy stronger, but also to make AI more democratic. The technology should not simply be taken as given; we can change it: the development of AI should be democratized. In chapter 6, I argue that this requires changes at the level of tech development and in the ways it is linked to, and embedded in, democratic political institutions. Yet, in chapter 7, I emphasize that the project of democratic AI is not only about making AI more democratic and totalitarian-proof but also, less defensively and more constructively, about creating AI for democracy. How can we make AI that supports, rather than erodes, democracy? I briefly discuss some work that tries to do this.
Yet at the end of the book I conclude that redesigning our technologies and reforming our democratic political institutions is not enough. I argue that the project of democratic AI can only succeed if it is embedded in a new cultural and educational environment: a renewal of our political culture needs a Renaissance and a new Enlightenment, this time assisted by digital technologies. Moreover, democratic politics (and indeed tech policy) also needs a deeper kind of normative transformation. In the final chapter, chapter 8, I argue that if we really care about democracy and indeed about fending off anti-democratic tendencies and preventing the rise of authoritarianism and totalitarianism, we need AI and other digital technologies that help us to realize and find the common good, and to really communicate and build a common world.
1. https://www.wsj.com/articles/white-house-warns-of-risks-as-ai-use-takes-off-d4cc217f
2. The website https://euraxess.ec.europa.eu/worldwide/africa/news/democracy-peril-commissions-ethics-group-stresses-need-and-ways-deepen refers to a recent report on democracy in the digital age by the European Group on Ethics in Science and New Technologies (2023).
3. https://www.npr.org/2021/09/16/1037902314/the-u-n-warns-that-ai-can-pose-a-threat-to-human-rights
4. See, for example, Risse (2023) and again the letter that asked to pause the development of powerful AI.
5. By digital authoritarianism and digital totalitarianism, I mean systems of societal organization and (more broadly) control that rely on digital technologies to submit people to authority (authoritarianism) and exercise total control over them (totalitarianism). This can be done by the government but also by corporate actors. However, the main purpose of this book is not to establish and occupy this academic concept but to investigate how AI risks undermining democracy – often in ways that do not necessarily lead to authoritarianism or totalitarianism – and what can be done about it.
Asking the question about AI and democracy may seem strange since usually we do not link the two terms. This is partly because we tend to see AI and other technologies as mere tools, instruments. We see them as means that do not touch the ends: our human goals and values. Many people assume that technology itself does not have much to do with politics and democracy. They ask: “Surely all depends on what you use it for? In what way is AI connected with democracy at all?” But like other technologies, AI is more than a tool. In ethics of technology, a common way to express that is to recognize the truth in the saying that “guns kill people.” Of course, people kill people – with guns – but the tool matters in ways that do not just depend on what people intend: it enables and encourages the action of killing. Without guns, there would be fewer killings. The same is true for the politics of technology. AI is not just an instrument. It shapes our actions and our goals. It influences our society. It benefits society but also creates risks, which are often unintended and unforeseen. Certainly, its impact partly depends on what people do with it, but the political influence of AI is deeper and its political effects more “internal” to what AI is and does than is commonly assumed. Its political impact is not just about politics and what politicians do but is also related to what the technology does and enables.
Inspired by many decades of work in philosophy of technology, I have formulated this insight as: “AI is political through and through” (Coeckelbergh 2022a: 5). The point is not that AI is a politician or a thing on its own, but rather that AI, as it is used by humans, has political consequences that cannot be reduced to its intended effects and that radiate far beyond the sphere of science and technology; they shape our societies and the ways we govern them.
Consider again ChatGPT: it is not just a text-generation tool but is likely to transform the way we write, the way professionals work (consider, for example, journalism), and, as I suggested already with my examples in the introduction, the way we do politics. These effects are not always intended. Think also of the internet and how it has already transformed our societies: not only the technology but also its manifold effects had not been foreseen. In this sense, AI is not just used by politicians but is itself also political.
This approach, which will be assumed and unpacked throughout this book, allows us to ask the question: In what political direction does AI push us? Does it make the world less democratic and more authoritarian, and if so, does that mean that we are helplessly delivered to its historical forces? Is techno-authoritarianism inevitable?
To better understand the problem regarding AI and democracy, to show how AI and problems with democracy have more to do with each other than expected, and to further discuss the question whether technology determines politics, I propose to put these issues in historical perspective (including some history of ideas) and reflect on the more general relation between technology and democracy.
Let’s start with ancient philosophy. When talking about the art and science of governing the state in the Republic (1997: 488a–489e), Plato compares governing the state with steering and navigating a ship. He uses the term kybernetes (Greek: κυβερνήτης): the steersman or helmsman of the ship, the pilot, the one who is good, artful, and skilful at steering and navigating. Steering a ship is a craft; it requires expert knowledge and cannot be left to the sailors. It also requires knowledge of navigation, for example, knowledge of the stars. Similarly, statesmen, Plato argues, should learn the art and science of steering the ship of the state: it is also a matter of cybernetics. They should know how to do it; it requires expertise and cannot be left to the people, a bunch of ignorant, quarrelling, and often drunken good-for-nothings.