Description

As AI takes hold across the planet and wealthy nations seek to position themselves as global leaders of this new technology, the gap is widening between those who benefit from it and those who are subjugated by it. As Rachel Adams shows in this hard-hitting book, growing inequality is the single biggest threat to the transformative potential of AI. Not only is AI built on an unequal global system of power, it stands poised to entrench existing inequities, further consolidating a new age of empire.

AI’s impact on inequality will not be experienced in poorer countries only: it will be felt everywhere. The effects will be seen in intensified international migration as opportunities become increasingly concentrated in wealthier nations; in heightened political instability and populist politics; and in climate-related disasters caused by an industry blind to its environmental impact across supply chains.

We need to act now to address these issues. Only if the current inequitable trajectory of AI is halted, the incentives changed and the production and use of AI decentralized from wealthier nations will AI be able to deliver on its promise to build a better world for all.




CONTENTS

Cover

Table of Contents

Title Page

Copyright

Dedication

Acknowledgements

Prologue

Note

Introduction: The AI Divide

Empire’s Inequalities

Empire, Old and New

The Majority World

Notes

Chapter 1: A New World Order

The Dawn of AI

Rulers of the World

The Racial State

Notes

Chapter 2: The Cost of AI

Billion-Dollar Enterprises

Colonial Economics

Technology Diffusion

Notes

Chapter 3: The Material World of AI

AI’s Beginnings

The Battle for the Heart of Darkness

The Electric Storm

Notes

Chapter 4: The New Division of Labour

Equal to or Somewhat Better Than an Unskilled Human

Rationalizing Informality

I Am Not a Robot

Workers, Resist!

Notes

Chapter 5: Fit for What Purpose?

Blood Money

Biometric Empires

Notes

Chapter 6: One Language to Rule Them All

A New Frontier

Dominant Worldviews

In the Margins

Notes

Chapter 7: The Way Out

The Limits and New Horizons of AI Ethics

What Governments and Governance Should Do

A Different Kind of Leader

Notes

Coda: The New Politics of Revolution

This Will Affect Us All

The Time for Action is Now

Notes

Key Readings

Index

End User License Agreement

List of Figures

Introduction

Figure 1

PricewaterhouseCoopers’ ‘Sizing the Prize’ figure and …

Figure 2

Number of the extremely poor (in millions), by region, 1990–2030

Chapter 3

Figure 3

Crawford and Joler’s ‘Anatomy of an AI System’ for an …

Figure 4

John Thomson’s 1813 map of Africa, with central Africa as ‘unkn…

Chapter 4

Figure 5

Google DeepMind’s ‘Levels of AGI’ (2023)

Figure 6

Amazon Development, Liesbeek, Cape Town (photo taken by author)



The New Empire of AI

The Future of Global Inequality

RACHEL ADAMS

polity

Copyright © Rachel Adams 2025

The right of Rachel Adams to be identified as Author of this Work has been asserted in accordance with the UK Copyright, Designs and Patents Act 1988.

First published in 2025 by Polity Press

Polity Press, 65 Bridge Street, Cambridge CB2 1UR, UK

Polity Press, 111 River Street, Hoboken, NJ 07030, USA

All rights reserved. Except for the quotation of short passages for the purpose of criticism and review, no part of this publication may be reproduced, stored in a retrieval system or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the publisher.

ISBN-13: 978-1-5095-5311-2

A catalogue record for this book is available from the British Library.

Library of Congress Control Number: 2024937085

The publisher has used its best endeavours to ensure that the URLs for external websites referred to in this book are correct and active at the time of going to press. However, the publisher has no responsibility for the websites and can make no guarantee that a site will remain live or that the content is or will remain appropriate.

Every effort has been made to trace all copyright holders, but if any have been overlooked the publisher will be pleased to include any necessary credits in any subsequent reprint or edition.

For further information on Polity, visit our website: politybooks.com

Dedication

For Yazeed

Acknowledgements

This book began as an article written to bring together two fields of thought in which I have been deeply invested. The first field is that of postcolonialism, decoloniality and critical race studies. I have been lucky enough to work with an incredible group of scholars – largely here in South Africa – whose work has profoundly influenced my own. My thanks here go to Joel Modiri, Sanele Sibande, Tshepo Madlingozi, Sabelo Ndlovu-Gatsheni, Pramesh Lalu, Jaco Barnard-Naudé and, in particular, Crain Soudien, whose friendship and encouragement have meant a great deal to me. The second is the emerging field of AI ethics. I’d like to thank especially Stephen Cave, Nora Ni Loideain and Kanta Dihal, who have been instrumental to my work in this space.

As I sought to bring these two worlds together and to write this book, I have benefitted greatly from the kindness and support of, and long conversations with, many friends and colleagues to whom I would like to extend my deepest thanks: Matthew Smith, Paul Plantinga, Fola Adeleke, Mark Gaffley, Nico Grossman, Kelly Stone, Melanie George, Naila Govan-Vassen, Rosalind Parkes-Ratanshi, Alan Blackwell, Michael Gastrow, Temba Masilela, Urvashi Aneja, Jantina de Vries, Kiito Shilongo, Andrew Merluzzi, Shachee Doshi, Zameer Brey, Divine Fuh and Jane Taylor.

Much of the work I have led and been engaged in over the past few years has allowed me to explore more deeply the implications of AI on inequality. This work would not have been possible without the generosity of the International Development Research Centre (IDRC) of Canada, which sponsors both the African Observatory on Responsible AI and the Global Index on Responsible AI. I’m grateful for the collegiality and conviviality that has characterized all my engagements and partnerships with the IDRC.

I remain deeply grateful to my editor at Polity, Jonathan Skerrett; I could not have asked for a more diligent and understanding partner in getting this book over the line. I am indebted to the incredible team at Polity for all their support.

I also thank Crain Soudien, Stephen Cave, Aubra Anthony, Aisha Sobey and Shakir Mohamed for their critical input and comments on various chapters and drafts of this book, all of which greatly helped to make the story I sought to tell more compelling.

My families on both sides of the world have provided crucial support as this book was written. A big thank you to you all, and especially to my dear children, who now have an outsize interest in AI. I am especially grateful to my parents, whose close readings of the text of this book were invaluable.

Lastly, thank you to my husband, my partner in life and thought, who has been with me every step of the way.

Prologue

When I was first ruminating on the idea of writing a book about the relationship between artificial intelligence (AI) and the history and contemporary conditions in Africa and across the so-called global South, I sought to describe an issue that I could see was bubbling beneath the surface. Having spent the earlier years of my career working on human rights in South Africa, I became aware that the capacity of African states to fulfil their duties in protecting the rights of their citizens was being undermined by the rise of new digital technologies and by the forms of global power in which they were entangled. In 2018, having lived in South Africa for ten years, I moved with my family back to the United Kingdom, to complete a postdoctoral training at the University of London. At the time AI was fast becoming a buzzword. The House of Lords was set to publish its first major report on AI, entitled AI in the UK: Ready, Willing and Able?, while China’s AI capabilities were quickly ascending, prompting fears around the rise of a new global power not of the West’s own making.

While some anxiety existed around AI’s effects on democratic stability, as the Snowden revelations and the Cambridge Analytica scandal had just recently demonstrated the impact and reach of new technologies on individual rights and freedoms, these concerns were not linked to their broader global resonance. Where issues of bias in AI were acknowledged, they were considered system-level concerns, to be solved through better programming. They were not connected to the structural and historical conditions out of which inequality and discrimination have arisen in the world.

In all of this, the position and the fate of the global South and of the African continent in particular were simply being ignored. These places mattered little to the exciting new world into which AI was ushering us. During my postdoc in 2018, I became increasingly concerned about a new global agenda that failed to include the majority of the world but clearly seemed dependent on it, whether for the extraction of resources or for the creation of new markets.

In 2019 we moved back to South Africa, where my work focused on understanding the impacts and implications of AI outside the West. One of the major areas of my work was a platform that I established in order to promote African experiences and expertise in the global discussions and debates about AI – which, in any event, were affecting the continent. This platform, the African Observatory on Responsible AI, was supported and funded by the International Development Research Centre of Canada and took on a life of its own. As the platform grew, our work mushroomed from research on the use and impacts of AI in Africa to advising and training African policymakers, working directly with innovators across the continent who built humane and beneficial AI technologies, and engaging with regional bodies and multilateral institutions. There was much work to be done – and still is.

There is also something important about the African and South African perspective that needs to be heard. South Africa is consistently rated the most unequal country in the world. This has everything to do with its complex colonial history, as the state-sanctioned racial segregation of apartheid imposed the worst form of racial inequality. After 1994, in the years that followed the demise of apartheid and the establishment of democratic majority rule, South Africa has continued to champion the cause of creating an equal society and world, in a global order where its sovereignty to defend the best interests of its citizens is sometimes impeded. Here, too, there is much work to be done to address the inequality and related social ills that continue to pervade South African society. But the tenacity to do that work exists in the spirit of the people and communities who keep fighting for justice.

South Africa’s experiences in trying to build an equal and prosperous society hold important lessons for the world. And its efforts are not just internal to the country. In late 2023, South Africa led the world in seeking justice for the horrors that Israel was committing against the Palestinian population in Gaza. Indeed, recent reports have brought to light Israel’s appalling use of automated AI-driven technology in the war on Gaza, with programs bearing cruelly satirical names such as ‘Lavender’ and ‘the Gospel’.1 But South Africa’s protest against Israel was also a cry to all nations to uphold the sanctity of the international system of human rights and humanitarian law in a frightening and fragmenting world. As AI’s planetary power expands, we will need these global systems more than ever.

As the writing of this book continued, the issues I set out to convey intensified into crises that demand urgent action. The trajectory of worsening global inequality that we are confronting as a global society, and in which AI is fully implicated, is not just a trend. At the centre of it are human lives and livelihoods, ambitions and dreams. People, particularly those who live in the majority world, are paying the price for this new empire of AI, as I call it. The introduction to this book and the opening of Chapter 4 offer a number of vignettes of human stories related to AI. While these stories are presented as hypotheticals, in non-identifying form, they are real accounts I’ve heard time and time again.

This book is not meant to cause despair. It is intended to call us all to action – to hold to account this new global power that reigns among us. I hope this book inspires a new commitment to our collective global humanity and to what, together, we are capable of doing.

Note

1. Bethan McKernan and Harry Davies, ‘“The Machine Did It Coldly”: Israel Used AI to Identify 37,000 Hamas Targets’, The Guardian, 3 April 2024, https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes.

Introduction: The AI Divide

In a shack outside the city of Johannesburg, a young woman considers her options. She lives with her son in this iron structure, hastily built out of discarded material from building sites across Johannesburg’s leafy suburbs. She has 20 rand (R20) left – about $1 – after collecting her monthly childcare grant, which is normally around $115 per month. She had recently signed up for a new loan agreement in order to pay for childcare for her son, so that she could undertake occasional work as a domestic cleaner. This work pays her about R180 ($9) a day. But the loan’s repayment terms were punishing, and she has been unable to pay back what she owed in time. Unbeknownst to her, the parent company of the agency she borrowed money from is the same company that distributes her monthly grant. As this company has access to her data, the loan repayments are automatically deducted from the grant before she even sees the money.

* * *

Upon arrival at Dulles International Airport, Washington, a man is approached by security personnel. He is returning home from a trip to Pakistan. He’s been visiting his family. His facial features and Muslim name prompt an AI-automated alert within the airport’s security systems, causing the ground authorities to detain him for interrogation in an airless room within the airport compound.

* * *

A young man of Tigrayan descent is on the run. A social media post has gone viral. It has been authored by an Amhara general – a leader of the regional armed forces that are conducting a campaign of ethnic cleansing against Tigrayans. The post suggests that a group of Tigrayans live in a nearby village. It is written in local Amharic and has not been deleted by Facebook’s AI-powered hate-speech detector.

* * *

The screen lights up with a new notification. A teenage girl in Malaysia is manning the computer until her father returns. She knows that she has just 15 minutes to respond to the request before the task is offered to someone else. She clicks on the link, which opens up a series of images displaying visceral scenes of indescribable violence. Quickly, she labels the images ‘gratuitous violence’ and closes the task. Her work will stop these images from being used in the training data of advanced image-generation AI, keeping this technology ‘clean’ for its end-users.

* * *

Over the past few years, a sophisticated set of technologies has been created and put to use around the world. They are called ‘artificial intelligence’ and, as their name suggests, these technologies seek to mimic the capabilities of human intelligence, particularly learning, logic, decision-making, and the recognition of speech, objects, and images. In its material existence within the world – a materiality that is largely hidden from the public eye – AI encompasses a vast and expanding network of computer-based technologies that require an interminable input of data and seek to display human-like intelligence and functioning. It involves a mega industry that operates at a planetary scale, with a footprint in every corner of the globe and an infrastructure that is both subterranean and celestial, extending to the deepest parts of the earth’s oceans and circulating into the outer atmosphere of our cosmos. And it enfolds some of the most ambitious pursuits of the human species: not just to create life, but to create the meta-intelligence from which new planetary species and forms can be moulded and brought into being.

Around the world, AI is impacting the lives of ordinary people. For some, things are getting easier. They outsource mundane daily tasks to AI-powered assistants and use new generative AI capabilities that enhance work performance and productivity in their white-collar professions. AI-powered platforms equip their children with personalized educational tools, tailored to the learning needs and the pace of each child. And when they are unwell, their doctors draw on AI-driven precision medicine to provide personalized diagnoses and prescriptions. One day, these people will have the opportunity to consider their digital afterlife and how advancements in AI could fuel the continuation of their character and voice long after their death.

But, as AI serves to accelerate prosperity and wellbeing in those places where it is produced and readily integrated into social and economic life, other people elsewhere pay the price.

AI makes life harder for many people across the world – just as it does for the people in the chapter-opening vignettes. For example, AI can hinder people’s ability to meet their most basic needs. Access to critical public services such as social assistance programs is increasingly mediated through AI technologies. To date, many of these programs have failed to meaningfully lift people out of poverty; instead, they entrench poverty traps that become harder and harder to escape. The new forms of work created by the AI industry and value chain are also failing to provide avenues for human flourishing. They are often exploitative, dependent on the economic vulnerability of people across the world who have little choice but to undertake precarious tasks for which only very basic wages are paid. As the use of AI increases around the world, so does the risk that these systems will produce unjust outcomes for people who are most vulnerable to gendered, racial, or ableist bias or most in need of the benefits these technologies might bring.

To date, conversations around the risks and limitations of AI and its future have largely centred on western experiences and evidence. While the question of bias is acknowledged, it is treated as a system-level issue, not as a global structural reality. Where fears of job losses and displacements arise, they are unconnected to the precarious digital labour that fuels the AI industry or to the new global divisions of labour that are taking shape. And where experts weigh in on the long-term risks of AI, the risk of rising levels of global inequality barely features.

Yet the countries that are left out of these conversations play an integral role in the production of this industry: a role that is largely unseen, and certainly not fairly valued. The people who live in these places put in their labour to build and maintain the vast lakes of data upon which AI draws for its resources. Their lands provide the raw earth materials that build the hardware upon which and through which AI operates. Their societies provide the testing ground for trialing new technologies in spaces considered less important, where the collateral damage goes unseen.

In societies across the majority world where AI does exist, it is often ill suited to local needs – or else it is rolled out without the proper safeguards and institutional oversight that these young democracies so sorely need in order to ensure that the human rights of their citizens are not harmed or put at risk. Southern governments increasingly turn to AI solutions as quick fixes for intractable developmental problems, such as the gap between the supply of and the demand for government services. But very often these systems function as poverty traps, failing to lift people out of poverty. In the slow violence of poverty, the marginalized and oppressed live in a constant state of war, fighting against systems that are increasingly hard to see and understand. Crucially, too, where AI is used to distribute resources, it can create divisions between communities, as can automated propaganda machines and sophisticated profiling techniques that function at once to polarize and to depoliticize the targeted people and communities.

The global auditing firm PricewaterhouseCoopers (PwC) attempted to put a price on the value AI would bring to national economies and to GDP growth. In 2017, just as AI was seeping into global focus, PwC published a seminal report, ‘Sizing the Prize’, which boasted that by 2030 AI would contribute $15.7 trillion to the global economy (see Figure 1). China, North America, and Europe stand to gain 85 per cent of this prize. The remainder is scattered across the rest of the world, with 3 per cent predicted for Latin America, 6 per cent for the region the report terms ‘developed Asia’, and 8 per cent for the entire bloc of ‘Africa, Oceania and other Asian markets’. On the global map upon which the AI prize is split, Africa’s potential gains are simply not there. Instead, this large continental mass, home to 18 per cent of the world’s population, is lumped with Oceania and the other less developed regions of Asia, their collective icon of growth sitting above Australia.

Other global agencies have similarly developed statistical models for the future of a global society. Through research examining the impact of Covid-19 on extreme poverty (see Figure 2), the World Bank has forecast that by 2030 – during the same period in which the global economy will see gains of $15.7 trillion from AI – 90 per cent of the world’s poor will live in sub-Saharan Africa. In fact, sub-Saharan Africa is set to be the only region of the world where extreme poverty will increase, as the rest of the world is set to experience significant drops in the number of those who live well below the poverty line. Added to this, with Africa’s growing youth population and the dwindling population numbers in European countries, it is estimated that by 2050 a quarter of the world’s population will be African.

Figure 1 PricewaterhouseCoopers’ ‘Sizing the Prize’ figure and the invisible African continent (estimated GDP gains of AI to the global economy)

Source: PwC, 2017.1

Little attention has been paid either to the potential for AI to critically worsen the state of global inequality or to the linkages between the enormous economic growth that AI will deliver to particularly privileged zones and the worsening of extreme poverty in postcolonial contexts, especially sub-Saharan Africa. Nor is this division static. One of the defining features of AI is that its rate of development and adoption is exponential. AI will simply teach itself to be better and more efficient; it is optimized to continuously self-improve. Critically, this means that, if AI is not redirected towards addressing global inequality and is instead left to concentrate wealth and prosperity in high-income countries, the global inequality gap will simply widen at an exponential rate into a more and more unequal future.

Figure 2 Number of the extremely poor (in millions), by region, 1990–2030

Source: Yonzan, Lakner, & Mahler, 2020.2

It is increasingly recognized that the benefits of AI are not evenly distributed; the policy response to this state of affairs is to affirm that efforts are needed to support those ‘left behind’ and help them ‘catch up’. This is reflected too in the AI for Good movements, which assume that the problem is not AI, but simply how it is used. AI used for the right reasons, for good, will fix everything. This kind of narrative is an easy extension of the idolization of AI that has arisen in recent years: a position that assumes that AI will produce positive net benefits for humanity even if it has not yet, and that it represents the pinnacle of enlightened scientific discovery and applied human reason. This is a new version of the old trickle-down effect. Within such framing, the only goal is to advance AI further and distribute it more widely. There is very little room to ask: do we want AI to play such a dominant role in our societies? And is AI really benefiting – or going to benefit – all of us?

The present book is concerned with creating the space to ask, and begin to answer, these questions. It is written on the back of over 15 years of living and working in South Africa and across the African continent, where the problem and lived reality of inequality are ever present. As I will describe in this book, the risks of AI are higher in places outside the West. This is so for several reasons – from widespread job displacement and precarious and temporary work ‘gigs’ to human rights atrocities in the AI supply chain and to biased or useless AI systems that further marginalize nonwestern groups. What’s more, in many places across the global South, the institutional mechanisms that might ordinarily protect individual rights and citizens’ interests are either failing or unavailable, as they deal at full capacity with more fundamental social issues.

In 2017 a story broke in South Africa that fundamentally impacted public perceptions of the use of technology on a mass scale in the public sector, and it consumed many of those who work in these areas, myself included, for a number of years.

The South African Social Security Agency is responsible for providing cash benefits to just under 50 per cent of South Africans, the majority of whom are entirely dependent on this grant for their livelihoods. In 2012 this agency entered into a contract with Cash Paymaster Services to distribute social grants to around 17 million beneficiaries across the country. The contract ran from 2012 to 2017, during which time the parent company of Cash Paymaster Services, Net1, established a number of subsidiary businesses. Net1 gave these companies access to the banking details, bank accounts, and personal information of all the grant beneficiaries Cash Paymaster Services was servicing. The companies in question used this information to profile potential clients and onsell predatory financial services. Loans at extortionate interest rates were sold to South Africa’s most vulnerable people, who were surviving off grants of less than $100 a month. Beneficiaries were subject to various kinds of automated decision-making (a kind of precursor to the more complex AI systems we find today) to assess the terms of credit on offer. And, because of the collusion between Cash Paymaster Services and the other subsidiaries of Net1, automated deductions were made from the accounts of grant beneficiaries as soon as their grant payment was released.

Reporting from GroundUp, a South African investigative journalism group, described how a mother came to collect her child benefit grant – which at the time was R350 (less than $20) – but her balance after the loan deductions was a mere 26 cents, an amount that the cash point could not even dispense. Many others faced similar hardships as a result of the predatory practices of Net1 and its subsidiaries.

Today these kinds of systems are driven by AI. In fact the upgrade to the Cash Paymaster Services system was a program called GovChat, inconspicuously connected to Net1; and this program now integrates AI capabilities to provide advanced analytics. Such systems are rife with major imbalances of power that are hard to define and detect, and even harder to hold to account. In South Africa, while a high court judgment was handed down against Cash Paymaster Services, the company has gone into liquidation and no actual relief appears to have been provided to the many South Africans whose lives were so gravely affected by it.

But the public indignation was important. Part of what makes it so hard to tell the story of how AI perversely affects citizens in the majority world is that there is not enough public indignation against AI when issues occur. Not enough of these stories of harm are coming to light, and without these stories in the public domain it becomes hard to detect where AI is not benefitting people and communities and to question the authority of AI and its supposedly benevolent mission in the world. When stories of the negative impact of AI across the globe do appear, it is because of a whistleblower – as in the case of Daniel Motaung, who blew the whistle on the treatment of Facebook and other Big Tech companies’ content moderators in Kenya (an issue we will delve into more in Chapter 4) – or because of fastidious investigative journalism, such as that published by Rest of World, which circulates stories about technology’s impact beyond the West. This paucity is also the reason why the stories that have arisen about the bias exhibited in the outcomes of AI systems hold such important lessons for understanding how AI is likely to impact societies across the majority world – and particularly stories of racial and gendered bias that are coming to light in droves across the western world. These stories point to a deeper underbelly of AI, an underbelly we will traverse in this book. For, like those in the West who experience discrimination in the face of AI, much of the majority world is effectively considered an anomaly within the logic of the AI system, which has been built on the western experience of the world. Nonwestern people, experiences, and language are barely represented in the datasets on which AI technologies are trained and from which AI systems interpret the world they encounter. From the perspective of this AI system, the western world as reflected in its training data is the only world: everything either conforms to it or is rejected.

From fragments of stories about AI’s use and effects beyond the West, combined with an analysis of key statistical trends and historical accounts, and supported by notes from my own experiences, accumulated over years of examining and working to address the effects of digital technologies and AI on African and majority world societies, a stark picture emerges of a deeply divided world. While AI serves to accelerate prosperity and wellbeing in those places where it is produced and readily incorporated into social and economic life, it does not manifestly improve lives anywhere else or for anyone else. Instead, as this book will argue, it deepens poverty, fractures community and social cohesion, and exacerbates divides between people and between groups. On this tangled tapestry, it becomes clear that, in the uneven distribution of AI’s benefits, those who benefit do so because others are being used and harmed to produce AI and to sustain its relevance and reliability. And those who are being exploited and oppressed in the production and use of AI are the very same people who have historically been exploited and oppressed by global powers: women, people of colour, and citizens of the majority world.

Empire’s Inequalities

All over the world we are facing rising levels of global inequality; these levels are the same as they were at the height of European colonialism, at the turn of the twentieth century. The World Inequality Report of 2022 gives us this picture:

Global inequalities seem to be about as great today as they were at the peak of Western imperialism in the early 20th century. Indeed, the share of income presently captured by the poorest half of the world’s people is about half what it was in 1820, before the great divergence between Western countries and their colonies. In other words, there is still a long way to go to undo the global economic inequalities inherited from the very unequal organization of world production between the mid-19th and mid-20th centuries.3

Inequality takes many forms. It is not a monolithic phenomenon and produces radically different experiences for different groups and individuals. Global statistics allow us to gain a bird’s-eye view of how rampant different factors of inequality are across diverse groups and regions of the world. They will, however, be only a proxy for the real-life experiences that any one individual may live through and withstand. Inequality conditions any one individual’s ability to live a life of their own choosing and to reach their full potential. But, while the experience of inequality lies at the level of the individual, social inequality and economic inequality are structural phenomena. They are produced by forms of power that exist at given points in time whereby decisions are made that favour one group while oppressing or marginalizing another.

Evidence of AI systems that have displayed critical racial and gendered biases abounds. Tendayi Achiume, who was appointed by the United Nations as a Special Rapporteur to understand contemporary forms of racism and racial discrimination, declared in her report to the UN Human Rights Council that emerging digital technologies such as AI were sharpening inequalities along racial, ethnic, and national origin lines.4 Stories like the ones included among this chapter’s vignettes – about people wrongfully detained or denied access to financial services or benefits because of AI systems whose code incorporates the racial biases of the world around us – are emerging around the world. A study published by the US National Institute of Standards and Technology, the key government body for measuring the performance of technologies against industrywide standards, demonstrated that, across the 189 AI-driven facial recognition technologies it reviewed, non-white faces were misidentified between 10 and 100 times more often than white faces.

An important body of work, led largely by women of colour, examines the relationship between AI, digital technologies, and the production of new forms of racism and exclusion. Ruha Benjamin examines the uses of race in the history of technology, deftly exploring how race is itself used as a technology for oppressive ends, to support white supremacy.5 The work of Safiya Umoja Noble, a co-founder of the Center for Critical Internet Inquiry at the University of California, exposed the deep levels of stereotyped discrimination that are at work in Google’s search algorithms.6 Writing specifically on the racialized histories of surveillance technologies, Simone Browne argues that these technologies are enacted on the bodies of people of colour, functioning to reproduce and reinscribe racialized hierarchies, categorizations, and social conditions.7 In this way, the precursors of AI sought to manage and contain black bodies, inferring intention and conduct from a mere ‘reading’ of the body’s appearance.

AI has demonstrated an equally appalling performance on gender equality. AI-driven recruitment systems have been found to downgrade the ranking of CVs that contain references to women’s colleges or women’s rights advocacy. AI-powered assessments for credit and loans have generated different outcomes for men and women with the same financial profiles. At the same time, online advertising has used algorithmic profiling to show women and people of colour lower-paid and less prestigious job adverts. In fact, the AI industry is dominated by men, with only around 12 per cent of leadership positions occupied by women. A recent survey revealed that a staggering 73 per cent of the women who work in tech have experienced gender-based bias, ranging from favouritism towards male colleagues to sexual harassment.8

These concerns have largely been raised in relation to evidence that has come to light from western contexts. Applied to the majority world, the racial and gender biases that AI has exhibited become even more acute. Facial recognition AI systems applied in countries where the majority of the population is non-white are risky, to say the least. And in many contexts across the majority world, women are even more marginalized and disempowered. Will AI help ordinary people to live better lives, or will it make their lives worse?

European colonialism produced the most profound forms of inequality between people and places that continue to structure our contemporary condition. Race and gender, the most pervasive categories of inequality, are both linked critically to European colonialism. Race was colonialism’s central creed: an invented marker of human difference and worth. And gender – particularly western binary notions of gender and traditional gender norms – was fixed by colonialism across huge areas of the world.9

To date, there has been no systematic treatment of the relationship between colonial histories and the AI divide, while the relationship between AI and global inequality goes almost completely unacknowledged.10