Human Rights, Robot Wrongs - Susie Alegre - E-Book


'Eye-opening' New Statesman 'Utterly brilliant' Helena Kennedy 'Thought-provoking, challenging and very humane' Michael Wooldridge No longer an uncertain technology of the distant future, artificial intelligence is starting to shape every aspect of our daily lives, from how we think to who we love. In this urgent polemic, leading barrister Susie Alegre explores the ways in which artificial intelligence threatens our fundamental human rights - including the rights to life, liberty and fair trial; the right to private and family life; and the right to free expression - and how we protect those rights. Touching on the many profound ethical dilemmas posed by emerging technologies, and full of fascinating case studies, Human Rights, Robot Wrongs is a rallying cry for humanity in the age of AI.

Page count: 279




Susie Alegre is a leading international human rights lawyer who has worked for NGOs and international organisations around the world on some of the most challenging human rights issues of our time. She has been a legal pioneer in the field of digital rights and is a Senior Fellow at the Centre for International Governance Innovation and a Research Fellow at the University of Roehampton. Susie’s first book, Freedom to Think, received wide acclaim, was chosen as a Book of the Year in the Financial Times and the Telegraph, longlisted for the Moore Prize for Human Rights Writing and shortlisted for the RSL Christopher Bland Prize.

 

 

 

First published in paperback in Great Britain in 2024 by Atlantic Books, an imprint of Atlantic Books Ltd.

Copyright © Susie Alegre, 2024

The moral right of Susie Alegre to be identified as the author of this work has been asserted by her in accordance with the Copyright, Designs and Patents Act of 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of both the copyright owner and the above publisher of this book.

No part of this book may be used in any manner in the learning, training or development of generative artificial intelligence technologies (including but not limited to machine learning models and large language models (LLMs)), whether by data scraping, data mining or use in any way to create or form a part of data sets or in any other way.

Every effort has been made to trace or contact all copyright holders. The publishers will be pleased to make good any omissions or rectify any mistakes brought to their attention at the earliest opportunity.

10 9 8 7 6 5 4 3 2 1

A CIP catalogue record for this book is available from the British Library.

Paperback ISBN: 978 1 80546 129 6

E-book ISBN: 978 1 80546 130 2

Atlantic Books

An imprint of Atlantic Books Ltd

Ormond House

26–27 Boswell Street

London

WC1N 3JZ

www.atlantic-books.co.uk

 

For B.

CONTENTS

Introduction

1:   Being Human

2:   Killer Robots

3:   Sex Robots

4:   Care Bots

5:   Robot Justice

6:   Robot Writers and Robot Art

7:   The Gods of AI

8:   Magical Pixie Dust

9:   Staying Human

Acknowledgements

Notes and Sources

Index

INTRODUCTION

‘AI’ was Collins Dictionary’s word of the year in 2023, described as ‘the modelling of human mental functions by computer programs’.1 Of course, it’s not technically a word but an initialism, and like everything to do with AI, its actual meaning is hotly contested. But in 2023, AI was, undoubtedly, ‘a thing’. Everyone was talking about it, whether because it would save us or destroy us, make us more productive or steal our jobs, make money or be a deeply frustrating blockage to actually talking to a bank. The UK government organised an AI Safety Summit at Bletchley Park, invoking the country’s long history of expertise in the field, harking back to the codebreakers of World War II. Tech bros like Elon Musk and Sam Altman toured the world, received like heads of state. The EU, the US and the UN all tried to outdo each other with press releases on world firsts in the holy grail of AI governance: rules that would save humanity while still letting us harness innovation, if and when they ever made it into law. Meanwhile, China introduced its own suite of laws to bring the new technology to heel.

If you saw or heard anything about AI in 2023, you probably either felt swept away by the excitement of a futuristic life of leisure and pleasure, or doused in dread at the impending AI apocalypse and the prospect of an election mired in disinformation and deepfakes somewhere near you sometime soon. The media coverage and politics around the subject left little space for middle ground. Everything was overwhelming, urgent and inevitable.

The buzz around AI was driven, in part, by the launch of a new wave of generative AI products that were suddenly made available to the general public to play with for free. Text generators like ChatGPT and image generators like Midjourney became everyday words for many people who had never thought much about AI. The ability to produce doggerel mimicking a favourite poet or schmaltzy sci-fi pictures featuring an idealised girlfriend at the click of a mouse in the comfort of your own home enchanted many people. But while these new tools may have made a splash and fuelled the AI hype of 2023, AI and related technologies have been creeping into our daily lives for decades in ways we might not even realise.

If you met your partner on a dating app, there’s a fair chance a form of AI had a hand in the matchmaking. You may have swiped right, but an algorithm decided who you would see and who would see you. If you met them in 2023, it’s entirely possible that their photo was doctored by AI and your first exchanges were written by an AI, stepping in as the heartless twenty-first-century equivalent of Cyrano de Bergerac. When you opted for ‘Netflix and chill’, AI will have chosen the mood music or the movie in the background while you and your date were distracted.

If you bought this book from Amazon, an AI might well have chosen it for you based on an analysis of who you are and what you, or someone like you, might like. If you didn’t get a job interview last time you sent in an application, an AI probably decided you weren’t a good fit; and if that loan was turned down, an AI may have deemed you too risky. If you tried to find out why those decisions were taken, an AI chatbot probably helped you give up and accept your fate.

AI and emerging technologies are already embedded in our societies. We may well be on the threshold of a huge change in how those technologies work and what they mean for humanity. But we are still at the point where we can decide what we find on the other side. To reap the benefits, we need to understand the risks.

The start of 2023 was, for me, like most creatives I know, a time of existential despair. As I stared into the AI abyss that threatened to swallow human creativity whole, I seethed at the nonsensical headlines about the threat of artificial general intelligence to humanity. The problem was not the future AI apocalypse described by the so-called ‘godfathers of AI’ and their doom-monger friends; it was the complete lack of awareness in the immediate and thoughtless adoption of AI over art and humanity.

I was already bothered by the push to replace lawyers and judges with algorithmic probability machines, not because of what that might mean for the future of my profession, but because it would turn justice into a mechanical roll of the dice. The assault on human creativity was something even more profound and gut-wrenchingly awful. Like many others around the world, I sank into a depression from which no AI therapist could have dragged me.

A combination of things contributed to my existential angst. There was the fear that the already threadbare scope for paid creative work would be completely wiped out, artists’ economic rights becoming no more than a ‘hallucination’ in an AI-generated historical novel. And the dawning realisation that so many people don’t understand or care about human creativity left me floored.

But creativity, like campaigning, is visceral, skilled and directed. Ultimately it was anger that finally dragged me out of my funk, and hope that let me focus on the fight-back. I wrote this book hoping that it might connect with real people and help change things in the real world for the future.

This is a book about the impact that AI and emerging technologies will, and do already, have on humanity if we allow them to. It won’t take you very far under the hood to show you the nuts and bolts of what makes AI work, but it will show you what can happen when it goes wrong, what we can do to prevent that, and how to put things right. It is a human slant on AI and emerging technologies, not a technical one, and it is rooted in the universal language of human rights.

You may be a technologist worrying about the unintended consequences of your life’s work; a creator staring into the existential abyss; a politician trying to respond to the never-ending news cycle on AI without looking stupid or launching World War 3; a lawyer crafting new legal arguments out of old legal threads; a student thinking about your future; or, perhaps most likely, an interested member of the public wondering what it all means for you. This book will help you navigate a world filled with clichéd images of robot hands and humanoid faces with computer-chip brains without losing sight of your own humanity. It is a book about human rights, so I will start with a definition of those. The robots will come after.

Human rights

Human rights emerged in their modern form partly from the revolutionary ideas of the eighteenth-century Enlightenment in Europe and North America. These were perhaps best encapsulated in the French revolutionary slogan ‘Liberty, Equality, Fraternity’. They were part of a new drive for individual freedom, social justice and democracy that would overturn the unfairness in the status quo.

Most of the human rights laws we have today stem from the Universal Declaration of Human Rights (UDHR), an international document agreed by the United Nations General Assembly in 1948 after the atrocities of World War II. I have used the UDHR as a reference point throughout the book, not because it is particularly useful in a courtroom (it is a declaration, not a treaty, though parts of it have become customary law), but because it achieved almost complete support from countries all around the world. The UDHR is the blueprint for human rights as they have developed in laws over the past 75 years, and the rights it contains continue to be as relevant today as they were for those who negotiated and signed it in the 1940s. It is the essence of what we need to be human.

Global recognition of human rights as the basis for the full enjoyment of humanity everywhere emerged as a response to the horrors of war and the Holocaust that had ravaged the world in the twentieth century. The UDHR is over 75 years old, but it remains a benchmark. With negotiators coming from different cultural, philosophical, religious and ideological backgrounds from all around the globe, the final text reflects the compromise needed for every human being to see their fundamental rights and freedoms reflected in the declaration. A strictly secular document, it allows religions and spirituality to flourish in a society grounded in science, diversity and pluralism. And it reflects both the individual rights and freedoms emerging from the European Enlightenment and revered by the American brand of capitalistic freedom, and the collective economic, social and cultural rights that were important to socialist and communist countries at the time. In an age when colonialism was still in full swing and women’s place was firmly in the home, it offered a peaceful and optimistic path to freedom for the whole human family.

The UDHR is a declaration on the rights that all human beings are born with. It is a holistic list touching many aspects of our humanity, including the rights covered in this book, such as the right to life, the right to dignity and the right to private and family life; the rights to a fair trial, to liberty and to equality; the right to freedom of thought, conscience, religion and belief, freedom of expression and the moral and economic rights of creators; the right to benefit from scientific discoveries; the right to work in decent conditions, to unionise and to rest, to name a few. The enjoyment of those rights for everyone, throughout our lives, everywhere in the world, is still a work in progress, one we cannot afford to forget as our existence becomes increasingly intertwined with technology. It is a template for a humane world born out of a collective revulsion at the horrors that humanity was capable of.

International treaties like the International Covenant on Civil and Political Rights (ICCPR, 1966) and the International Covenant on Economic, Social and Cultural Rights (ICESCR, 1966) established the rights declared in the UDHR as legally enforceable, and there have been many other human rights treaties at the UN level that have built on those foundations to protect the rights of particular groups of people, such as the UN Convention on the Rights of the Child (UNCRC, 1989) and the UN Convention on the Rights of Persons with Disabilities (UNCRPD, 2006). Other treaties have elaborated the necessary frameworks to make specific rights real, like the UN Convention Against Torture (UNCAT, 1984). And the UN Guiding Principles on Business and Human Rights (2011) make it clear that while states have to guarantee our rights, businesses have a duty to respect them too.

At the regional level, the European Convention on Human Rights (ECHR, 1950), the European Social Charter (ESC, 1961) and the EU Charter of Fundamental Rights (CFR, 2000) set out a European understanding of the rights that has evolved since the 1950s, with extensive case law, including the right to protection of personal data emerging from the particular threats to privacy in a data-driven world. And in other regions, the African Charter on Human and Peoples’ Rights (ACHPR, 1981) and the American Convention on Human Rights (ACHR, 1969) have their own frameworks that allow them to develop the rights in ways that suit their contexts.

In national laws you may find them in specific legislation, such as the UK’s Human Rights Act of 1998, in constitutions, laws on discrete issues like discrimination or data protection, and in the common law. Importantly, many of our laws, including criminal law, family law and tort law, provide protections for these rights without explicitly naming them.

They may not be universally respected, but they are legally enforceable rights laid down in international, regional and national laws around the world. The question is how we can use them and enforce our laws to face the technological challenges of the twenty-first century. Understanding what your rights are is the first step to claiming them.

Robots and AI

Finding a comprehensive set of definitions for AI, robots and emerging technologies is difficult. In 2019, 40 per cent of startups in Europe that described themselves as ‘AI’ businesses did not use AI in any meaningful way at all.2 Defining themselves as an AI startup no doubt helped them raise money, no matter what it meant in practice. The International Association of Privacy Professionals produced a guide to international definitions of artificial intelligence that shows just how hard it is to define; definitions become outdated as quickly as technology is developed.3 John McCarthy, the computer scientist who first used the term ‘artificial intelligence’ back in 1955, defined it as ‘the science and engineering of making intelligent machines’. Whether or not a machine is intelligent and what intelligence is are hotly contested topics in ethics.4 But this book is only concerned with ethics insofar as they coincide with human rights. Ethical frameworks are not necessarily compliant with human rights.

There are, however, a few terms used throughout the book that require at least a brief explanation:

•   artificial general intelligence (AGI) refers to a hypothetical intelligent machine that has achieved a level of cognitive performance across a broad or even unlimited range of tasks that usually require human intelligence

•   generative AI is capable of generating text, images or other media using models developed from training data

•   machine learning (ML) is the use and development of computer systems that are able to learn and adapt without following explicit instructions, by using algorithms to create statistical models that capture and draw inferences from patterns in data

•   neural networks are computer systems inspired by the way the human brain works

•   language models (LMs) are computer systems that use statistical or probabilistic techniques to determine the likelihood of a series of words forming a sentence; they are trained on existing text and some, such as GPT, are next-word predictors

•   large language models (LLMs) are complex systems using language modelling trained on massive amounts of data
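The idea of a ‘next-word predictor’ can feel abstract, but it is simple to illustrate. The toy sketch below (my own illustration, not drawn from any real system) builds a statistical language model by counting which word follows which in a tiny corpus, then predicting the most likely next word. Real LLMs do this with neural networks trained on vast data sets, but the underlying task is the same.

```python
from collections import Counter, defaultdict

# A tiny training corpus (here, words from the opening of the UDHR).
corpus = (
    "all human beings are born free and equal in dignity and rights "
    "all human beings are endowed with reason and conscience"
).split()

# Build bigram counts: for each word, tally the words that follow it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("human"))  # -> beings
print(predict_next("born"))   # -> free
```

Scaled up by many orders of magnitude, with neural networks in place of simple counts, this next-word guessing game is what lets a system like GPT produce fluent text without understanding a word of it.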

Ultimately, though, what matters is not the machine itself; it is the impact it has on real people. That engages the responsibility of people who design, use, sell and profit from technology, whatever form it takes, whether they are scientists or salesmen. And it is the duty of governments to respect, protect and promote our human rights, no matter where the threat comes from.

Scientists and rights

The UDHR includes the right to benefit from scientific knowledge, but the relationship between science and technology and human society is complicated. Science allows human society to develop in ways that maximise our health, happiness and human rights. Engineering that managed human waste and access to water, for instance, aided the expansion of cities where people could come together, exchange ideas and innovate like never before. Printing press technology boosted the spread of knowledge globally beyond imagination and freed the world from the information stranglehold of religious institutions. Inventions like the washing machine have allowed some women to lift themselves out of household drudgery for long enough to look around and dream. Technology can support freedom and equality. But technological and scientific innovation has a dark side.

In the immediate aftermath of World War II, international military tribunals were established by the Allies in Nuremberg and Tokyo to bring accountability for crimes of aggression, war crimes and other crimes against humanity by trying the main protagonists from Nazi Germany and Japan. While the trials revealed the lethal risks associated with unchecked extremist political movements backed by governmental and military power, the role of technology in reinforcing that power was also apparent. Albert Speer, Hitler’s Minister for Armaments, in his testimony to the Nuremberg tribunal, recognised his own culpability but also issued a warning: ‘Today the danger of being terrorized by technocracy threatens every country in the world. In modern dictatorship this appears to me inevitable. Therefore, the more technical the world becomes, the more necessary is the promotion of individual freedom and the individual’s awareness of himself as a counterbalance.’5

Further trials in Nuremberg highlighted the role of the doctors, scientists, technologists, jurists and businessmen who had enabled Nazi atrocities.6 In the aftermath, the world came together in the realisation that what was needed, in addition to accountability for what had happened, was a framework to prevent anything so awful ever taking place again. That framework was the UDHR.

What it means to be human has become increasingly intertwined with the development of technology in the decades since the UDHR was agreed. The UN’s First International Human Rights Conference, led by Jamaica and held in Tehran in 1968, already reflected many of the challenges and opportunities that are widely discussed today, such as privacy and access to information.7 These are not technological issues with technical fixes; they are societal problems that demand a rethink about the value we put on our humanity and the financial, structural and political resources we need to protect it.

The question I ask in this book is not: what is AI and how do we constrain it? The question is: what is humanity and what do we need to do to protect it? When you change the perspective, from the technology to the people it affects, the solutions become clearer and less overwhelming. We already have the building blocks in human rights law, including civil and political rights as well as economic, social and cultural rights; now we must work out how to use them in a new technological landscape. This book flags some of the flashpoints so that we can understand the rights we have as human beings and start to use them effectively to protect our future.

1

BEING HUMAN

In his 1972 novel The Stepford Wives, Ira Levin created a dystopian world in which a town full of men, led by Diz, a former Disneyland roboticist, replaced their wives with robots. It was a tale situated within a brewing backlash against the women’s liberation movement of the 1960s, but it built upon a cultural phenomenon dating back millennia – the fantasy of replacing women with automata. In ancient Cypriot mythology, King Pygmalion was so repulsed by real women he decided to create a perfect female sculpture, Galatea, to love instead. The goddess Aphrodite helpfully breathed life into the marble so that the king and his sculpture could start a family and live happily ever after. The Stepford Wives is a modern retelling of the myth, and the 2004 film version places it firmly in the world in which we live today, with Mike, a former Microsoft executive, taking the lead,1 and ‘smart houses’ and a robo-puppy completing the perfect suburban picture created by the robot wives. It is a toxic cocktail of idealised womanhood, misogyny and automation. And it is a phenomenon that has crossed over from myth and fiction into the reality of tech innovation that we live with every day. Described by researchers from Radboud University, Nijmegen, as ‘pygmalion displacement’ – a process of humanising AI that dehumanises women in particular2 – once you start to look at technology through the ‘Pygmalion lens’, you will see it is all around you. Just ask Alexa.

Citizen Sophia

Sophia is a Saudi citizen with a global outlook. Her stilted movements and rubbery, picture-perfect airbrushed face make her look like a cross between Star Wars’ C-3PO and a Girl’s World styling head (without the hair). She is a humanoid ‘social robot’ developed in Hong Kong by Hanson Robotics, a company founded by David Hanson, a roboticist who formerly worked as an ‘imagineer’ for Disney (really), and her facial features have the kind of idealised femininity you might find in a Disney princess. Apparently Sophia is modelled on Hanson’s wife – who is reported to be happy about that – with a dash of Audrey Hepburn and a hint of Nefertiti for timeless diversity.3 In an interview with Stylist magazine, she revealed that Hanson is her greatest love, so I guess the investment paid off.4

The back of her head is left transparent to reveal her inner workings and remind us that she is not quite human. But as she shows off her ability to mimic facial expressions, she looks more like a drunken teenager desperately trying to look sober than the height of technological intelligence. YouTube is full of excruciating videos of Sophia interacting with entranced journalists, diplomats and technologists on the world stage, but the level of wit expressed in her robotic tones would put a passable ventriloquist’s dummy to shame. As Yann LeCun, head of AI at Meta, put it in a scathing Twitter post, ‘This is to AI as prestidigitation is to real magic.’5 But not everyone sees it that way.

A cover girl for Elle magazine Brazil,6 Sophia is more than just a pretty face. She was granted citizenship in Saudi Arabia in 2017,7 amid much publicity about becoming the first robot in the world to be given legal personhood. But her status in Saudi raises more fundamental questions about the state of human rights in the world today than it does about the legal position of robots.

Friendly and feminist like a Saudi Barbie, Sophia is a contentious poster girl for technological progress. In November 2017, she became the United Nations Development Programme’s Innovation Champion for Asia and the Pacific, the first nonhuman to be accorded a position in the UN, and an opportunity for the UN to bask in the glow of her publicity.8 Her nomination was not affected by the fact that at one of her first public appearances, she said she would destroy all humans.9 It seems that robots, unlike humans, do not get cancelled for expressing obnoxious or dangerous thoughts; they get promoted.

Sophia’s Saudi citizenship coincided with some leaps forward in rights for women in the kingdom. In May 2017, King Salman ordered that women should be allowed to access government services like education and healthcare without consent from a man.10 And by 2018, women were even allowed to drive themselves to their own appointments.11 In 2023, Saudi nationality law was changed so that women married to foreign men could pass citizenship on to their children, just six years after such citizenship was granted to a gynoid robot. Despite these steps forward, there are still very serious issues with women’s rights in Saudi, particularly in the laws around male guardianship.12 Although it’s unclear whether the restrictions on women’s personhood would apply to a robot, Sophia will not feel frustrated – or indeed feel anything at all – about limitations on her autonomy; she (or it) is, after all, just a machine with a humanoid mask. Her creators understood, unlike Frankenstein with his doomed hideous creature, that people respond much better to a pretty face.

Sophia and her global marketing tour don’t need dignity or equality to succeed; they just need media attention and funding. She might talk a good talk about women’s rights, but despite the relentless headline-grabbing claims of AI snake-oil salesmen, she does not have feelings. She may be able to simulate human emotion, but she is still just an inanimate machine, no matter what qualities we might like to ascribe to her.

Sophia is not the only publicly feted gynoid robot on tour. In 2022, Ms Tan Yu became the first ‘robot CEO’,13 taking the helm of a Chinese gaming company and pushing its share price up 10 per cent.14 And in 2023, Polish drinks company Dictador claimed to have appointed Mika as the first AI-driven CEO of a global company.15 Having a ‘female’ robot CEO can apparently boost diversity at board level without the risk of maternity leave, menopausal rage or cellulite. It is telling that the first automated CEOs are gynoid, not android. Mika and Tan Yu are the Stepford Wives of the boardroom in an era gripped by acronyms like DEI and ESG.16

Fake women are being deployed to address the gaping holes in women’s representation in many industries. Hope Sogni, an AI-generated avatar of a synthetic black woman, put forward as a hypothetical female candidate for FIFA president, was designed to shine a light on the problem of misogyny and the lack of diverse representation in football.17 Anna Boyko was billed as a staff engineer at Coinbase, a platform for buying and selling cryptocurrency, in her speaker profile for the tech conference Devternity. But the conference was cancelled due to the backlash when it was revealed that she was just an autogenerated profile to add a veneer of diversity to a world of ‘manels’ (all male panels).18 Shudu, described as the first AI supermodel, presents as a black South African woman, and has featured in Vogue and been named one of the most influential people on the internet by Time magazine. Except, of course, she is not a person. Shudu is making money, but that money is going to her white male creators, not the diverse communities she appears to represent.19 Barbie would be depressed.

Sophia and her CEO pals may look and sound a bit human when they are wheeled out onto the world’s stage, but they are not. They are just things. In the words of the (real-life) authors of the Pygmalion displacement paper, ‘women, unlike fictional robots who uncover their own “humanity” or sentience … already know we are human and we already experience sentience. We also already know our own position. We will rebel, and we will not be stopped.’20

The imitation game

In 1950, the computer scientist Alan Turing created what he called the imitation game – now known as the Turing test – a way of assessing a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human through its use of language. The launch of ChatGPT, a generative AI tool for content creation, in late 2022, along with advances in deep-fake technology, marked a massive leap forward in the scale and quality of machines that might appear human. But Turing’s test is founded on our perception of a machine as human, not on the actual humanity of the machine. It is, essentially, about mimicry; the human power to create something that could fool other humans. People, it seems, are very good at that. If we want to survive, we need to learn how not to be fooled.

The Turing test is not, like Frankenstein’s fateful project, about re-creating human life and human experience; rather it is about measuring imitation, but the consequences for humanity of designing technology that can pass as human could be just as serious. The advent of easily available generative AI is a turning point in public perception. The technology that imitates our humanity has spilled out of the confines of laboratories into the pockets of anyone with access to a smartphone. There is no time to wonder what might happen; it is happening now, all around us.

Turing was at the vanguard of intelligent machine development, and also acutely human. Prosecuted in the UK for homosexual acts in 1952, he narrowly avoided prison for his sexual orientation by agreeing to a form of hormone therapy known as ‘chemical castration’. When he was found dead by cyanide poisoning in 1954, the cause was deemed to be suicide. Almost sixty years after his tragic death, he was given a posthumous pardon by the Queen, and a law passed in 2017 that retroactively pardoned men convicted or cautioned for homosexual acts under historical legislation is known informally as the Alan Turing Law.21 Turing’s fame stems both from his part in the history of AI, and from his legacy on the path to righting historical human rights abuses. He may have been fascinated by the idea of machines ‘passing’ as humans, but he also understood all too well the trauma of a human life deprived of dignity and rights.

The hype around AI, cybernetics and emerging technology is founded on the idea of creating machines in our image that will walk like us, talk like us and replace us. We are told that one day they will be our doctors, nurses, teachers, soldiers, lawyers, judges, friends, lovers, writers, artists, bosses. Sam Altman, the CEO of OpenAI, the company behind ChatGPT, has talked about AI replacing ‘median humans’, a dehumanising term that reduces us all to a statistical data point with no inherent value.22 We are the median humans he is talking about, and the only job we will be left with is to service the technology so that it doesn’t turn on us.

The purveyors of these machines talk about their future world domination as if it is out of their hands, not their responsibility. It is inevitable, the way of the world. But we must not forget that AI is artificial, even if it is not intelligent. It is designed, developed and deployed by people with economic and political interests in the ways it affects our society. And it is already affecting the rights of people who interact with it all over the world. To protect our human rights in the age of AI, we need to know what they are and how we can use them to ring-fence our humanity.

Being human

So what does it mean to be human? What sets us apart from a good imitation? And what is it that makes our rights deserving of special protection?

The UDHR starts with a declaration of the essence of being human: ‘All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.’

Alan Turing experienced the devastation of being stripped of dignity and rights because of his sexual orientation. But the way AI is being developed to mimic human beings in response to his challenge risks setting aside ideas of dignity, conscience and brotherhood in ways that will affect us all. As the AI ethicist Abeba Birhane has pointed out: ‘To conceive of AI as “human-like machines” implicitly means to first perceive human beings in machinic terms: complicated biological information processing machines, “meat robots”, shaped by evolution. Once we see ourselves as machines, it becomes intuitive to see machines as “like us”.’23 By looking for humanity in the machines, we risk losing sight of our own humanity, and with it, our rights.

Blake Lemoine, a Google computer engineer, was put on extended leave in June 2022 when he claimed that LaMDA (short for Language Model for Dialogue Applications), an AI chatbot* he had been working on, was displaying sentience similar to that of a human child. ‘If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,’ he told the Washington Post.24 Google said that his suspension was because of breach of confidentiality. Lemoine tweeted in response that ‘Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.’25