As we go about our day-to-day lives, digital information about who we are is gathered from all angles via biometric scans, passport applications, and, of course, social media. This data can never fully capture our complex, fluid identities over decades of our lives. Yet this data populates numerous databases we may not even be aware of, databases that can make life-or-death decisions such as who is allowed access to welfare benefits or who is granted food parcels as they cross war-torn borders. Machine Readable Me considers how and why data that is gathered about us is increasingly limiting what we can and can't do in our lives and, crucially, what the alternatives are.
Machine Readable Me
Published by 404 Ink Limited
www.404Ink.com
@404Ink
All rights reserved © Zara Rahman, 2023.
The right of Zara Rahman to be identified as the Author of this Work has been asserted by her in accordance with the Copyright, Designs and Patents Act 1988.
No part of this publication may be reproduced, distributed, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without first obtaining the written permission of the rights owner, except for the use of brief quotations in reviews.
Please note: Some references include URLs which may change or be unavailable after publication of this book. All references within endnotes were accessible and accurate as of Sept 2023 but may experience link rot from there on in.
Editing: Laura Jones-Rivera
Typesetting: Laura Jones-Rivera
Proofreading: Laura Jones-Rivera
Cover design: Luke Bird
Co-founders and publishers of 404 Ink:
Heather McDaid & Laura Jones-Rivera
Print ISBN: 978-1-912489-82-4
Ebook ISBN: 978-1-912489-83-1
Machine Readable Me
The Hidden Ways Tech Shapes our Identities
Zara Rahman
Contents
Introduction
Chapter 1: How does data try to capture who we are?
Chapter 2: The assumptions within our data
Chapter 3: Who do they think you are? Categories, classification and profiling
Chapter 4: Growing and changing – or not
Chapter 5: Data from the body: biometrics
Chapter 6: Our data-decided futures
Conclusion: Defining ourselves for ourselves
References
Acknowledgements
About the Author
Also in the Inklings series
Introduction
As we move through the world, information about who we are is gathered from all angles – our fingerprints when we cross borders, our health information when we register at the local pharmacy or doctors’ surgery, whatever we post on social media. This information is used to shape the paths available to us in life, though we often don’t even know that data is being gathered, much less what it’s being used for.
In London, UK, children are being entered into databases that affect what happens to them if they get stopped by the police in the future, based purely on where they live, who their families associate with, and the fact they are Black. In India, farmers are being denied access to their pensions, money that they depend on as a matter of life or death, because a machine doesn’t recognise their fingerprints, worn down from decades of manual labour. Iris scans taken in Niger travel to the EU faster and with fewer restrictions than the bodies of the people they belong to.
Generally speaking, participation in modern life depends on being part of these systems, but the data gathered about us is increasingly limiting what we can and can’t do, in ways that we rarely see. These systems have been built to prioritise large-scale efficiency for corporations over personal user experience, giving more weight to what machines say than to what the people themselves say in response.
Machine Readable Me looks at how and why that happens. It draws on over a decade of research into how data about who we are is gathered, stored and used by governments and international agencies, and into the often-unintentional consequences of that use. That research has taken me to over twenty different countries, where I’ve had the honour of working with activists, journalists and communities who have been negatively affected by data or technology. I’ve spoken with refugees who felt that giving their fingerprints was the only way to gain shelter for their family for the night, and with people who designed these systems, many with the best of intentions, though that is rarely apparent in operation.
My research is most interested in the margins: in people who exist outside of the mainstream, who aren’t what a computer or digital system ‘expects’, whose experiences are often ignored, or who are in a minority. I believe that studying who a digital system doesn’t work for tells us much more about it than studying who it does work for. I interrogate how power moves within a system, drawing on feminist critiques and feminist methods, motivated by a desire to change social systems writ large to be more equitable and fair.
This topic is personally motivated too, as my own identity doesn’t quite fit the usual given boxes. I grew up in the UK as a British citizen to Bangladeshi parents, and as an adult I moved to Germany, becoming a German citizen after the disaster that was Brexit. I’ve answered the question ‘Where are you from?’ more times than I’d like to count, and I’ve had my own answer contested by strangers an astonishing number of times. I now have two mixed-race children whose own identities are even more between-boxes than my own. They’re young but already part of digital systems in both the UK and Germany, and it has astonished me how many decisions my partner and I have had to make about how they should be represented, and what data should be collected about them, in the digital world. For me, data and digital systems represent what societies value, and it’s clear that something needs to change as social inequality increases and the climate crisis deepens. We all have our part to play in changing that.
Every day, we have to engage with digital systems that seek to identify and/or verify us for different reasons, known as ‘digital identification systems’. Most of the systems I talk about here are run by governments or international organisations like agencies within the United Nations. They are termed differently in different countries – the Aadhaar card in India, the Personalausweis in Germany, and more generic identification cards elsewhere. I’ll refer to ‘identification’, the act of distinguishing and/or recognising someone, which usually happens through someone else.1
Digital identification systems tend to work by gathering, generating and sorting information, which can include a huge range of items: age, nationality, location, financial status, and so much more. For a machine to process information, it is ‘translated’ into data. Computers understand such data in binary: it’s either a 1 or a 0 – there’s no grey area, no middle ground. For a computer to process data, it needs to be in discrete categories or hold discrete values and, usually, humans decide what those categories are.
For example, when we fill in a form, there’s usually a spreadsheet or database somewhere that’s populated with the answers – like when you buy a train ticket, giving your information so that your seat is registered in your name. There’s no going ‘between’ the cells or creating new categories to meet everyone’s needs. By definition, this kind of data is most useful when it categorises and classifies people, making large populations much easier for states, or those in power, to ‘understand’, and thus control.
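To make that concrete, here is a minimal sketch of how a form’s fixed categories behave once they become a database schema. The form, field names and allowed values are entirely hypothetical, invented for illustration – no real identification system is being quoted here.

```python
# A minimal sketch of the 'discrete categories' problem described above.
# The field names and allowed values are hypothetical, invented for
# illustration; they are not taken from any real identification system.

ALLOWED = {
    "gender": {"male", "female"},        # a binary that real lives overflow
    "nationality": {"UK", "DE", "BD"},   # a tiny, closed list
}

def register(submission: dict) -> dict:
    """Store a form submission, forcing every answer into a fixed box."""
    stored = {}
    for field, allowed_values in ALLOWED.items():
        value = submission.get(field)
        if value not in allowed_values:
            # There is no cell 'between' the categories: anything the
            # schema didn't anticipate is rejected or mislabelled.
            raise ValueError(f"{field}={value!r} does not fit the schema")
        stored[field] = value
    return stored

print(register({"gender": "female", "nationality": "DE"}))  # fits the boxes

try:
    register({"gender": "non-binary", "nationality": "DE"})
except ValueError as err:
    # The person, not the schema, is treated as the error.
    print("rejected:", err)
```

The point of the sketch is only this: the set of valid answers is decided in advance, so whoever writes the schema decides who fits.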
As my work has repeatedly shown me, data can never truly capture a full picture of its subject, because who we are changes depending on context, who we’re talking to, where we are, and what we need. We’re not the same people that we were ten years ago, nor will we be the same in ten years’ time. But as long as systems assume that such data can provide a holistic assessment, and as long as that data plays such a crucial role in modern life, these systems will continue to cause harm to those who don’t fit their expectations. Consider trans people whose gender identities might not fit binary expectations, or climate refugees who decide to leave their homes in search of a better life, whose movement is restricted by the very same people who caused their homes to become uninhabitable. When governments use this data to determine our lives in ways we cannot see, our ability to decide for ourselves who we want to be and what we want to do is denied. Our self-determination and our autonomy are overlooked and ultimately called into question, often through egregious violations of human rights.
Technology and data don’t exist in a vacuum – they’re designed and implemented in societies that are shaped by politics and culture, by issues that might seem far away from technology or data but which, as we’ll see, are actually deeply intertwined with them. One such issue is structural inequality: systemic disparities in how power or resources are allocated based not on an individual’s actions, but on how the very system is designed and built. These inequalities arise when the group of people deciding how resources are allocated, either intentionally or unintentionally, set the rules so that they discriminate against a certain group – as when women are paid less than men and, within that group, Black women are paid less than white women.
I’ll also talk about race, and racial identity within data systems. Race is a social construct, a way of classifying humans that is purely invented by humans, rather than anything biological or inherent.2 Were it not for how society perceives race, it wouldn’t be an issue. As Tukufu Zuberi describes in White Logic, White Methods, it is ‘the international belief in race as real that makes race real in its social consequences.’3 If we didn’t live in a society where being Black or white or brown matters, the melanin in our skin would not impact our lives. But we do. Structural racism is real, and as a result, our racial identity within and outside of data matters.
We are so much more than the data gathered about us. Who we are changes fluidly, and other people’s impressions of us should not be codified or set in stone in ways that impact the paths available to us. In many cases, the governmental digital systems that gather data about who we are have been built upon problematic policies and laws – the types that discriminate on the basis of skin colour, sexuality, sexual preference, or other aspects of who we are. They cannot be improved without purposeful attention and undoing, without intentional work to unpack whom these policies harm and why. But all too often, data-focused approaches are being used to reinforce those unfair policies and approaches, thereby making structural inequalities stronger.
It’s crucial to remember that data is flawed. A digital version of us will never capture our full, messy, fluid selves, and when states, companies or organisations try to use those digital versions as proxies for profiling, they cause harm through misrepresentation and misjudgement that has lasting, tangible implications.
Chapter 1: How does data try to capture who we are?
In 2012, I read a tweet from the United Nations High Commissioner for Refugees (UNHCR), one of the world’s biggest refugee agencies. The tweet described the reaction of a Senegalese refugee who had just received an identification card, issued by UNHCR.
‘At least I have an identity now. I exist.’4
The accompanying article went on to extol the ways in which gaining an identification card changed lives, allowing recipients to integrate into local society, gaining access to loans and local schools for their children.5 It also noted that the cards weren’t just simple identification cards, they also held the owners’ fingerprints, photo and biographical data. Why, I wondered, would a card need that much information? And what might happen if it fell into the wrong hands? Those musings, and that statement of ‘existing’ in large part due to that card, stayed with me and ended up being the spark that shaped my research over the coming years.
The way that data is organised, via categories and labels, can have a huge impact on our lives. Journalist Lena Groeger writes, ‘decisions about how to design a form have all kinds of hidden consequences’, citing many examples of how form design affected important data collection, such as data about different races gathered via the census in the United States, even influencing who people are more likely to vote for.6 Category creation and curation have always been a source of a great deal of power, long before digital technology spread that power faster and further than we’d ever imagined.
In the 1920s, Belgian colonial powers took it upon themselves to institutionalise racial categories that had not, up until that point, played a significant role in Rwandan society. Key to that was carrying out a census that ‘classified the entire population as Tutsi, Hutu, or Twa, and issued each person with a card proclaiming his or her official identity’,7 before they moved on to reforming local administration according to these new racialising categories, as Ugandan scholar Mahmood Mamdani explores in his book, When Victims Become Killers.
The Belgians, and other colonising powers before them, understood the power of drawing lines in society where there were previously none, and of adding levels of bureaucracy to make life easier for the few that were in power. Under their rule, ‘the colonial power constructed the Tutsi as nonindigenous and the Hutu as indigenous … This had a crucial social effect: neither kwihutura (the social rise of an individual Hutu to the status of a Tutsi) nor gucupira (the social fall from a Tutsi to a Hutu status) was any longer possible. For the first time in the history of the state of Rwanda, the identities “Tutsi” and “Hutu” held permanently. They were frozen.’8
Ghanaian-American philosopher and academic Kwame Anthony Appiah calls this the ‘Medusa Syndrome’, writing that ‘what the state gazes upon, it tends to turn to stone.’9 He describes this inadequate but somewhat inevitable strategy that the nation-state adopts as the only way a state has of making its people legible or, in other words, of ‘watching’ its population. But watching is not the same as seeing.
Nowadays, our identities are codified in a wide range of systems which are primarily controlled by large institutions through documents like passports and ID cards. States are one of the main players in the identification game, using data to discern those worthy of welfare payments, to charge taxes, to offer health services and pay pensions, and this is nothing new. What is
