


ETHICS OF SOCIALLY DISRUPTIVE TECHNOLOGIES

Ethics of Socially Disruptive Technologies

An Introduction

Edited by Ibo van de Poel, Lily Frank, Julia Hermann, Jeroen Hopster, Dominic Lenzi, Sven Nyholm, Behnam Taebi, and Elena Ziliotti

https://www.openbookpublishers.com/

©2023 Ibo van de Poel, Lily Frank, Julia Hermann, Jeroen Hopster, Dominic Lenzi, Sven Nyholm, Behnam Taebi, and Elena Ziliotti (eds). Copyright of individual chapters is maintained by the chapters’ authors.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0). This license allows you to share, copy, distribute and transmit the text, and to adapt the text for non-commercial purposes, provided attribution is made to the authors (but not in any way that suggests that they endorse you or your use of the work). Attribution should include the following information:

Ibo van de Poel, Lily Frank, Julia Hermann, Jeroen Hopster, Dominic Lenzi, Sven Nyholm, Behnam Taebi, and Elena Ziliotti (eds), Ethics of Socially Disruptive Technologies: An Introduction. Cambridge, UK: Open Book Publishers, 2023, https://doi.org/10.11647/OBP.0366

In order to access detailed and updated information on the license, please visit https://doi.org/10.11647/OBP.0366#copyright

Further details about the CC BY-NC license are available at http://creativecommons.org/licenses/by-nc/4.0/

All external links were active at the time of publication unless otherwise stated and have been archived via the Internet Archive Wayback Machine at https://archive.org/web

Any digital material and resources associated with this volume may be available at https://doi.org/10.11647/OBP.0366#resources

ISBN Paperback: 978-1-80511-016-3

ISBN Hardback: 978-1-80511-017-0

ISBN Digital (PDF): 978-1-80511-057-6

ISBN Digital ebook (EPUB): 978-1-78374-789-4

ISBN Digital ebook (XML): 978-1-80511-050-7

ISBN Digital ebook (HTML): 978-1-80064-987-3

DOI: 10.11647/OBP.0366

Cover image: Blue Bright Lights, Pixabay, 6th April 2017, https://www.pexels.com/photo/blue-bright-lights-373543/

Cover design: Jeevanjot Kaur Nagpal

Contents

List of abbreviations

Contributor Biographies

Acknowledgements

Foreword

1. Introduction

Lead author: Ibo van de Poel. Contributing authors: Jeroen Hopster, Guido Löhr, Elena Ziliotti, Stefan Buijsman, Philip Brey

2. Social Media and Democracy

Lead author: Elena Ziliotti. Contributing authors: Patricia D. Reyes Benavides, Arthur Gwagwa, Matthew J. Dennis

3. Social Robots and Society

Lead author: Sven Nyholm. Contributing authors: Cindy Friedman, Michael T. Dale, Anna Puzio, Dina Babushkina, Guido Löhr, Arthur Gwagwa, Bart A. Kamphorst, Giulia Perugia, Wijnand IJsselsteijn

4. Climate Engineering and the Future of Justice

Lead authors: Behnam Taebi, Dominic Lenzi. Contributing authors: Lorina Buhr, Kristy Claassen, Alessio Gerola, Ben Hofbauer, Elisa Paiusco, Julia Rijssenbeek

5. Ectogestative Technology and the Beginning of Life

Lead authors: Lily Eva Frank, Julia Hermann. Contributing authors: Llona Kavege, Anna Puzio

6. Conceptual Disruption and the Ethics of Technology

Lead author: Jeroen Hopster. Contributing authors: Philip Brey, Michael Klenk, Guido Löhr, Samuela Marchiori, Björn Lundgren, Kevin Scharp

Glossary

Index

List of abbreviations

AI: Artificial Intelligence

AIBO: Artificial Intelligence Robot; in Japanese aibō means “pal” or “partner”

BECCS: Bioenergy with Carbon Capture and Storage

CCS: Carbon Capture and Storage

CDR: Carbon Dioxide Removal

ChatGPT: Chat Generative Pre-trained Transformer

CRISPR-cas9: CRISPR-associated protein 9, where CRISPR stands for Clustered Regularly Interspaced Short Palindromic Repeats

DACCS: Direct Air Capture with Carbon Storage

ESDiT: Ethics of Socially Disruptive Technologies (research program)

EW: Enhanced Weathering

GBAM: Ground-based Albedo Modification

GHG: Greenhouse Gases

IAU: International Astronomical Union

IPBES: Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services

IPCC: Intergovernmental Panel on Climate Change

IVF: In Vitro Fertilization

LaMDA: Language Model for Dialogue Applications

LGBTQ+: Lesbian, Gay, Bisexual, Transgender, Queer and many other terms (such as non-binary and pansexual)

MCB: Marine Cloud Brightening

NGO: Non-Governmental Organization

NH: New Hampshire

OF: Ocean Fertilization

SAI: Stratospheric Aerosol Injection

SRM: Solar Radiation Management

STEM: Science, Technology, Engineering and Mathematics

UEFI: Unified Extensible Firmware Interface

VSD: Value Sensitive Design

WEIRD: Western, Educated, Industrialized, Rich, and Democratic

Contributor Biographies

Dina Babushkina is an Assistant Professor in philosophy of technology and society at the University of Twente. She researches the ways AI (and social robotics) affect, change, and disrupt interpersonal relationships, personhood and human lived experiences, with special attention to human cognitive practices and decision making. ORCID: 0000-0003-4899-8319

Philip Brey is a Professor in philosophy and ethics of technology at the University of Twente and leader of the ESDiT programme. His research is in general ethics of technology, in which he investigates new approaches for ethical assessment, guidance and design of emerging technologies, and in ethics of digital technologies, with a focus on AI, robotics, internet, virtual reality and the metaverse. ORCID: 0000-0002-4789-4588

Lorina Buhr is a Postdoctoral Researcher at Utrecht University. Her research examines conceptual, ontological and normative aspects of finitude and irreversibility in nature, using the examples of extinction and technologies for de-extinction. ORCID: 0000-0002-0718-6026

Stefan Buijsman is an Assistant Professor at Delft University of Technology and works on explainable AI and related epistemic challenges to responsible AI. ORCID: 0000-0002-0004-0681

Kristy Claassen is a PhD candidate at the University of Twente. Her research focuses on intercultural philosophy, Ubuntu and artificial intelligence. ORCID: 0000-0001-5162-2529

Michael T. Dale is an Assistant Professor of philosophy at Hampden-Sydney College. He is currently interested in exploring to what extent empirical findings can have implications for the ethics of artificial intelligence and social robotics. ORCID: 0000-0001-7827-5248

Matthew J. Dennis is an Assistant Professor in ethics of technology at Eindhoven University of Technology. His research investigates how technology can be designed to promote autonomy, fairness, and well-being. ORCID: 0000-0002-4212-6862

Lily Eva Frank is an Assistant Professor of philosophy and ethics at Eindhoven University of Technology where she works on technologies of the body and ways in which they can be ethically and socially disruptive. ORCID: 0000-0001-8659-2390

Cindy Friedman is a PhD candidate at the Ethics Institute, Utrecht University. Her research focuses on the ethics of social robots, with a particular focus on humanoid robots, and the ethics of human-robot interaction. ORCID: 0000-0002-4901-9680

Alessio Gerola is a PhD candidate in the Philosophy Group of Wageningen University. He explores the philosophical and ethical impacts of biomimetic design, the intentional imitation of nature for technological innovation. ORCID: 0000-0003-4417-9367

Arthur Gwagwa is a PhD candidate at the Ethics Institute at Utrecht University. His research focuses on anti-domination approaches in new frontier technological and data relationships between the Global North and China and the Global South. ORCID: 0000-0001-9287-3025

Julia Hermann is an Assistant Professor of philosophy and ethics at the University of Twente where she works on ectogestative technology, care robots, technomoral change and progress, and new methodologies in the ethics of technology. ORCID: 0000-0001-9990-4736

Ben Hofbauer is a PhD candidate at Delft University of Technology, in the faculty for Technology, Policy & Management. His work focuses on the ethical implications of the research on, and potential deployment of solar climate engineering technologies. ORCID: 0000-0003-4839-5315

Jeroen Hopster is an Assistant Professor of ethics at Utrecht University. His research centers on climate ethics and on investigating the nature of socially disruptive technologies. ORCID: 0000-0001-9239-3048

Wijnand IJsselsteijn is a Full Professor of cognition and affect in human-technology interaction at Eindhoven University of Technology (TU/e), scientific director of the Interdisciplinary Center for Humans and Technology at TU/e, scientific board member of the Eindhoven AI Systems Institute (EAISI), and part-time professor at the Jheronimus Academy of Data Science (JADS). He researches the impact of media technology on human psychology and the use of psychology to improve technology design. ORCID: 0000-0001-6856-9269

Bart A. Kamphorst is a Postdoctoral Researcher at Wageningen University & Research. He works on philosophical, ethical, and societal questions related to AI-driven behavior change technologies, particularly in the field of health. ORCID: 0000-0002-7209-2210

Llona Kavege is a Fulbright research fellow in the Netherlands based at Delft University of Technology and the University of Twente where she investigates the moral and social dimensions of partial-ectogestation. ORCID: 0009-0000-6074-3912

Michael Klenk is an Assistant Professor of ethics and philosophy of technology at Delft University of Technology. He works on the intersection of metaethics, epistemology, and moral psychology, and most recently on the topic of (online) manipulation. ORCID: 0000-0002-1483-0799

Dominic Lenzi is an Assistant Professor in environmental ethics at the University of Twente. His research focuses on ethics and political philosophy in the Anthropocene, including topics related to climate ethics, planetary boundaries and natural resource justice, and environmental values and valuation. ORCID: 0000-0003-4388-4427

Guido Löhr is an Assistant Professor of logic and AI at Vrije Universiteit Amsterdam. They work on various topics in philosophy of language, social ontology, and philosophy of technology with a focus on concepts. ORCID: 0000-0002-7028-3515

Björn Lundgren is a Postdoctoral Researcher at Utrecht University. He is working on methods of ethics of technology. ORCID: 0000-0001-5830-3432

Samuela Marchiori is a PhD candidate in conceptual engineering in the philosophy of technology at Delft University of Technology. She is developing methods to address and overcome the disruption of moral concepts in relation to socially disruptive technologies. ORCID: 0000-0002-6426-7690

Sven Nyholm is a Professor of the ethics of artificial intelligence at the Ludwig Maximilian University of Munich. His research explores how new developments in artificial intelligence and robotics are related to traditional topics within moral philosophy, such as moral responsibility, well-being and meaning in life, and our human self-understanding. ORCID: 0000-0002-3836-5932

Elisa Paiusco is a PhD candidate in philosophy at the University of Twente, where she investigates the social and ethical implications of carbon dioxide removal. Her work focuses on climate change and intergenerational justice. ORCID: 0009-0008-2369-294X

Giulia Perugia is an Assistant Professor at the Human-Technology Interaction Group of Eindhoven University of Technology. As a social scientist, her research lies at the intersection of social robotics, social psychology, and ethical and inclusive HRI. ORCID: 0000-0003-1248-0526

Ibo van de Poel is a Professor in ethics of technology at Delft University of Technology. His research focuses on values, technology and design and how values, and related concepts that address ethical issues in technology (can) change over time. ORCID: 0000-0002-9553-5651

Anna Puzio is a Postdoctoral Researcher of philosophy and ethics at the University of Twente where she works on the anthropology and ethics of technology, transhumanism, new materialism, robotics, reproductive technologies, diversity in AI, and environmental ethics. ORCID: 0000-0002-8339-6244

Patricia D. Reyes Benavides is a PhD candidate in philosophy of technology at the University of Twente. Her research delves into the technopolitics of the global climate movement, in particular the appropriation of internet platforms by climate activists. ORCID: 0009-0008-6867-864X

Julia Rijssenbeek is a PhD candidate in philosophy of technology at Wageningen University & Research. She investigates the philosophy and ethics of synthetic biology, focusing on the conceptual and normative shifts in thinking about biological matter and lifeforms that the field creates and their contribution to a bio-based future. ORCID: 0000-0001-7377-2667

Kevin Scharp is a Postdoctoral Researcher at the University of Twente. He has published widely on the topic of conceptual engineering. ORCID: 0000-0003-3900-4087

Behnam Taebi is a Professor of energy and climate ethics at Delft University of Technology. He is the co-editor-in-chief of Science and Engineering Ethics, co-editor of The Ethics of Nuclear Energy (Cambridge University Press, 2015), and the author of Ethics and Engineering: An Introduction (Cambridge University Press, 2021). ORCID: 0000-0002-2244-2083

Elena Ziliotti is an Assistant Professor of ethics and political philosophy at Delft University of Technology. Her research focuses on Western democratic theory and comparative democratic theory, with a particular focus on contemporary Confucian political theory. ORCID: 0000-0002-8929-9728

Acknowledgements

This publication is part of the research program Ethics of Socially Disruptive Technologies (ESDiT), which is funded through the Gravitation program of the Dutch Ministry of Education, Culture, and Science and the Netherlands Organization for Scientific Research (NWO grant number 024.004.031).

Although this book has numerous fellows of the ESDiT consortium as authors, it has also been made possible by ESDiT fellows who did not participate as authors. In particular, we want to thank Ingrid Robeyns, who contributed to the conception of this book and commented extensively on a near-final version; Peter-Paul Verbeek, who contributed to the conception of this book; and Sabine Roeser and Joel Anderson, who commented on earlier drafts.

We also thank Elisabeth Pitts of Open Book Publishers for proofreading and corrections, and Freek van der Weij for correcting references and text formatting.

The cartoons in this book have been drawn by Menah Wellen (www.menah.nl). Fig. 5.2 has been drawn by Ilse Oosterlaken.

Foreword

Technologies shape who we are, how we organize our societies, and how we relate to (other parts of) nature. Changes in technologies, and how they are implemented, can be profoundly unsettling. Social media is transforming conceptions of democratic politics; artificial intelligence challenges ideas about what is unique to humans; the possibility of creating artificial wombs may transform notions of motherhood and birth; and proposals for using climate engineering to address global warming may well reconfigure our responsibility to future generations and our relation to nature.

This book investigates how four technologies (social media, social robots, artificial wombs and climate engineering) can be socially and conceptually disruptive, and what new issues these raise, theoretically as well as practically. It discusses different modalities of conceptual disruption and possible responses, such as conceptual engineering (the deliberate revision of concepts for certain purposes). It argues that socially disruptive technologies raise new questions and may require new approaches and methods in philosophy.

This volume is the result of an intensely collaborative effort by members of the ESDiT (Ethics of Socially Disruptive Technologies) consortium, a large multi-year research program that is led by five universities in the Netherlands (University of Twente, Delft University of Technology, Eindhoven University of Technology, Utrecht University, and Wageningen University).

The ESDiT consortium aims to reassess, revise, and develop approaches in ethics and related philosophical subfields to deal with social and ethical challenges brought about by socially disruptive technologies (SDTs), such as artificial intelligence, robotics, synthetic biology, and climate technology. This book contributes to some of the key objectives of the ESDiT program: it proposes an understanding of conceptual disruption and discusses the disruptive effects of some key twenty-first-century technologies. It also argues that dealing adequately with socially disruptive technologies may require developing new approaches for ethical assessment and guidance.

Although this book has many authors, the authors have worked from a focused set of shared themes and have employed agreed-upon definitions of key terms such as social and conceptual disruption. Chapters 2–5, which each discuss the disruptive potential of a specific technology (social media, social robots, artificial wombs, and climate engineering), have a consistent structure, and address the same questions: What are (potential) impacts and social disruptions brought about by this technology? How is this technology conceptually disruptive? What new questions and issues does this technology raise theoretically as well as practically?

We would like to thank all the members of the ESDiT consortium for making this book possible. This includes the authors of the various chapters, but also all the other fellows who have contributed to the research program as well as to the cooperative intellectual spirit in which a book like this became a real possibility. A special thanks goes to the lead authors who coordinated and edited the contributions to their respective chapters.

The Management Board of ESDiT, current and former members: Joel Anderson, Vincent Blok, Philip Brey, Julia Hermann, Sven Nyholm, Ingrid Robeyns, Sabine Roeser, Andreas Spahn, Ibo van de Poel, Peter-Paul Verbeek, Marcel Verweij, and Wijnand IJsselsteijn

1. Introduction

© 2023 Ibo van de Poel et al., CC BY-NC 4.0 https://doi.org/10.11647/OBP.0366.01

Lead author: Ibo van de Poel. Contributing authors: Jeroen Hopster, Guido Löhr, Elena Ziliotti, Stefan Buijsman, Philip Brey

Technologies have all kinds of impacts on the environment, on human behavior, on our society and on what we believe and value. But some technologies are not just impactful, they are also socially disruptive: they challenge existing institutions, social practices, beliefs and conceptual categories. Here we are particularly interested in technologies that disrupt existing concepts, for example because they lead to profound uncertainty about how to classify matters. Is a humanoid robot — which looks and even acts like a human — to be classified as a person or is it just an inert machine? Conceptual disruption occurs when the meaning of concepts is challenged, and such challenges may potentially lead to a revision of concepts. We illustrate how technologies can be conceptually disruptive through a range of examples, and we argue for an intercultural outlook in studying these socially disruptive technologies and conceptual disruption. Such an outlook is needed to avoid a Western bias in labeling technologies socially or conceptually disruptive, as this outlook takes inspiration from a broad range of philosophical traditions.

Fig. 1.1 Conceptual disruption. Credit: Menah Wellen

1.1 Introduction

When the birth control pill was introduced in the 1960s, society changed (Diczfalusy, 2000; Van der Burg, 2003; Swierstra, 2013). Women could suddenly delay pregnancy or decide not to have children at all, whereas earlier methods such as Aristotle’s cedar oil or ancient Egypt’s crocodile dung never really offered women a choice. With the pill there was a choice, and sex became increasingly divorced from reproduction. As a result, family sizes changed. The introduction of the pill also had larger social ramifications, alongside other social factors. It became feasible to invest long periods of time in studying, without having to worry about children that needed to be cared for. The proportion of women studying subjects such as law and medicine rose dramatically shortly after the pill became available to unmarried women (Bernstein and Jones, 2019). Marriage practices changed as well now that prolonged dating was feasible. Everyone, including those not on the pill, married later. In short, a single invention changed not just our reproduction, but also wider aspects of society such as gender equality and sexuality. Technology has always had these profound implications for human beings and society and will continue to have them.

What’s more, technologies don’t just alter the way we behave. They can also change the way in which we think by challenging concepts and ways of dividing up the world that we had taken for granted. Consider an example that has been discussed in many recent works of ethics of technology: the notion of ‘brain death’, which emerged in response to the invention of the mechanical ventilator halfway through the twentieth century (Baker, 2013; Nickel et al., 2022). As a result of this technology, situations could emerge where a person could retain a capacity to breathe and have a beating heart, yet lack any kind of responsiveness. These patients displayed features considered paradigmatic of being dead (a lack of behavioral capacities; a lack of brain activity), but also some features considered typical of being alive (a heartbeat) (Belkin, 2003). A medical committee discussed the implications of this new state and the medical norms that should be followed, including the ethics of organ transplantation (should this patient be treated as being dead or alive?). In the course of these discussions they considered various options for how these patients should be conceptualized, including redefining the concept of ‘death’, and assessed the ethical ramifications of various conceptual strategies. They ended up proposing the new notion of ‘brain death’ — a concept that emerged directly as a consequence of the new situation created by the mechanical ventilator.

Still other technologies challenge what is considered ‘natural’. With the advent of geoengineering, also called climate engineering (the set of technologies that tries to solve some of the issues brought about by climate change through deliberate intervention in the Earth’s climate system), it is becoming less clear what ‘nature’ really is. If we can change the composition of the atmosphere and dim the light of the sun through technology, then where does the natural begin and the artificial end? Some have suggested that in the twentieth century we have been witnessing ‘the end of nature’ (McKibben, 1990). While such a claim may rest on a too simplistic notion of ‘nature’, and a too dualistic distinction between ‘natural’ and ‘artificial’, it nevertheless signals that something fundamental is changing in the relation between humans and the living environment (Preston, 2012). When our actions change the environment so drastically, questions arise: should we allow the ‘natural’ course of things, where species go extinct and changing temperatures wreak havoc? Or should we adopt a notion of ‘nature’ in which we can control and steer it? Again, the advent of technology and the far-reaching implications of the new capabilities present some tough issues, both in terms of how we ought to apply the technologies we have and in terms of how we ought to think about entities such as nature, death, reproduction, and so on.

This new situation is the main concern of this book. How can we investigate and conceptualize the socially disruptive implications of new technologies? And how can we expand the ethical concepts, frameworks and theories that we use to assess these implications, and guide the development, implementation and use of these technologies?

We will discuss these issues in six chapters. This first, introductory chapter will introduce the notions of socially disruptive technologies and of conceptual disruption, and discuss them against the background of the philosophy and ethics of technology as they have developed so far. Chapters 2, 3, 4, and 5 will discuss four socially disruptive technologies, i.e., social media, social robots, climate engineering and new reproductive technologies, following a similar structure. Each of these chapters will start by analyzing the ways in which these technologies are socially disruptive: what are their implications for human beings, nature, and societies, and how can we investigate these impacts? We will then investigate the conceptual disruption that these technologies bring by focusing on the ways in which technologies challenge our understanding of humanity, nature, and society.

Furthermore, we will examine the disruption of ethical or normative concepts: which normative concepts are at stake, and to what extent do they need to be revised or expanded? Finally, these chapters will investigate the further implications of these social and conceptual disruptions. The final chapter of this book will draw some conclusions by explicitly addressing the theme of conceptual disruption and the need for conceptual engineering and conceptual change. What kinds of conceptual disruption can be envisaged? How can these disruptions be addressed? And what do they imply for ethical theory and for philosophy at large?

1.2 Impacts of technology and social disruption

We have discussed how a wide range of technologies can have a huge impact, both on how people behave and on how people think. Disciplines such as Ethics of Technology, Technology Assessment and Science and Technology Studies have long conceptualized this in terms of impacts. This might suggest that there is a one-directional and deterministic relation between the emergence of new technology and all kinds of social and environmental impacts. But, as empirical studies have shown, this relation is often more complex and haphazard (Bijker et al., 1987; Smith and Marx, 1994). For example, blockchain is often portrayed as an energy-intensive but privacy-preserving technology, but its actual impact depends on the purposes it is used for, and on the way it is designed. It might be used for tax evasion (through electronic currencies like Bitcoin) but it can also help farmers in Africa with land registration (Mintah et al., 2020), and its energy use is highly dependent on how exactly it is designed (Sedlmeir et al., 2020). As this example illustrates, there are many choices that humans and societies make, or at least can make, on the path from the conception of new technological possibilities to actual impacts. In fact, one of the main tenets of current ethics of technology is that we should move ethical reflection upstream in this process, to the early phases of technological research and development, to avoid or mitigate moral problems upfront.
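To see why the design point matters, consider a minimal back-of-envelope sketch. This is our illustration, not an example from the book or from Sedlmeir et al. (2020), and every number in it is an assumption chosen only to show orders of magnitude: in a proof-of-work design, total power draw is set by mining incentives rather than by transaction volume, so per-transaction energy can differ enormously between designs.

```python
# Back-of-envelope sketch (illustrative assumptions only, not measured data):
# how per-transaction energy depends on a blockchain's consensus design.

def energy_per_transaction_kwh(network_power_mw: float, tx_per_second: float) -> float:
    """Energy attributable to one transaction, in kWh.

    Total power draw is divided by throughput; in proof-of-work designs the
    power draw tracks mining incentives, not the number of transactions.
    """
    kw = network_power_mw * 1000            # MW -> kW
    kwh_per_second = kw / 3600              # energy drawn per second, in kWh
    return kwh_per_second / tx_per_second

# Two hypothetical designs with assumed figures:
pow_cost = energy_per_transaction_kwh(network_power_mw=10_000, tx_per_second=7)
pos_cost = energy_per_transaction_kwh(network_power_mw=2, tx_per_second=1_000)

print(f"proof-of-work:  ~{pow_cost:,.0f} kWh per transaction")   # ~397 kWh
print(f"proof-of-stake: ~{pos_cost:.6f} kWh per transaction")    # ~0.000556 kWh
```

On these assumed figures the two designs differ by roughly six orders of magnitude, which is the sense in which energy use is a design choice rather than a fixed property of ‘blockchain’.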

Despite the best efforts of ethicists and developers, we still feel an increasing impact of technology on our daily lives and societies. Sometimes for the better (as with the pill), and sometimes for the worse (as with social media), but often at a large scale that makes it worth calling these impacts social disruptions. What do we mean by ‘social disruption’? The Cambridge dictionary defines the verb ‘to disrupt’ as ‘[t]o prevent something, especially a system, process, or event, from continuing as usual or as expected’. Similarly, the Merriam-Webster dictionary defines it as ‘[t]o break apart; to throw into disorder; to interrupt the normal course or unity of; to cause upheaval in … ’. Expanding on these definitions, social disruptions may be understood as changes that prevent important aspects of human society (broadly understood) from continuing without change, thereby generating disorder or upheaval. In the wake of a social disruption, business as usual can no longer proceed: a rupture occurs that instigates substantial social, institutional, existential, or ethical challenges.

Disruptions involve both a ‘disruptor’, i.e. whatever it is that instigates the disruption, as well as an object of disruption. The disruptor may be a single technology, but typically, it is better understood by considering the wider context of sociotechnical systems, in which emerging technologies play a distinctive role. Warfare and pandemics can be seen as disruptors to human societies of the recent past, and emerging technologies have in turn disrupted how we acted during war and pandemics. Think of the unmanned drones used at the battlefront in Ukraine and the social media campaigns instigated to win public sympathy or to discredit fake news during wartime. Or consider the contact tracing apps and mass vaccination programs that were instigated to curb the COVID-19 pandemic that disrupted human societies globally. As these examples suggest, technologies often exert their transformative potential as part of larger systems.

To be clear, we don’t mean disruption here in the economic sense. Scholars of disruptive innovation (especially Christensen, 2013) have advanced the idea that new technologies can disrupt markets, creating new kinds of products or services that make older companies obsolete. That certainly has an impact, but the disruptions we are concerned with here are more fundamental. Technologies can also affect strongly held values and beliefs, core concepts, theories, norms, institutions and human capabilities. These deep disruptions (Hopster, 2021) merit study at least as much as the economic ones. Disruptions may occur in various domains, three of which centrally figure in this book: the domains of the individual human, society, and nature.

The domains of human, of society, and of nature are not neatly delineated. Nonetheless, their distinction provides a useful starting point for thinking of the different levels and contexts in which technology may exert disruptive effects.

The human domain pertains to questions of human nature and human existence, as well as human capabilities, sensory experiences, and human self-understanding, all of which may be affected by technology. Some scholars speculate that in the future, artificial womb technology may serve to decouple pregnancy from the (female) body (Enriquez, 2021). Obviously, such decoupling would also have major repercussions for human society.

The domain of society pertains to the quality of social life at a larger scale, including the cultural, institutional, and political practices that weave human social life together. An important concern at this level is that of differential disruption (Nickel et al., 2022): different groups may not be similarly affected by technological changes. For example, the use of artificial intelligence by commercial banks to make decisions about who receives a loan or mortgage may affect already underprivileged groups more than the average citizen, because this technology may have a discriminatory bias (Garcia et al., 2023).
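How such differential impact can be detected is itself a small technical question. The following sketch is our own illustration, not the method of Garcia et al. (2023), and the decision records in it are invented; it shows the simplest kind of audit, comparing approval rates across demographic groups (a ‘demographic parity’ check):

```python
# Minimal fairness-audit sketch (invented data, for illustration only):
# compare loan-approval rates across demographic groups.

decisions = [
    # (demographic_group, loan_approved)
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group: str) -> float:
    """Fraction of applicants in `group` whose loans were approved."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity difference: the gap between group approval rates.
gap = approval_rate("A") - approval_rate("B")
print(f"approval rate, group A: {approval_rate('A'):.2f}")  # 0.75
print(f"approval rate, group B: {approval_rate('B'):.2f}")  # 0.25
print(f"demographic parity gap: {gap:.2f}")                 # 0.50
```

Such a statistic only flags a disparity; whether the disparity is unjust, and what should be done about it, remain normative questions of the kind this book addresses.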

The domain of nature, in turn, extends to technological disruptions in the non-human realm, which affect other animals and the natural environment. Powerful new genetic technologies employing the CRISPR-cas9 gene-editing technique, as well as the perils of global warming and the resulting technologies that are contemplated and developed to stabilize the earth’s climate, make disruptions in the natural domain a main topic of philosophical and ethical concern.

Deep disruptions challenge established natural boundaries, entrenched social categories, stable social and normative equilibria, as well as our conceptual schemes. They often engender deep uncertainty and ambiguity, as they make us lose our normative, theoretical, and conceptual bearings. Accordingly, deep disruptions call for reflection and reorientation. They require us not only to engage with new philosophical and ethical issues but also to rethink the very concepts and theories we use to think about these issues.

1.3 Conceptual disruption

This brings us to a core theme of this book: conceptual disruption. Concepts are the basic constituents of thought and theorizing. We use words and concepts to give expression to moral and social values, human capabilities, virtues and vices, as well as several other phenomena and features we deem morally relevant. At first sight, it seems that important concepts — agency, freedom, life, vulnerability, well-being, to name just a few (see Fig. 1.2 for a more extensive list) — are rather stable: philosophers may quibble about their precise meaning and application, but in outline their contents seem clear and fixed. But under closer scrutiny, this does not appear to be the case. Ethical concepts are frequently up for debate, and subject to uncertainty, as well as change. Some have even suggested that normative concepts are fundamentally contested (e.g., Gallie, 1955). We claim that technological development often plays a notable role in disrupting fundamental concepts — a role that has only recently been appreciated, but will be given pride of place in this book.

What is conceptual disruption? We take it to be a challenge to the meaning of a concept, which may prompt its future revision. Just as with other disruptions, it means that business as usual cannot continue. Our thinking has to change. Often this means that because of the disruption we are no longer certain how to apply a concept. We face classificatory uncertainty (Löhr, 2022), in the same way that doctors were not sure whether people with a heartbeat but without brain activity were still alive.

Fig. 1.2 Concepts in three domains that are studied in the ESDiT research program (Picture redrawn and adapted from original research proposal)

When technologies are conceptually disruptive, this may be an invitation to rethink the very concepts we use to comprehend and ethically judge these technologies. The conclusion of such reflection need not be a new concept or even a revision of existing concepts. It is equally possible that we have good reasons to retain an existing concept or to make it more precise.

Conceptual disruptions can come in different types (Hopster and Löhr, 2023). First, technological change may yield gaps in our conceptual repertoire. Such a conceptual gap occurs if a new technology yields artifacts, actions, relations, etc., on which we do not have an adequate conceptual grasp. In other words, existing concepts do not provide the needed descriptive or action-guiding tools; therefore, their revision or the introduction of new concepts is needed. Consider humanoid artificial agents like social robots and voice assistants that can evoke affective reactions (Nyholm, 2020; Lee et al., 2021; see also Chapter 3). People can feel upset when a robot is kicked or when a voice assistant is abused. Yet are such responses appropriate? They would be if the social robot and voice assistant were considered to be a ‘person’ or ‘agent’: after all, if a person is harmed, this calls for an empathetic response. Yet concepts like ‘personhood’ or ‘agency’ have traditionally been reserved for humans, and it is not yet established whether they can be extended to humanoid artificial agents, which may lack other relevant features of ‘agency’ and ‘personhood’, such as ‘intentionality’ or ‘free will’.

One solution would be to extend attributions of ‘personhood’ and ‘agency’ to humanoid robots. But doing so also requires us to rethink what these concepts mean, and what their application conditions are, given the distinct characteristics of new digital technologies. Consider that at the same time, people have called for the responsible design of voice assistants: the fact that they often have female voices and continue to respond patiently and politely to harassment and insults could result in misogyny, and is therefore considered an undesirable design feature (Kudina, 2021; Nass and Brave, 2005; West et al., 2019). Thus humanoid robots simultaneously give rise to two rather different responses: an affective response, and an urge to design them as responsible and assertive agents. How should we deal with such entities, in descriptive and normative terms? Arguably, here we are confronted with a conceptual gap: we seem to lack a concept for entities that both evoke an affective response and that we should design in a responsible way. For example, persons should be treated with empathy (and dignity), but it would seem improper to think of them as the objects of responsible design. Therefore, in order to account for the new roles of artificial agents, we need to recalibrate our concepts of ‘personhood’ and ‘agency’.

Secondly, technological change may also give rise to conceptual overlaps. A conceptual overlap emerges when there is more than one concept that describes a new type of artifact, action or event. This might be unproblematic if two non-conflicting concepts apply, but in some cases conflicting concepts may seem to apply to one and the same artifact, action, or event. In turn, this may prompt us to decide which concept to apply. As an example, consider the traditional distinction between natural and artificial, and nature and artifact. Particularly in Western conceptions of nature, there is a tendency to imagine the part of the world untouched by human hands as natural, and to picture human-made objects as artificial. Both concepts have various normative connotations. What is ‘natural’ is considered healthy, but also wild and dangerous, and what is ‘artificial’ might be less healthy, but is also safer and more regulated, and falls under the responsibility of human beings. However, very few things in the world are either fully natural or fully artificial, and the world is becoming more hybrid by the day. For example, few forests in the world are old-growth forests; most are restored or newly planted forests that have been heavily influenced by human activity. Many animals and plants are the results of selective breeding. Recent developments in genetic engineering and synthetic biology make some organisms even more the subject of human design. Human-made artifacts, in turn, tend to make use of organic materials and natural resources, with or without further processing. With the advent of geoengineering, even the climate may be partially brought under human control (see Chapter 4). Here, we seem to be confronted with a conceptual overlap because some entities — such as genetically modified tomatoes — are both ‘natural’ and ‘artificial’. Perhaps the nature-artifact distinction is no longer useful, and the Western conception of the world might benefit from a new conceptual framework that does not fall into this simple dualism but instead is able to assess the world in a more nuanced way.

Thirdly, technological change that generates conceptual change may give rise to conceptual misalignments, i.e., situations where certain concepts are no longer aligned with our values and other concepts. Consider the concept of responsibility. Recent technologies, such as semi-autonomous weapons (drones) and self-driving cars, have raised questions about responsibility, and particularly about the relation between control and responsibility. Traditionally, control is seen as a precondition for responsibility: without control, there is no responsibility (Sand, 2021). However, drones and self-driving cars are semi-autonomous and make their own ‘decisions’ independently of human operators; humans thus lack control and seemingly cannot be held responsible. At the same time, these systems lack the reflective capacities and awareness of their actions that we usually consider necessary for being responsible. Does this mean that we face a responsibility gap, where nobody is responsible for an action (Matthias, 2004)?

Consider military drones. The people ordering or overseeing a drone attack may lack control over it if the drone is programmed to autonomously decide what and when to attack. Suppose the drone mistakenly attacks a civilian target, confusing it with a military target. Who is responsible for this mistake? Might we hold the commander responsible, or perhaps the designers of these systems? Maybe, but a broader issue seems at stake.

What we are witnessing here might well be a case of conceptual misalignment: the way we tend to think about responsibility (and control) in these cases might no longer align with certain values and moral convictions, such as the conviction that we should avoid responsibility gaps because their occurrence is undesirable. There are several ways we might resolve such misalignments. We may, for example, give up the moral conviction that responsibility gaps are always bad (Danaher, 2023). Another is to propose a new notion of control, so-called ‘meaningful human control’, which would ensure that autonomous systems remain under human control, so that humans remain responsible for them (Santoni de Sio and Van den Hoven, 2018). The latter might be seen as a form of concept revision (of ‘control’) in response to conceptual disruption (Veluwenkamp et al., 2022).

Or consider the concept of democracy. Democratic practices, such as elections, are increasingly influenced, if not undermined, by the use of social media technologies (see Chapter 2). But technologies like climate engineering may also raise questions about democracy. Such technologies may be extremely risky, not only for human beings but also for other living beings, and for entire ecosystems. How can we represent non-humans in democratic decision-making? Do they have moral rights, just like humans do? And how should we represent beings who are not alive yet, but who might experience the impact of climate engineering technology in the future? Upholding democratic decision-making might require us to expand our concept of the ‘demos’ that should be given power, and our notion of the democratic rights and duties that belong to the ‘demos’. This might again be seen as a case of conceptual misalignment: it seems that the traditional notion of ‘demos’ may no longer align with the values we want to attain with ‘democratic decision-making’ and ‘democratic representation’. Here, intercultural ethics might play a role in rethinking the concept of democratic representation. Ubuntu ethics, for instance, makes it possible to include ancestors and future generations in the moral community (Behrens, 2012; Pellegrini-Masini et al., 2020), while Maori ethics offers a basis to conceptualize the rights of ecosystems (Patterson, 1998; Watene, 2016).

As these examples demonstrate, technology has major potential to yield conceptual disruptions of various sorts. Technological change yields new entities, practices, and relations, which in turn call for the introduction of new concepts, or for rethinking and refining our current ones. Technological change may leave us with conceptual gaps, overlaps, and misalignments. In the face of these challenges, it is not enough to analyze the meaning of our concepts. Instead, we have to engage in normative and ethical reflection about the concepts we use to think about a rapidly changing world. These are the questions that conceptual disruption prompts and which we will address in the next chapters.

These conceptual changes resulting from technological change are often accompanied by shifts in values. The way we fundamentally think about the world is closely bound to what we find important in the world. So, when we change our concepts, this can have profound moral and social implications. Our value system is challenged, and this may result in profound changes in the way we evaluate the world and act on it (van de Poel and Kudina, 2022). For example, in the last century, we have witnessed the emergence of new moral concepts such as ‘intergenerational justice’ and ‘planetary justice’ (Hickey and Robeyns, 2020). Such concepts express new values and moral convictions, or at least values and moral convictions that have become much more prominent than in previous ages.

These new values and concepts, which express new responsibilities and obligations towards nature and future generations, may be seen as a response to the disruptive effect of certain technologies on the natural environment. However, while technology is a powerful instigator of conceptual disruption, it is not the only one. Concepts and conceptual schemes can also be challenged by other mechanisms. One such mechanism is intercultural dialogue. Conceptual disruption may occur through the interaction of communities that rely on somewhat different values, or on different ontologies. These prompt a rethinking of dominant concepts, and possibly a future revision of these very concepts. This is one of the reasons that underpins our emphasis, throughout this book, on the importance of intercultural philosophy in the ethics of socially disruptive technologies.

1.4 Intercultural outlook

What constitutes a social or conceptual disruption depends on the status quo, i.e., something is a disruption relative to a certain society, or certain practices, or a certain conceptual framework. However, too often philosophers (of technology) have tacitly assumed their own society and their own conceptual framework as the point of departure when talking about disruption. This issue has become more pressing than ever as more and more voices call for decolonizing and deparochializing the field of philosophy (Van Norden, 2017; Pérez-Muñoz, 2021; Williams, 2020). We therefore have to ensure that, when reshaping the way we think about the world in response to conceptual disruptions, we don’t fall into the same trap of looking only at our own conceptual frameworks.

For decades, normative concepts and frameworks of thought derived from European historical experiences have dominated the international debate on philosophy and ethics of technology. This has led many students and scholars to assume that ‘Western philosophy’ is the definition of ‘philosophy’ and that Western normative paradigms apply universally to most human beings. However, for many, this modus operandi has become intolerable. Centering ethical and political discussions solely on issues affecting Western societies amplifies Eurocentrism. Furthermore, assuming that Western normative paradigms apply universally to the vast majority of human beings perpetuates coloniality — the epistemic repression intrinsic to colonial ideology (Wiredu, 1996; Mignolo, 2007; Quijano, 1992). The momentum of contemporary decolonizing and deparochializing movements suggests that today’s pressing question is not ‘whether’ philosophical debates must be pluralized, but ‘how’ to achieve this.

Although disruption is, in the following chapters, often discussed from a more Western perspective, we also pay attention to intercultural perspectives. An intercultural perspective helps to prevent Eurocentric biases and to understand more fully the ethical implications of technological disruptions. To the extent that the social consequences of the technologies discussed in this book affect the ways of life and social practices of inhabitants of both the Global North and the Global South, an intercultural approach is key to assessing this novel phenomenon appropriately.

There are two complementary strategies to pursue interculturality. One strategy uses experiences from a culture different from one’s own to understand the magnitude of technology’s social disruptions. This strategy contributes to decentering academic debates and helps uncover conceptual disruptions that would otherwise be harder or impossible to identify. For example, in Chapter 2, the analysis of social media in African societies is key to grasping the conceptual disruption that social media causes to the democratic idea of the public sphere. The dramatic situation of many African communities, where public political debates unfold on foreign-owned digital infrastructure under very weak national institutional checks, raises the question of whether the concept of the public sphere is misaligned with the concept of demos. Viewed from this perspective, social media’s disruption is broader than if it were viewed from a purely Western perspective.

The second strategy is to ask whether technology-driven social changes disrupt non-Western concepts and conceptual frameworks, in other ways than simply affecting Western philosophical discourse. This strategy contributes to the reorientation of the academic debates by increasing the relevance of non-Western philosophical concepts in contemporary philosophical theorizing and showing that Western conceptual frameworks are one among many possible alternatives. Thus, if the first strategy aims to change the terminology of the philosophical debate, the second strategy uses non-Western concepts to change the terms of the philosophical debate. For example, an Ubuntu perspective exposes new implications of social robots in Chapter 3. It reveals that if social robots crowd out human relations, this can impact our moral character and personhood, as these terms are understood within Ubuntu philosophy. In Ubuntu philosophy, interdependent relations are essential for personal cultivation. Thus, such a goal is harder to reach if robots crowd out human relations because humans cannot develop interdependent relationships with robots. Centering these terms as important, then, exposes the magnitude of this disruption.

Similarly, Buddhist traditions may help to overcome the inability of traditional Western ethical perspectives to articulate forcefully the full scope of some social disruptions, such as the character of our attention, which is transformed by new technologies (Bombaerts et al., 2023). In turn, this raises fundamental questions about how ethical practices of attention are related to self-control and willpower: the very idea of exercising control over one’s thoughts is a fundamental moral issue within Buddhism, and this can inspire conceptual innovation in values such as responsibility and autonomy, as they relate to how we attend to others, ourselves, and the world.

By pursuing these two methodological strategies, we do not claim that this book presents an ‘objective’ understanding of technologies’ conceptual disruption. Nor do we believe that the book is immune to Eurocentrism. However, these two strategies can be a step forward in developing a more respectful and effective methodological basis for dealing with technology-driven conceptual disruption.

1.5 Expanding the research agenda of ethics of technology

The drive for more intercultural perspectives in the debate is part of a broader aim for the book, and the underlying research program ESDiT. We want to expand the research agenda of philosophy and ethics of technology. The point we want to drive home is that ethics of technology in the twenty-first century requires a conceptual turn by explicitly addressing social and conceptual disruption through technology, as well as attention to the question of when it is appropriate to revise concepts and how this should be done.

In philosophy, such questions about conceptual change have recently been addressed under the headings of ‘conceptual engineering’ and ‘conceptual ethics’. We will discuss these approaches in more detail in Chapter 6. For now, the important point is that the advocated expansion of the research agenda of ethics of technology also implies closer collaboration between philosophy and ethics of technology and other subdisciplines of philosophy, like conceptual engineering, which explicitly thematizes how to adapt or ameliorate concepts. It also implies closer collaboration with philosophical disciplines that have traditionally developed and analyzed (core) concepts in the domains of nature, the human condition and society, such as philosophical anthropology, environmental ethics and political philosophy. In the past, these other subdisciplines of philosophy have often only paid scant attention to technology.

Ethics of technology has a long and fruitful tradition of collaborating with STEM disciplines, where STEM stands for science, technology, engineering and mathematics. Particularly since the 1980s, ethics of technology has developed from an emphasis on critique to an emphasis on more constructive, proactive and applied approaches. Oftentimes it is aimed not just at criticizing technology or putting a brake on technological developments, but rather at improving technological development by proactively addressing ethical issues and values in close collaboration with engineers, technology developers and policy makers.

Expanding the research agenda of ethics of technology also requires, we submit, new methods and approaches, for example for the ethical assessment of new technologies (Brey, 2012) or for addressing ethical issues and values through design (Friedman and Hendry, 2019; Van den Hoven et al., 2015). It may also have implications for other important themes in the ethics of technology such as the acceptability and management of technological risks (Roeser et al., 2012), moral responsibility (of engineers and others), social control and regulation of technology (Collingridge, 1980), the mediation of human perception and behavior through technology (Verbeek, 2005), and how to deal with (technological) uncertainty, to name just a few.

During the past few decades there has been increased attention to ethical issues brought about by specific technologies, which has led to the establishment of new fields of ethical inquiry. We now not only have computer ethics and bioethics, but also nanoethics, robot ethics, energy ethics, climate ethics, neuro-ethics, AI ethics, digital ethics, and so forth. While there is added value in specialized ethical inquiries into specific technologies, there is also a danger that larger themes go unnoticed and do not receive the theoretical treatment they deserve. This book therefore delves into the details of specific technologies in the following chapters, but it does so in order to bring to the fore, and to better understand, a general phenomenon: the potential socially and conceptually disruptive character of new technological developments, and the new conceptual, theoretical and normative questions this raises.

Here we should not forget the dynamic interaction between technology, society and morality (Van de Poel, 2020). On the one hand, technologies reflect social choices and values, and therefore can be deliberately designed for certain positive moral values or to address ethical issues. On the other hand, technologies will not only raise new, sometimes unpredictable, ethical issues, but will also affect how people act and think, and what they consider desirable and undesirable. Mediation theory has argued that technology may change our perceptions and actions (Verbeek, 2005). For example, an ultrasound scan of the fetus during pregnancy will affect people’s perceptions of the unborn child, as well as their actions and choices. Others have pointed out that technology may induce technomoral change, i.e., a change in moral values, norms or routines that is triggered by technological advancements (Swierstra, 2013). This book takes the dynamic relation between technology, society and morality a step further by not just paying attention to the socially disruptive character of technology, but also by focusing on how technology may disrupt the very concepts by which we philosophically and ethically reflect on technology.

Further listening and watching

Readers who would like to learn more about the topics discussed in this chapter may be interested in the following episode of the ESDiT podcast (https://anchor.fm/esdit) and video:

Jeroen Hopster on ‘The nature of socially disruptive technologies’:

https://podcasters.spotify.com/pod/show/esdit/episodes/Jeroen-Hopster-on-The-Nature-of-Socially-Disruptive-Technologies-e19g3d8/a-a6pto8m

Olya Kudina on ‘Voice assistants’: https://youtu.be/ve6qJGt1_kk

References

Baker, Robert. 2013. Before Bioethics: A History of American Medical Ethics from the Colonial Period to the Bioethics Revolution (New York: Oxford University Press)

Behrens, Kevin Gary. 2012. ‘Moral obligations towards future generations in African thought’, Journal of Global Ethics, 8: 179–91, https://doi.org/10.1080/17449626.2012.705786

Belkin, Gary S. 2003. ‘Brain death and the historical understanding of bioethics’, Journal of the History of Medicine and Allied Sciences, 58: 325–61, https://doi.org/10.1093/jhmas/jrg003

Bernstein, Anna, and Kelly Jones. 2019. ‘The economic effects of contraceptive access: A review of the evidence’, Institute for Women’s Policy Research (IWPR) Report #B381, https://iwpr.org/iwpr-issues/reproductive-health/the-economic-effects-of-contraceptive-access-a-review-of-the-evidence/

Bijker, Wiebe, Thomas P. Hughes, and Trevor Pinch (eds). 1987. The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology (Cambridge: MIT Press)

Bombaerts, Gunter, Joel Anderson, Matthew Dennis, Alessio Gerola, Lily Frank, Tom Hannes, Jeroen Hopster, Lavinia Marin, and Andreas Spahn. 2023. ‘Attention as practice’, Global Philosophy, 33: 25, https://doi.org/10.1007/s10516-023-09680-4

Brey, Philip. 2012. ‘Anticipatory ethics for emerging technologies’, Nanoethics, 6: 1–13, https://doi.org/10.1007/s11569-012-0141-7

Christensen, Clayton M. 2013. The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail (Boston: Harvard Business Review Press)

Collingridge, David. 1980. The Social Control of Technology (London: Frances Pinter)

Danaher, John. 2023. ‘The case for outsourcing morality to AI’, Wired, https://www.wired.com/story/philosophy-artificial-intelligence-responsibility-gap/

Diczfalusy, Egon. 2000. ‘The contraceptive revolution’, Contraception, 61: 3–7

Enriquez, Juan. 2021. Right/Wrong: How Technology Transforms our Ethics (Cambridge: MIT Press)

Friedman, Batya, and David Hendry. 2019. Value Sensitive Design: Shaping Technology with Moral Imagination (Cambridge: MIT Press)

Gallie, W. B. 1955. ‘Essentially contested concepts’, Proceedings of the Aristotelian Society, 56: 167–98

Garcia, Ana Cristina Bicharra, Marcio Gomes Pinto Garcia, and Roberto Rigobon. 2023. ‘Algorithmic discrimination in the credit domain: What do we know about it?’, AI & Society, https://doi.org/10.1007/s00146-023-01676-3

Hickey, Colin, and Ingrid Robeyns. 2020. ‘Planetary justice: What can we learn from ethics and political philosophy?’, Earth System Governance, 6: 100045, https://doi.org/10.1016/j.esg.2020.100045

Hopster, Jeroen. 2021. ‘What are socially disruptive technologies?’, Technology in Society, 67: 101750, https://doi.org/10.1016/j.techsoc.2021.101750

Hopster, Jeroen, and Guido Löhr. 2023. ‘Conceptual engineering and philosophy of technology: Amelioration or adaptation?’, Unpublished manuscript

IPBES. 2022. ‘Summary for policymakers of the methodological assessment of the diverse values and valuation of nature of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES)’, IPBES Secretariat, https://doi.org/10.5281/zenodo.6522392

Kudina, Olya. 2021. ‘“Alexa, who am I?”: Voice assistants and hermeneutic lemniscate as the technologically mediated sense-making’, Human Studies, https://doi.org/10.1007/s10746-021-09572-9

Lee, Minha, Peter Ruijten, Lily Frank, Yvonne de Kort, and Wijnand IJsselsteijn. 2021. ‘People may punish, but not blame robots’, in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Article 715. Yokohama, Japan: Association for Computing Machinery, https://doi.org/10.1145/3411764.3445284

Löhr, Guido. 2022. ‘Linguistic interventions and the ethics of conceptual disruption’, Ethical Theory and Moral Practice, 25: 835–49, https://doi.org/10.1007/s10677-022-10321-9

Matthias, Andreas. 2004. ‘The responsibility gap: Ascribing responsibility for the actions of learning automata’, Ethics and Information Technology, 6: 175–83, https://doi.org/10.1007/s10676-004-3422-1

McKibben, Bill. 1990. The End of Nature (New York: Anchor Books)

Mignolo, Walter. 2007. ‘Delinking: The rhetoric of modernity, the logic of coloniality and the grammar of de-coloniality’, Cultural Studies, 21(2–3): 449–514, https://doi.org/10.1080/09502380601162647

Mintah, Kwabena, Kingsley Tetteh Baako, Godwin Kavaarpuo, and Gideon Kwame Otchere. 2020. ‘Skin lands in Ghana and application of blockchain technology for acquisition and title registration’, Journal of Property, Planning and Environmental Law, 12: 147–69, https://doi.org/10.1108/JPPEL-12-2019-0062

Nass, Clifford Ivar, and Scott Brave. 2005. Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship (Cambridge: MIT Press)

Nickel, Philip, Olya Kudina, and Ibo van de Poel. 2022. ‘Moral uncertainty in technomoral change: Bridging the explanatory gap’, Perspectives on Science, 30: 260–83, https://doi.org/10.1162/posc_a_00414

Nyholm, Sven. 2020. Humans and Robots: Ethics, Agency, and Anthropomorphism (London: Rowman & Littlefield)

Patterson, John. 1998. ‘Respecting nature: A Maori perspective’, Worldviews: Global Religions, Culture, and Ecology, 2: 69–78

Pellegrini-Masini, Giuseppe, Fausto Corvino, and Lars Löfquist. 2020. ‘Energy justice and intergenerational ethics: Theoretical perspectives and institutional designs’, in Energy Justice Across Borders, ed. by Gunter Bombaerts, Kirsten Jenkins, Yekeen A. Sanusi, and Wang Guoyu (Cham: Springer International Publishing), 253–72

Pérez-Muñoz, Cristian. 2022. ‘The strange silence of Latin American political theory’, Political Studies Review, 20(4): 592–607, https://doi.org/10.1177/14789299211023342

Preston, Christopher. 2012. ‘Beyond the end of nature: SRM and two tales of artificiality for the Anthropocene’, Ethics, Policy & Environment, 15(2): 188–201, https://doi.org/10.1080/21550085.2012.685571

Quijano, Anibal. 1992. ‘Colonialidad y modernidad-racionalidad’, Perú Indígena, 13(29): 11–20

Roeser, Sabine, Rafaela Hillerbrand, Martin Peterson, and Per Sandin. 2012. Handbook of Risk Theory: Epistemology, Decision Theory, Ethics, and Social Implications of Risk (New York: Springer)

Sand, Martin. 2021. ‘A defence of the control principle’, Philosophia, 49: 765–75, https://doi.org/10.1007/s11406-020-00242-1

Santoni de Sio, Filippo, and Jeroen van den Hoven. 2018. ‘Meaningful human control over autonomous systems: A philosophical account’, Frontiers in Robotics and AI, 5: 15, https://doi.org/10.3389/frobt.2018.00015

Sedlmeir, Johannes, Hans Ulrich Buhl, Gilbert Fridgen, and Robert Keller. 2020. ‘The energy consumption of blockchain technology: Beyond myth’, Business & Information Systems Engineering, 62: 599–608, https://doi.org/10.1007/s12599-020-00656-x

Smith, Merritt Roe, and Leo Marx (eds). 1994. Does Technology Drive History? The Dilemma of Technological Determinism (Cambridge: MIT Press)

Swierstra, Tsjalling. 2013. ‘Nanotechnology and technomoral change’, Etica & Politica / Ethics & Politics, XV: 200–19

Van den Hoven, Jeroen, Pieter E. Vermaas, and Ibo van de Poel (eds). 2015. Handbook of Ethics and Values in Technological Design: Sources, Theory, Values and Application Domains (Dordrecht: Springer), https://doi.org/10.1007/978-94-007-6970-0

Van de Poel, Ibo. 2020. ‘Three philosophical perspectives on the relation between technology and society, and how they affect the current debate about artificial intelligence’, Human Affairs, 30(4): 499–511, https://doi.org/10.1515/humaff-2020-0042

Van de Poel, Ibo, and Olya Kudina. 2022. ‘Understanding technology-induced value change: A pragmatist proposal’, Philosophy & Technology, 35: 40, https://doi.org/10.1007/s13347-022-00520-8

Van der Burg, Wibren. 2003. ‘Dynamic ethics’, Journal of Value Inquiry, 37: 13–34

Van Norden, Bryan. 2017. Taking Back Philosophy: A Multicultural Manifesto (New York: Columbia University Press), https://doi.org/10.7312/van-18436

Veluwenkamp, Herman, Marianna Capasso, Jonne Maas, and Lavinia Marin. 2022. ‘Technology as driver for morally motivated conceptual engineering’, Philosophy & Technology, 35: 71, https://doi.org/10.1007/s13347-022-00565-9

Verbeek, Peter-Paul. 2005. What Things Do: Philosophical Reflections on Technology, Agency, and Design (University Park: Penn State University Press)

Watene, Krushil. 2016. ‘Valuing nature: Māori philosophy and the capability approach’, Oxford Development Studies, 44: 287–96, https://doi.org/10.1080/13600818.2015.1124077

West, Mark, Rebecca Kraut, and Han Ei Chew. 2019. ‘I’d blush if I could: Closing gender divides in digital skills through education’, UNESCO

Williams, Melissa. 2020. Deparochializing Political Theory (Cambridge: Cambridge University Press), https://doi.org/10.1017/9781108635042

Wiredu, Kwasi. 1996. Cultural Universals and Particulars (Indianapolis: Indiana University Press)

1 All mentioned lead authors and contributors contributed in some way to this chapter and approved the final version. IvdP is the lead author of this chapter. He coordinated the contributions to this chapter and did the final editing. He also wrote the first version of Section 1.5. SB wrote a first version of Section 1.1 and contributed to and commented on several other sections. JH wrote a first version of Section 1.2 and further contributed mainly to Section 1.3. GL wrote a first version of Section 1.3. EZ wrote Section 1.4. PB contributed to some of the examples given in Section 1.3.

2 Blockchain is a kind of digital database that stores data in blocks linked together in a chain. Each new block is verified and then cryptographically linked to the previous one when it is added to the chain. This makes it very difficult to tamper with the chain and makes the recorded data effectively permanent. It allows data to be stored safely, with digital signatures but without central control.
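To make the linking mechanism described in this note concrete, the following is a minimal illustrative sketch in Python. It is not the code of any actual blockchain system (real systems add distributed consensus and digital signatures, which are omitted here), and the function names and example records are our own. The point it illustrates is only this: each block stores the hash of its predecessor, and its own hash covers its full contents, so altering any earlier block breaks verification of the chain.

import hashlib
import json
import time

def make_block(data, previous_hash):
    """Create a block recording some data together with the hash of the previous block."""
    block = {
        "timestamp": time.time(),
        "data": data,
        "previous_hash": previous_hash,
    }
    # The block's own hash covers all of its contents, including the backward link.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def is_valid(chain):
    """Check that each block matches its recorded hash and links to its predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != block["hash"]:
            return False  # this block was altered after it was added
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False  # the cryptographic link to the previous block is broken
    return True

# Build a tiny two-block chain, then show that tampering is detectable.
chain = [make_block("genesis", "0")]
chain.append(make_block("example record", chain[-1]["hash"]))
assert is_valid(chain)
chain[1]["data"] = "altered record"
assert not is_valid(chain)  # the alteration no longer matches the stored hash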

3 In this book, we will also consider disruptions to nature and to non-human species as ‘social disruptions’ if they make it impossible to continue as usual, or cause disorder or upheaval. Climate change and the loss of biodiversity are examples of such social disruptions.

4 This assumes that both aforementioned responses are appropriate and normatively relevant. One could, however, take a different stance and argue that our affective responses (and/or, perhaps, the appeal to responsible design) are misguided. In that case, one might instead want to speak of conceptual overlap. More generally, there seems to be room for different interpretations of examples in terms of conceptual gap, conceptual overlap and conceptual misalignment. For further discussion, see Chapter 6.

5 Technologies like CRISPR-Cas9 might challenge the notion that humans cannot be designed. However, the genetic make-up of humans (which might perhaps be altered with such technologies) only partly determines their personality; nurture plays an important role as well. Moreover, it is questionable whether such design could be ‘responsible’, as genetic modification of humans is usually considered morally unacceptable.

6 Here we are mainly referencing Western folk conceptions of ‘natural’ and ‘artificial’. There are, of course, much more nuanced and diverse conceptions to be found in the philosophical literature. Also note that some cultures do not have the natural-artificial distinction (IPBES, 2022). For further discussion, see Chapter 4.

2. Social Media and Democracy

© 2023 Elena Ziliotti et al., CC BY-NC 4.0 https://doi.org/10.11647/OBP.0366.02

Lead author: Elena Ziliotti1

Contributing authors: Patricia D. Reyes Benavides, Arthur Gwagwa, Matthew J. Dennis

Has social media disrupted the concept of democracy? This complex question has become more pressing than ever as social media has become a ubiquitous part of democratic societies worldwide. This chapter discusses social media’s effects at three critical levels of democratic politics (personal relationships among democratic citizens, national politics, and international politics) and argues that social media pushes the conceptual limits of democracy. This new digital communication infrastructure challenges some of the fundamental elements of the concept of democracy. By giving citizens and non-citizens equal substantive access to online political debates that shape the political agenda, social media has drastically expanded and opened up the notion of the demos and the public sphere (the communicative space where