This Is Technology Ethics

Sven Nyholm

Description

An approachable introduction to the philosophical study of ethical dilemmas in technology

In the Technology Age, innovations in medical, communications, and weapons technologies have given rise to many new ethical questions: Are technologies always value-neutral tools? Are human values and human prejudices sometimes embedded in technologies? Should we merge with the technologies we use? Is it ethical to use autonomous weapons systems in warfare? What should a self-driving car do if it detects an unavoidable crash? Can robots have morally relevant properties?

This is Technology Ethics: An Introduction provides an accessible overview of the sub-field of philosophy that focuses on the ethical implications of new technologies. Requiring no previous background in the subject, this reader-friendly volume explores ethical questions concerning artificial intelligence, robots, self-driving cars, brain implants, social media and communication technologies, and more. Throughout the book, clear and engaging chapters describe and discuss key debates, issues, and themes while inviting readers to develop their own perspectives on a wide range of moral and ethical questions.

  • Discusses how various technologies influence and shape individuals and society both positively and negatively
  • Illustrates how emerging technologies affect traditional ideas about ethics and human self-understanding
  • Addresses the ethical complications of creating technologies that may lead to morally problematic consequences
  • Considers whether the benefits of new technologies outweigh potential drawbacks, for example in how people interact online through social media
  • Explores how established moral and ethical theories relate to new questions concerning advanced technologies

Part of the popular This is Philosophy series published by Wiley-Blackwell, This is Technology Ethics: An Introduction is a must-read for undergraduate students taking a Technology Ethics course, researchers in the field, engineers, technology professionals, and general readers looking to learn more about the topic.

Page count: 576

Publication year: 2022




Table of Contents

Cover

Series Page

Title Page

Copyright Page

Dedication Page

PREFACE

ACKNOWLEDGMENTS

1 WHAT IS TECHNOLOGY (FROM AN ETHICAL POINT OF VIEW)?

1.1 A Hut in the Black Forest

1.2 The Question Concerning Technology: The Instrumental Theory of Technology from Martin Heidegger to Joanna Bryson

1.3 “Post‐Phenomenology” and the Mediation Theory of Technology

1.4 Technologies Conceived of as Being More Than Mere Means or Instruments

1.5 Technologies Regarded as Moral Agents

1.6 Technologies Regarded as Moral Patients

1.7 Some of the Key Types of Technologies That Will Be Discussed at Greater Length in Later Chapters of the Book

Annotated Bibliography

2 WHAT IS ETHICS? (AND, IN PARTICULAR, WHAT IS TECHNOLOGY ETHICS)?

2.1 Two Campaigns

2.2 The Ethics of Virtue and Human Flourishing in Ancient Greece

2.3 Ancient Chinese Confucianism and Traditional Southern African Ubuntu Ethics

2.4 Kantian Ethics

2.5 Utilitarianism and Consequentialist Ethical Theories

2.6 If Ethics More Generally Can Be All the Things Discussed in the Previous Sections, Then What Does This Mean for Technology Ethics in Particular?

2.7 How Technology Ethics Can Challenge and Create a Need for Extensions of More General Ethical Theory

Annotated Bibliography

3 METHODS OF TECHNOLOGY ETHICS

3.1 Methodologies of Ethics?

3.2 The Ethics of Self‐Driving Cars

3.3 Ethics by Committee

3.4 Ethics by Analogy: The Trolley Problem Comparison

3.5 Empirical Ethics

3.6 Applying Traditional Ethical Theories

3.7 Which Method(s) Should We Use in Technology Ethics? Only One or Many?

Annotated Bibliography

4 ARTIFICIAL INTELLIGENCE, VALUE ALIGNMENT, AND THE CONTROL PROBLEM

4.1 Averting a Nuclear War

4.2 What Is Artificial Intelligence and What Is the Value Alignment Problem?

4.3 The Good and the Bad, and Instrumental and Non‐Instrumental Values and Principles

4.4 Instrumentally Positive Value‐Alignment of Technologies

4.5 Instrumentally Negative Misalignment of Technologies

4.6 Positive Non‐Instrumental Value Alignment of Technologies

4.7 Negative Non‐Instrumental Value Misalignment of Technologies

4.8 The Control Problem

4.9 Control as a Value: Instrumental or Non‐Instrumental? And Are There Some Technologies It Might Be Wrong to Try to Control?

Annotated Bibliography

5 BEHAVIOR CHANGE TECHNOLOGIES, GAMIFICATION, PERSONAL AUTONOMY, AND THE VALUE OF CONTROL

5.1 A Better You?

5.2 Behavior Change Technologies and Gamification

5.3 Control: Three Basic Observations

5.4 Key Dimensions of Control Discussed in Different Areas of Philosophy

5.5 Behavior Change Technologies and the “Subjects” and “Objects” of Control

5.6 The Value and Ethical Importance of Control

5.7 Concluding This Chapter

Annotated Bibliography

6 RESPONSIBILITY AND TECHNOLOGY

6.1 Two Events

6.2 What Is Responsibility? Different Ways in Which People Can Be Held Responsible and Different Things for Which People Can Be Held Responsible

6.3 Responsibility Gaps: General Background

6.4 Responsibility Gaps Created by Technologies

6.5 Filling Responsibility Gaps by Having People Voluntarily Take Responsibility

6.6 Should We Perhaps Welcome Responsibility Gaps?

6.7 Responsible Machines?

6.8 Human–Machine Teams and Responsibility

6.9 Concluding This Chapter

Annotated Bibliography

7 CAN A MACHINE BE A MORAL AGENT? SHOULD ANY MACHINES BE MORAL AGENTS?

7.1 Machine Ethics

7.2 Arguments in Favor of Machine Ethics and Types of Artificial Moral Agents

7.3 Objections to the Machine Ethics Project

7.4 Possible Ways of Responding to the Critiques of the Machine Ethics Project

7.5 Concluding This Chapter

Annotated Bibliography

8 CAN ROBOTS BE MORAL PATIENTS, WITH MORAL STATUS?

8.1 The Tesla Bot and Erica the Robot

8.2 What Is a Humanoid Robot? And Why Would Anybody Want to Create a Humanoid Robot?

8.3 Can People Act Rightly or Wrongly Toward Robots?

8.4 Can Robots Have Morally Relevant Properties or Abilities?

8.5 Can Robots Imitate or Simulate Morally Relevant Properties or Abilities?

8.6 Can Robots Represent or Symbolize Morally Relevant Properties or Abilities?

8.7 Should We Be Discussing—Or Perhaps Better Be Avoiding—the Question of Whether Robots Can Be Moral Patients, with Moral Status?

Annotated Bibliography

9 TECHNOLOGICAL FRIENDS, LOVERS, AND COLLEAGUES

9.1 Replikas, Chuck and Harmony, and Boomer

9.2 Ethical Issues That Arise in This Context Independently of Whether Technologies Can Be Our Friends, Lovers, or Colleagues

9.3 Technological Friends

9.4 Technological Lovers and Romantic Partners

9.5 Robotic Colleagues

9.6 Are These All‐or‐Nothing Matters? Respect for Different Points of View

9.7 The Technological Future of Relationships

Annotated Bibliography

10 MERGING WITH THE MACHINE

10.1 The Experience Machine

10.2 Different Ways of Merging with—Or Merging with the Help of—Technology

10.3 Transhumanism, Posthumanism, and Whether We Should Become—Or Perhaps Already Are—Cyborgs

10.4 Some Critical Reflections on the Proposals to Merge with Technologies and the Arguments and Outlooks Used in Favor of Such Proposals

10.5 Concluding Reflections: Revisiting the Hut in the Black Forest

Annotated Bibliography

Index

End User License Agreement

THIS IS PHILOSOPHY

Series editor: Steven D. Hales

Reading philosophy can be like trying to ride a bucking bronco—you hold on for dear life while “transcendental deduction” twists you to one side, “causa sui” throws you to the other, and a 300‐word, 300‐year‐old sentence comes down on you like an iron‐shod hoof the size of a dinner plate. This Is Philosophy is the riding academy that solves these problems. Each book in the series is written by an expert who knows how to gently guide students into the subject regardless of the reader's ability or previous level of knowledge. Their reader‐friendly prose is designed to help students find their way into the fascinating, challenging ideas that compose philosophy without simply sticking the hapless novice on the back of the bronco, as so many texts do. All the books in the series provide ample pedagogical aids, including links to free online primary sources. When students are ready to take the next step in their philosophical education, This Is Philosophy is right there with them to help them along the way.

This Is Philosophy: Second Edition, by Steven D. Hales

This Is Philosophy of Mind: Second Edition, by Pete Mandik

This Is Ethics: An Introduction, by Jussi Suikkanen

This Is Political Philosophy: An Introduction, by Alex Tuckness and Clark Wolf

This Is Business Ethics: An Introduction, by Tobey Scharding

This Is Metaphysics, by Kris McDaniel

This Is Bioethics: An Introduction, by Udo Schuklenk

This Is Philosophy of Religion, by Neil Manson

This Is Epistemology, by Clayton Littlejohn and Adam Carter

This Is Philosophy of Science, by Franz-Peter Griesmaier and Jeffrey A. Lockwood

This Is Environmental Ethics, by Wendy Lee

This Is Early Modern Philosophy, by Kurt Smith

Forthcoming:

This Is Aesthetics, by Thi Nguyen and Nick Riggle

This Is Ancient Philosophy, by Kirk Fitzpatrick

This Is Logic, by Sara Uckelman

THIS IS TECHNOLOGY ETHICS

AN INTRODUCTION

SVEN NYHOLM

Copyright © 2023 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per‐copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750‐8400, fax (978) 750‐4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748‐6011, fax (201) 748‐6008, or online at http://www.wiley.com/go/permission.

Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762‐2974, outside the United States at (317) 572‐3993 or fax (317) 572‐4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data applied for. Paperback ISBN 9781119755579

Cover Image: Wiley. Cover Design: Wiley

For Katharina

PREFACE

Suppose that a self-driving car detects that a crash is unavoidable and that all options open to it will involve harming human beings. What should a self-driving car do in such a situation? Is it ever permissible to use autonomous weapons systems in warfare, systems that are specifically designed to kill human beings and that select their own targets? What if a robot—e.g., a sex robot—is designed to look and act like a human being? Does the resemblance mean that we should treat that robot with any of the moral consideration we should show to a human being?

What if the artificial intelligence we develop gets out of control? Who is then responsible and what should we do about it? Are technologies always simply value‐neutral tools? Or are they sometimes more than mere instruments? Are human values—and human prejudices—sometimes embedded into our technologies? Are the technologies we use extensions of ourselves? Have we always been cyborgs? Should we merge with the technologies we use—for example, by connecting our brains directly to the internet? What kind of future should we aim to bring about with the help of the most advanced technologies at our disposal?

These are the kinds of questions that are discussed within the ethics of technology, or technology ethics, for short. These are the kinds of questions that this book explores. The aim of the book is to introduce the reader to technology ethics. The book aims to do so in a way that is accessible to readers who may not have any previous familiarity with academic discussions of technology ethics. This could be undergraduate students taking a course in technology ethics. Or it could be anybody who just happens to be interested in this topic.

Researchers who work on technology ethics might hopefully also find this book interesting and useful—for example, because it provides overviews of some of the key ethical issues related to the technologies discussed in the book. There are occasional deep dives into specific issues, which those with more background might also find interesting. But the main aim is to introduce readers to technology ethics, and to make them interested in exploring technology ethics further. Hence the title This Is Technology Ethics: An Introduction.

This book is part of the series This Is Philosophy. The title of this series is inspired by a series of very popular jazz albums that Columbia Records used to put out. It was called This Is Jazz. Different albums in that series introduced listeners to the greats of jazz, such as Miles Davis, Sarah Vaughan, Billie Holiday, Duke Ellington, and so on. The aim was to give listeners who might previously have been unfamiliar with jazz music samples of the best jazz performances by great artists, such as the ones just mentioned and many others. This inspired many people to go on and also listen to the deep cuts and explore the catalogs of these artists in greater detail.

In the same way, the This Is Philosophy series aims to introduce readers to the most interesting areas within philosophy, so as to inspire them to want to go on to explore these parts of philosophy further. Other entries in the series include a general introduction to philosophy called This Is Philosophy: An Introduction, and books like This Is Philosophy of Mind, This Is Political Philosophy, and This Is Metaphysics. There is a general introduction to ethics called This Is Ethics, and also more specific ethics books such as This Is Bioethics, This Is Business Ethics, and This Is Environmental Ethics. The current volume is similar to the just-mentioned, more specific ethics books.

A title like This Is Technology Ethics can be read in two ways. It might be taken to indicate that the book is covering all of what technology ethics is: the sum‐total, so to speak, of what technology ethics is. Or it might be taken to indicate that the book is offering samples of some of the key discussions and themes that are part of technology ethics. This book does the latter. Accordingly, the phrase “this is technology ethics” is here supposed to mean “these are some of the key discussions and themes that are central within the field of technology ethics.”

Just as the albums in the This Is Jazz series did not aim to cover every recording by the artists featured in the series, this book does not cover every single issue within technology ethics. The book does aim to give readers a solid idea of many of the most interesting parts of technology ethics. But there is more to technology ethics than we will be able to cover in our exploration of it below. The topics that will be covered in what follows are among the most fascinating parts of technology ethics.

Another choice an author of a book like this faces is the following. One can try to write in a way that is as neutral as possible, which does not present any new arguments and does not offer any opinions or make any judgments. Alternatively, one can offer an “opinionated introduction” to something. That is, one can introduce a topic while also sometimes taking a stance on some of the issues, offering arguments and views of one's own, so as to enter into a dialogue with the reader. This book takes the latter approach. In other words, this is an opinionated introduction to technology ethics.

The idea is that if a philosophy book is not neutral but instead sometimes offers some opinions and arguments, it is likely to be more engaging than a book that tries to be completely neutral at all times. The reader is not only free but also invited to disagree with any and all of the ideas in the book. One of the main points of any work in philosophy—including an introductory book like this one—is to make the reader think for themselves. One way of achieving that aim that often works is to try to provoke the reader a little bit. Accordingly, the book tries to do that here and there—but always in a way that is intended to be friendly and respectful toward the views and ideas discussed throughout the book.

Some contemporary technologies—such as the social media platforms many of us use—encourage short and snappy exchanges, where emotions often run high and people sometimes get angry at each other and start insulting one another. This is not the type of debate this book takes as its model. One of the wonderful things about philosophy is that it is possible to have deep disagreements and offer arguments against one another’s views, while doing this in a way that is friendly and respectful toward all those involved in the discussion. That is the model this book aims to emulate.

I also mention this example about how contemporary technologies may inspire a certain form of behavior for another reason. It helps to illustrate one of the ideas that will be a running theme throughout the book: namely, that the technologies we use influence and shape us, sometimes in ways we find positive, but sometimes also in ways that we may find negative upon reflection. On social media platforms, for example, one sometimes sees people expressing regret that discussions that start on such platforms so easily escalate into non‐productive, angry exchanges.

That raises another issue discussed in technology ethics: is it unethical to create technologies that may lead some people to act in morally problematic ways? Or should the fact that many other users will use these technologies in harmless or positive ways be taken to outweigh the risks involved in introducing new technologies that affect how people interact with each other, whether it is online or offline?

Those are some more examples of what technology ethics is about. And they are hopefully examples that help to illustrate that technology ethics is not only relevant to all of us, but probably also something most of us have already thought about, if not explicitly under the heading of "technology ethics," then at least implicitly. It is very hard not to think about technology ethics issues when one watches the news, or when one simply goes about one's daily life in a world with so much modern technology everywhere.

This book tackles these kinds of issues in an explicit way. It spells out some of the main issues that those interested in technology ethics tend to spend their days thinking about—or perhaps lie awake at night worrying about. This book is about robots, artificial intelligence, autonomous weapons systems, self‐driving cars, brain implants, information and communication technologies, technologies related to love and sex, technologies for medical treatments or for human enhancement, technologies that might change what we are as human beings or what we can become, and more. Those are the sorts of things the book will be discussing from an ethical point of view. This is technology ethics.

ACKNOWLEDGMENTS

My work on this book has benefited greatly from conversations and exchanges with, among others, the following people: Joel Anderson, Antonio Bikić, Jan Broersen and the MA students in our 2021 core seminar on the philosophy of AI, Joanna Bryson, Mark Coeckelbergh, John Danaher, Brian Earp, Arzu Formenek, Lily Frank, Cindy Friedman, John‐Stewart Gordon, David Gunkel, Caroline Helmus, Ziagul Hosseini, Naomi Jacobs, Geoff Keeling, Maximilian Kiener, Marjolein Lanzing, Kritika Maheshwari, Giulio Mecacci, Anna Melnyk, Kęstutis Mosakas, Jilles Smids, Joshua Smith, Daniel Tigard, Darja Vrščaj, Lucie White; the students in my technology ethics course and other courses; the ethics researchers involved in the Human Brain Project; and my colleagues in the Ethics of Socially Disruptive Technologies research program in the Netherlands. Part of my work on this book was supported by the Gravitation grant program of the Dutch Ministry of Education, Culture, and Science and the Netherlands Organization for Scientific Research (NWO grant number 024.004.031).

Some of this material has been presented at workshops and events organized by Lukas Brand, Niël Conradie, Olle Häggström, Tomi Kushner, Julia van der Linde, Janina Loh, Wulf Loh, Leo Menges, Saskia Nagel, Anna Puzio, Markus Rüther, and Arleen Salles, as well as at colloquia and conferences at the following universities: the University of Amsterdam, Delft Technical University, Eindhoven University of Technology, Karlsruhe Institute of Technology, LMU Munich, the University of Münster, the University of Oxford, the University of Porto, Utrecht University, and the University of Vienna. My thanks to the audiences at those events for their helpful feedback. I am also grateful to three anonymous peer reviewers of the whole book manuscript, whose feedback was very beneficial to me while I was preparing the final version of the manuscript.

Many thanks also to the editor of this book series, Steven Hales, as well as to Laura Adsett, Mandy Collison, Charlie Hamlyn, Vinitha Kannaperan, Marissa Koors, Will Croft, Hannah Lee, and the other members of the team at Wiley‐Blackwell who have been involved in this book project.

Furthermore, I have benefited from the support of my wife and our families in Sweden and Germany. This book is dedicated to my wife, Katharina Uhde, with love.

1 WHAT IS TECHNOLOGY (FROM AN ETHICAL POINT OF VIEW)?

1.1 A Hut in the Black Forest

1.1 Around 100 years ago, in the Black Forest of Southern Germany, there stood a small and simple three‐room cabin, to which an eccentric philosopher would retire in order to escape from the modern world. From 1922 onward, he went there to work on philosophical texts about the nature of “being,” and he felt deeply inspired by these surroundings. The philosopher in question was Martin Heidegger. He composed some of his most well‐known writings in this cabin, which he called “die Hütte” (the hut). While in his Hütte, Heidegger would sometimes dress up in traditional farmers’ clothing. And he would wander about in the Black Forest and ponder abstract philosophical questions, far away from bothersome modern technologies and other distractions.

1.2 This might seem like a strange place to start this book, and a strange choice of philosopher. However, it makes sense to start a book about technology ethics with Heidegger and his hut for a number of reasons. One reason is that one of the most‐cited writings about what technology is was written by Heidegger (most likely in the cabin!). Another reason is that it is good to have a reminder early on about the following. Even though we usually think about the latest and most advanced forms of cutting‐edge technology when we hear the word “technology,” even the simplest and oldest forms of technology are also technologies.

1.3 For example, Heidegger would go and get water from a well when he visited his hut. (There was no electricity or running water when he first started frequenting his cabin.) The well where Heidegger got his water is a technology, which can serve a very important purpose, just as something like a self-driving car or the latest smartphone is a technology. Heidegger himself tended to romanticize the past. He made a sharp distinction between modern technologies (of which he was skeptical) and more traditional technologies (which he happily embraced). But when we think about the general question of what technology is, we should keep in mind that there are many different kinds of technology, both old and new. Old technologies are technologies, just like new ones are.

1.4 If a traditional water well is a technology just as much as a self‐driving car or a smartphone is a technology, then what exactly is a technology? This question—“what is a technology?”—is what we will focus on in this first chapter. Since this is a book about technology ethics, it is important to have an idea of what we should understand by “technology” in the first place. The ancient philosopher Aristotle recommended that one should always start by defining one’s key terms. That certainly seems like good advice.

1.5 It will also be important to try to define or explain what we mean by "ethics," so that we can put "technology" and "ethics" together to form the idea of "technology ethics." But we will save the question of what ethics is until the next chapter and here focus on what technology is. When we do so, however, ethical issues will already start to arise, even before we get around to trying to say what ethics is. One reason for this is that when we try to explain important concepts, the choices we make about how to define them can prove controversial and raise questions about what is important or valuable.

1.6 For example, consider the questions “what is a human being?” and “what is a person?” On the face of it, these might not seem like inherently ethical questions. But depending on who or what you think qualifies as a human being or person and for what reasons, you might find that before you know it, there will be others who find your views and their implications to be highly controversial. Ethical controversies about abortion, for example, tend to turn partly on who or what counts as a human being or as a person.

1.7 Choices about whose views about something to listen to—e.g., about what "technology" should be taken to be—can also be controversial and raise ethical questions. Heidegger, for example, has in recent times become a highly controversial figure. While he was rector of the University of Freiburg during the German Nazi era, Heidegger was a member of the Nazi party. That by itself does not necessarily mean that Heidegger was a Nazi. Anybody in a leading position may have needed to be a member of that party during that era. But the recently discovered "Black Notebooks" (some notebooks of Heidegger's) have revealed that for a while, Heidegger was not only a member of the Nazi party but also convinced of some of the ideological ideas associated with Nazism. Some even think that Heidegger's romanticism and his musings about the simple life close to nature and his hut in the forest are somehow connected with Nazi ideals.

1.8 Heidegger scholars claim that Heidegger fairly quickly came to reject the ideas of the Nazis. Furthermore, the essay "The Question Concerning Technology" that is of relevance here was written after Heidegger had abandoned whatever sympathies with Nazi ideas he had held for a time. Yet it might seem controversial to start a discussion of what technology is with the ideas about this topic from a one-time Nazi. Nevertheless, as noted above, Heidegger's just-mentioned essay is one of the most-often discussed contributions to the main topic of this chapter. So, it is good to be aware of some of the things he says about what technology is, for the sake of context. Moreover, as we will see below, there are also philosophical reasons to start with Heidegger's discussion, despite whatever flaws Heidegger himself might have had as a person, at least during certain stages of his life.1

1.2 The Question Concerning Technology: The Instrumental Theory of Technology from Martin Heidegger to Joanna Bryson

1.9 The further into Heidegger's “The Question Concerning Technology” one gets while reading it, the harder the text becomes to understand. But there are some bits at the beginning of that essay that are fairly straightforward. For example, let us consider this passage:

Everyone knows the two statements that answer our question. One says: Technology is a means to an end. The other says: Technology is a human activity. The two definitions of technology belong together. For to posit ends and procure and utilize the means to them is a human activity. The manufacture and utilization of equipment, tools, and machines, the manufactured and used things themselves, and the needs and ends they serve, all belong to what technology is. The whole complex of these contrivances is technology.

(Heidegger 1977: pp. 4–5)

A couple of lines down, Heidegger adds:

The current conception of technology, according to which it is a means and a human activity, can therefore be called the instrumental and anthropological definition of technology.

(Heidegger 1977: p. 5)

Notice that what Heidegger does here is in effect to first formulate two different theories of technology, and to then combine them into a hybrid theory. The instrumental theory of technology defines technologies as tools or instruments that human beings use as means to their ends. The anthropological theory of technology defines technologies as parts of distinctively human activities. The combined theory (“the instrumental and anthropological definition of technology”) represents technologies as means to ends within human activities.

1.10 Another thing that Heidegger by implication also does is to highlight that we can think about technology from different points of view. The instrumental theory of technology, for example, fits with what one might call an engineering mindset. Engineers love to identify concrete problems, and to then seek out or design the best tools for solving these problems. From such an engineering point of view, it is very natural to think about technologies as tools used as means to the end of solving concrete problems. The anthropological theory, in contrast, adopts the mindset of an anthropologist. Anthropologists and other social scientists study human practices and activities. So, it is natural for them to portray technologies as parts of human practices or activities.

1.11 Another noteworthy thing about the instrumental theory of technology, as it is usually interpreted, is that it represents technologies as being inherently value‐neutral. Think about the saying “guns don't kill people, people do.” That slogan has been used by the American National Rifle Association as a response to the charge that having lots of guns in society heightens the risk of violence. The idea behind “guns don’t kill people, people do” seems to be that guns are in themselves value‐neutral tools. It is people with bad intentions who might use these tools in bad ways. It is also possible to use these value‐neutral tools in good ways, according to this part of the instrumental theory of technology.

1.12 In general, any tool can be put to good or bad uses, according to this way of seeing things. Accordingly, if bad things happen and technologies are involved, we should not blame the technologies, but the people who use them for their own ends, which might be morally problematic. On the flipside, when technologies are used for good ends, the instrumental theory implies that we should not thank or praise the technologies but the people who create or use them. Technologies can deserve neither praise nor blame, according to this way of thinking. That might seem obviously true. But as we will see below, there are those who take a different view.

1.13 Speaking of how technology can be interpreted from different points of view, below we will consider how we should define or understand technology when we take up a distinctively ethical or perhaps more broadly philosophical point of view. But let us first linger just a little bit longer with the instrumental theory of technology, setting aside the anthropological perspective on technology for a moment. In particular, it is worth noting that even though a purely instrumental theory is often criticized in contemporary technology ethics, there are also prominent defenders of the instrumental theory of technology in current discussions. By considering one example, we can illustrate how the instrumental theory of technology can (1) be used in ethical arguments and (2) be further extended.

1.14 Joanna Bryson is a computer scientist and roboticist who has recently transitioned into the role of a professor of ethics. Among other things, she is the author of a striking essay entitled “Robots Should be Slaves” and a number of follow‐up articles that further refine the argument in her original piece.2 In her characteristically forceful language, Bryson applies an updated version of the instrumental theory to the question of whether we should ever regard robots with artificial intelligence as worthy of rights or moral consideration.

1.15 Bryson often tells an anecdote about how she came to be interested in this topic that helps to illustrate what she is concerned about. When Bryson worked in a robotics lab at MIT early in her career, there was a robot called “Cog” that people in the lab used to talk about in ways that suggested that they owed moral consideration to it. They would say things such as “don't pull the plug or turn Cog off—that would be to kill Cog!” Occasionally, they would say such things without realizing that Cog the robot was actually already unplugged. These people were projecting humanlike qualities onto a robot.

1.16 As Bryson sees things, this way of behaving around robots is a big mistake. Robots—like any other technologies—are tools that we create for our human purposes. Moreover, and here comes the key dimension that Bryson adds to the instrumental theory of technology: robots and other technologies are the property of people. They can be bought and sold. Since robots are property that people can buy and sell and tools we use for human purposes, technologies are like “slaves” or “servants” we own, just like some human beings were once regarded as tools, who could be bought and sold, in some societies of the past.

1.17 In the case of human beings, Bryson thinks that it is wrong and horrible that there were ever slaves anybody could own: tools that others could buy and sell. But that, Bryson thinks, is what all human‐created technologies are and should be. One must add “should be” here, because Bryson has the interesting view that it is possible to create robots or other artificially intelligent technologies toward which we could have duties, but that we should avoid doing so.

1.18 If we could create machines that can experience suffering or that are intelligent and sensitive in the ways that human beings and many animals are, then Bryson thinks we would have obligations towards these creations. But we should avoid creating such robots. We should only create technologies that it is okay to treat as tools, which we can buy and sell. Since Bryson adds these ideas about what should and should not be done to the instrumental theory, we can say that she presents a normative version of the instrumental theory of technology.

1.19 One of the real-world robots and related events Bryson has reacted strongly to is the robot Sophia and the 2017 event at which this robot was given honorary citizenship of the Kingdom of Saudi Arabia. Sophia is a robot that looks like a human being (a "humanoid robot"). The robot has a humanlike face and is able to imitate human speech. The back of the robot's head is transparent, so that one can see that it is a machine. But the robot is put in many distinctively human situations: for example, the robot has appeared on talk shows such as The Tonight Show Starring Jimmy Fallon. And many human beings—including highly influential people—have treated the robot as if it were a human person. For instance, the former Chancellor of Germany, Angela Merkel, once took a "selfie" photograph together with Sophia. Moreover, the robot has been invited to speak in front of high-profile international political bodies, like the United Nations and the Munich Security Conference.

1.20 For somebody like Bryson, who adopts a normative version of the instrumental theory of technology, the just‐described events are highly problematic from an ethical point of view. In an interview, Bryson commented on this in the following way:

What is this about? It's about having a supposed equal you can turn on and off. How does it affect people if they think you can have a citizen that you can buy?3

Again, on the instrumental theory of technology, a robot and any other technology is a tool, which is value‐neutral in itself, and that you can buy or sell. Since Bryson subscribes to that view, she thinks that the way people behave around Sophia the robot is highly problematic. However, not everyone agrees with the instrumental theory of technology. Let us now explore some other ideas about what technology is or can be.

1.3 “Post‐Phenomenology” and the Mediation Theory of Technology

1.21 As was mentioned above, we can adopt different mindsets when we reflect on what technology is or should be taken to be. One thing we can do is to ask what we should think of technology as being from an ethical point of view. For most of the rest of this chapter, that is the point of view from which we will consider what technology is. In other words, our question will be: “when we think about technology from the point of view of ethical theorizing, what should we then understand technology as being?” Taking up this ethical point of view on technology, many authors have criticized the purely instrumental theory of technology.

1.22 Some sociologists and philosophers who consider what technology is, from an ethical point of view—and who criticize the instrumental theory—do so in a way that fits with what Heidegger calls the anthropological way of understanding technology. That is, they are looking at and analyzing human practices involving technologies. They are interested in how the technologies we have around us affect us. And they argue that within many human practices, technologies are not always merely tools that are completely value-neutral in nature. Technologies, they argue, can also play other roles in our lives. One such group of thinkers are the members of the so-called post-phenomenological school of thought.

1.23 The term “post‐phenomenology” is not only hard to pronounce. It may also be hard to understand for those who have not heard it before. So, let’s try to break this idea down into parts. The first thing to know is that “phenomenology” was a philosophical movement that started around the time that Heidegger was writing his philosophy in his hut. This movement was premised on the idea that in reflecting on human life, we should not start with abstract philosophical theories, but with the phenomena that we encounter in the world. In our philosophizing, we should be investigating how reality and how we ourselves appear to us from the point of view of how we experience things as human beings. Post‐phenomenology takes this idea a step further and asks how technologies shape how we experience reality, ourselves, and what we are able to do or aim for.

1.24 Members of this school of thought—such as Don Ihde, Bruno Latour, and, more recently, Peter‐Paul Verbeek—are interested in how technologies “mediate” the way we experience things and what we are able to do.4 Technologies, they say, are a medium in between us and what we perceive and what we can do. There are two different ideas here. One has to do with inputs, the other with outputs. Let us consider them one after the other.

1.25 One key idea is that technologies shape how we perceive or experience the world and even ourselves. To use a simple example, if we are wearing glasses, our visual experiences and perceptions are going to be different than if we remove our glasses. Or, to use another example, if you visit the Louvre in Paris and you view the Mona Lisa through the camera on your phone, as many tourists who visit the Louvre do these days, then your perception of the Mona Lisa will be mediated or shaped by your phone. It will be different from what it would have been if you had simply looked straight at the painting without your phone in between it and your eyes. More generally, what we pay attention to as we go about our lives is shaped by the technologies we use. For example, these days, what news (or fake news!) people hear about and pay attention to tends to be strongly influenced by algorithms that are part of the social media that they use.

1.26 The second key idea is that technologies shape what we are able to do and how we are able to do it. Relatedly, technologies shape what we see ourselves as being able to do. They also sometimes "suggest" certain actions to us. For example, these days, we can take weekend trips to faraway places because we have airplanes that can quickly take us to a distant country. So, the idea of taking a weekend trip to some place far away might occur to a person, because the technologies available to us make this possible. This would not have been possible, and would not have occurred to anybody to do, before these technologies were available. Speaking of social media—to return to the example at the end of the previous paragraph—that is also an example of a technology that might shape how we act, for example how we communicate with other people.

1.27 A lot of people would never think of saying, or dare to say, certain things to other people face-to-face that they do think of saying, and do dare to say, on social media. This can be both good and bad: good because people might dare to speak out against something, bad because people might hurl abuse at each other. This is another example of how technology shapes what we do and what we think of doing.

1.28 Actually, there is an idea in Heidegger's "The Question Concerning Technology" essay that is very much in line with this post-phenomenological idea that technologies shape what we can do and how we perceive the world. Heidegger talks about how new technologies—e.g., a hydroelectric power plant—might change how we view something like a river. Suddenly, the river is presented by the modern power plant as a source of electricity, or as a means to an end that it could not have been viewed as a means to before.

1.29 Heidegger takes this idea to the extreme. He argues that the more modern technologies we surround ourselves with, the more nature will appear to us to be a large set of means to human ends as opposed to an end in itself. While a river might have appeared to us as a beautiful piece of nature before, a new technology might suddenly present it as a means to the end of producing electricity. As a result, the way we perceive the river may be changed by the technology.

1.30 This general idea is picked up by post‐phenomenologists when they criticize the part of the instrumental theory of technology that portrays technologies as being in themselves value‐neutral. Members of this school of thought argue that technologies are not purely value‐neutral for the reason that technologies influence how we value things around us. Peter‐Paul Verbeek, for example, is fond of using the example of ultrasound images of fetuses. Verbeek claims that such images present fetuses in a new light (different from the mere knowledge that there is a fetus in the womb). In particular, those images present the fetus as a patient, whose health status the parents need to worry about and make decisions about.

1.31 Post-phenomenologists often illustrate another way in which technologies are not value-neutral tools with the help of an idea that Don Ihde discusses. The idea is that technologies sometimes contain intended or unintended "scripts." That is to say, they may tell us what to do and perhaps even force us to act in certain ways. For example, the French sociologist and philosopher Bruno Latour talks about speed bumps as telling drivers to slow down, paper cups as communicating to users that they should be thrown away after use, and the beeping noises cars make when we do not put on our seatbelts as forcing us to wear our seatbelts.

1.32 Members of this post‐phenomenological movement also criticize the instrumental theory of technology by arguing that the instrumental theory tends to falsely represent humans and technologies as being wholly separate from each other when it portrays humans as acting on the basis of ends and technologies as being means used to these ends. This is not correct, many post‐phenomenologists argue. It is not correct because human beings and technologies form “assemblages” or units that can do things that humans alone, without the technologies, neither could nor would do on their own.

1.33 Verbeek, for example, takes issue with the above‐mentioned idea that “guns don’t kill people, people do.” According to Verbeek, a man with a gun easily becomes what Americans tend to call a “gunman,” i.e., somebody with a gun and a disposition to shoot. The man and his gun form a unit, as Verbeek sees things. And this unit can do things, and may become inclined to do things, that the man cannot or would not do without his gun.5

1.34 Notably, the science and technology researcher Donna Haraway suggests that it is helpful to think of us human beings using the metaphor of cyborgs. There is, as Haraway sees things, no sharp distinction between humans and technologies. Many of us have heard the expression "you are what you eat." Haraway's view seems to be something along the lines of "you are, in part, the technologies you use." The idea that human beings might merge with technologies, or that we have already done so, is something we will have occasion to return to later in this book. Post-phenomenologists are not the only ones fascinated by that idea. As we will see in Chapter 10, many others are fascinated by it as well.

1.35 To summarize the key points in this section: according to post‐phenomenologists, technologies are not value‐neutral tools that are wholly separate from human beings, who set ends and use technologies as means to their ends. Technologies are more than that. Technologies shape how we perceive the world, what we are able to do, and what we value and how we value it. Humans and the technologies we use are not wholly separate from each other. They form units that are able to do things, and that are also disposed to do things, that humans without these technologies would neither be able nor inclined to do. This can be seen as an updated and refined version of the anthropological theory of technology since this is a theory about what roles technologies play in human practices. These interesting ideas from the post‐phenomenologists will be popping up here and there throughout this book. But let us set them aside for now, and return to the instrumental theory of technology once more.

1.4 Technologies Conceived of as Being More Than Mere Means or Instruments

1.36 Recall how Joanna Bryson talks about how robots and other technologies are, and should be conceived of as, tools that we own, which we can buy or sell, and which should be treated as means to human ends. This brings to mind some ideas that the illustrious Enlightenment philosopher Immanuel Kant discusses in his classic, very influential ethical treatise Groundwork for the Metaphysics of Morals. What Bryson says about how we should treat technologies can be seen as an inversion, or the opposite, of what Kant says about how we should treat human persons. Since Bryson talks about technologies and Kant talks about persons, their views can be seen as complementing each other.

1.37 In that just‐mentioned book, Kant divides up the world into two separate categories: what he calls “persons” and “things.” Persons can think and act, be rational, and make moral decisions. They have an absolute value and dignity, and should be treated with care and respect. Persons, in Kant's phrase, should be regarded as “ends‐in‐themselves.” They can be responsible for their actions, and are members of the moral community. Everything else in the world, Kant argues, belongs to the category of things. Anything that is not a person only ever has a relative value—namely, a value relative to the desires or wishes of persons.

1.38 As Kant sees things, all things can be treated as mere means to the ends of persons. Persons, in contrast, should always be treated as ends and never as mere means. Kant also relates this to our distinctive humanity. When he does so, Kant formulates the following well‐known and very popular moral principle: “so act that you always treat the humanity in each person, never merely as a means, but always at the same time as an end‐in‐itself.” This is sometimes called the “formula of humanity” or, alternatively, Kant's principle of “persons as ends‐in‐themselves.”6

1.39 We can restate Bryson's normative version of the instrumental theory of technology in these Kantian terms in the following way: unlike human beings, all technologies are and should be regarded as things, and not persons. They should always be treated as mere means and never as ends‐in‐themselves. Moreover, we should avoid creating any technologies that would ever be anything other than mere things that we can treat as mere means. We should never create technologies that are or appear to be persons, which are ends‐in‐themselves. Bryson’s normative instrumental theory of technology, in other words, can be seen as an interesting inversion of Kant's principle of humanity.

1.40 It makes sense to bring up this Kantian terminology and these Kantian ideas here, not only because they are interesting in themselves but also because they give us a philosophical vocabulary with which we can reinterpret the instrumental theory of technology. It also makes sense to bring up these things because Kant's philosophy, just like Heidegger's, has had a lot of influence within technology ethics, as we will see later on.

1.41 Moreover, it is also worth bringing up these ideas from Kant and their relation to Bryson's updated version of the instrumental theory of technology because of certain striking recent developments within the ethics of technology. Recently, several technology ethicists have defended ideas that clash starkly with pretty much all of the ideas just mentioned in the paragraphs above. In particular, some technology ethics researchers have recently defended one or more of the following ideas:

Some technologies can think and act, and be rational and make moral decisions.

Some technologies can be persons, as opposed to mere things.

Some technologies should be treated as ends (i.e., be shown moral consideration or given rights), and not as mere means.

Some technologies can be responsible for what they do, just like human persons can be responsible for what they do.

1.42 Such ideas can seem counterintuitive. They are perhaps shocking and even disturbing to those who accept any version of the instrumental theory of technology. But as noted, these ideas are gaining a foothold within contemporary technology ethics. To see why some philosophers of technology are beginning to take them seriously, we can now consider some examples of how some people interact with new forms of technologies like robots and other technologies equipped with artificial intelligence. We will save a full discussion on these controversial ideas until later chapters. But it makes sense to briefly also look at them in this first chapter, since we are here considering the question of what we should understand by the idea of technology.

1.5 Technologies Regarded as Moral Agents

1.43 We have already briefly considered the example of how some of Bryson's colleagues talked about the robot “Cog” (e.g., they were hesitant about pulling the plug or turning off the robot). And we have also considered how some people have been treating Sophia the robot (e.g., giving honorary citizenship to the robot, inviting it to appear on talk shows, taking selfie photos with it, or allowing it to speak in front of important political bodies). These are examples of people treating robots in ways that suggest that they regard them, or that they are willing to treat them, as being more than mere means or things that are categorically different from persons. Many more examples can be given. And more examples will be given throughout the book. But here are just a few more examples of how people think about technologies in ways that clash with the purely instrumental theory.

1.44 Many people working on questions in technology ethics are interested in what are sometimes called functionally “autonomous” machines. That is, machines that for some period of time are able to operate on their own, performing specific tasks, without direct human steering. Such technologies are occasionally put into situations where they seem to need to be able to make moral decisions. Notably, many philosophers and other researchers think that it is possible to create machines that can make moral decisions. A widely discussed example is that of the self‐driving car. Another is the example of military robots or what are sometimes called autonomous weapons systems. These technologies operate in contexts that are dangerous and where human lives are at stake.

1.45 We can imagine that a self-driving car might need to "decide" whether to go left or right when a fork in the road occurs. The brakes of the car may have stopped working. The problem might be that each option involves a threat to human beings. On the road on the left, there might be five people, who would not be able to get away from the road in time and thereby avoid being hit by the car. On the road on the right, there might be one person, who would also not be able to get away from the road in time to avoid being hit by the car. The person riding in the car may have fainted, and therefore not be able to offer any input regarding what to do. So, it might seem that this self-driving car, whose brakes are currently not working, needs to make a "moral decision." Should it go left, and potentially injure or even kill five people? Or should it go right and thereby save the five, but potentially injure or kill one person? This is widely regarded as a type of situation in which a machine—in this case, a self-driving car—needs to be able to make a moral decision and then act on it.

1.46 An interdisciplinary research field called “machine ethics” is devoted to investigating whether it is possible to create moral machines and if so, how this can and should be done. According to this way of thinking, the technologies in question are not simply regarded as value‐neutral tools that are used by human beings to achieve their ends. Rather, the technologies are regarded as what philosophers call “moral agents.” That is, they are regarded as entities or beings capable of making moral decisions and acting on those decisions.

1.47 Most ethicists of technology who take this idea seriously do not think that the technologies in question would be moral agents of the sorts that we human beings are. Rather, the technologies—the self‐driving cars, military robots, or whatever it might be—would be some other kind of moral agent. They might be able to make moral decisions and act on them, but perhaps they would not be morally responsible for their decisions in the ways that human beings can be. This is a common position. It is defended, for example, by the well‐known Italian technology ethicist and information philosopher Luciano Floridi.7

1.48 As Floridi sees things, any entity that can act in some sense and that can act in situations that are morally significant is a moral agent. But this is not enough, Floridi thinks, for regarding the entity as a morally responsible agent who can be blamed or praised for its actions or decisions. This idea of artificial agents that cannot be held responsible but that nevertheless make morally sensitive decisions is sometimes thought to give rise to worries about so-called responsibility gaps. This expression refers to the idea that a machine, e.g. a military robot, might autonomously make a morally important decision and do something (e.g., kill a human being), but then not be able to be held responsible for the outcome. There may also not be any human beings who can be held morally responsible for what the robot did. At the same time, it might seem as if somebody should be held responsible for what happened. Hence a gap in responsibility apparently arises. Worries about such responsibility gaps are widely discussed within technology ethics. And most ethicists of technology do not think that these problems can be solved by holding the machines themselves responsible.

1.49 There are others, however, who take the more extreme view that some technologies can not only make moral decisions, but sometimes also be responsible for those decisions in some sense. The philosopher Daniel Tigard, for example, has argued in a series of academic articles that there are many different “faces of moral responsibility” (that is, many different aspects of what is involved in being morally responsible). According to Tigard, some advanced technologies, such as some artificially intelligent robots or other autonomous systems, can be responsible in certain, at least limited, ways.8 A view such as Tigard's is very far removed from a purely instrumental view of technology, on which all technologies are value‐neutral tools or means to human ends. If we think that some technologies can be responsible for what they do or decisions they make, our understanding of technology is very different from the sort of view Bryson advocates.