A Companion to Applied Philosophy of AI

Description

A comprehensive guide to AI's ethical, epistemological, and legal impacts through applied philosophy

Artificial intelligence (AI) influences nearly every aspect of society. A Companion to Applied Philosophy of AI provides a critical philosophical framework for understanding and addressing its complexities. Edited by Martin Hähnel and Regina Müller, this volume explores AI's practical implications in epistemology, ethics, politics, and law. Moving beyond a narrow ethical perspective, the authors advocate for a multi-faceted approach that synthesizes diverse disciplines and perspectives, offering readers a nuanced and integrative understanding of AI's transformative role.

The Companion explores a broad range of topics, from issues of transparency and expertise in AI-driven systems to discussions of ethical theories and their relevance to AI, such as consequentialism, deontology, and virtue ethics. Filling a significant gap in the current academic literature, this groundbreaking volume also addresses AI's broader social, political, and legal dimensions, equipping readers with practical frameworks to navigate this rapidly evolving field.

Offering fresh and invaluable insights into the interplay between philosophical thought and technological innovation, A Companion to Applied Philosophy of AI:

  • Features contributions from leading philosophers and interdisciplinary experts
  • Offers a unique applied philosophy perspective on artificial intelligence
  • Covers diverse topics including ethics, epistemology, politics, and law
  • Encourages interdisciplinary dialogue to better understand AI's profound implications for humanity

A Companion to Applied Philosophy of AI is ideal for undergraduate and graduate courses in applied philosophy, AI ethics, political theory, and legal philosophy. It is also a vital reference for those working in areas including AI policy, governance, and interdisciplinary research.

Page count: 1276

Publication year: 2025




Table of Contents

Cover

Table of Contents

Series Page

Title Page

Copyright Page

Notes on Contributors

Acknowledgments

Part I: Methodological Foundations

1 Introduction to Applied Philosophy of AI: Foundations, Contexts, and Perspectives

References

Note

2 Philosophy of AI: A Structured Overview


2.1 Topic and Method

2.2 Intelligence

2.3 Computation

2.4 Perception and Action

2.5 Meaning and Representation

2.6 Rational Choice

2.7 Free Will and Creativity

2.8 Consciousness

2.9 Normativity

References

Notes

3 Applied Philosophy of AI as Conceptual Design

References

Notes

Part II: Relevant Areas of Research

Applied Epistemology of AI

4 AI and Knowledge of the Self

4.1 Introduction

4.2 Aspirations for AI

4.3 Implications of AI Use

References

Note

5 AI and the Philosophy of Expertise and Epistemic Authority

5.1 Introduction

5.2 The Definition Problem

5.3 The Identification Problem

5.4 The Deference Problem

5.5 The Transfer Problem

5.6 Conclusion

References

Notes

6 Deep Opacity in AI: A Threat to XAI and Standard Privacy Protection Mechanisms


6.1 Background: Opacity and Data Protection

6.2 Types of Opacity

6.3 Ethical Judgment under Opacity

6.4 Outlook: Data Science under Opacity

6.5 Conclusion

References

Note

7 Explainability in Algorithmic Decision Systems

7.1 Introduction

7.2 The Black Box Problem

7.3 Explainability Skepticism

7.4 The Due Consideration Approach

7.5 Conclusion

References

Notes

8 Epistemology and Politics of AI

8.1 Introduction

8.2 Epistemological Aspects of Machine Learning

8.3 Political Justification

8.4 Epistemic and Political Justification in the Age of Machine Learning

8.5 Conclusion

References

Notes

9 AI and Epistemic Injustice

9.1 Introduction

9.2 Mapping Variations of Epistemic Injustice

9.3 AI Features Relevant for Epistemic Injustice

9.4 Epistemic Injustice In and Through Algorithmic Systems

9.5 Implications for Epistemic Justice and Digitalization

References

Notes

Applied Ethics of AI I: Conceptual Sources

10 Ethical Theories for AI

10.1 Introduction

10.2 Ethical Theories for AI

10.3 Summary

References

Notes

11 Deontology in AI

11.1 Introduction

11.2 Loci of AI Ethics: Two Distinctions

11.3 The Ethical Moment for Rule‐based AI Ethics

11.4 Ethics for Symbolic AI

11.5 Defeasible Duties

11.6 Lessons from Robotics

11.7 Ethics for Subsymbolic AI

11.8 Extrinsic AI Ethics

11.9 Conclusion

References

Notes

12 Consequentialism and AI

12.1 Introduction

12.2 Consequentialism and Utilitarianism

12.3 Direct and Indirect Act‐Consequentialism versus Rule‐Consequentialism

12.4 Consequentialism and AI

References

Further Reading

Notes

13 Virtue Ethics and AI

13.1 Introduction

13.2 Virtue Ethics: A Very Short Introduction

13.3 Conceptions of Virtue in Virtue‐ethical Approaches to AI

13.4 Applied Virtue Ethics of AI

13.5 Concluding Remarks

References

Notes

14 Feminist Ethics and AI

14.1 Feminist Ethics

14.2 Feminist Ethics and Classical Moral Theories

14.3 Basic Feminist Ideas and their Entanglement with AI

14.4 Limitations and Outlook

References

Notes

Applied Ethics of AI II: Fields and Intersections of Application

15 Robots, Wrasse, and the Evolution of Reciprocity

15.1 Introduction

15.2 Social Robots and Reciprocity

15.3 The Evolution of Reciprocity

15.4 Implications for Social Robotics

15.5 Conclusion

References

Notes

16 Ethical Design of Datafication by Principles of Biomedical Ethics

16.1 Introduction

16.2 Datafication

16.3 Ethical Evaluation

16.4 Difficulties of Applicability

16.5 Use of the Principles

16.6 Conclusion

References

Notes

17 Embedding Ethics into Medical AI

17.1 Doing Medicine Means Making Value Judgments

17.2 Three Main Moral Theories

17.3 Implementing Ethical Theories into Medical AI

17.4 A Compromise Solution for Medical AI

17.5 Realizing Normative User Input

17.6 Conclusion

References

Notes

18 Simulating Moral Exemplars

18.1 Introduction

18.2 Implementing Machine Ethics

18.3 Capturing Normative Knowledge

18.4 Machine Learning and Games

18.5 From Excellence in StarCraft II to Moral Excellence?

18.6 Conclusion

References

Notes

19 Trust in AI

19.1 Introduction

19.2 The Philosophy of Trust and AI

19.3 What is Trust?

19.4 Whom to Trust? Three Paradigms – and Trust in AI

19.5 Conclusion

References

Notes

20 Are Large Language Models Embodied?

20.1 Introduction

20.2 The Perceptual Component

20.3 The Interactive‐pragmatic Component

20.4 What is Embodiment: Lessons from Robotics and Cognitive Science

20.5 What is Embodiment? A Phenomenological Approach

20.6 Conclusion

References

Notes

Applied Social, Political, and Legal Philosophy of AI

21 The Social Turn in the Ethics of AI

21.1 Introduction

21.2 Three Waves of AI Ethics?

21.3 Relational Approaches to Just AI

21.4 Deliberation and Structural Injustices

21.5 Deliberation and Relational Justice

21.6 Conclusion: Moving Forward

References

Notes

22 AI, Critical Theory, and the Concept of Progress

22.1 Critical Theory and the Topic of Technology

22.2 The Notion of Progress in Critical Theory

22.3 Critiquing Narratives of Progress in AI

22.4 Reconsidering Technological Progress

22.5 Conclusion

References

Notes

23 Artificial Power

23.1 Introduction: Power as a Topic in Political Philosophy

23.2 Power and AI: Toward a General Conceptual Framework

23.3 Marxism: AI as a Tool for Technocapitalism

23.4 Foucault: How AI Subjects Us and Makes Us into Subjects

23.5 Technoperformances, Power, and AI

23.6 Conclusion and Remaining Questions

References

Note

24 AI and Fundamental Rights

24.1 Introduction

24.2 Moral Status and Moral Rights

24.3 What are Fundamental Rights?

24.4 How can Conflicts Among Fundamental Rights be Reconciled?

24.5 Conclusion

References

Notes

25 Global Governance of AI, Cultural Values, and Human Rights

25.1 Introduction

25.2 Cultural Values and the Two Challenges to Global Governance of AI

25.3 Human Rights Approaches to Global Governance of AI

25.4 Where are Cultural Values in the Human Rights Approaches to AI Governance?

25.5 Conclusion

References

26 Collective Ownership of AI

26.1 Private and Collective Ownership of AI

26.2 Justice‐based Rationales

26.3 Democracy‐based Rationales

26.4 Objections and Replies

26.5 Conclusion

Acknowledgments

References

Notes

27 AI Personhood

27.1 Introduction

27.2 Who or What is a Person?

27.3 AI and Personhood

27.4 Disruptions and Alternatives

References

Note

Part III: The Future of Applied Philosophy of AI

28 The Future of Human Responsibility: AI, Responsibility Gaps, and Asymmetries Between Praise and Blame

28.1 Introduction

28.2 The Notion of Artificial Intelligence and Why It Gives Rise to Worries about Responsibility Gaps

28.3 Praise and Blame and Asymmetries Between Them

28.4 Four Kinds of Responsibility and Four Kinds of Potential Gaps in Responsibility

28.5 Could We Fill Responsibility Gaps by Letting People Volunteer to Take Responsibility for Outputs Created by AI Technologies?

28.6 Praiseworthiness and Blameworthiness for Good and Bad Outcomes Created by/with AI Technologies

28.7 Conclusion

References

Notes

29 Artificial Moral Agents

29.1 Introduction

29.2 Artificial Morality and Machine Ethics

29.3 Types of Moral Agents

29.4 Functional Moral Agency

29.5 Quasi‐intentionality

29.6 Large Language Models as Moral Agents

29.7 Ethical Assessment of AMAs

References

30 AI‐aided Moral Enhancement: Exploring Opportunities and Challenges

30.1 Introduction

30.2 AI‐based Moral Enhancement: General Background

30.3 Types of AIME

30.4 General Prospects for AIME

Acknowledgments

References

Notes

Index

End User License Agreement

List of Tables

Chapter 10

Table 10.1 Matrix for hybrid ethical approaches to AI by type of theory con...

Chapter 30

Table 30.1 Summary of section 3 analysis.

List of Illustrations

Chapter 16

Figure 16.1 Datafication in the context of digitization.


Blackwell Companions to Philosophy

This outstanding student reference series offers a comprehensive and authoritative survey of philosophy as a whole. Written by today's leading philosophers, each volume provides lucid and engaging coverage of the key figures, terms, topics, and problems of the field. Taken together, the volumes provide the ideal basis for course use, representing an unparalleled work of reference for students and specialists alike. For the full list of series titles, please visit wiley.com.

The Blackwell Companion to Philosophy, Second Edition

Edited by Nicholas Bunnin and Eric Tsui‐James

A Companion to Ethics

Edited by Peter Singer

A Companion to Aesthetics, Second Edition

Edited by Stephen Davies, Kathleen Marie Higgins, Robert Hopkins, Robert Stecker, and David E. Cooper

A Companion to Epistemology, Second Edition

Edited by Jonathan Dancy, Ernest Sosa, and Matthias Steup

A Companion to Contemporary Political Philosophy (two‐volume set), Second Edition

Edited by Robert E. Goodin and Philip Pettit

A Companion to Philosophy of Mind

Edited by Samuel Guttenplan

A Companion to Metaphysics, Second Edition

Edited by Jaegwon Kim, Ernest Sosa, and Gary S. Rosenkrantz

A Companion to Philosophy of Law and Legal Theory, Second Edition

Edited by Dennis Patterson

A Companion to Philosophy of Religion, Second Edition

Edited by Charles Taliaferro, Paul Draper, and Philip L. Quinn

A Companion to the Philosophy of Language, Second Edition (two‐volume set)

Edited by Bob Hale and Crispin Wright

A Companion to World Philosophies

Edited by Eliot Deutsch and Ron Bontekoe

A Companion to Continental Philosophy

Edited by Simon Critchley and William Schroeder

A Companion to Feminist Philosophy

Edited by Alison M. Jaggar and Iris Marion Young

A Companion to Cognitive Science

Edited by William Bechtel and George Graham

A Companion to Bioethics, Second Edition

Edited by Helga Kuhse and Peter Singer

A Companion to the Philosophers

Edited by Robert L. Arrington

A Companion to Business Ethics

Edited by Robert E. Frederick

A Companion to the Philosophy of Science

Edited by W. H. Newton‐Smith

A Companion to Environmental Philosophy

Edited by Dale Jamieson

A Companion to Analytic Philosophy

Edited by A. P. Martinich and David Sosa

A Companion to Genethics

Edited by Justine Burley and John Harris

A Companion to Philosophical Logic

Edited by Dale Jacquette

A Companion to Early Modern Philosophy

Edited by Steven Nadler

A Companion to Philosophy in the Middle Ages

Edited by Jorge J. E. Gracia and Timothy B. Noone

A Companion to African‐American Philosophy

Edited by Tommy L. Lott and John P. Pittman

A Companion to Applied Ethics

Edited by R. G. Frey and Christopher Heath Wellman

A Companion to the Philosophy of Education

Edited by Randall Curren

A Companion to African Philosophy

Edited by Kwasi Wiredu

A Companion to Rationalism

Edited by Alan Nelson

A Companion to Pragmatism

Edited by John R. Shook and Joseph Margolis

A Companion to Ancient Philosophy

Edited by Mary Louise Gill and Pierre Pellegrin

A Companion to Phenomenology and Existentialism

Edited by Hubert L. Dreyfus and Mark A. Wrathall

A Companion to the Philosophy of Biology

Edited by Sahotra Sarkar and Anya Plutynski

A Companion to the Philosophy of History and Historiography

Edited by Aviezer Tucker

A Companion to the Philosophy of Technology

Edited by Jan‐Kyrre Berg Olsen, Stig Andur Pedersen, and Vincent F. Hendricks

A Companion to Latin American Philosophy

Edited by Susana Nuccetelli, Ofelia Schutte, and Otávio Bueno

A Companion to the Philosophy of Literature

Edited by Garry L. Hagberg and Walter Jost

A Companion to the Philosophy of Action

Edited by Timothy O'Connor and Constantine Sandis

A Companion to Relativism

Edited by Steven D. Hales

A Companion to Buddhist Philosophy

Edited by Steven M. Emmanuel

A Companion to the Philosophy of Time

Edited by Heather Dyke and Adrian Bardon

A Companion to Experimental Philosophy

Edited by Justin Sytsma and Wesley Buckwalter

A Companion to Applied Philosophy

Edited by Kasper Lippert‐Rasmussen, Kimberley Brownlee, and David Coady

A Companion to Nineteenth‐Century Philosophy

Edited by John Shand

A Companion to Atheism and Philosophy

Edited by Graham Oppy

A Companion to Free Will

Edited by Joe Campbell, Kristin M. Mickelson, and V. Alan White

A Companion to Public Philosophy

Edited by Lee McIntyre, Nancy McHugh, and Ian Olasov

A Companion to Applied Philosophy of AI

Edited by Martin Hähnel and Regina Müller

A Companion to Applied Philosophy of AI

Edited by

Martin Hähnel and Regina Müller

Copyright © 2025 by John Wiley & Sons, Inc. All rights reserved, including rights for text and data mining and training of artificial intelligence technologies or similar technologies.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per‐copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750‐8400, fax (978) 750‐4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748‐6011, fax (201) 748‐6008, or online at http://www.wiley.com/go/permission.

The manufacturer’s authorized representative according to the EU General Product Safety Regulation is Wiley‐VCH GmbH, Boschstr. 12, 69469 Weinheim, Germany, e‐mail: [email protected]

Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762‐2974, outside the United States at (317) 572‐3993 or fax (317) 572‐4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging‐in‐Publication Data

Names: Hähnel, Martin, 1980– editor | Müller, Regina, 1987– editor
Title: A companion to applied philosophy of AI / edited by Martin Hähnel, Regina Müller.
Description: Hoboken, New Jersey : Wiley‐Blackwell, [2025] | Includes index.
Identifiers: LCCN 2025019642 (print) | LCCN 2025019643 (ebook) | ISBN 9781394238620 (hardback) | ISBN 9781394238644 (adobe pdf) | ISBN 9781394238637 (epub)
Subjects: LCSH: Artificial intelligence–Philosophy
Classification: LCC Q334.7 .C655 2025 (print) | LCC Q334.7 (ebook) | DDC 006.301–dc23/eng/20250425
LC record available at https://lccn.loc.gov/2025019642
LC ebook record available at https://lccn.loc.gov/2025019643

Cover Design: Wiley
Cover Image: © DrAfter123/Getty Images

Notes on Contributors

John Basl is an Associate Professor of Philosophy at Northeastern University. He works primarily in moral philosophy and applied ethics with a focus on the ethics of emerging technologies. He leads AI and data ethics initiatives at the Northeastern Ethics Institute.

Kathi Beier is a Postdoctoral Research Fellow at the Department of Philosophy at the University of Bremen, Germany, where she is part of two research projects on AI in medicine and healthcare. She is Co‐editor‐in‐Chief of Zeitschrift für Ethik und Moralphilosophie/Journal for Ethics and Moral Philosophy. Her main research focus is virtue ethics, both old and new. Besides virtue ethics, her publications include books and papers on theories of practical rationality and irrationality, moral psychology and the philosophy of love.

Andrea Berber is a Research Associate at the Institute of Philosophy of the Faculty of Philosophy at the University of Belgrade. Her research interests are the philosophy of artificial intelligence, applied ethics, and applied epistemology. Specifically, she is working on ethical and epistemological issues surrounding the usage of opaque machine learning algorithms in various fields of human practice.

Paula Boddington has held academic posts at the University of Bristol, at the Australian National University, at Cardiff University, and at Oxford University. Much of her work has concerned the application of philosophy to ethical and policy issues. She is the author of AI Ethics: A Textbook (2023), Towards a Code of Ethics for Artificial Intelligence (2017), Ethical Challenges in Genomics Research (2012) and Reading for Study and Research (1999).

Larissa Bolte is a Research Associate and PhD candidate at the Bonn Sustainable AI Lab of the Institute for Science and Ethics at the University of Bonn. She currently works on the intersection of sustainability and technology from a critical theory perspective. Her research interests are in philosophy of technology, critical theory, and AI ethics, here especially sustainable AI.

Oliver Buchholz is a Postdoctoral Research Fellow at the Chair of Bioethics at ETH Zurich and an associate member of the Interchange Forum for Reflecting on Intelligent Systems (IRIS) at University of Stuttgart. He works mainly in epistemology and the philosophy of science, focusing on methodological issues of machine learning systems as well as potential remedies.

Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the Philosophy Department of the University of Vienna. He is also ERA Chair at the Institute of Philosophy of the Czech Academy of Sciences in Prague and Guest Professor at WASP‐HS and the University of Uppsala. Previously he was President of the Society for Philosophy and Technology. His expertise focuses on ethics and technology, in particular robotics and artificial intelligence.

Hugo Cossette‐Lefebvre is a Postdoctoral Researcher at McGill University and the Institute for Data Valorization (IVADO). He completed his PhD at McGill in 2022 and was a visiting researcher at Aarhus University from 2022 to 2024. His research explores three questions: (1) What does it mean to treat and regard others as equals? (2) Why value egalitarian relationships? (3) How do emerging technologies affect socio‐political relations from an ethical standpoint? His work has been published in the Journal of Social Philosophy, Bioethics, Journal of Medical Ethics, AI & Ethics, Public Affairs Quarterly, French Politics, and Options Politiques, among others.

Michael T. Dale is an Assistant Professor of Philosophy at Hampden‐Sydney College. His research explores to what extent empirical findings can have implications for ethics and metaethics. He has written on topics in normative ethics, metaethics, moral psychology, ethics of artificial intelligence, technology ethics, the evolution of morality, neuroscience of ethics, and virtue ethics.

Mirjam Faissner is a Research Associate at the Institute of the History of Medicine and Ethics in Medicine at Charité‐Universitätsklinikum Berlin, Germany. Trained in philosophy and medicine, she works on questions of structural and epistemic injustice in the context of health and healthcare, with a special focus on the situation of marginalized social groups. Her research areas include feminist bioethics, ethics in psychiatry, and social epistemology.

Luciano Floridi is the Founding Director of the Digital Ethics Center at Yale University, where he is also a Professor in the Cognitive Science Program. Outside Yale, he is a part‐time Professor of Sociology of Culture and Communication at the University of Bologna. His research concerns primarily digital ethics, the ethics of AI, the philosophy of information, and the philosophy of technology. Further research interests include epistemology, philosophy of logic, and the history and philosophy of skepticism.

Markus Furendal is a Researcher and Teacher at the Department of Political Science at Stockholm University and the Institute for Futures Studies. His research interests lie at the intersection of politics, economics, and philosophy and focus on the global governance of artificial intelligence, automated decision making in the public sector, and the future of work.

John‐Stewart Gordon is Chief researcher (full professor equivalent) at Kaunas University of Technology, an Associated Member of the IZEW at the University of Tübingen, Associate Fellow at the Academy of International Affairs NRW, and Permanent Visiting Professor at Vytautas Magnus University. John is a member of several editorial boards, including Bioethics and AI & Society, and acts as the general editor for the Philosophy and Human Rights book series at de Gruyter Brill. He has published over 100 works on practical philosophy through esteemed international publishers and leading journals.

David G. Grant is Assistant Professor in the Department of Philosophy at the University of Florida. He works mainly in applied ethics (especially ethics of AI) and philosophy of science (especially philosophy of AI). His research focuses on concerns about fairness and transparency that arise when institutions use artificial intelligence to make high‐stakes decisions.

David J. Gunkel is a Presidential Research, Scholarship and Artistry Professor in the Department of Communication at Northern Illinois University and Associate Professor of Applied Ethics at Łazarski University in Warsaw. He has been teaching and writing on several concepts in philosophy of technology with a focus on the moral and legal challenges of artificial intelligence and robots. His books include Handbook on the Ethics of AI (2024), Person, Thing, Robot (2023), An Introduction to Communication and Artificial Intelligence (2020), Robot Rights (2018), and The Machine Question (2012).

Martin Hähnel is Lecturer and Postdoctoral Researcher at the Universities of Bremen and Augsburg with expertise in applied philosophy, normative ethics, and the history of philosophy. He is currently coordinating the interdisciplinary research project "Dealing responsibly with AI‐assisted systems in medicine," funded by the German Federal Ministry of Education and Research. At the Augsburg Institute of Ethics and History of Health in Society, Hähnel is investigating the ethical implications of the development and use of digital tools in psychiatric healthcare. For years, he has been trying to introduce the approach of Neo‐Aristotelian ethical naturalism (Aristotelian Naturalism – A Research Companion [2020]) to various fields of applied ethics, especially bioethics.

Rico Hauswald is a Privatdozent at the Institute of Philosophy at TU Dresden and a co‐project leader in the interdisciplinary research project "Dealing responsibly with AI‐assisted systems in medicine," funded by the German Federal Ministry of Education and Research. His research focuses on the philosophy of science, epistemology, and medical theory, among other areas.

Marten H.L. Kaas is a Postdoctoral Researcher at the Charité‐Universitätsmedizin Berlin working as a member of the Science of Intelligence Excellence Research Cluster. He is interested in the effect of technology and science on society, with a particular focus on the ethics of artificial intelligence. His areas of expertise are philosophy of mind, philosophy of artificial intelligence, ethics, metaphysics, and philosophy of science.

Antonia Kempkens is a PhD candidate in the Department of Philosophy at the University of Bremen. Her research interests include digital ethics and philosophy of AI, focusing on the ethical issues of privacy and transparency. In her dissertation, she discusses how the digitalisation of public administration in Germany can be designed ethically.

Janne Lenk is a Student Assistant for the book project A Companion to Applied Philosophy of AI at the Institute for Philosophy at the University of Bremen and is studying for a Master's degree in philosophy at Carl von Ossietzky University, Oldenburg. Their areas of interest are feminist and queer theories and (in)justices.

Lukas J. Meier is a Fellow at the Harvard Center for Ethics, with main interests in neurophilosophy, artificial intelligence, medical ethics, and philosophy of mind. Previously, he was a Junior Research Fellow at Churchill College, University of Cambridge, and a Technology and Human Rights Fellow at the John F. Kennedy School of Government. Lukas has written on brain death and coma, artificial intelligence for clinical application, triage, and different topics at the intersection of philosophy and neuroscience.

Catrin Misselhorn is a Professor of Philosophy at the Georg‐August University of Göttingen. Her research focuses on philosophical problems of AI, robot and machine ethics, integrative philosophy of science, art, and technology. She is the author of Artificial Intelligence – the End of Art? (2023), Artificial Intelligence and Empathy. Living with Emotion Recognition, Sex Robots & Co (2024), and Basic Issues in Machine Ethics (2022) and co‐editor of Emotional Machines: Perspectives from Affective Computing and Emotional Human–Machine Interaction (2023).

Vincent C. Müller is an Alexander von Humboldt Professor for Ethics and Philosophy of AI at the Universität Erlangen‐Nürnberg (FAU), editor of Philosophy of AI, president of the European Association for Cognitive Systems, and chair of the euRobotics topics group on “ethical, legal and socio‐economic issues” (ELS). He is the Director of the Centre for Philosophy and AI Research at FAU. He has written and edited extensively on the philosophy and ethics of AI.

Regina Müller is a Research Associate at the Institute for Philosophy at the University of Bremen. Her research focuses on ethical and social aspects of technological advancement, particularly in medicine and healthcare. Additionally, she is interested in theories of (in)justice and their intersections with digital developments. She is the author of articles on medical ethics, digital ethics, and (in)justice and leads the network “Bioethics and Structural Injustice.”

Sven Nyholm is a Professor of Ethics of Artificial Intelligence at the Ludwig‐Maximilians‐Universität München and Principal Investigator of AI Ethics at the Munich Center for Machine Learning. His research and teaching encompass applied ethics (particularly ethics of artificial intelligence), practical philosophy, and philosophy of technology. His books include This is Technology Ethics: An Introduction (2023) and Humans and Robots: Ethics, Agency, and Anthropomorphism (2020).

Thomas M. Powers is an Associate Professor in the Department of Philosophy and Director of the Center for Science, Ethics and Public Policy at the University of Delaware. His interests lie in the ethical, social, legal, and political impacts of emerging technologies. He has published extensively in the areas of the ethics of information technology, especially AI and machine ethics, and has contributed to scholarship in philosophy and engineering. He is the editor of Philosophy and Computing: Essays in Epistemology, Philosophy of Mind, Logic, and Ethics (2017).

Karoline Reinhardt is a Junior Professor for Applied Ethics at the University of Passau. She has written on several central concepts in normative and applied ethics, especially on trust and trustworthiness in AI ethics, as well as on migration ethics, political philosophy, Immanuel Kant, and John Rawls.

Abootaleb Safdari is a Postdoc Researcher at the Institute for Philosophy at the University of Bremen, working at the intersection of philosophy of mind and AI/robotics. Drawing on the phenomenological tradition, he publishes on the ethics and philosophy of technology and the philosophy of cognitive science, with a special focus on empathy.

Jörg Schroth is a Research Associate at the Department of Philosophy at the Georg‐August University of Göttingen. His research and teaching focus on practical philosophy, especially on deontology and consequentialism. He is the editor of Texte zum Utilitarismus (2016) and author of Konsequentialismus. Einführung (2022).

Rosalie Waelen is a Postdoctoral Researcher at the Sustainable AI Lab at the Institute for Science and Ethics at the University of Bonn. She is interested in applied ethics, philosophy of technology, and critical theory and tries to connect insights from these different fields in her work on AI.

Pak‐Hang Wong is an Assistant Professor at the Academy of Chinese, History, Religion and Philosophy, Faculty of Arts and Social Sciences, and a Research Fellow at the Centre for Applied Ethics at Hong Kong Baptist University. He works primarily in the philosophy and ethics of technology, particularly with an inter‐ and cross‐cultural approach.

Acknowledgments

The realization of this companion has been a collaborative effort, and we wish to express our gratitude to everyone who contributed to its success. First, we want to thank all the authors, whose great expertise and contributions form the heart of this companion. We also want to acknowledge the invaluable input of our peer reviewers. Your thoughtful and critical evaluations have elevated the quality of this companion: we are very grateful for your feedback. Special thanks go to our publisher Wiley, in particular to Will Croft, Sarah Milton, and Pascal Raj Francois, whose professional guidance and support have been integral to bringing this project to fruition. Additionally, we wish to express our appreciation to the University of Bremen, in particular the support by the “Impulse Förderung” and our assistant Janne Lenk, and the VUKIM Project sponsored by the German Federal Ministry of Education and Research. Finally, we want to express our thanks to the Institute of Philosophy at the University of Bremen, whose intellectual environment provided the foundation for this endeavor.

Part I Methodological Foundations

1 Introduction to Applied Philosophy of AI: Foundations, Contexts, and Perspectives

MARTIN HÄHNEL AND REGINA MÜLLER

The increasing integration of artificial intelligence (AI) into our social and everyday practices highlights the necessity of an “applied” philosophy of AI. It raises complex and fascinating questions, including the moral and legal status of AI, the relationship between humans and machines, the status of knowledge, and issues of responsibility. These questions are relevant across all societal domains, for example, political decision making, medicine and healthcare, transportation, education, and legal systems. The far‐reaching impacts of AI‐based technologies on individuals and communities underscore the need for careful philosophical consideration. Additionally, interdisciplinary collaboration is crucial in order to address the challenges posed and opportunities offered by AI in an effective, comprehensive, and responsible way. This book therefore sheds light on the topic of AI from a philosophical perspective in the practical areas of epistemology, ethics, politics, and law. This multi‐perspective approach is more necessary than ever, since the topic of AI can be adequately understood neither by one discipline alone nor selectively with regard to individual areas of its application.

The rapidly changing epistemological, ethical, political, and legal conditions have an enormous impact on the responsible implementation of AI. They form the basis for developing a contemporary and responsible approach to emerging AI technologies. Questions about AI cut across disciplinary boundaries, and any future approach to AI, above all a normative‐ethical one, can only succeed if it takes into account the multi‐perspective approach of an applied philosophy of AI presented in this companion.

The approach of an applied philosophy does not sacrifice basic philosophical reflection to a rash prioritization of practice. Applied philosophy sparks critical impulses and transfers them into a coherent approach, with which it is possible to analyze current technological developments in a realistic and problem‐conscious way. This companion is methodologically oriented toward an understanding of applied philosophy developed and promoted by Borchers (2014) and Lippert‐Rasmussen (2016). Lippert‐Rasmussen posits that applied philosophy is distinguished by its consideration of seven general theoretical and practical prerequisites for a thorough analysis of problems and methods within the scope of examining a subject or issue: activism, audience focus on nonphilosophy, empirical facticity, practicality, reflectivity, relevance, and specificity.1 Although Lippert‐Rasmussen makes important distinctions by introducing illuminating concepts in order to specify what applied philosophy is or ought to be, it remains unclear how these concepts relate to each other. Ultimately, any definition of applied philosophy in the form of a list is neither distinctive nor procedural enough. Although Lippert‐Rasmussen acknowledges reflectivity, he does not show how reflectivity could connect the different models, and unfortunately he does not use his list of items to discuss a specific subject of investigation. It would be interesting to see how this structure could be applied to the issue of AI.

Borchers (2014) contributes another approach to applied philosophy. Her approach is more procedural and tries to explain the connections between different perspectives in more detail. Unlike Lippert‐Rasmussen, Borchers reduces her approach to four main perspectives: (i) Heterogeneity, which means that the field of applied philosophy is internally diverse. Internal heterogeneity covers not only the methods used but also the self‐image with which research is conducted, the orientation of research interests, and the topics and questions addressed. (ii) Interactivity, which means that there are complex, close interrelationships between applied research and basic philosophical research. Both areas benefit greatly from each other; without this close connection to basic research, a scientifically reliable and fruitful applied philosophy is not possible. (iii) Reflectivity, which refers to the need to foster a perspectival meta‐discourse on the field's methods and self‐understanding, among other things in order to develop a meta‐theory of the application of philosophical thought. (iv) Operational independence, which means that applied philosophy is not merely an “application” of theories, principles, etc. from basic research but an independent field of research that develops its own methods, theories, and concepts anew in dealing with its own questions.

In contrast to Lippert‐Rasmussen, Borchers provides a kind of programme that gives instructions on how to pursue applied philosophy. In this way, applied philosophy attempts to answer the relevant questions that come from outside using methods developed specifically for this purpose. Borchers views applied philosophy as positively Janus‐faced: one side focuses inwards to develop methods, while the other side engages with the public, making topics accessible to a nonphilosophical audience and collaboratively discussing ethical issues and solutions.

However, our particular subject of investigation, artificial intelligence, requires some modifications and additions to an approach based on the insights of Lippert‐Rasmussen and Borchers. Our approach must not take AI as a factum brutum but must ask, at the level of fundamental reflection, what form of technology AI is; whether it can be used globally, across application boundaries and contexts, or whether it has the character of a purpose‐bound and locally limited tool; what status this technology has in relation to humans; and to what extent this technology is particularly epistemic in nature. This shows that our approach to developing methods draws on other philosophical subdisciplines and promotes networking between them. The same applies beyond philosophy, where an applied philosophy of AI must ensure that concise philosophical conceptual analysis and the concrete, often nonphilosophical application perspective remain in dialogue with each other. It aims to comprehend the impact of AI systems on human existence, societal structures, and our perceptions of fundamental philosophical notions such as intelligence, consciousness, and moral responsibility. This is especially pertinent due to the intricate and unintended effects of AI on human rights, personhood, and our concept of the self. The cross‐disciplinary field of AI requires the collective efforts of philosophers, computer scientists, ethicists, social scientists, and policy makers, among others, to tackle the intricate issues and challenges that arise from AI's evolution and integration.

Conducting applied philosophy of AI is also essential for developing tailored ethical frameworks, such as ethics‐by‐design and embedded ethics. It allows for a meta‐discourse that is crucial for contemplating and scrutinizing the philosophical underpinnings of AI discussions. As the demand for applied philosophy increases, so does the necessity to evolve new philosophical methodologies alongside traditional ones. Gimmler et al. (2023) put it this way: “Applied philosophy is a branch of philosophy that applies traditional philosophical concepts, theories and methods to problems originating from situations that arise in practices outside academia itself. This, we believe, is a good starting point for thinking about what applied philosophy is and what it might become. The future of applied philosophy might also include developing new philosophical theories and even methods; a case in point would be empirical philosophy” (ibid., 108).

Applied philosophy effectively reconnects with empirical and nonphilosophical facts while also tackling key questions and challenges related to AI. This approach equips philosophers with knowledge of nonphilosophical methods (such as empirical and political ones) and nonphilosophers with philosophical insights. Such an exchange of expertise has become increasingly crucial, especially in the field of AI, involving human experts, artificial systems, and the general public. By adopting the approach of an applied philosophy of AI, the companion responds to the growing need for the normative‐ethical discussion of AI not to be conducted from an ethical perspective alone but to provide a realistic picture of the general value of AI through the interplay of the approaches gathered here. Consequently, in accordance with the approach of an applied philosophy, the book is directed not only at (applied) philosophers and students of (applied) philosophy but also at practitioners and all stakeholders already working with AI, or planning to do so, who are interested in its epistemological, ethical, political, and legal aspects.

The individual chapters in this companion are, for the most part, basic philosophical contributions that emphasize AI's practical relevance. Some are introductory in character; others go into greater detail. Taken together, the chapters represent the different discourses that AI penetrates and the connections between disciplines that each try to understand and analyze AI issues in their own way. In addition to discussions of individual fields of AI, the basic methodology and central concepts are presented at the beginning. In the main section, topics that have so far been underexposed are discussed and problematized from the perspectives of applied epistemology, ethics, and legal and political philosophy. The final chapter ventures an outlook on future challenges and problem areas of an applied philosophy of AI.

The overview article by Vincent C. Müller opens the series of contributions. His contribution presents the main topics, arguments, and positions in the philosophy of AI at present (excluding ethics). Beyond the basic concepts of intelligence and computation, Müller suggests that the main topics of artificial cognition are perception, action, meaning, rational choice, free will, consciousness, and normativity. Through a better understanding of these topics, the philosophy of AI contributes to our understanding of the nature, prospects, and value of AI. Conversely, these topics can be understood more deeply through the discussion of AI, and Müller sketches how what we call “AI philosophy” offers a new method for an applied philosophy of AI.

Luciano Floridi argues that current ways of thinking about society, economics, law, and politics are based on an outdated “Ur‐philosophy” rooted in Aristotelian and Newtonian concepts. This paradigm views society as composed of individual units (people or legal entities) that interact in absolute time and space, focusing on actions as the key drivers of social change. The chapter contends that this framework, while historically useful, is no longer adequate for understanding and addressing the challenges of mature information societies. Instead, Floridi proposes a shift toward a relational paradigm, drawing inspiration from developments in mathematics and physics. This new approach to conceptualizing an applied philosophy of AI emphasizes relationships rather than individual entities as the fundamental building blocks of society. It conceptualizes society as a network of interconnected relations rather than a mechanical assembly of discrete parts. This shift allows for a more flexible, inclusive, and comprehensive analysis of social phenomena, encompassing people, institutions, artifacts, and nature. The transition is necessary to address complex contemporary issues that defy simple, intuitive explanations. The relational approach implies a reconceptualization of political space and time. Political space becomes defined by social relations rather than geographical boundaries, while political time is understood in terms of the temporality of relations rather than absolute chronology.

This methodology and research programme, the development of which has not yet been completed, gives rise to numerous research‐relevant and application‐oriented questions with major philosophical and ethical implications. Hence, it is widely recognized that technologies, AI included, both bear traces of ourselves as humans and, in turn, influence us in various ways. Paula Boddington analyzes how AI impacts how we might gain knowledge of and experience the self, asking how AI in its different forms, and also in how we imagine it, might influence how we even conceptualize “the self.” We need to investigate both technology and the self, in both imagined and concrete contexts, noting that conceptions of the self have developed over time, are complex and have multiple ramifications for issues such as agency, self‐mastery, and the boundaries between individuals and the world. First, Boddington explores these issues by examining how imagined uses of AI draw upon and mold certain conceptions of the self. Second, she examines how concrete applications of AI impact our understanding of the self and agency, drawing upon work in the social sciences regarding the “digital self.” Lastly, she briefly illustrates the complex impacts of AI on how we imagine and experience the self by considering possible uses of AI technology in relation to dementia.

AI systems are increasingly being integrated into our system of epistemic division of labor. This will change the traditional relationship between experts and laypeople in several ways. It is no longer uncommon for human experts to be outperformed by AI systems in certain tasks, and AI is increasingly being used to assist experts in their work or even to replace them altogether. Rico Hauswald discusses key issues that have been explored by philosophers working on expertise and epistemic authority, focusing on the definition problem, the identification problem, the deference problem, and the transfer problem, and applies these considerations to AI‐based systems that are used for purposes previously reserved for human experts and epistemic authorities. Hauswald debates what changes will result from the emergence of AI and its integration into our system of epistemic division of labor and shows that reference to the philosophy of expertise and epistemic authority can provide valuable insights that can enhance our understanding of our relationship with epistemic AI systems.

Vincent C. Müller proposes that the “black box” becomes the “black box problem” in a context of justification for judgments and actions, crucially in the context of privacy. He suggests distinguishing between two kinds of classic opacity and introducing a third: the subjects may not know what the system does (“shallow opacity”), the analysts may not know what the system does (“standard black box opacity”), or even the analysts cannot possibly know what the system might do (“deep opacity”). If the agents, data subjects as well as analytics experts, operate under opacity, then they cannot provide some of the justifications for judgments that are necessary to protect privacy, e.g., they cannot give “informed consent” or assert “anonymity.” It follows that agents in big data analytics and AI often cannot make the judgments needed to protect privacy. So big data analytics makes the privacy problems worse and the remedies less effective. To close, Müller provides a brief outlook on technical ways to handle this situation.

Much has been made of the opacity of certain AI‐based decision systems. Many have argued that in high‐stakes decision contexts, a failure to interpret, explain, or justify the outputs of such systems amounts to a failure of our obligations to those over whom we deploy them. These obligations are typically understood as obligations to provide information to decision subjects (or their proxies) so they may assess whether they have been treated appropriately. Concerns about black box systems have motivated work on so‐called “explainable AI,” tools and techniques to render black boxes transparent. At the same time, these concerns have been met with skepticism about both the meaning and value of explainability, especially given the opaque nature of much human decision making. In their contribution, John Basl and David G. Grant summarize the current state of the debate between explainability proponents and skeptics. The authors then articulate an alternative basis for explainability, appealing to duties of consideration – duties decision makers have to ensure that they are reasoning about decision subjects appropriately. Basl and Grant explain how this alternative approach helps address explainability skepticism and orients our thinking about how decision makers ought to integrate AI‐based tools into their decision‐making processes.

In their contribution, Oliver Buchholz and Karoline Reinhardt discuss the complex relationship between the epistemic and political dimensions of AI. Their specific question is how political decisions based on machine learning systems could be justified. Although AI is increasingly used in political fields, Buchholz and Reinhardt emphasize the different underlying rationales of politics and AI: politics is about making decisions, whereas machine learning is about making predictions. They reveal the ethical, epistemic, and methodological challenges of using machine learning‐based systems for political decisions. For example, because of the opacity of their processes, these systems often fail to meet certain requirements for public justification, which calls their use in political decision making into question. Buchholz and Reinhardt conclude that while machine learning systems might be epistemically justified in some contexts, their deployment in politically sensitive areas requires cautious and context‐sensitive consideration.

Mirjam Faissner, Janne Lenk, and Regina Müller understand AI‐based systems primarily as epistemic tools due to their ability to analyze vast amounts of data, identify patterns, and generate insights that contribute to knowledge acquisition and decision‐making processes in various social settings. The authors raise concerns regarding the integration of AI into epistemic practices, especially regarding injustices, and draw on research theorizing injustice within AI‐based epistemic systems and epistemic practices. They provide an overview of epistemic injustice and its variations in the context of AI. They then describe three forms of epistemic injustice in more detail (testimonial, hermeneutical, and contributory injustice) and highlight the characteristics of AI relevant to epistemic injustice: training data, the use of categories and systems of classification, opacity, and epistemic fragmentation. Various examples, such as AI‐based health apps, algorithmic profiling, and automatic gender recognition, illustrate how the forms of epistemic injustice they describe manifest in and through algorithmic systems.

The move from the epistemological to the ethical is marked by a contribution from Martin Hähnel and Regina Müller, who systematize ethical theories for AI, distinguishing between first‐order theories and second‐order approaches. First‐order theories, such as consequentialism, deontology, and virtue ethics, are complex moral conceptualizations that require significant adaptation for their practical application. Second‐order approaches, which are influenced by the normative demands of specific contexts, integrate elements of first‐order theories but are not reducible to them. These include regulations and guidelines (hard law, soft law, self‐regulation) and value‐sensitive design approaches (embedded ethics, ethics‐by‐design, community‐led ethics). Hähnel and Müller emphasize that no single theory can address all ethical questions in the context of AI. The authors also evaluate the combination of these models, discussing their relevance and effectiveness in resolving AI's ethical challenges.

The next three contributions are devoted to the classical first‐order theories of ethics: deontology, consequentialism, and virtue ethics. Thomas M. Powers explores duty‐based ethics in intelligent systems, integrating rules into symbolic and subsymbolic AI. Symbolic AI applies formal logic, while subsymbolic AI refines ethical behavior through data‐driven learning. Deontological principles can be intrinsic (built into AI) or extrinsic (externally regulated). Challenges include ensuring fairness and privacy and mitigating biases, particularly as commercial AI prioritizes profit. Combining both AI approaches may yield optimal ethical frameworks, but urgent discourse is needed to balance ethical AI development with real‐world constraints.

Jörg Schroth's contribution explores the applicability and challenges of consequentialism, particularly utilitarianism, as an ethical framework for AI. Consequentialism's focus on the outcomes of actions aligns with AI's potential to evaluate and implement complex decision‐making processes aimed at optimizing welfare. However, his analysis highlights significant concerns with the theory's “negative dimension,” which permits instrumental harm to achieve optimal outcomes, creating tension with conventional morality. Schroth examines the distinction between direct and indirect act‐consequentialism, emphasizing how AI might overcome human cognitive and emotional limitations in applying direct consequentialist principles. Yet, ethical dilemmas, such as balancing welfare with values like dignity, justice, and autonomy, pose challenges to implementing utilitarianism in machine ethics. Schroth argues that no single ethical theory, including consequentialism, can serve as a universally accepted framework for AI, given its polarizing nature and the intrinsic controversies surrounding its principles. Consequently, while consequentialism offers valuable insights, its suitability for guiding AI ethics remains limited, requiring integration with other ethical considerations to address the diverse range of moral questions posed by AI systems.

After deontology and consequentialism, virtue ethics comes under critical scrutiny. Kathi Beier explores the connection between virtue ethics and AI, highlighting the diversity of virtue conceptions and current virtue ethical approaches to AI. She provides an overview of the key virtue concepts discussed in contemporary ethical debates on AI, tracing them from ancient perspectives to modern discussions. She then focuses on one specific virtue, honesty, and examines its application to both humans and AI. Finally, she offers some reflections on the potential of applied virtue ethics for AI, whereby her primary goal is to illustrate the diversity of these approaches.

In contrast to the classical ethical theories, feminist approaches to AI are relatively new. Nevertheless, in recent decades feminist approaches have emerged as a leading subfield in the scholarly examination of ethical issues regarding AI. This subfield is informed by a wide range of theoretical and methodological approaches and enriched by scholars from different disciplines. As Regina Müller points out, the connection between feminist ethics and AI lies at the intersection of social justice and AI‐based systems. Müller emphasizes the importance of feminist thinking on developments in the context of AI, as feminist scholars take a critical look at the social, cultural, and political conditions and effects of such systems. She introduces basic feminist ideas about moral agency and epistemology, along with some characteristic features shared by feminist theories, such as intersectionality and context sensitivity, and contextualizes these ideas within AI. In this way, she shows the relevance of feminist approaches in the context of AI, their connections to AI‐based systems, and their specific contributions to the surrounding ethical debates.

As social robots are playing an increasingly significant role in human daily life, Michael T. Dale discusses how to design them in such a way that humans will respond to them positively and accept them socially. In particular, he considers to what extent reciprocity might be important in human–robot interaction and whether it should be included as a design feature in social robots. Dale uses the lens of evolutionary biology, a perspective that has remained largely unexplored in the robotics literature. He examines what we already know about the evolution of reciprocity in humans and discusses to what extent this knowledge can weigh in on discussions about social robots. He argues that the evolutionary account of reciprocity can be used as a design feature in social robots and claims that social robots should be capable of both direct and indirect reciprocity if we want them to be socially accepted by humans. As we get closer to developing robots with social capacities at or near the levels of adult humans, Dale claims that models of reciprocity will play an increasingly significant role in the research literature and that robot designers should take reciprocity into account if they want to most effectively enhance human–robot relations.

Antonia Kempkens looks critically at the phenomenon of datafication. Datafication turns individuals into data sources and prevents them from controlling their data. This loss of control seems ethically questionable to Kempkens. Consequently, she argues that ethical considerations should be integrated into the design of datafication. Kempkens chooses Beauchamp and Childress's principles of biomedical ethics, although it is critically discussed whether these bioethical principles are transferable to the digital sector. Kempkens considers the principles an important approach in digital ethics because they are compatible with all three predominantly used moral theories. In addition, as Kempkens shows, there is a lack of alternative ethical methods suitable for evaluating datafication. Despite the difficulties of their application, Kempkens uses the biomedical principles to identify ethical issues in datafication and argues that an ethical design of datafication should realize the principles of respect for autonomy, beneficence, nonmaleficence, and justice. Kempkens highlights that this realization depends on informing and empowering data sources, showing that, despite their problems, the principles can be helpful in ethically designing datafication.

Machine intelligence is also playing a more and more important role in medicine and the healthcare sector. Since medicine is intimately intertwined with value judgments, Lukas J. Meier emphasizes that algorithms will come into contact with normative aspects, and if we want to prevent an algorithmic form of medical paternalism, we will need to integrate ethics into medical AI. Meier argues for two steps that are crucial to this effort: first, implementing a moral theory that forms the frame of the ethical calculations, and second, equipping algorithms with input variables that are adequate for conveying context‐based value judgments and preferences to the machine. In that respect, he introduces the main moral theories – consequentialism, deontology, and virtue ethics – and assesses how well each of them could be integrated into healthcare algorithms. In the end, Meier suggests a compromise solution based on principlism and describes how medical AI could be designed to take into account the preferences of various stakeholders in healthcare.

There is a growing need to ensure that autonomous artificially intelligent systems are capable of behaving ethically. Marten H. L. Kaas argues that virtue ethics, but in particular the normative theory of aretaic exemplarism, can play a central role in cultivating the ethical behavior of machines. When coupled with the value inherent in and commonplace practice of training AI systems using simulated environments, it may be possible to raise ethical machines by training them to imitate simulated exemplars of moral excellence, like a digital Jesus or a virtual Confucius. This bottom‐up approach to implementing machine ethics has advantages over top‐down approaches and is similarly not beset by some of the challenges that arise when attempting to use real people as the imitable training set. In short, machines may be able to imitate simulated moral exemplars and thereby exhibit virtuous behavior themselves.

AI also poses unique challenges to traditional theories of trust. Karoline Reinhardt