This book presents a study carried out by the IEEE-SMC French chapter on ethics and digital transformation in industry and society. Based on a survey of researchers in ICT and artificial intelligence (AI), as well as on presentation seminars, this study examines the various aspects that should be considered when assessing ethical principles in approaches to the digital transition, particularly with regard to intelligent systems.
Considering this, Ethics and Digital Transition presents the main technologies and uses of intelligent systems. Bringing together specialists from various fields, it explores the different dimensions of ethics that should be considered in the development of these systems, from the engineering sciences to law, sociology and philosophy. It also looks at the future challenges of ethics in the digital transition.
Cover
Title Page
Copyright Page
Foreword
Introduction. Ethics and Digital Transition: Challenges and Investigations
I.1. The digital transition and its challenges
I.2. Ethical principles
I.3. Ethics and artificial intelligence
I.4. Research surveys in France
I.5. The book’s layout
I.6. Acknowledgments
I.7. References
1 Digital Ethics: Empowering Agents and Taking Care of Systems
1.1. Introduction
1.2. Technology and ethical neutrality
1.3. What are the ethics of technology?
1.4. Calculation, algorithm
1.5. The ethical challenges of digital technology
1.6. Conclusion
1.7. References
2 Bias, Discrimination and Decision-Making: Fate or Responsibility?
2.1. Why this question?
2.2. Bias, history and typology
2.3. Types of bias
2.4. Bias and fairness – a measure of ethics?
2.5. “We are open, the door is just very heavy”
2.6. Biases: fatality or responsibility? Fatality and responsibility…
2.7. Debiasing machines
2.8. References
3 Digital Technology and Artificial Intelligence: How Can We Facilitate the Ethical Control of their Use?
3.1. Introduction1
3.2. Expected properties of an ethics-oriented DTD-AI
3.3. Illustrations of DTDs integrating activity reflexivity for individual and collective documentation purposes
3.4. Conclusion
3.5. References
4 Ethical Autonomous Agents: Literature Review and Illustration for Markov Decision Processes
4.1. Introduction
4.2. Issues specific to the integration of ethics
4.3. Ethical autonomous agents: state of the art
4.4. Ethical Markov decision processes
4.5. Establishing ethical principles in E-MDPs
4.6. Conclusion
4.7. References
5 Ethics and Ecology in Production Systems
5.1. Introduction
5.2. Ecological context
5.3. Integrating ethical issues into an industry in ecological transition
5.4. Proposed areas of work
5.5. Conclusion
5.6. Acknowledgments
5.7. References
6 Operational Ethics in Industrial Systems of the Future: Methodological Elements
6.1. Introduction
6.2. Ethics: definition, typologies and paradigms
6.3. Performance management for 4.0 industrial systems and its ethical risks
6.4. Toward the operational integration of ethics in the 4.0 industrial systems
6.5. Industrial testimony
6.6. Conclusion
6.7. Acknowledgements
6.8. References
7 AI for Industry: Transforming the Daily Lives of Maintenance Operators
7.1. Genesis of this innovation
7.2. Background of this innovation
7.3. The naysayers
7.4. Dealing with hazards
7.5. A decisive demonstration
7.6. The keys to success
7.7. Overcoming obstacles
7.8. Outlook
7.9. Acknowledgments
7.10. References
Conclusion
C.1. From ethics to responsibility
C.2. Principles and challenges behind responsible AI
C.3. Responsible AI: a threat or an opportunity?
C.4. Responsible AI challenges
C.5. An ecosystem for responsible AI
C.6. References
List of Authors
Index
Other titles from iSTE in Information Systems, Web and Pervasive Computing
End User License Agreement
Introduction
Figure I.1 The digital transition of organizations (Vial 2021)
Figure I.2 Main artificial intelligence techniques.
Figure I.3 Examples of AI application fields.
Figure I.4 Ethical dimensions to consider in AI.
Figure I.5 Recommendations for ethical machine learning.
Figure I.6 Trusted organizations.
Figure I.7 Disciplines involved in the ethical validation of digital systems
Chapter 2
Figure 2.1 Risk assessment.
Figure 2.2 The facts.
Chapter 3
Figure 3.1 Procogec trace management interface.
Chapter 4
Figure 4.1 Components of ethical decision-making, adapted from...
Figure 4.2 Ethical governor architecture from Arkin et al. (2009)
Figure 4.3 An example of a value-based argumentation framework from...
Figure 4.4 Prospectic logic architecture from Saptawijaya and Pereir...
Figure 4.5 EJP architecture from Cointe et al. (2016).
Figure 4.6 ACE architecture, adapted from Sarmiento et al. (2023)
Figure 4.7 General architecture for ethical learning from Chaput et...
Figure 4.8 The figure shows three policies: red, green and blue.
Figure 4.9 An example of a DCT framework. Consider the ethical...
Figure 4.10 An example of a PFD framework. Consider a context C...
Figure 4.11 An example of an MGT for a VE frame. Consider the con...
Figure 4.12 An example of an MBT for a VE frame.
Chapter 5
Figure 5.1 Planetary limits (Richardson et al. 2023).
Figure 5.2 Sharing safe space for humanity taken from (Hjalsted et...
Figure 5.3 Processes and stages in the standard (adapted from IEEE...
Figure 5.4 Value concepts (IEEE 7000-2021).
Figure 5.5 Illustration of the basic needs matrix from Clair (2017).
Chapter 6
Figure 6.1 Ethical risks associated with the use of modern techno...
Figure 6.2 Basic concepts of ethics
Figure 6.3 The performance triangle (Gilbert et al. 1980)
Figure 6.4 The performance tetrahedron.
Figure 6.5 Cross-section analysis
Figure 6.6 Operational integration of ethics through a “risk...
Figure 6.7 Example of a methodological guide to managing ethical ris...
Figure 6.8 The principles of ethical performance management
Figure 6.9 Integrating ethics into performance management (the case...
Figure 6.10 The dashboard for ethical performance management
Chapter 7
Figure 7.1 Stand demonstration.
Chapter 3
Table 3.1 Table summarizing the implicit and explicit positions of...
Chapter 6
Table 6.1 Illustrations of some ethical paradigms (inspired by Mec...
Table 6.2 Examples of ethical risks associated with the use of mod...
Table 6.3 Examples of ethical and performance mapping within the te...
Table 6.4 Examples of benefits and risks associated with the imple...
Table 6.5 Correspondence (ethics, performance) and examples of act...
Edited by
Marie-Hélène Abel
Nada Matta
Hedi Karray
Inès Saad
First published 2025 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd, 27-37 St George’s Road, London SW19 4EU, UK, www.iste.co.uk
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA, www.wiley.com
© ISTE Ltd 2025
The rights of Marie-Hélène Abel, Nada Matta, Hedi Karray and Inès Saad to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s), contributor(s) or editor(s) and do not necessarily reflect the views of ISTE Group.
Library of Congress Control Number: 2025930136
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-78630-957-0
Ethics is a branch of philosophy that determines the rules to which we should all adhere with regard to our behavior. What should we do? How should we behave in society? What maxims should we adopt to guide us through the world? What do we mean by doing the right thing? Ethics helps – or so it claims – to answer these questions. Originally, ethics was based on lessons learned from old habits and precepts, or on people’s desire for perfection, their aspiration to virtue. From the Age of Enlightenment onward, there was a desire to find broad, general principles, on a par with those of the physical sciences, to which all recommendations could be reduced. Such is the case with “deontologism”, which commands respect for a set of rules acceptable to all, or “utilitarianism”, which proposes maximizing a measure of utility. In the course of the 20th century, these principles entered a crisis: the complexity of the issues at stake meant that individuals were no longer able to assess the consequences or the very meaning of their actions, and were thus no longer able to guide themselves. Philosophers then began to explore quite different avenues in order to define ethics. Such was the case with Jürgen Habermas’ work on sincere, disinterested deliberation with a view to agreeing on standards of acceptable behavior, or Hans Jonas’ reflections on the “heuristics of fear” and responsibility toward future generations.
In the age of digital technology, social networks and targeted advertising, the way people live and interact with each other is challenged. The ties that weave the fabric of society – friendship, trust, reputation – are being rewritten. Globalization also means we need to refer to shared values on a planetary scale. In this context, agreeing on a common moral code is even more delicate than in the past. Nevertheless, even if it is still difficult today to establish the basis of our ideas of what is good and right, from which we decide to regulate our behavior, we remain free and therefore responsible for our actions. Unless we are deprived of the means to act, or lose the use of reason, we always assume the consequences of our actions.
With devices programmed using artificial intelligence techniques, in particular machine learning algorithms trained on large masses of data, we are now confronted with unprecedented situations: automata seem to decide for us, or at least whisper decisions to us, without our being able to understand what drives them. These devices are sometimes referred to as “agents”, or even “autonomous agents”, to mean that they act on their own. However, this is a misnomer for two reasons. On the one hand, they are not agents in that they do not initiate action, even if they contribute to automatic decision-making, that is, to decisions taken with no human intervention between the acquisition of information and the action. On the other hand, they are not autonomous in the strict sense of the word, as they have no will of their own. There are times when we strive to preserve a human presence in the decision-making process, but when this happens, the human presence is all too often mere tokenism. This is particularly true when the machine’s verdict remains as unclear as that of an oracle. We then have no choice but to submit to it or reject it outright. The fact remains, however, that we are increasingly acting through such devices, either by delegation or under their control. In this context, we are no longer always in a position to answer for our actions.
Therefore, if we are to assume our responsibilities, we need to take steps at an early stage in the design, production and monitoring of machines. For example, we need to check that they preserve individuals’ freedom of choice and protect privacy by not leaking personal data, that they are free from the weight of prejudices or what we now call biases, that they indicate the information elements that lead to the decision in each particular case so that human operators can exercise their judgment without being subject to the dictates of the machine, etc. Such “autonomous agents”, subject to all of these requirements, cannot be qualified as moral or ethical. To use the old Kantian distinction, they do not act “out of duty”, but “according to the rules of duty” that we impose on them.
If properly designed, however, they will help us to behave as moral agents ourselves. The various decisions we make through them will conform to our self-imposed moral prescriptions. The stakes are accordingly high. This book focuses on just that: how do we go about designing such devices in practice? Also, what ethical criteria should be applied to their design? We are thinking here of compliance with laws and regulations, the biases we have mentioned, the resolution of conflicts of norms and changes in professions and in working life.
Jean-Gabriel GANASCIA
February 2025
Ethics was originally defined in antiquity as “moral principles” (Singer 1986; Aristotle 2019; Frey and Wellman 2008) dictating the virtues of behavior (Hursthouse 1999). Today, it is increasingly identified with principles of deontology and social rules linked to the consequences of actions (Bonhoeffer 2012; Siau and Wang 2020). Evaluating activities and systems has therefore become important in order to comply with these principles.
Furthermore, society’s digital transition tends to exploit systems emanating from artificial intelligence (AI) and data processing. AI techniques tend to simulate behavior and, above all, to reproduce thoughts and actions. The consequences of these actions must then be assessed in the light of social rules and ethics. In fact, AI techniques are mainly based on sampling and data analysis, on the one hand, and cognitive rules and procedures, on the other. The results of these approaches modify our everyday behavior by introducing new elements generated by massive knowledge processing and algorithms. Deep learning and machine learning, as well as ChatGPT1, are among the main examples of this incursion into our activities.
The main question debated in this book is: “What are the different aspects to be considered when assessing ethical principles in approaches to the digital transition, and intelligent systems in particular?”. To answer this question, we first explain the main technologies, techniques and uses of intelligent systems. We then explore the works addressing ethics in the digital transition to highlight the ethical dimensions to be taken into account in the development of these systems. These are extracted from a survey of digital researchers. Finally, we introduce a summary of the seven chapters of this book that address these issues.
These investigations are being carried out as part of the activities of the French chapter of IEEE SMC2, where a number of initiatives focus on studying the relationship between digital technology and human activity.
Information technologies increasingly offer processing approaches that enable us to understand the internal and external socio-economic ecosystem. On the one hand, these techniques make it possible to capture and exploit data and information produced by an activity and/or existing in the environment, and, on the other, to provide decision-support tools. Socio-economic players are therefore called upon to grasp these technologies and integrate them into their organizations (Hesse 2018).
The digital transition is defined as the integration of information processing technologies, while conveying a profound change in habits, to enable an understanding of the ecosystem, thereby leading to better organizational performance (Hesse 2018; Zacklad 2020). We can cite, as an example, the massive use of teleworking support tools (Zoom®, Microsoft Teams®, Webex®), notably during the Covid-19 health crisis. Similarly, our understanding of the environment is currently raising social awareness, leading to more sustainable action.
Advances in information processing technologies are bringing about radical changes in activities and behavior, particularly in communication and decision-making (Figure I.1) (Zacklad 2020; Vial 2021).
These technologies are largely based on AI approaches that have proven their worth in supporting ecosystem understanding and decision-making.
Figure I.1 The digital transition of organizations (Vial 2021)
The basic principle of AI approaches is to represent human reasoning and behavior using computational techniques (Fetzer 1990; Dick 2019). For instance, rule-based systems mainly illustrate deduction, case-based reasoning performs inference by analogy, while machine learning algorithms tend to simulate induction.
We can also mention multi-agent systems, which reproduce the cooperation observed in bee and ant colonies. This mimicry of living organisms is leading to genuine collaboration between humans and AI algorithms, beyond mere support for decision-making and assistance.
Some AI approaches use cognitive dimensions and propose logical reasoning based on experience feedback knowledge (ontologies, rule-based and case-based systems). Other techniques use statistical data processing, based on data lakes to aggregate features and generate rules and reasoning models (neural networks, deep learning) (Hunt 2014).
The interconnection of these two approaches, notably through supervised learning, is currently referred to as hybrid AI (Figure I.2).
Figure I.2 Main artificial intelligence techniques.
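To make this hybrid pattern concrete, here is a minimal Python sketch of ours (not an example from the book: the loan task, data and thresholds are all invented for illustration). An inductive component reuses the decision of the closest past case, while a symbolic rule base can veto the proposal by deduction.

# Hypothetical hybrid decision: induction proposes, deduction checks.
# Past observations: ((income, debt), decision), with 1 = accept.
samples = [((30, 5), 0), ((80, 10), 1), ((25, 20), 0), ((90, 2), 1)]

def induce(income, debt):
    """Inductive proposal: reuse the decision of the nearest past case."""
    dist = lambda s: (s[0][0] - income) ** 2 + (s[0][1] - debt) ** 2
    return min(samples, key=dist)[1]

def rules_allow(income, debt):
    """Deductive check: an explicit rule that no decision may violate."""
    return debt <= income / 2

def hybrid_decide(income, debt):
    return induce(income, debt) == 1 and rules_allow(income, debt)

print(hybrid_decide(70, 40))  # the nearest case says accept; the rule vetoes

The point of the combination is that the statistical component remains auditable against explicit rules that can be read and debated.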
Important questions arise concerning the influence and relevance of these techniques for understanding the ecosystem:
Are the expertise and data used in these techniques complete enough to suggest models for reasoning and effective decision-making? Are the data global enough to represent real-life situations?
Can these models be so complete as to claim to represent different aspects of human reasoning?
Can these approaches recognize and avoid erroneous data and incomplete experiments?
From the mid-1950s (the birth of the notion of AI, shortly after the Turing test) to the present day, the application of AI has grown exponentially, especially with the increasing computational capabilities of machines. In terms of applications, AI was initially used in healthcare and industry in the form of knowledge-based and case-based systems (Dick 2019). Other techniques, such as fuzzy logic and neural networks, are used in image and speech processing (Hunt 2014) and robotics. Similarly, multi-agent systems are used in networking and cloud computing.
We can cite a number of applications for these systems in certain fields (Figure I.3):
natural language processing: translation, information retrieval and text generation for medical, industrial, marketing and legal applications;
image processing: supervision, cultural and archaeological recognition, facial recognition, medical diagnosis, climate change, augmented reality, digital twins, etc.;
digital data processing: problem prediction, maintenance, behavior prediction, supervision, customer-market relations, recommendation, etc.;
the Semantic Web: web of data, information retrieval, text generation, chatbots, social networks and mutual aid, e-learning, etc.
These tools transform user behavior and bring about major organizational change, given the influence that data characterization and prediction exert on the behavior of the ecosystem. These approaches not only represent human behavior, but also emphasize its mutual influences with the environment. We can even mention the possibility of AI systems evolving on their own, just like the reasoning they represent. The adoption of these techniques increasingly raises ethical questions, which are essential to the integration of these approaches in the socio-economic environment.
Figure I.3 Examples of AI application fields.
Ethics, by definition, is related to morality, virtues of behavior and social rules (Hursthouse 1999). The issue of applying theories of morality and virtue as principles of ethics (Siau and Wang 2020) is addressed in several sciences, including law, medicine, business and engineering (Frey and Wellman 2008). Principles have accordingly been defined in these fields, mainly by studying their consequences for society. We can note, for example, that in the medical sciences, certain principles have been prescribed, such as common goals, fiduciary duties, legal and professional standards and responsibility, as well as methods for transforming these principles into practice. Similarly, the notions of environmental ethics and the sustainable behavior of humans in their ecosystem are studied in the engineering sciences (Palmer et al. 2014), where ethical concepts such as responsibility, autonomy, virtue, rights and moral status have been introduced (Powers and Ganascia 2020).
The application of theoretical ethical definitions in the digital transition primarily points to the analysis of the nature and social impact of this transition and the justification of this impact (Newell and Marabelli 2015; Majchrzak et al. 2016; Mittelstadt et al. 2016). Companies are invited to manage a trade-off between their performance and ethical principles (Vial 2021). These trade-offs should operate at both the operational and strategic levels (Zacklad 2020; Vial 2021). The application of ethical principles in the digital transition is strongly linked to the integration of data processing applications and AI technologies.
Currently, several scientific organizations, such as the ACM and IEEE, have defined ethical principles for the digital transition and AI. The OECD (2019) and the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG 2019) have proposed four principles concerning human autonomy, harm avoidance, fairness and explicability. These principles are derived from those defined in certain sciences, especially the medical sciences (Mittelstadt et al. 2016). Mittelstadt et al. (2016) argue that these principles cannot be applied directly in the digital transition. Indeed, there is confusion in the definition of the responsibility to be shared between the multiple developers of AI technologies and the users of these technologies. Therefore, major challenges lie ahead in defining standards and ethical rules for these technologies, particularly in order to ensure sustainable development and use. It will be essential to develop AI systems that interact ethically with humans and society. Hagendorff (2020) presents a survey of various guidelines on AI ethics. Key principles highlighted in this study include accountability for explainable AI, fairness, privacy and discrimination related to data mining, and bias and robustness in machine learning.
Similarly, AI techniques can influence research and discovery in science, especially when data mining and machine learning are used. Powers and Ganascia (2020) note that “the expansion of knowledge due to AI seems to be a resource for epistemic study, and at the same time, we cannot fully understand what we are actually getting”. Ethical principles such as justice and morality are therefore necessary to frame this exploration, in order to avoid discrimination and prejudice. Autonomous and dynamically evolving AI agents and algorithms tend to simulate behavior. Fairness, robustness and explicability must play an important role in the evolution of these techniques, to take account of social rules, sustainability and environmental impacts (Palmer et al. 2014).
We can note that studies on the application of ethics in the digital transition, and AI approaches in particular, still raise a number of challenges. In our study, we conducted a survey of the French AI research community on these issues. Some results of this survey are explored in the following section.
The French chapter of the IEEE SMC scientific community organized surveys on ethics in AI among French researchers in AI and digital systems.
A number of questions were asked on these subjects via the mailing list of the CNRS I3 research group “Modeling and interoperability of companies and information systems”3:
What ethical aspects do you think intelligent systems should take into account?
How can these aspects be modeled?
Can machine learning be considered ethical, and under what constraints?
Ethics can be considered at various stages: audit, knowledge acquisition and needs identification, data collection, processing of this knowledge and/or data, development of rules and decision support systems, operation and use of these systems. How can we ensure compliance?
Could the identification of a trusted third-party organization be one of the solutions to identify responsibilities? Which organizations could play this role?
Which disciplines do you think should take part in ethics research in the digital transition?
A group of 26 researchers answered these questions. Their answers are summarized below.
Various aspects of ethics have been identified in intelligent systems. Some of these aspects are linked to the philosophical definition of ethics, such as morality and value, but others correspond to social rules and behaviors.
Examples include prudence, loyalty, group ethics and responsibilities (Figure I.4).
Figure I.4 Ethical dimensions to consider in AI.
Respondents linked the development of decision support systems to the explicability and transparency of these techniques, and put forward guidelines for the following.
Modeling approaches:
design and modeling rules;
expression of model constraints and limits;
author and institution metadata search;
reaching compromises on standards and techniques;
reduction of errors due to bias and abstractions.
Development of decision support systems:
sharing of responsibilities between human actors and digital systems;
ensuring that the final decision is made by a human;
specification and explanation of rules.
Actors who need to be involved:
legislators;
end users;
human and social scientists;
system designers and developers.
Machine learning offers decision-making and prediction algorithms based on the recognition and aggregation of data. Incomplete data sources and unrepresentative learning, discovery and search models can strongly influence the results, whether for prediction or decision-making. Respondents raised awareness and alerts concerning bias, discriminatory data, lack of resources and uncontrolled use (Figure I.5). Recommendations were formulated to mitigate these risks, in particular through the expressiveness and explanation of machine learning algorithms using semantic representations, so as to foster the transparency of such processing and its ethical validation.
Figure I.5 Recommendations for ethical machine learning.
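By way of illustration, and not as a method proposed by the respondents, one such bias alert can be made operational in a few lines of Python. The sketch below, on invented model outputs, computes a demographic parity gap, a common measure used to flag disparate decision rates between two groups.

def positive_rate(decisions):
    """Share of favorable decisions (1 = favorable) in a group."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs for two groups of applicants.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]
group_b = [0, 1, 0, 0, 1, 0, 0, 1]

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"demographic parity gap: {gap:.2f}")  # a large gap is an alert to audit

Such a measure does not decide what is fair; it only makes the disparity visible so that a human debate on its acceptability can take place.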
Two main dimensions were noted in response to this question:
The people involved in developing these systems should have integrity and show complete transparency in their processing. Similarly, there should be a clear separation of concerns and purposes, to reduce the financial interests underlying these developments.
Social and democratic debates are necessary, to validate the potential uses of these systems regarding privacy, intellectual property, certifications and legal regulations.
As a guarantee of compliance with such knowledge and recommendations, researchers propose setting up trusted organizations comprising legal, governmental and scientific experts (Figure I.6).
Figure I.6 Trusted organizations.
As shown in Figure I.6, developers of digital systems cannot apply ethical principles by themselves. They need to collaborate with researchers from other disciplines to introduce these aspects into their products, especially given the significant impact of these products on society.
Figure I.7 shows that law, social sciences and philosophy are needed to identify the rules and principles of evaluation, in collaboration with the engineering sciences that mainly develop and use these systems.
Figure I.7 Disciplines involved in the ethical validation of digital systems.
This book is organized into seven chapters that address a number of ethical issues in the digital transition. First, in Chapter 1, Bruno Bachimont discusses the philosophical dimension of ethics in technology, highlighting the notion of the non-neutrality of technology and ethical design. In Chapter 2, Florence Sèdes highlights the voluntary and involuntary biases in clustering algorithms and decision-support approaches, in particular machine learning, which is now widely used and influences decision-making strategies; transparency and sustainability are discussed in this context. Chapter 3, by Alain Mille, examines the use of digital techniques, particularly AI approaches, and sets out a number of characteristics to guide the designers of digital tools in responding to ethical notions. In Chapter 4, Grégory Bonnet, Nadjet Bourdache, Abdel-Illah Mouaddib and Mihail Stojanovski discuss the integration of ethics in the design of autonomous systems, highlighting the difficulty of providing these agents with the ability to judge their own actions. The applicability of digital approaches in the industry of the future is then presented in Chapter 5 by Emmanuel Caillaud and Lou Grimal, who underline the stakes of these approaches in facing the sustainability and ecological challenges of production systems, and define some principles to guide industry actors facing these challenges. Chapter 6, by Damien Trentesaux, Lamia Berrah and Karine Samuel, completes these reflections with a discussion of the ethical risks associated with the various uses of digital techniques in industry; the link between these risks and industrial performance is highlighted in order to propose an operational framework for integrating ethical concepts into the industry of the future. In Chapter 7, Anne Dourgnon, Eunika Mercier Laurent and Alain Antoine present an application of digital techniques within industry, illustrating the transformation brought about by the digital transition in daily work within several organizations, such as EDF, FRAMATOME, SPIE Nucléaire, TECHNICATOME, Boost Conseil and ASSYSTEM. The book concludes with a discussion, by Hedi Karray, of the future challenges of ethics in the digital transition.
This book is the fruit of the efforts of the French IEEE SMC chapter. We would particularly like to thank Jean-Paul Barthès, Jean-Paul Jamont, Mickaël Coustaty and François Rauscher for their help.
AI HLEG (2019). Ethics guidelines for trustworthy AI. Document, European Commission, Brussels.
Aristotle (2019). The Ethics of Aristotle. BoD – Books on Demand, Norderstedt.
Boden, M.A. (1996). Artificial Intelligence. Elsevier, Amsterdam.
Bonhoeffer, D. (2012). Ethics. Simon and Schuster, New York.
Dick, S. (2019). Artificial intelligence. Harvard Data Science Review, 1(1), 1–8.
European Commission (2019). High level expert group on Artificial Intelligence. Ethics guidelines for trustworthy AI. Report, Brussels.
Fetzer, J.H. (1990). What is Artificial Intelligence? Springer, Amsterdam.
Frey, R.G. and Wellman, C.H. (2008). A Companion to Applied Ethics. Wiley-Blackwell, Oxford.
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120.
Hesse, A. (2018). Digitalization and leadership – How experienced leaders interpret daily realities in a digital world. In Hawaii International Conference on System Sciences, Waikoloa Beach, 1854–1863.
Hunt, E.B. (2014). Artificial Intelligence. Academic Press, Cambridge.
Hursthouse, R. (1999). On Virtue Ethics. Oxford University Press, Oxford.
Majchrzak, A., Markus, M.L., Wareham, J. (2016). Designing for digital transformation. MIS Quarterly, 40(2), 267–278.
Mittelstadt, B.D., Allo, P., Taddeo, M., Wachter, S., Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). doi: 10.1177/2053951716679679.
Newell, S. and Marabelli, M. (2015). Strategic opportunities (and challenges) of algorithmic decision-making: A call for action on the long-term societal effects of ‘datification’. Journal of Strategic Information Systems, 24(1), 3–14.
OECD (2019). Forty-two countries adopt new OECD Principles on Artificial Intelligence. Document, Paris [Online]. Available at: http://www.oecd.org/science/forty-two-countries-adopt-new-oecdprinciples-on-artificial-intelligence.htm.
Palmer, C., McShane, K., Sandler, R. (2014). Environmental ethics. Annual Review of Environment and Resources, 39, 419–442. doi: 10.1146/annurev-environ-121112-094434.
Powers, T.M. and Ganascia, J.G. (2020). The ethics of the ethics of AI. In The Oxford Handbook of Ethics of AI, Dubber, M.D., Pasquale, F., Das, S. (eds). Oxford University Press, Oxford.
Siau, K. and Wang, W. (2020). Artificial intelligence (AI) ethics: Ethics of AI and ethical AI. Journal of Database Management (JDM), 31(2), 74–87.
Singer, P. (1986). Applied Ethics. Oxford University Press, Oxford.
Vial, G. (2021). Understanding digital transformation: A review and a research agenda. In Managing Digital Transformation, Hinterhuber, A., Vescovi, T., Checchinato, F. (eds). Routledge, Oxford.
Zacklad, M. (2020). Les enjeux de la transition numérique et de l’innovation collaborative dans les mutations du travail et du management dans le secteur public. In Les transformations du travail dans les services publics, Gillet, A. (ed.). Presses de l’EHESP, Paris.
Introduction written by Nada MATTA, Marie-Hélène ABEL, Hedi KARRAY and Inès SAAD.
1. See: https://openai.com/blog/chatgpt?utm_source=bdmtools&utm_medium=siteweb&utm_campaign=chatgpt
2. See: https://r8.ieee.org/france-smc/
3. See: http://crinfo.univ-paris1.fr/ModESI/index.htm
Ethics has become a commonplace question in the context of technical design. However, technology was long ignored in debates on ethics and moral philosophy on the grounds that it serves only as a tool in the service of an intention and an action, and that only the latter can possess an ethical or moral dimension: the possible mobilization of techniques does not enter into the moral scope of the action.
Indeed, technology suffers from a double deficit, between logos and ethos. The epistemic values of technology are subsumed under the science of which it is the application: the intellectual values of technology are those of the science from which it derives and which it equips. As for moral values, which are specific to action and, in traditional terminology, come under the heading of philosophy or practical reason, they are related to use, to the actor, to implementation, but never to technology itself, which remains neutral in relation to its use.
It could be argued that this stance is still very much in evidence, for when technology is invited into debates on ethics, and when ethics in turn inquires into technology, it is often through the question of use, in particular via the challenge of design, which aims to translate practical goals into technical structure. The engineer should therefore be accountable for their inventions through the intentions of use materialized in the functionalities programmed or inscribed in the technical object.
If this posture persists, it is because it conceals a part of the truth that we do not wish to renounce, even if it does not allow us to deal satisfactorily with the question of the ethics of our systems. If we are to believe Kranzberg (1986) and the six laws he enunciates about technology, particularly the first one: technology is neither good nor bad; nor is it neutral. In other words, while it is certainly difficult to associate an ethical value with a technical object in itself – a knife can slice my neighbor’s neck or a slice of bread – this object can nevertheless play a role in, and influence, the ethical behavior of agents, and thus have an ethical dimension. This raises, on the one hand, a legitimate question concerning contemporary digital systems and how they modify or reconfigure the ethical aspects of technology, and, on the other hand, the stakes of their use.
To this end, we will approach the issue gradually. First, we will clarify and deepen the thesis of the ethical non-neutrality of technology, and how the notion of ethical design can be misleading as a watered-down, modernized form of the neutrality of technology and its objects, with the ethical question arising only when humans intervene, in use and, to appear modern and responsible, in design.
In this context of the essential non-neutrality of technology, we will address the question of the digital and the algorithm: what role does computation play in the ethical functioning or otherwise of the artifacts it animates or instrumentalizes? We must first clarify what we mean by ethics here in order to understand that ethics boils down to the question of moral imputation and responsibility. We propose an ethical typology that distinguishes between ethical agent, patient and witness. An ethical agent is someone who can answer for their actions. An ethical patient is someone for whom we must answer, because they are not in a position to assume ethical responsibility, although they could do so in certain circumstances. The ethical witness is our witness of morality: our behavior towards it is the mirror of our morality. The preferred ethical witnesses are nonhuman living beings, artifacts and the environment.
While digital systems are never ethical agents or patients as such, we will address their ethical dimension from several angles.
On the one hand, digital technology provides a universal computational medium which, on the sole condition that data is available and can be manipulated, regardless of its heterogeneity, enables the most diverse realities to be processed in a single movement for variable purposes. For example, correlating diets and health to define insurance premiums, or recruitment profiles based on these same diets, gives rise to a decision-making principle that appears arbitrary and unjustified when it comes to allocating or not allocating an insurance premium or a job. Digital systems then become tyrannical instruments, as they import hierarchies from other, heterogeneous domains, thereby introducing arbitrariness. The principle of the independence of different spheres of practice, knowledge and power is at stake here.
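A deliberately naive Python sketch can make this arbitrariness tangible (all figures are invented and the premium rule is ours, purely for illustration): a linear relation fitted between diet and health costs is mechanically reused to price an insurance contract, importing a hierarchy from one sphere of life into another.

# Invented data: "diet quality" scores and observed health costs.
diet_score = [2, 5, 8, 3, 9]
health_cost = [900, 600, 300, 800, 250]

# Fit a one-variable linear rule: predicted cost = a * diet + b.
n = len(diet_score)
mean_d = sum(diet_score) / n
mean_c = sum(health_cost) / n
a = (sum((d - mean_d) * (c - mean_c) for d, c in zip(diet_score, health_cost))
     / sum((d - mean_d) ** 2 for d in diet_score))
b = mean_c - a * mean_d

def premium(diet):
    """The imported hierarchy: a dietary habit now prices a contract."""
    return 0.1 * (a * diet + b)

print(round(premium(4), 2))  # the decision principle, not the number, is the issue

Nothing in the computation is wrong arithmetically; what is questionable is the transfer of a ranking from one sphere (health) into another (insurance, or hiring), which is precisely the point made above.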
On the other hand, digital systems address users by positioning them in various ways: the instrumentation of practical situations by digital systems should enable users to exercise their role as moral agents. These tools must enable users to behave in a way that develops the qualities needed to exercise their ethical commitments. When, for reasons specific to the situation, the system has to decide a priori on the decisions to be taken (response time, complexity of processing, etc.) in place of the users, the design that determines these decisions must consider the consequences entailed by these decisions. Ultimately, however, the decision taken by the tool must never concern people’s lives, and must be referred back to a human decision-maker.
In terms of the classic postures of contemporary ethics, our conclusions are clear: a system must enable users to assume an ethics of virtues, or else the design must assume a consequentialist perspective to decide in the best interests of the stakeholders, while respecting prohibitions, thereby adhering to a deontological perspective. This brings us to the conclusion that we must take care of these systems, which are now the mediators of our actions and the constituents of our environment, like an already existing thing to be inscribed in its own order and maintained in harmony with its environment.
The non-neutrality of technology has often been argued in the sense that it is ambivalent:
It is in this context, which can’t help but be passionate, that I’d like to draw attention to one of the most important characteristics of technical progress: its ambivalence. By this I mean that the development of technology is neither good, nor bad, nor neutral – but is made up of a complex mix of positive and negative elements – “good” and “bad” if we want to adopt a moral vocabulary. By this I also mean that it is impossible to dissociate these factors in such a way as to obtain a purely good technique, and that it does not depend at all on the use we make of technical tools to obtain exclusively good results. In fact, in this very use we are in turn modified. In the technical phenomenon as a whole, we do not remain untouched; we are not only indirectly oriented by the equipment itself, but also adapted with a view to making better use of the technique, thanks to the psychological means of adaptation… (Ellul 1965).