This volume tackles the burdens of judgment and the challenges of ethical disagreement, and organizes the cohabitation of scientific and ethical argumentation so that each finds its appropriate place in political decision-making. It imagines several forms of agreement and opens up ways of resolving conflicts that differ markedly from the macro-social, general approaches favored by most political philosophers and political scientists. It offers an original contribution to a close interpretation of the precautionary principle as a way of structuring decisions in interdisciplinary contexts, so as to arrive, this time, at the “Best of Worlds”.
Number of pages: 588
Year of publication: 2016
Cover
Title
Copyright
Preface
Acknowledgments
Introduction
I.1. From the New World debate to assessing the best of worlds
I.2. Precaution and pluralisms
I.3. Informed decisions and shaping decisions
I.4. Evaluating evaluation
I.5. The search for epistemic and ethical coherence
I.6. Outline of the problem
PART 1: Pluralism between Ethics and Politics in the Context of Prevention
Introduction to Part 1
1 Burdens of Judgment and Ethical Pluralism of Values
1.1. The “burdens of judgment” at the root of the “fact of reasonable pluralism”
1.2. Burdens of judgment: a critique
1.3. Ethical pluralism of values, from relativism to monism
1.4. Relativisms and commitments
1.5. Opposing monism: conditionality, incompatibility and incommensurability of values
1.6. Conclusion: decompartmentalizing conflicts of values
2 Ethical Pluralism of Ethical Theories at the Heart of Evaluation
2.1. Ordinary morality, anti-theory and skepticism
2.2. What is an ethical theory?
2.3. Main ethical theories
2.4. Pluralism in practical reasoning
2.5. Interactions between normative factors and foundational normative theories
2.6. Conclusion: conflicts and deliberation in the context of ethical theories
3 Deliberative Democracy Put to the Test of Ethical Pluralism
3.1. Participatory exposure
3.2. Rawls and Habermas: opposing views in support of deliberation
3.3. Deliberating in a democracy
3.4. Desperately seeking arguments…
3.5. Conclusion: pluralism of moral and political philosophers
Conclusion to Part 1: Mapping the “Should-be” of the Public Sphere
PART 2: Ethical and Political Pluralism in a Context of Precaution
Introduction to Part 2
4 Deciding on, and in, Uncertainty Using the Precautionary Meta-principle
4.1. Careless criticisms of the precautionary principle
4.2. Precautionary principle: components and trigger factors
4.3. To act, or not to act
4.4. Clashing scenarios and “grammars” of the future
4.5. Typology of political decisions in the context of uncertainty
4.6. Conclusion: the deliberative as genre for uncertain futures
5 Between Sciences and Ethics: A New Quarrel of Faculties?
5.1. Scientists between attachment and independence
5.2. Politics of nature
5.3. The prominent role of values in paradigm changes
5.4. Relationships between scientific facts, epistemic values and ethical values
5.5. Conclusion: a Republic of Letters dealing with facts and values
6 Co-argumentation in a Context of Disciplinary Pluralism
6.1. Epistemic pluralism and competitive positions
6.2. Tensions and cooperation due to pluralism internal and external to disciplines
6.3. Types of argumentation and dialogue
6.4. Co-dependence between ethical argumentation and scientific investigation
6.5. Confrontation of hypotheses
6.6. Conclusion: structuring of inter- and intra-disciplinary pluralisms thanks to the precautionary meta-principle
Conclusion
Bibliography
Index
End User License Agreement
Responsible Research and Innovation Set
coordinated by Bernard Reber
Volume 4
Bernard Reber
First published 2016 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd, 27-37 St George’s Road, London SW19 4EU, UK
www.iste.co.uk
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
www.wiley.com
© ISTE Ltd 2016
The rights of Bernard Reber to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2016950825
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-78630-100-0
Responsibility should be central to the design of Responsible Research and Innovation (RRI) strategies; in practice, however, this is not always the case. Research work and practical applications often focus on separate elements or constraints involved in RRI, or tracking projects prefiguring it, rather than considering inquiries and solutions built upon the richness of moral responsibility. Inquiry in this area may be empirical or normative or, better still, combine the best elements from both reciprocal approaches.
Previous volumes in the Responsible Research and Innovation series have addressed, seriously and confidently, the issue of responsibility from a variety of angles, demonstrating the breadth and power of this concept. This diversity should not be seen as a form of lazy ethical relativism, which is often implicit and makes the concept of responsibility appear inaccessible. Like La Fontaine’s fox1, who scorned the grapes hanging just out of reach, this appearance of inaccessibility may lead us to disdain the notion of responsibility, the key element of RRI. In addition to obscuring the very raison d’être of a research project, this type of cognitive dissonance uses the existence of multiple interpretations of responsibility as a pretext to dismiss the concept entirely, or for arbitrary adoption of a single viewpoint. In reality, the diversity of interpretations demonstrates a high level of innovation in ethical terms. Responsibility implies a certain freedom2, open to contingency and efficient3, which should be used creatively in order to respond to new situations, contexts and technical innovations transforming them. Moral responsibility should not be considered synonymous with obedience, compliance, repetition or indiscriminate application.
The intrinsic nature of responsibility has not always been so easily forgotten, nor was it (as so often happens) limited to faulty and impotent rhetoric expressed in programs, platforms or the media. Responsibility has been used as a principle for political action, original and promising, with its potential displayed on the international stage. Promoted and defended by the European Union, this notion of responsibility was embodied in the form of the precautionary principle. This meta-principle, encapsulating several other principles, presents a significant advantage, in that it can be applied to, and used to connect, a wide variety of domains, such as the sciences, ethics, politics and economics. As the principle took off, it was subject to a variety of interpretative controversies and attacks, due to the way in which it disrupted existing modes of operation and, in some cases, established a new order. Enemies of the precautionary principle included a number of states, which attacked it in arenas such as the World Trade Organization; philosophers, opposed to a caricatured version of the principle; and, ironically, certain thurifers (zealous defenders) of the precautionary principle, who damaged its reputation through indiscriminate and unsuitable applications.
The purpose of this book is to provide a thorough and balanced examination of the precautionary principle, considering its huge potential to express responsibility in the fields of research and innovation. The precautionary principle has a key part to play in the face of the most disruptive innovations. It is one of the most creative innovations for implementing responsibility in response to new fears surrounding environmental resilience or emerging technologies. It also constitutes one of the most original and well-received proposals of the European Union. RRI owes a certain debt to this institution, and still has a lot to learn from the precautionary principle4. In this work, we shall consider the ethics of the principle of responsibility.
The goal of this work is not simply to improve this meta-principle. We shall consider its interactions with ethical pluralism and with ethical and political deliberations, argumentation in context, and the challenges presented by the interdisciplinary approach5 in an uncertain climate. Taken separately, each problem extends the intention of this book outside of the sphere of RRI. Similarly, these problems need to be solved theoretically before any relevant practical application can be envisaged. A beneficial interplay also exists between practical and theoretical considerations; however, if these problems are not carefully considered from a theoretical perspective at the outset, application and, subsequently, standardization are impossible, whether in the field of research or innovation.
In this book, we shall consider a number of issues, centering on the collaborative choice of innovations and technologies, which will define the future of our world, and the way in which these worlds are to be evaluated in a context of scientific uncertainty and ethical indetermination, due to the existence of ethical pluralism. We shall begin by considering the famous Valladolid controversy concerning colonization of the New World. At present, we must consider a different form of “colonization of new worlds”, not in terms of occupying territories, but rather in terms of a variety of possible futures for our shared planet. An alternative title for this book might be Deliberations on the Best Possible Worlds.
These core aspects of RRI must be taken seriously. Following over 30 years of experiments in the field of Participatory Technology Assessment (PTA)6, the time has now come to establish a more coherent version; the same may be said of RRI, which, in some ways, follows on from PTA. The existence of the PTA concept is laudable, and it offers perspectives on the potential offered by reasoned and careful development of RRI.
Whilst RRI promotes the participation of interested parties or citizens, anticipative governance7 and due consideration of ethics, a number of theoretical and practical issues still need to be resolved. Although some of these issues have been considered in the context of PTA, no satisfactory solutions have been found.
This set of problems, which includes scientific, ethical, political and economic dimensions, may be summarized in the form of a question:
How can we deliberate together, on the basis of preliminary assessments taken from a large number of actors with different and contrasting abilities and expertise (because we include the participation of ordinary citizens, experts and stakeholders), following guidelines taken from democratic theories, concerning issues centering on innovative and controversial technologies with the potential to cause serious and/or irreversible damage?
In more philosophical terms, the issue may be expressed as follows:
How can we deliberate using different ethical justifications (taking into account elements of applied ethics, ethical theories and meta-ethical options) and different political theories, while also taking account of the natural and engineering sciences and their associated disciplines, with their fields of relevance (and thus their implicit exclusions) and their modalities for producing proof and addressing uncertainty?
In this book, we shall consider the difficulties associated with the burdens of judgment and with ethical disagreements, along with the cohabitation of scientific and ethical arguments, in order to find the best possible balance as a basis for political decisions. Several types of agreement and disagreement shall be considered, alongside paths to follow for conflict resolution; these paths differ from those used by the majority of philosophers, political sociologists and economists, who tend to take a macro-social, general approach. We aim to provide a new contribution to the in-depth study of the precautionary principle as a tool for structuring decisions in interdisciplinary contexts, in order to arrive, this time, at a genuinely new world, one very different from that described in Aldous Huxley’s dystopian novel Brave New World8.
In Part 1, we shall employ the hypothesis used by Socrates in the Euthyphro: that the world of science is stable, unlike the world of ethics, which may differ from one person to the next and change friends into enemies. This ancient hypothesis is still applicable today, in a context where expertise in the field of ethics is often implicitly delegitimized. We wish to refute this objection. We aim to move beyond the “epistemic abstinence” encountered in most contemporary political theories, justified by arguments based on the Rawlsian theory of the burdens of judgment, because these theories are under-determined in relation to the requirement of argumentation. Examples of this approach include the work of Habermas and the key tenets of the theory of deliberative democracy.
We defend an ethical pluralism, a “third way” clearly distinct from relativism and monism. We shall provide an overview of the ethical pluralism of ethical theories, not simply of the pluralism of values. These pluralisms will then be situated within the context of a dialogical and interdisciplinary theory of argumentation.
In the second part of the book, moving from prevention to precaution, our approach follows that of Le rationalisme qui vient9, in which the sciences are considerably less “certain” than we might think. The problems discussed previously will be considered in relation to the methods for decision-making in uncertain contexts, the co-existence of sciences in an assessment situation and the distinction between epistemic and moral values, avoiding the dichotomy between the two, and instead promoting a co-dependent approach, in order to bring an end to confrontations between scientific hypotheses and their compatibility with ethical arguments.
The way in which the precautionary principle characterizes different sources of uncertainty, and the means of responding to this uncertainty in ordinary scientific activity, will be explored in detail. The principle will be used to create a responsible distribution of disciplines for technological evaluation, making a clear distinction between experts and scientists, in order to guarantee intra- and inter-disciplinary epistemic pluralism. Certain conditions, including the independence of experts, the use of certain deontological rules and the principle of contradictory debate, are necessary, but not sufficient, for this to happen.
Our approach is based on over 20 years of theoretical and empirical work in the field of inclusive (or participatory) technology assessment, known as PTA. Many researchers and practitioners have been involved in work in this domain, some of whom consider RRI as an extension of PTA, notably with regard to the importance of the participatory element (first pillar) in RRI. In this work, we shall reconsider a number of theoretical problems which subsist in the field of PTA, reconfigured by the passage toward RRI.
In writing this book, we have made use of texts from areas as varied as moral, political and scientific philosophy. We discuss and compare texts by authors often unaware of each other’s work, from Plato to Deleuze via Aristotle, Socrates, S. Kagan, S. Cavell, J. Rawls, J. Habermas, S. Toulmin, C. Perelman, J. Kekes, B. Latour, T. Kuhn, I. Stengers, N. Rescher, M.G. Morgan, M. Henrion, L.C. Becker, R. Ogien, H. Putnam, D. Ross, C. Stevenson, C.S. Peirce and J. Dewey, amongst other, less well-known writers.
The organization of this text is intended to be as clear as possible, with a summary of key points in the conclusions to each part and each chapter. For this reason, we have overstepped the boundaries of RRI to defend a pluralistic ethical meta-theory, on the same level as the power of controversial technologies and the environmental challenges which they present. This book therefore goes beyond some of the limits of Hans Jonas’s famous ethics, whose audacity should not be underestimated, notably in relation to ethical pluralism and the development of a public policy.
Bernard REBER
October 2016
1. In La Fontaine’s fable, The Fox and the Grapes, a hungry fox attempts to eat grapes from a vine, but cannot reach them. Not wishing to admit defeat, he declares the grapes to be under-ripe, “good only for fools”. This is the source of the English phrase “sour grapes”.
2. See Robert Gianni [GIA 16].
3. See Virgil Cristian Lenoir [LEN 15].
4. Another work, soon to be published, that will return to the precautionary principle from a different angle is Dratwa [DRA XX].
5. In this series, Armin Grunwald [GRU 17] will provide a different approach to the problem of interdisciplinarity in conjunction with ethical dimensions. The problem was also mentioned and discussed in Virgil Cristian Lenoir [LEN 15].
6. See, for example, Reber [REB 11b], which provides an in-depth bibliography and an analysis of this type of debate. The book provides methods for describing problems linked to the implementation of PTA and for evaluating procedures and experiments, and may be seen as a companion volume to the present work, which addresses certain theoretical problems in relation to PTA and RRI. This work is intended to contribute to a form of institutional design, and to provide elements of a model for ethical learning.
7. A forthcoming book by Marc Maesschalck in the same Responsible Research and Innovation set will go into considerable detail concerning the question of governance. This area has also been discussed by Robert Gianni [GIA 16] and [PEL 16].
8. [HUX 06].
9. See Saint-Sernin [SAI 07].
This book has greatly benefitted from careful readings by Richard Bellamy, Pierre Demeulenaere, Jean-Michel Besnier, Jürg Steiner, Peter Kemp, Virgil Cristian Lenoir and Marion Deville. Several parts of the work have formed the subject of presentations at a number of international conferences, too numerous to list, which were extremely useful in confirming or correcting the directions taken. These subjects were also discussed in a variety of seminars, hosted by the excellent research unit Sens, éthique et société (Meaning, Ethics and Society; CERSES, CNRS-Université Paris Descartes) before its closure. Thanks are also due to the Eco-ethica Symposia and to my colleagues at the Centre de Recherches politiques (Political Research Center; Cevipof, CNRS and Sciences Po Paris) for their positive reception of this project, and to my students of the Master’s program in Ethics at the Centre européen d’enseignement et de recherche en éthique (European Center for Teaching and Research in Ethics, Strasbourg). I am grateful for a number of fruitful discussions with Jane Mansbridge, John Dryzek, Robert Goodin, Emmanuel Picavet, Denis Grison, Philippe Bardy, Marie-Hélène Parizeau, Marie-Jo Thiel, Charles Girard, Caroline Guibet Lafaye, Philippe Descamps, Christopher Coenen, Simon Joss and Pierre-Antoine Chardel.
I also wish to thank the members of the European Governance for responsible innovation (GREAT) project, particularly Sophie Pellé, Robert Gianni and Philippe Goujon. Within the wider context of discussion concerning Responsible Research and Innovation, the Ethics and Public Policy Making: the Case of human Enhancement (EPOCH) project and the UNESCO Ethics of Science and Technology committee provided hospitable and nurturing environments for testing some of the ideas expressed in this volume.
Thanks are also due to the team at ISTE for their talent and enthusiasm in transmitting the fruits of French research to the English-speaking world, and for promoting encounters between the social sciences and humanities and other sciences, harking back to a time when philosophy was not restricted by artificial barriers between domains.
Finally, I would like to express my gratitude to the Democritean Isabelle Reber, the first reader of the very first version of this text.
Every territory on our planet has been discovered, investigated, mapped, attributed and reproduced in various formats, from both geographical and anthropological perspectives. Some areas have even been filmed from above, through the open door of a helicopter flying just above the treetops, as in the movie Home1, which, following its much-anticipated international release, allows viewers to become familiar with even the most distant corners of the planet. What, then, are our “new worlds”? For some, these new worlds are to be found in the stars, and the poetry of black holes and antimatter; however, the future and the transformation of the world we currently live in, through “enframing”2 and increasingly invasive technological colonization, is a “new world” which concerns us all. In some ways, these developments participate in the birth of a new world.
Nevertheless, the process of navigation to reach this new world is as uncertain as that used by Columbus, who believed he had reached the Indies. Engineers and industrialists often promise new, better micro-worlds, promises that are taken up and transmitted by political decision makers in their constant quest for innovation. Other scientists, associations or politicians are opposed to the idea of this promised “best” new world, considering it to be dangerous, and preferring either the current status quo or another, third option. Both sides are often wrong: Columbus’ “Indies” turned out to be America, with not only its dangers and treasures but also, and especially, its irreversible effects on a fragile world. An alternative title for this book might have been Deliberating on the Best Possible Worlds.
The first New World, given this title during the Age of Discovery – another title which could equally be applied to modern times – was also the subject of controversy. The term “controversy” itself first became widespread during this period, in the context of political and, to a greater extent, theological disputes, at a time when the two domains were much more closely linked than today. Most of the literature concerning the term “controversy” is of a theological nature, regarding disputes that led to a schism within Christian communities, and the spread of Reformation (and counter-Reformation) ideas through the cities of Europe.
The most famous controversy of the time, the Valladolid Debate, remained within the Catholic sphere and directly concerned the New World. Without wishing to join the historical debate, it is useful to consider this unique incident in a little more depth, touching on some of its lesser-known aspects, as it has relevance for the study of contemporary controversies. The question facing the emperor Charles V in 1550, and, before him, the Council of the Indies on July 3, 1549, concerned the extent to which values could be transmitted to another civilization “justly and in good conscience”. Effectively, Ginés de Sepúlveda, canon of Cordoba and the emperor’s royal chronicler, and his adversary, Bartolomé de Las Casas, ex-bishop of Chiapas, participated in one of the first debates foreshadowing the concepts of human rights and ethical security, in front of an audience of 15 “experts and sages” in the chapel of the College of St Gregory, Valladolid. This view3 of the debate has been put forward by the historian Jean Dumont4. The ethical security aspect of the debate is, evidently, not the first element which comes to mind when considering the episode from a distance, or through the prism of Jean-Daniel Verhaeghe’s screen adaptation (1992), based on a historical novel by Jean-Claude Carrière. More memorable aspects include the greed of the Spanish and Portuguese empires and their thirst for the Indians’ gold, justified through the use of more noble pseudo-justifications, such as the “correction” of barbarian behaviors which allowed infant sacrifice. The conquerors created a hierarchical distinction between races, defended with precision by the brilliant Aristotelian philosopher Ginés de Sepúlveda, who had translated Aristotle’s Politics two years before the Controversy, and was the author of Democrates (alter), or On the Just Causes of War Against the Indians5.
The time when an emperor6 could take charge of ethical questions concerning the conquest of a territory is long gone; it is also unthinkable that a debate on this subject could take place, often by letter, over a period of eight months7. Nor would a modern leader request the advice of two theologians or philosophers in reaching a decision. Nowadays, the location for final discussions might be the United Nations Security Council, where the cold conflict of political considerations has more influence over decisions, sometimes at the expense of expertise; this was evident in the erroneous “proof” of the existence of weapons of mass destruction in Iraq on the eve of the Second Gulf War, produced by the 65th US secretary of state, himself a high-ranking military official and a hero of the First Gulf War8.
What, then, are the current forms of deliberation with regard to future worlds, from best to worst and back again? What moral security is desired in different cases, in terms of knowledge and skills, and how should this be obtained?
Whilst knowing may require courage, tenacity and creativity, this is even more true of action, further still if the action is to be successful and if knowledge is limited. All ethical controversies are subject to these difficulties. The strategies used in confronting such problems vary, and may involve elements from traditional sources of reference; religions or thought systems; social mores and norms; personal intuition; and reasoning, which should be as logical as possible.
Controversial, or “new”, technologies must satisfy two separate but connected sets of claims. A technical solution is often proposed in response to a problem, or to improve the conditions in which a task is accomplished using older techniques. For example, scientists working in the field of GMOs may consider themselves to be “enhancers”9. We are thus facing improvements that have to meet both scientific and ethical requirements.
Considered in this way, the question cannot be fully resolved using a conventional debate configuration, between facts, provided by the scientific community, and ethics, regarding values or normative aspects of the proposed solutions. Therefore, a situation of competing improvements emerges. This competition exists on several levels. First, the promised world and the current world are compared. These worlds are described, predictions are made and competing and, sometimes, conflicting scenarios are described. Second, these worlds are also debated, alongside the proposed modes of solution and improvement. These debates involve a comparison of improvements in order to select the best option, and a discussion of the undesirable, negative effects associated with each improvement.
The new worlds resulting from the intrusion of these technologies, their rejection or even their modification10, are the subject of controversies between possible, future-worthy possibilities (futuribles), and with regard to these possibilities themselves, expressed as different types of probabilities – or an equivalent, in cases where the element of uncertainty is too great.
These debates concern at least three points:

1) the choice (determinate, not random) between desirable and undesirable worlds;

2) the possibility that a given state (better or catastrophic) will emerge in the given world, on the basis of probabilities or anticipations, following an event such as an accident;

3) the ranking, creating the best possible order of these worlds, in such a way as to make worlds as compatible as possible (compossible), thus minimizing the irreversible element of the greatest number of worlds, including the current, shared world11.
These three types of difficult questions are interconnected and mutually informative. The specific domain of risk assessment is concerned mostly with the second dimension, but risk assessment alone does not provide a sufficient response to these questions, taken in isolation. Certain new technologies have created a pressing need for a true “ethics for the future”, and for a new science of politics to consider these issues.
The three types of questions appear bit by bit and in a heterogeneous manner in verbal exchanges in the context of participatory technology assessment (PTA), and also in relation to Responsible Research and Innovation (RRI), both in the field of research and in private companies.
These forums are a socio-political innovation with regard to the collective and interdisciplinary evaluation of controversial technologies, and are the modern equivalent of Valladolid. Whilst discussions are, necessarily, briefer than those which took place in the 1550s, the sphere of convocation is considerably broader, including both experts and ordinary citizens, assisted by specialists12 in this new type of debate, and, sometimes, of institutional design.
Current controversies may concern not only individual choices, but also, more significantly, collective choices with long-term and wide-reaching (spatio-temporal) implications. In some cases, these choices are irreversible. The socio-technical usages in question may involve individual practices, production and collective effects. Consider, for example, the technological “layer cake” of the Internet, which involves material elements, a wide variety of usages falling within Web 2.0 and implications for private life, along with programs, protocols and modes of governance.
So, how did we get here? What happened between the New World of the Great Discoveries, innovations meant to participate in the creation of better worlds, and the current situation, in which innovations, their new contributions and the planned worlds themselves are called into question? The status of notions such as progress, newness and innovation is no longer unquestionable. Innovation still enjoys a measure of implicit justification due to its links to the health of the economy, the employment market and wellbeing, and to its potential to act as a stimulus in certain domains of research; however, this justification is no longer automatic. For example, certain researchers, having developed new skills such as transgenesis, permitting the production of Genetically Modified Organisms (GMOs), find themselves confronted with militant opposition groups who, citing ethical and societal arguments, destroy crops in even the best-guarded fields. In doing so, they destroy the very evidence the scientists require. Similarly, these controversial experiments have an impact on quarrels of legitimacy, between territories and types of issue. Decrees issued by mayors, in the name of the precautionary principle, to halt tests of this type until they have been scientifically proven to be safe have been invalidated by French administrative tribunals; concurrently, all of the Swiss cantons banned the use of animal or vegetable GMOs as a result of a popular initiative13. Improvements and progress, intended to support innovation, are now tinged with suspicion. Innovation itself is on trial, and is sometimes met with fierce resistance. The promised “better” world may hide something worse. To borrow an expression from Hans Jonas, one of the rare philosophers to attract a wide readership with his Principe responsabilité. Une éthique pour la civilisation technologique14, “meliorism” can be dangerous.
“The promises of modern techniques have turned out to be threats, or rather (…) the two are intrinsically linked…”. For Jonas, this technological civilization constitutes undiscovered territory for ethics, or rather a land to be explored in order to move beyond immediate, interpersonal ethics, unmediated by technology: both in the interweaving of relationships, which he refers to as “rampant apocalyptics”15, and in long-term projections regarding the effects of the technologies in question.
This style of remark is not new, as fans of ready-made argumentation will attest. Many arguments concerning new technologies can, in their general form, be applied to techniques which are already ancient. The discovery of fire, fictionalized in Roy Lewis’ Evolution Man: Or How I Ate My Father, is a good example of this. In a different sphere, moving from the discovery of bones as weapons to futuristic machines, we might consider Stanley Kubrick’s 2001: A Space Odyssey.
How, then, is the debate surrounding such controversial subjects to be organized? What public policy might be developed in order to draw out the ethical conclusions of the Responsibility Principle?
Technological assessment offices have contributed to the evaluation of “new” controversial technologies16, such as genetically modified organisms (GMO) or certain medical techniques (preimplantation diagnosis, xenotransplantation, research on the brain, nanotechnologies), by developing a variety of new forms of procedure involving citizen participation. This is known as participatory technology assessment (PTA). These evaluation offices17 are responsible for preparing the ground for discussions of complex scientific and technological questions for political or economic decision makers, and sometimes for public information. Technology assessment (TA) has its limits, notably when faced with public opposition and fears, and in relation to questions of normative legitimacy, standardization, values and other aspects falling within the sphere of the humanities and social sciences. Unfortunately, representatives of these fields are generally absent from these spaces. For certain controversial scientific and technological decisions, the exclusive advice of experts18, even of differing opinions, considered by political and economic decision makers, the resources of simple scientific popularization for the general public, or the communication processes involved in mediation19 to make industrial and technical projects acceptable to the public, appear to be insufficient. Although the final decision is made by political representatives, they sometimes express a desire to involve a broader range of actors in debates concerning controversial technological objects. Note that, in France, the law of 27 February 200220 on local democracy may be applied to situations of this type. Experiments have been carried out on a reduced scale, showing that an articulation between the different worlds of science and technology is possible, taking the form of technology assessment (TA).
Different approaches have resulted in the creation of spaces for discussion between actors with a wide range of capacities, using a variety of communicational regimes (narration, interpretation, argumentation and reconstruction)21.
This new and innovative domain of political experimentation, a premise of RRI, may be considered as a “socio-political laboratory”, and has been justified in several ways, combining the uncertainty of scientific expertise, or at least its inability to respond to certain questions concerning the potential risks associated with these “new” controversial technologies22, with the plurality of perspectives, sometimes including deeply entrenched opposing positions, as much within disciplines, scientific communities or interest groups as between these concerned groups. Normative uncertainty is added to the existing cognitive23 and practical24 uncertainty25. This tripartite characterization is typical in the fields of PTA and RRI. However, the proponents of these domains do not enter into the details of scientific uncertainties to give greater precision; this situation will be discussed in Chapter 4. Furthermore, uncertainties in PTA and RRI are subject to the constraint of plurality, factually (de facto) and legally (de jure), in the context of dual scientific and ethical assessments of controversial technologies.
As we have seen already, questioning of the consequences of technological innovations for society and the environment is not a new development. Over 50 years ago, Hans Jonas was already considering this idea in the light of a turnaround between theory and practice26. This led to a radicalization of philosophical approaches; in this area, the difficulties involved with excessively large scales, excessively long chains of causality and probabilities which are difficult, if not impossible, to establish had already been noted, for example, by philosophers such as Hume27 or, more recently, Simmel28. The same goes for Mill’s chains of arguments29.
Kant himself, a philosopher of limits who created a framework of three essential questions, appears excessively optimistic in our modern context. What may I hope? and What can I know? have no obvious answers in the context of issues such as GMO, and the answer to the third question, What must I do?, is even harder to obtain.
Should we, then, simply adopt a form of minimum ethics, which Jonas would have referred to as “stuffing”? Must we simply use “biodegradable” laws and regulations, changeable with each technological evolution in order to avoid hindering the realization of new possibilities? Is technology the destiny of politics, as suggested by Jonas? Should the advancement of politics be subject to the pressure of different lobbies, in favor of, or opposed to, a given technology? May we envisage a form of civic or civil public debate?
Refusing to give in to fear, we may wish to qualify Jonas’ dramatic statements, delivered in a somewhat “apocalyptic and superior” tone, to use Kant’s term, further developed by Derrida30. Admittedly, Jonas’ remarks are valid in relation to the insufficiency of traditional ethics, or ethics he considers to be merely “stuffing” in the face of perceived threats. The philosophies of Aristotle, Kant, Heidegger and Levinas, amongst others, have also been justly criticized in a more recent work by Peter Kemp on the ethics of techniques, L’irremplaçable. Une éthique technologique31. These ethics, important contributions in the history of philosophy, apply to intersubjective relationships between contemporaries, often present in the flesh, for small actions on a small scale with limited effects, and not via the medium of technological objects, however simple. Moreover, these ethics do not consider socio-technical relationships and their variations. We must also consider the fact that techniques have changed32, as demonstrated by Gilbert Simondon, an original philosopher of techniques, who divided their history into four periods33, each resulting in phase shifts and changes in the evolutionary rhythm. Simondon took an optimistic view and hoped that rebalancing would occur. Jonas, however, in his reflections on the history of relationships between techniques, power, knowledge and responsibility, did not take this view. For Jonas, molecular biology represented a revolution (in the original sense of the term); whereas technology had previously been humanity’s ally against nature, this development brought humans into conflict with technology, making them a potential object of manipulation.
One thesis defended by Jonas in The Imperative of Responsibility highlights a number of problems, which are still relevant today: this thesis concerns the need for an ethics of technology, focused on the future, which takes things that are yet to happen into account and where the technology is the object of differing judgments, ranging from strongly positive to apocalyptic. Jonas also considered futurology to be impossible for scientists, further complicating the task.
Another possible position is that of enlightened catastrophism, as described by Jean-Pierre Dupuy34, amongst others, in his critique of the precautionary principle, partly inspired by Jonas. However, this approach is not inevitable. The precautionary principle, when understood correctly, notably in relation to selecting the most complete formulations, validated by legitimate authorities following long negotiations, as seen in the Communication from the Commission on the Precautionary Principle of February 2, 2000, provides a framework which may be used to extend Jonas’ reflection. This principle allows for the use of a political element, which Jonas, despite his audacity, considered too difficult35. Certain commentators have labeled Jonas an anti-democrat. The philosopher Marie-Hélène Parizeau, for example, writes: “On a political level, his choices support the notion of government by experts, rather than democracy”36. At best, Jonas felt that power should be given only to experts, the only individuals able to take the necessary decisions, rather than to politicians, subject to the rules of democracy and to a short “life expectancy”, i.e. the duration of their mandates. Electors also tend to make their evaluations based on a short-term view, without looking beyond the immediate future and with a very limited global vision.
The precautionary principle, on the other hand, has its roots in the wish to avoid any serious and/or irreversible damage, occurring within a much longer temporal window. It presents the advantage of connecting the political world in which decisions are made with the scientific worlds in which facts are established, in situations with a high level of uncertainty, where it is not possible to establish probabilities. It promotes the creation of public institutions with the capacity to handle new technological challenges, and constitutes one of the pillars of European politics. Since 2005, it has formed part of the Environmental Charter appended to the French Constitution. The precautionary principle was also a key motivating factor in the creation of certain institutions, such as the Commission nationale du débat public (National Commission for Public Debate), participating in the renewal of public policies, in increasing participation as promoted by RRI and in exploring new forms of democracy. This commission draws its legitimacy directly from the precautionary principle37, and was first established by the Barnier laws in 1995. These laws also concern the principle of public information and participation in the context of large projects38. A notable project of this type was the ambitious ITER (International Thermonuclear Experimental Reactor, 2006)39, a research project on nuclear fusion, discussed at a local level in towns around the intended site at Cadarache, in the south of France, selected following long and detailed international negotiations.
The precautionary principle differs from prevention in that the establishment of facts is not certain, but there is a strong suspicion that serious and/or irreversible damage may occur. This principle removes the need to wait for conclusive scientific proof before taking action, or, more precisely, as we shall see in Chapter 4, it allows further research to be planned before deciding on a course of action. Considering the complexity of establishing scientific facts and probabilities, we enter the realm of epistemic pluralism and of hypotheses relating to the understanding of the phenomena in question. This pluralism concerns both the epistemic values that guarantee the quality of the knowledge in question, and the elements included in, or excluded from, the establishment of probabilities or, when probabilities cannot be established, of the scenarios in question40.
Another problem facing PTA and RRI concerns the management of plurality. Plurality is increasingly important in our society, alongside high levels of specialization in knowledge. This plurality, as discussed in La démocratie génétiquement modifiée41, directly concerns the quality and evaluation criteria used in PTA – and thus in RRI – in which the term “pluralism” is used in an imprecise manner. In criteriology, pluralism (normative) is often equated with plurality (descriptive) rather than receiving separate consideration, which leads to a specific approach to managing plurality. Plurality should be used to refer to facts, whilst pluralism indicates a type of treatment opposed to both relativism and monism, as we shall see in Chapters 1 and 2. Whilst excessive plurality or radical pluralism can pose a threat to the stability of a society, both concepts also constitute guarantors and requirements of a democratic society, at different levels42. Ethical pluralism is often presented as a pluralism of values (Chapter 1); however, we shall also consider its place in practical judgment, and within normative ethical theories, which may guide or justify evaluations (Chapter 2). These may be individual or public, ex post or ex ante.
Thus far, we have discussed the notion of ethical pluralism. However, the term “moral pluralism” is more widespread in published literature on the subject; many philosophers use the term “moral” in preference to “ethical” and “ethics”. Certain philosophers go so far as to refuse to distinguish between the two and claim that they cannot be separated in a satisfactory manner43. Our position, as set out in DGM, based on published literature and debates in moral philosophy, remains unchanged. Etymology is not helpful in this case, as the roots of the two words are direct equivalents in Greek and Latin. Philosophical usage is not always helpful either, as different philosophers have used the terms “morality” and “ethics” in different ways, some directly opposed to others. In France, there is a tendency to prefer the term “ethics” in order to avoid connotations of moralism, but both terms may be used to denote an object or area of study relating to various, and often contradictory, forms of the good, the just or other normative concepts44, via the difficulty of choice, the meaning of life, rules for justification, the definition of principles or even the definition of moral sentiments. However, whether the term “ethics” or “morality” is used, there is a general distinction in usage between different levels, with (1) mores, which seem straightforward, (2) the order of more or less generalized shared references, (3) questions applied to specific domains, (4) moral theories and (5) meta-ethics. Items (4) and (5) are more reflexive in nature. More commonly, a distinction may be made between (a) applied ethics (3), (b) normative ethics (with moral theories) (4) and (c) meta-ethics (5), even if a level of permeability is accepted.
For example, using the final three levels, when considering the ethical legitimacy of a certain type of GM maize (applied ethics), several means of justifying one’s position (using various normative theories) should be considered, whilst attempting to establish the requisites for justification (meta-ethics).
In many cases, particularly in the context of motivations, our moral intuitions are sufficient to establish a correct course of action, particularly within a very limited time frame. In other situations, intuitions may be unstable, or further reflection may be required. This is particularly true of complex and controversial cases, where spontaneous judgments and intuition may leave us uncertain, confused and/or in disagreement with the opinions of others45. In the case of controversial technologies, for example, the moral intuitions of some individuals are alarmed, whilst others consider the problem to be minimal or non-existent.
In these situations, we move beyond the initial level of morality and into the field of ethics for the purposes of justification. PTA and RRI are almost exclusively concerned with this context, which may become radical when it is constituted in a pluralist manner. In this work, we shall focus on levels (b) and (c), those of ethical theories and meta-ethics.
Across the fields of science, ethics and politics, PTA and RRI are concerned with the sensitive issue of the relationship between information and decisions. Let us consider the case of a typical format used in PTA and sometimes considered for RRI: a consensus, or citizens’, conference46. The panel of citizens is not a representative entity in statistical or legal terms, and the participants are not decision makers in the full and legitimate sense of the term. Therefore, this group of citizens does not make the ultimate decision47. However, it must produce a report based on discussions with experts, and thus makes certain decisions, at the very least in terms of writing the report. The specification for consensus conferences requests that the responses to questions be noted, alongside the recommendations of the panel. Note that this task is taken seriously, despite the fact that it is less precisely regulated than a court jury, which certain analysts have held up as a comparable entity48.
The relationship between information and decisions was considered very early on by Jürgen Habermas, when he took an interest in the technological question49, within the framework of relationships between “specialist knowledge and politics”50. These relationships were presented in the form of three models. They remain valid for analytical purposes, and are often implicitly present in work on PTA and RRI.
Using the first, technocratic model, the responsibility for the initiative is devolved to scientific analysis and technical planning, to the detriment of the political authority, which becomes simply the “executory organism”51.
The second model, Hobbesian, then Weberian52, contrary to the first, draws a clear distinction between the functions of experts and those of politicians53. Decisions may thus be exempted from any requirement of justification in the form of a public discussion. This model is decisionist, and broadly Weberian54; the democratic will ultimately consists of certain elites taking turns in power, according to an acclamatory process weighted in their favor. The irrational essence of domination may thus be legitimized, but not rationalized. Political action would constitute a “choice between certain orders of values and certain religious beliefs”55, which are in competition, without the need for a rational basis. In this case, the rationality involved in the choice of method would go hand in hand with the declared irrationality of the stances adopted in relation to values, aims and requirements.
The third, pragmatist model, inspired by John Dewey, involves a critical interrelation between the functions of experts and politicians, refusing the decisionist dissociation of facts and values56. Habermas took a pragmatic approach to the confrontation of value systems, reflections of social interests and technical possibilities, alongside the strategic methods needed to satisfy these systems. Defending the need for a translation combining both technical and practical knowledge, he established the lineaments of his theory of communicative action, taking a more hermeneutical approach than Dewey. Habermas considered Dewey to be too naïve and too confident in the existence of good will and common sense, whereby technologies, strategies and the orientations of different social groups, marked by “interests in relation to certain values”, would work together for their mutual benefit. Habermas also disagreed with Dewey’s view that public opinion is not excessively complex57.
We feel that Habermas was unnecessarily critical on this point. For Dewey, the necessary public articulation is a long process, requiring sufficient time to reach a full awareness of the possible consequences of the activities of other groups. Dewey attempted to constitute a public based precisely on the fact of being affected by the consequences of technologies. Dewey also proposed a weighty Theory of Enquiry58, for use in identifying and evaluating these consequences, which are often unexpected. To this end, social inquiries must be carried out with the same dedication and precision used in experiments in the natural and engineering sciences, in order to identify consequences and attempt to prevent or plan for them; monitoring activity, carried out by qualified civil servants, is also part of Dewey’s definition of the State.
The relationship established between technological consequences and certain publics is, therefore, not new: it lies at the heart of pragmatism, an important theory in the social sciences, in the version given by John Dewey in 1927.
Habermas preferred the third and final of these types of relationship between information and decisions. His ideal was a controlled translation of technical knowledge into practical knowledge59, the only way of allowing communicating individuals to take possession, in their own language, of knowledge liable to result in certain consequences.
This position is still invoked by analysts of PTA and RRI, notably in the sociology of science and technology and in public policy analysis, who make implicit use of this third pathway.
Nonetheless, Habermas’ proposition is still too general, and not sufficiently refined for PTA and RRI. A clearer picture of the passage from information to decision in this third version would be beneficial. In his Theory of Communicative Action60, and in the theory of deliberative democracy, some of whose proponents invoke him, Habermas defended the importance of argumentation. However, paradoxically, his definition of arguments, like those of the majority of proponents of this theory, is incomplete61. How, then, is it possible to claim, in practice and based on real cases, that the best argument has “won”, particularly when confronted with the heterogeneous nature of different disciplines and arguments? These questions will be considered in greater detail in Chapters 3 and 6.
Although it does not solve the decision issue, Habermas’ strategy may be useful in bringing disagreements to light, and in envisaging three types of relationships between the scientific, political and, to a lesser extent, ethical spheres62. The first type is reductionist, and favors the sciences. The second, inversely, gives the advantage to the political sphere. The third places the emphasis on common sense, following a translation process. Habermas does not, therefore, make use of the resources of moral philosophy; these are resources that will be used extensively in this book.
Spaces, which Dewey and Habermas could only have dreamed of, with the third version of the relationship between information and decisions, have existed within the context of PTA for over 30 years, and should be forthcoming for RRI. These specific meetings have, we feel, been created due to the limitations of the two other solutions when faced with certain technological choices.
Surprisingly, analysis of these experiments, in which risk evaluation carries considerable weight, shows that citizens initially tend to express a form of “nostalgia” for the simple, technical response produced by the first form of the relationship between techniques and decisions, with politically neutral decisions being made by experts. Nowadays, populations are used to the specialization of knowledge and to a certain form of science.
The same is true of conflicts resulting from ethical considerations. For some individuals, ethics, or their morals, constitute a basis or foundation for rejecting certain techniques. Experts and citizens thus share the desire for simple ethical responses, particularly if they have not had the opportunity to fully apprehend the complexity and dimensions of problems from this perspective.
The current context has resulted in certain shifts, further complicating the limits of a descriptive rationality of facts, abandoning questions of evaluation and prescription in favor of decisionist principles, and further reducing the rational capacity63 that should be applied. In citizens’ conferences, for example, the Habermasian problem of dual translations between techniques and practice is not the only question. Technical expertise itself is now less certain. Paradoxically, nonscientists have an increasing tendency to make strong, peremptory assertions, whilst scientists are increasingly cautious.
We must also take account of tension due to partial anticipation, which arises between requirements and technological responses. We therefore need to translate the understanding of an issue expressed in probabilistic terms, between disciplines, or between theories within disciplines, in addition to the approach described by Habermas. For this reason, citizens and experts are confronted with scientific uncertainty, which is, furthermore, associated with conflicts; more rarely, these issues also raise ethical dilemmas that need to be formulated and resolved.
We therefore need to move from relationships between information and decisions to a more thorough formation of decisions, in a two-way communication between ethics and science, with a more demanding and more precise approach to components, such as organisms64 or arguments (Chapter 6).
PTA and RRI may themselves be the objects of evaluation, as there are different solutions to the questions discussed above. All too often, the problems in question were not anticipated. The evaluation of participation is complicated, and rarely carried out in a transparent manner65; the evaluation of evaluation is even more important. Evaluation as a whole is a complex action, and it may concern:
1) The coherency, compatibility, conformity or identity between a standard (or a template or model) and an object (or a phenomenon, or an event). The recorded differences may be measured by default. In this case, we may speak of verification or monitoring.
2) The meaning, signification or value which this object (phenomenon, event) may take on for actors in relation to a project taking place over time.
3) Both of the above, indiscriminately or complementarily.
Depending on the predominant orientation, evaluation may be mostly estimative, making use of procedural methods of measurement or quantification, or mostly appreciative. In terms of “tool-based” evaluation practices, which are not subjective or trivial, these distinctions may be refined further:
1) Note, establish or prove the conformity of a behavior, conduct, level of knowledge, progress within a project, in relation to a model, or, where this conformity is not present, obtain a measured estimation of the difference.
2) More fundamentally still, consider the various meanings and significations contained within and expressed through these phenomena.
These distinctions may be implemented using different, adapted tools to translate them. These may include indicators, audits, analysis grids, and/or various collection and listening approaches.
Two dimensions of the cognitive activity of evaluation have already been mentioned: the technico-scientific dimension and the ethical dimension. In the ethical dimension, the notion of evaluation is surprisingly lacking in recent research. Successive editions of the Dictionnaire de philosophie morale66, for example, do not include an entry for “evaluation”; nor does the Companion to Ethics67.
In ethics, evaluation is a procedure used, initially, to determine the moral value of a given being or abstract entity. However, one might wonder, as Le Senne did in his Traité de morale générale68, whether “the value itself results in the evaluation, or the evaluation produces the value”. Some believe ethical values to possess a sort of objective existence, as if they existed outside of ourselves, without our intervention.
Without giving a firm response to this question, we shall consider that ethics and ethical theories may also be used as “tools” or supports for evaluation in a context where justification is required. Resources for defining what we mean by ethical evaluation and for establishing the conditions in which it is possible, or impossible, may be found in the field of meta-ethics.
Given the elements involved in evaluation, secondary evaluation is understandably difficult, not only in the specific cases covered here, but also for any research project within the humanities and social sciences69. It raises different issues in the more limited domain of PTA and RRI.
First, secondary evaluation covers different types of evaluations and judgments, from informal appreciation, based on intuition or opinion, to more defined, structured and systematic research70.
Second, there are no widely accepted criteria for use in judging the success or failure of this type of experiment.
Third, the evaluation of any notion, such as the idea of participation which polarizes opinion in most research on PTA and RRI, is complex, notably due to the fact that it is “imbued with values”71.
Fourth, there is no agreed evaluation method72.
Fifth, there is a lack of reliable measuring instruments; those analysts suggesting the need for a research agenda on the subject do so only tentatively73.
Sometimes, these problems do not represent the main obstacle for evaluation. Organizers may not wish to proceed, or to publish evaluations of PTA experiments. This trend seems to be stronger in cases where the experiment does not go as expected or fails to produce the desired results. Some analysts claim that certain experiments or processes may be extended because they draw an audience, or because they allow the organizers to claim that a public consultation has taken place. Organizers have also been known to claim that experiments of this type have been successful before any form of evaluation has taken place74.
The emblematic case of the first French citizens’ conference (1998) on GMO in agriculture and the food industry is perplexing on at least two levels. First, a very informal evaluation was carried out, but this was done by certain members75 who established the procedure themselves. Second, a debate intended to permit critical feedback on the experience, during a public conference held at the musée de la Villette in Paris (Cité des Sciences et de l’Industrie), was disrupted by activists throwing rotten eggs at the participants.
