Ethics, Technology, and Engineering

Ibo van de Poel

Description

Featuring a wide range of international case studies, Ethics, Technology, and Engineering presents a unique and systematic approach for engineering students to deal with the ethical issues that are increasingly inherent in engineering practice.

* Utilizes a systematic approach to ethical case analysis, the ethical cycle, which features a wide range of real-life international case studies including the Challenger space shuttle, the Herald of Free Enterprise, and biofuels
* Covers a broad range of topics, including ethics in design, risks, responsibility, sustainability, and emerging technologies
* Can be used in conjunction with the online ethics tool AGORA (http://www.ethicsandtechnology.com)
* Provides engineering students with a clear introduction to the main ethical theories
* Includes an extensive glossary with key terms




Contents

Acknowledgments

Introduction

1 The Responsibilities of Engineers

1.1 Introduction

1.2 Responsibility

1.3 Passive Responsibility

1.4 Active Responsibility and the Ideals of Engineers

1.4.1 Technological enthusiasm

1.4.2 Effectiveness and efficiency

1.4.3 Human welfare

1.5 Engineers versus Managers

1.5.1 Separatism

1.5.2 Technocracy

1.5.3 Whistle-blowing

1.6 The Social Context of Technological Development

1.7 Chapter Summary

Study Questions

Discussion Questions

2 Codes of Conduct

2.1 Introduction

2.2 Codes of Conduct

2.2.1 Professional codes

2.2.2 Corporate codes

2.3 Possibilities and Limitations of Codes of Conduct

2.3.1 Codes of conduct and self-interest

2.3.2 Vagueness and potential contradictions

2.3.3 Can ethics be codified?

2.3.4 Can codes of conduct be lived by?

2.3.5 Enforcement

2.4 Codes of Conduct in an International Context

2.4.1 Global codes for multinationals

2.4.2 Global codes for engineers

2.5 Chapter Summary

Study Questions

Discussion Questions

3 Normative Ethics

3.1 Introduction

3.2 Ethics and Morality

3.3 Descriptive and Normative Judgments

3.4 Points of Departure: Values, Norms, and Virtues

3.4.1 Values

3.4.2 Norms

3.4.3 Virtues

3.5 Relativism and Absolutism

3.5.1 Normative relativism

3.5.2 Absolutism

3.6 Ethical Theories

3.7 Utilitarianism

3.7.1 Jeremy Bentham

3.7.2 Mill and the freedom principle

3.7.3 Criticism of utilitarianism

3.7.4 Applying utilitarianism to the Ford Pinto case

3.8 Kantian Theory

3.8.1 Categorical imperative

3.8.2 Criticism of Kantian theory

3.8.3 Applying Kant’s theory to the Ford Pinto case

3.9 Virtue Ethics

3.9.1 Aristotle

3.9.2 Criticism of virtue ethics

3.9.3 Virtues for morally responsible engineers

3.10 Care Ethics

3.10.1 The importance of relationships

3.10.2 Criticism of care ethics

3.10.3 Care ethics in engineering

3.11 Applied Ethics

3.12 Chapter Summary

Study Questions

Discussion Questions

4 Normative Argumentation

4.1 Introduction

4.2 Valid Arguments

4.3 Deductive and Non-Deductive Arguments

4.4 Arguments in Ethical Theories

4.4.1 Argumentation by analogy

4.4.2 Arguments in a utilitarian plea

4.4.3 Argumentation in Kantian reasoning

4.4.4 Argumentation in virtue-ethical reasoning

4.5 Fallacies

4.5.1 Some common fallacies in ethical discussions

4.5.2 Fallacies of risk

4.6 Chapter Summary

Study Questions

Discussion Questions

5 The Ethical Cycle

5.1 Introduction

5.2 Ill-Structured Problems

5.3 The Ethical Cycle

5.3.1 Moral problem statement

5.3.2 Problem analysis

5.3.3 Options for actions

5.3.4 Ethical evaluation

5.3.5 Reflection

5.4 An Example

5.4.1 Moral problem statement

5.4.2 Problem analysis

5.4.3 Options for actions

5.4.4 Ethical evaluation

5.4.5 Reflection

5.5 Collective Moral Deliberation and Social Arrangements

5.6 Chapter Summary

Study Questions

Discussion Questions

6 Ethical Questions in the Design of Technology

6.1 Introduction

6.2 Ethical Issues During the Design Process

6.2.1 Problem analysis and formulation

6.2.2 Conceptual design

6.2.3 Simulation

6.2.4 Decision

6.2.5 Detail design

6.2.6 Prototype development and testing

6.2.7 Manufacture and construction

6.3 Trade-offs and Value Conflicts

6.3.1 Cost-benefit analysis

6.3.2 Multiple criteria analysis

6.3.3 Thresholds

6.3.4 Reasoning

6.3.5 Value Sensitive Design

6.3.6 A comparison of the different methods

6.4 Regulatory Frameworks: Normal and Radical Design

6.5 Chapter Summary

Study Questions

Discussion Questions

7 Designing Morality (Peter-Paul Verbeek)

7.1 Introduction

7.2 Ethics as a Matter of Things

7.3 Technological Mediation

7.3.1 Mediation of perception

7.3.2 Mediation of action

7.4 Moralizing Technology

7.4.1 Criticizing the moral character of technological artifacts

7.4.2 Taking mediation into ethics

7.5 Designing Mediations

7.6 Chapter Summary

Study Questions

Discussion Questions

8 Ethical Aspects of Technical Risks

8.1 Introduction

8.2 Definitions of Central Terms

8.3 The Engineer’s Responsibility for Safety

8.4 Risk Assessment

8.4.1 The reliability of risk assessments

8.5 When are Risks Acceptable?

8.5.1 Informed consent

8.5.2 Do the advantages outweigh the risks?

8.5.3 The availability of alternatives

8.5.4 Are risks and benefits justly distributed?

8.6 Risk Communication

8.7 Dealing with Uncertainty and Ignorance

8.7.1 The precautionary principle

8.7.2 Engineering as a societal experiment

8.8 Chapter Summary

Study Questions

Discussion Questions

9 The Distribution of Responsibility in Engineering

9.1 Introduction

9.2 The Problem of Many Hands

9.2.1 The CitiCorp building

9.2.2 Causes of the problem of many hands

9.2.3 Distributing responsibility

9.3 Responsibility and the Law

9.3.1 Liability versus regulation

9.3.2 Negligence versus strict liability

9.3.3 Corporate liability

9.4 Responsibility in Organizations

9.5 Responsibility Distributions and Technological Designs

9.6 Chapter Summary

Study Questions

Discussion Questions

10 Sustainability, Ethics, and Technology (Michiel Brumsen)

10.1 Introduction

10.2 Environmental Ethics?

10.3 Environmental Problems

10.4 Sustainable Development

10.4.1 The Brundtland definition

10.4.2 Moral justification

10.4.3 Operationalization

10.5 Can a Sustainable Society be Realized?

10.6 Engineers and Sustainability

10.6.1 Points of attention during the design process

10.6.2 Life cycle analysis

10.7 Chapter Summary

Study Questions

Discussion Questions

Appendix I: Engineering Qualifications and Organizations in a Number of Countries

Appendix II: NSPE Code of Ethics for Engineers

Appendix III: FEANI Position Paper on Code of Conduct: Ethics and Conduct of Professional Engineers

Appendix IV: Shell Code of Conduct

Appendix V: DSM Values and Whistle Blowing Policy

Glossary

References

Index of cases

Index

Ethics, Technology, and Engineering

This edition first published 2011
© 2011 Ibo van de Poel and Lambèr Royakkers; © chapter 7: Peter-Paul Verbeek; © chapter 10: Michiel Brumsen

Blackwell Publishing was acquired by John Wiley & Sons in February 2007. Blackwell’s publishing program has been merged with Wiley’s global Scientific, Technical, and Medical business to form Wiley-Blackwell.

Registered Office
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom

Editorial Offices
350 Main Street, Malden, MA 02148-5020, USA
9600 Garsington Road, Oxford, OX4 2DQ, UK
The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

For details of our global editorial offices, for customer services, and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com/wiley-blackwell.

The right of Ibo van de Poel and Lambèr Royakkers to be identified as the authors of this work has been asserted in accordance with the UK Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Library of Congress Cataloging-in-Publication Data

Poel, Ibo van de, 1966–

Ethics, Technology, and Engineering : An Introduction / by Ibo van de Poel and Lambèr Royakkers.

p. cm.

Includes bibliographical references and index.

ISBN 978-1-4443-3094-6 (hardcover : alk. paper) – ISBN 978-1-4443-3095-3 (pbk. : alk. paper)

1. Technology–Moral and ethical aspects. I. Royakkers, Lambèr M. M. II. Title.

BJ59.P63 2011

174′.96–dc22

2010042204

A catalogue record for this book is available from the British Library.

This book is published in the following electronic formats: eBook 978-1-4443-9570-9; ePub 978-1-4443-9571-6

Acknowledgments

This book is based on our Dutch textbook: Royakkers, L., van de Poel, I., and Pieters, A. (eds) (2004). Ethiek & techniek. Morele overwegingen in de ingenieurspraktijk, HB Uitgevers, Baarn. Most of the chapters have been thoroughly revised; some chapters from the Dutch textbook are not included, and this book contains some new chapters.

Section 1.4 contains excerpts from Van de Poel, I. (2007). De vermeende neutraliteit van techniek. De professionele idealen van ingenieurs, in Werkzame idealen. Ethische reflecties op professionaliteit (eds J. Kole and D. de Ruyter), Van Gorcum, Assen, pp. 11–23 [translated from Dutch].

Section 3.11 and large parts of Chapter 5 are drawn from Van de Poel, I., and Royakkers, L. (2007). The ethical cycle. Journal of Business Ethics, 71 (1), 1–13.

Section 6.2.4 contains excerpts from Devon, R. and Van de Poel, I. (2004). Design ethics: The social ethics paradigm. International Journal of Engineering Education, 20 (3), 461–469.

Section 6.3 contains excerpts from Van de Poel, I. (2009). Values in engineering design, in Handbook of the Philosophy of Science. Vol. 9: Philosophy of Technology and Engineering Sciences (ed. A. Meijers), Elsevier, Amsterdam, pp. 973–1006.

Chapter 7, which was written by Peter-Paul Verbeek, is based on Verbeek, P.P. (2006a). Materializing morality – Design ethics and technological mediation. Science, Technology and Human Values, 31 (3), 361–380; Verbeek, P.P. (2006b). The morality of things – A postphenomenological inquiry, in Postphenomenology: A Critical Companion to Ihde (ed. E. Selinger), State University of New York Press, New York, pp. 117–130; and Verbeek, P.P. (2008). Morality in design: Design ethics and the morality of technological artifacts, in Philosophy and Design: From Engineering to Architecture (eds P.E. Vermaas, P. Kroes, A. Light, and S.A. Moore), Springer, Dordrecht, pp. 91–103.

Section 8.7 contains excerpts from Van de Poel, I. (2009). The introduction of nanotechnology as a societal experiment, in Technoscience in Progress. Managing the Uncertainty of Nanotechnology (eds S. Arnaldi, A. Lorenzet and F. Russo), IOS Press, Amsterdam, pp. 129–142.

Section 9.2 contains excerpts from van de Poel, I., Fahlquist, J.N., de Lima, T., Doorn, N., Royakkers, L. and Zwart, S. Fairness and completeness in distributing responsibility: The case of engineering. Manuscript.

Introduction

One of the main differences between science and engineering is that engineering is not just about better understanding the world but also about changing it. Many engineers believe that such change improves, or at least should improve, the world. In this sense engineering is an inherently morally motivated activity. Changing the world for the better is, however, no easy task and also not one that can be achieved on the basis of engineering knowledge alone. It also requires, among other things, ethical reflection and knowledge. This book aims at contributing to such reflection and knowledge, not just in a theoretical sense but also more practically.

This book takes an innovative approach to engineering ethics in several respects. It provides a rather unique approach to ethical decision-making: the ethical cycle. This approach is illustrated by an abundance of case studies and examples, not only from the US but also from Europe and the rest of the world. The book is also innovative in paying more attention than most traditional introductions to engineering ethics to such topics as ethics in engineering design, the organizational context of engineering, the distribution of responsibility, sustainability, and new technologies such as nanotechnology.

There is increasing attention to ethics in engineering curricula. Engineers are supposed not only to carry out their work competently and skillfully but also to be aware of the broader ethical and social implications of engineering and to be able to reflect on these. According to the Engineering Criteria 2000 of the Accreditation Board for Engineering and Technology (ABET) in the US, engineering graduates must have “an understanding of professional and ethical responsibility” and “the broad education necessary to understand the impact of engineering solutions in a global and societal context” (Herkert 1999).

This book provides an undergraduate introduction to ethics in engineering and technology. It helps students to acquire the competences mentioned in the ABET criteria or comparable criteria formulated in other countries. More specifically, this book helps students to acquire the following moral competencies:

Moral sensibility: the ability to recognize social and ethical issues in engineering;
Moral analysis skills: the ability to analyze moral problems in terms of facts, values, stakeholders and their interests;
Moral creativity: the ability to think out different options for action in the light of (conflicting) moral values and the relevant facts;
Moral judgment skills: the ability to give a moral judgment on the basis of different ethical theories or frameworks including professional ethics and common sense morality;
Moral decision-making skills: the ability to reflect on different ethical theories and frameworks and to make a decision based on that reflection; and
Moral argumentation skills: the ability to morally justify one’s actions and to discuss and evaluate them together with other engineers and non-engineers.

With respect to these competencies, our focus is on the concrete moral problems that students will encounter in their future professional practice. With the help of concrete cases we show how the decision to develop a technology, as well as the process of design and production, is inherently moral. The attention of students is drawn towards the specific moral choices that engineers face. In relation to these concrete choices students will encounter different reasons for and against certain actions, and they will discover that these reasons can be discussed. In this way, students become aware of the moral dimensions of technology and acquire the argumentative capacities that are needed in moral debates.

In addition to an emphasis on cases – which is common to most other introductory textbooks in engineering ethics as well – we would like to mention three further characteristics of the approach to engineering ethics we have chosen in this textbook.

First, we take a broad approach to ethical issues in engineering and technology and the engineer’s responsibility for these. Some of the issues we discuss in this book extend beyond the issues traditionally dealt with in engineering ethics like safety, honesty, and conflicts of interest. We also include, for example, ethical issues in engineering design (Chapters 6 and 7) and sustainability (Chapter 10). We also pay attention to such technologies as the atomic bomb and nanotechnology. While we address such “macro-ethical” issues (Herkert 2001) in engineering and technology, our approach to these issues may be characterized as inside-out, that is to say: we start with ethical issues that emerge in the practice of engineers and we show how they arise or are entangled with broader issues.

A second characteristic of our approach is that we pay attention to the broader contexts in which individual engineers do their work, such as the project team, the company, the engineering profession and, ultimately, society. We have devoted a chapter to the issues this raises with respect to organizing responsibility in engineering (Chapter 9). Where appropriate we also pay attention to other actors and stakeholders in these broader contexts. Again our approach is mainly inside-out, starting from concrete examples and the day-to-day work of engineers. It is sometimes thought that paying attention to such broader contexts diminishes the responsibility of engineers, because it shows that engineers lack the control needed to be responsible.1 Although there is some truth in this, we argue that the broader contexts also change the content of the responsibility of engineers and in some respects increase their responsibility. Engineers, for example, need to take into account the viewpoints, values, and interests of relevant stakeholders (Chapter 1). This also implies including such stakeholders, and their viewpoints, in relevant discussion and decision making, for example in design (Chapters 5 and 6). Engineers also need to inform managers, politicians, and the public not only of technological risks but also of uncertainties and potential ignorance (Chapter 8).

A third characteristic of our approach is our attention to ethical theories. We consider these theories important because they introduce a richness of moral perspectives, which forces students to look beyond what seems obvious or beyond debate. Although we consider it important that students get some feeling for the diversity and backgrounds of ethical views and theories, our approach is very much practice-oriented. The main didactical tool here is what we call the “ethical cycle” (Van de Poel and Royakkers 2007). This is an approach for dealing with ethical problems that systematically encourages students to consider a diversity of ethical points of view and helps them to come to a reasoned and justified judgment on ethical issues that they can discuss with others. The ethical cycle is explained in Chapter 5, but Chapters 2, 3, and 4 introduce important elements of it.

The development of the ethical cycle was largely inspired by the ten years of experience we both have in teaching engineering ethics to large groups of students in the Netherlands, and the didactical problems we and our colleagues encountered in doing so (Van der Burg and Van de Poel 2005; Van de Poel, Zandvoort, and Brumsen 2001). We noticed that students often work in an unstructured way when they analyze moral cases, and they tend to jump to conclusions. Relevant facts or moral considerations were overlooked, or the argumentation was lacking. Ethical theories were often used in an instrumental way, by applying them to cases unreflectively. Some students considered a judgment about a moral case to be an opinion about which no (rational) discussion is possible.

The ethical cycle is intended as a didactical tool to deal with these problems. It provides students a guide for dealing with ethical issues that is systematic without assuming an instrumental notion of ethics. After all, what is sometimes called applied ethics is not a straightforward application of general ethical theories or principles to practical problems in an area. Rather, it is a working back and forth between a concrete moral problem, intuitions about this problem, more general moral principles, and a diversity of ethical theories and viewpoints. This is perhaps best captured in John Rawls’ notion of wide reflective equilibrium (Rawls 1971). (For a more detailed discussion, the reader is referred to Chapter 5.)

The ethical cycle provides a tool that does justice to this complexity of ethical judgment but at the same time is practical so that students do not get overwhelmed by the complexity and diversity of ethical theories. By applying the ethical cycle students will acquire the moral competencies that are needed for dealing with ethical issues in engineering and technology (see Figure I.1).

In conjunction with the ethical cycle, we, together with some colleagues, have developed a software tool for analyzing ethical issues in engineering and technology: AGORA (Van der Burg and Van de Poel 2005). The approach on which AGORA is based is basically the same as the ethical cycle. AGORA would therefore be a useful software platform to use in combination with this textbook. The program contains a number of standard exercises that correspond to chapters in this book. In addition, teachers can develop their own exercises. For more information about AGORA, the reader is referred to the website www.ethicsandtechnology.com.

Figure I.1 Ethical issues in engineering and technology

This book consists of two parts. Part I introduces the ethical cycle. After an introductory chapter on the responsibility of engineers, it introduces the main elements of the ethical cycle: professional and corporate codes of conduct (Chapter 2), ethical theories (Chapter 3) and argumentation schemes that are used in ethical reasoning (Chapter 4). Chapter 5 then introduces the ethical cycle and offers an extensive illustration of the application of the cycle to an ethical issue in engineering.

Part II focuses on more specific ethical issues in engineering and technology. Chapters 6 and 7 deal with ethical issues in engineering design. Chapter 6 focuses on ethical issues that may arise during the various phases of the design process and pays special attention to how engineers are confronted with and can deal with conflicting values in design. Chapter 7 takes a broader look at how technologies influence the perceptions and actions of users and considers how such considerations can be taken into account in design. Chapter 8 deals with technological risks, and questions about how to assess such risks, the moral acceptability of risks, risk communication, and dealing with uncertainty and ignorance. Chapter 9 discusses issues of responsibility that arise due to the social organization of engineering. It discusses in particular the problem of many hands, the difficulty of pinpointing who is responsible if a large number of people are involved in an activity, and it discusses ways of dealing with this problem in engineering. Chapter 10 discusses sustainability, both in more general terms and how it affects the work of engineers and can be taken into account in, for example, the design process.

To a large extent, Parts I and II can be used independently from each other. Teachers who have only limited course hours available can, for example, choose to teach a basic introduction and only use the first five chapters. Conversely, students who have earlier followed some basic introduction to engineering ethics can be offered a course that uses some or all of the chapters from Part II. Although the chapters in Part II are consistent with the ethical cycle introduced in Part I, they contain hardly any explicit references to it, and most of the necessary background would also be covered by any other basic course in engineering ethics. In fact the chapters in Part II can also largely be used independently of each other, so that they could be used for smaller teaching modules.

Teachers who want to offer their students an introduction to engineering ethics without discussing the various ethical theories and the ethical cycle could choose to use the first two chapters and a selection of the chapters from Part II that deal with more specific issues. Any set-up that aims at introducing the ethical cycle should, we feel, at least include Chapters 2, 3, and 5. Chapter 4 is more optional: it strengthens students’ ability to use the ethical cycle but is not strictly necessary.

Each of the chapters starts with an illustrative case study that introduces some of the main issues that are covered in the chapter. Each chapter introduction also indicates the learning objectives so that students know what they should know and be able to do after reading the chapter. Each chapter also contains key terms and a summary that provide a further guide for getting to the core of the subject matter. Study questions provide further help in rehearsing the main points and in applying the main notions to concrete examples. AGORA exercises (see above) may be a further helpful tool to teach students how to apply what they have learned to more complex cases.

A book like this is impossible without the help of a lot of people. First of all we would like to thank everybody who contributed to the composition of the Dutch textbook Ethiek en Techniek. Morele overwegingen in de Ingenieurspraktijk that formed the basis for this book. In particular we would like to thank Angèle Pieters, our co-editor of the Dutch textbook, and Stella de Jager of HB Uitgevers. We would also like to thank Peter-Paul Verbeek and Michiel Brumsen for contributing a chapter to this book. We thank Steven Ralston and Diane Butterman for translating parts of our Dutch texts. Jessica Nihlén Fahlquist, Tiago de Lima, Sjoerd Zwart, and Neelke Doorn were so kind as to allow us to use a part of a common manuscript in Chapter 9 of this book. We would also like to thank the people of Wiley-Blackwell for their comments and support, in particular Nick Bellorini, Ian Lague, Louise Butler, Tiffany Mok, Dave Nash, and Mervyn Thomas. Finally we would like to thank the anonymous reviewers and the people who anonymously filled in a questionnaire about the scope of the book for their comments and suggestions.

Ibo van de Poel is grateful to NIAS, the Netherlands Institute for Advanced Study, for providing him with the opportunity, as a Fellow-in-Residence, to finish this book.

Ibo van de Poel and Lambèr Royakkers

Note

1 Michael Davis, for example, has expressed the concern that what he calls a sociological approach to the wider contexts that engineers face may in effect free engineers from any responsibility (see Davis 2006).

1

The Responsibilities of Engineers

Having read this chapter and completed its associated questions, readers should be able to:

Describe passive responsibility, and distinguish it from active responsibility;
Describe the four conditions of blameworthiness and apply these to concrete cases;
Describe the professional ideals of technological enthusiasm, effectiveness and efficiency, and human welfare;
Debate the role of the professional ideals of engineering for professional responsibility;
Show an awareness that professional responsibility can sometimes conflict with one’s responsibility as an employee, and discuss how to deal with this;
Discuss the impact of the social context of technological development on the responsibility of engineers.

Contents

1.1 Introduction

1.2 Responsibility

1.3 Passive Responsibility

1.4 Active Responsibility and the Ideals of Engineers

1.4.1 Technological enthusiasm

1.4.2 Effectiveness and efficiency

1.4.3 Human welfare

1.5 Engineers versus Managers

1.5.1 Separatism

1.5.2 Technocracy

1.5.3 Whistle-blowing

1.6 The Social Context of Technological Development

1.7 Chapter Summary

Study Questions

Discussion Questions

1.1 Introduction

Case Challenger

The 25th launching of the space shuttle was to be something special. It was the first time that a civilian, the teacher Christa McAuliffe, or, as President Ronald Reagan put it, “one of America’s finest,” would go into space. There was, therefore, more media attention than usual at cold Cape Canaveral (Florida, United States). When, on the morning of January 28, 1986, the mission controllers’ countdown began, it was almost four degrees Celsius below freezing point (about 25 degrees Fahrenheit). After 73 seconds the Challenger space shuttle exploded 11 kilometers above the Atlantic Ocean. All seven astronauts were killed. At the time it was the biggest disaster ever in the history of American space travel.

Figure 1.1 Challenger Space Shuttle. Photo: © Bob Pearson / AFP / Getty Images.

After the accident an investigation committee was set up to establish the exact cause of the explosion. The committee concluded that the explosion leading to the loss of the 1.2 billion dollar spaceship was attributable to the failure of the rubber sealing ring (the O-ring). As the component was unable to function properly at low temperatures, fuel had started to leak from the booster rocket. The fuel then caught fire, causing the Challenger to explode.

Morton Thiokol, a NASA supplier, was the company responsible for the construction of the rocket boosters designed to propel the Shuttle into space. In January 1985 Roger Boisjoly, an engineer at Morton Thiokol, had aired his doubts about the reliability of the O-rings. In July 1985 he sent a confidential memo to the Morton Thiokol management board in which he expressed his concerns about the effectiveness of the O-rings at low temperatures: “I am really afraid that if we do not take immediate steps we will place both the flight and the launching pad in serious danger. The consequences would be catastrophic and human lives would be put at risk.” The memo instantly led to a project group being set up to investigate the problem. However, the project group received insufficient material and funding from management to carry out its work properly. Even after one of the project group managers had sent a memo headed “Help!” and ending with the words “This is a red flag!” to Morton Thiokol’s vice-chairman, nothing concrete was actually undertaken.

On the day of the fatal flight the launch was delayed five times, partly for weather-related reasons. The night preceding the launch was very cold: 10 degrees Celsius below freezing (14 degrees Fahrenheit). NASA engineers recalled having heard that it would not be safe to launch at very low temperatures. They therefore decided to hold a telephone conference on the eve of the launch between NASA and Morton Thiokol representatives; Boisjoly also participated. Morton Thiokol underlined the risk of the O-rings eroding at low temperatures; the O-rings had never been tested in subzero conditions. The engineers recommended that if the temperature fell below 11 degrees Celsius (52 degrees Fahrenheit) the launch should not go ahead. The weather forecast indicated that the temperature would not rise above freezing point on the morning of the launch. That was the main reason why Morton Thiokol initially recommended that the launch should not be allowed to go ahead.

The people at NASA claimed that the data did not provide sufficient grounds for them to declare the launch, which was extremely important to NASA, unsafe. What was rather curious was that the burden of proof was placed on those who were opposed to the launch; they were requested to prove that the flight would be unsafe. The official NASA policy, though, was that it had to be proved that it would be safe to make the flight.

A brief consultation session was convened so that the data could once again be examined. While the connection was broken for five minutes, the General Manager of Thiokol commented that a “management decision” had to be made. Later on, several employees stated that shortly after the launch NASA would be making a decision regarding a possible contract extension with the company. It was at least the case that Boisjoly felt that people were no longer listening to his arguments. For Morton Thiokol it was too much of a political and financial risk to postpone the launch. After discussing matters amongst themselves, the four managers present, with the engineers excluded, put it to a vote. They were reconnected, and Thiokol, ignoring the advice of Boisjoly, announced to NASA its positive recommendation concerning the launch of the Challenger. It was a decision that NASA immediately followed without any further questioning. As agreement had been reached, the whole problem surrounding the inadequate functioning of the O-ring at low temperatures was not passed on to NASA’s higher management level. Several minutes after the launch, someone on the mission control team concluded that there had “obviously been … a major malfunction.”

A Presidential Commission determined that the whole disaster was due to inadequate communication at NASA. At the same time, it argued for a change in system and ethos that would ensure transparency and encourage whistle-blowing. As a consequence, the entire space program was halted for two years so that the safety of the Shuttle could be improved. Morton Thiokol did not lose its contract with NASA but helped, instead, to work on finding a solution to the O-ring problem. Engineers were given more of a say in matters: in the future, they would have the power to halt a flight if they had any doubts.

Source: Based on Wirtz (2007, p. 32), Vaughan (1996), and the BBC documentary Challenger: Go for Launch by Blast! Films.
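The Celsius and Fahrenheit figures quoted in the case can be checked with the standard conversion formula; as a worked example, using the three temperatures from the case itself:

\[
T_F = \frac{9}{5}\,T_C + 32, \qquad
\frac{9}{5}(-10) + 32 = 14, \quad
\frac{9}{5}(11) + 32 \approx 52, \quad
\frac{9}{5}(-4) + 32 \approx 25
\]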

In this case we see how the Challenger disaster was caused by technical error and inadequate communication. For the designers of the O-rings, the engineers at Morton Thiokol, the disaster did not have legal implications. Does that mean that the case is thus closed, or do they bear some kind of responsibility? If so, what then is their responsibility? This chapter first investigates what exactly responsibility is (Section 1.2), distinguishing between passive responsibility for things that happened in the past (Section 1.3) and active responsibility for things not yet attained (Section 1.4). The final two sections discuss the position of engineers vis-à-vis managers, which was obviously important in the Challenger case, and the wider social context of technological development, and examine the consequences of this wider context for the responsibility of engineers.

1.2 Responsibility

Whenever something goes wrong, or there is a disaster like that of the Challenger, the question of who is responsible for it quickly arises. Here responsibility means in the first place being held accountable for your actions and for the effects of your actions. Making choices, taking decisions, but also failing to act are all things that we regard as types of action. Failing to save a child who is drowning is therefore also a type of action. Different kinds of responsibility can be distinguished. A common distinction is between active responsibility and passive responsibility. Active responsibility is responsibility before something has happened. It refers to a duty or task to care for certain states of affairs or persons. Passive responsibility is applicable after something (undesirable) has happened.

Responsibility (both active and passive) is often linked to the role that you have in a particular situation. In the case described here Boisjoly fulfilled the role of engineer and not that of, for example, family member. You often have to fulfill a number of roles simultaneously such as those of friend, parent, citizen, employee, engineer, expert, and colleague. In a role you have a relationship with others, for instance, as an employee you have a relationship with your employer, as an expert you have a relationship with your customers and as a colleague you have relationships with other colleagues. Each role brings with it certain responsibilities. A parent, for example, is expected to care for his child. In the role of employee it is expected that you will execute your job properly, as laid down in collaboration with your employer; in the role of expert it will be presumed that you furnish your customer with information that is true and relevant and in the role of colleague you will be expected to behave in a collegial fashion with others in the same work situation. An engineer is expected to carry out his work in a competent way. Roles and their accompanying responsibilities can be formally laid down, for instance legally, in a contract or in professional or corporate codes of conduct (see Chapter 2). In addition, there are more informal roles and responsibilities, like the obligations one has within a family or towards friends. Here, too, agreements are often made and rules are assumed but they are not usually put down in writing. We will call the responsibility that is based on a role you play in a certain context role responsibility.

Role responsibility The responsibility that is based on the role one has or plays in a certain situation.

Since a person often has different roles in life, he/she has various role responsibilities. One role may have responsibilities that conflict with the responsibilities that accompany another role. In the Challenger case, for example, Boisjoly had a role both as an employee and as an engineer. As an employee he was expected to be loyal to his company and to listen to his superiors, who eventually decided to give positive advice about the launch. As an engineer he was expected to give technically sound advice taking into account the possible risks to the astronauts, and, in his view, this implied negative advice with respect to the launch.

Moral responsibility Responsibility that is based on moral obligations, moral norms or moral duties.

Professional responsibility The responsibility that is based on one’s role as a professional, in so far as it stays within the limits of what is morally allowed.

Although roles define responsibilities, moral responsibility is not confined to the roles one plays in a situation. Rather, it is based on the obligations, norms, and duties that arise from moral considerations. In Chapter 3, we will discuss in more detail what we mean by terms like morality and ethics, and what different kinds of ethical theories can be distinguished. Moral responsibility can extend beyond roles. In the Challenger case, it was part of Boisjoly’s moral responsibility to care about the consequences of his advice for the astronauts and for others. Moral responsibility can, however, also limit role responsibilities, because immoral responsibilities may be associated with some roles. (Think of the role of Mafioso.) In this and the next chapter we are mainly interested in the professional responsibility of engineers. Professional responsibility is the responsibility that is based on your role as a professional engineer, in so far as it stays within the limits of what is morally allowed. Professional responsibilities are not just passive; they also contain an active component. We will examine the content of the professional responsibility of engineers in more detail in Section 1.4, but first we turn to a more detailed description of passive responsibility.

1.3 Passive Responsibility

Typical of passive responsibility is that the person who is held responsible must be able to provide an account of why he/she followed a particular course of action and why he/she made certain decisions. In particular, the person is held to justify his/her actions towards those who are in a position to demand that the individual in question accounts for them. In the case of the Challenger, NASA had to be able to render account for its actions to the families of the victims, to society, and to the courts. We will call this type of passive responsibility accountability.

Passive responsibility Backward-looking responsibility, relevant after something undesirable occurred; specific forms are accountability, blameworthiness, and liability.

Passive responsibility often involves not just accountability but also blameworthiness. Blameworthiness means that it is proper to blame someone for his/her actions or the consequences of those actions. You are not always blameworthy for the consequences of your actions or for your actions themselves. Usually, four conditions need to apply: wrong-doing, causal contribution, foreseeability, and freedom. The extent to which you can be blamed is determined by the degree to which these conditions are fulfilled. The four conditions will be illustrated on the basis of the Challenger disaster.

Accountability Backward-looking responsibility in the sense of being held to account for, or justify one’s actions towards others.

Blameworthiness Backward-looking responsibility in the sense of being a proper target of blame for one’s actions or the consequences of one’s actions. In order for someone to be blameworthy, usually the following conditions need to apply: wrong-doing, causal contribution, foreseeability, and freedom.

Wrong-doing

Whenever one blames a person or institution, one usually maintains that in carrying out a certain action the individual or institution in question violated a norm or did something wrong. This can be a norm that is laid down in the law or that is common in the organization. In the Challenger case, for example, NASA violated the norm that a flight had to be proven safe; instead, the burden of proof was reversed. In this book, we are not just interested in legal and organizational norms, but in moral ones. We will therefore investigate different kinds of ethical frameworks that can be applied in judging the moral rightness or wrongness of actions and their consequences. This includes ethical frameworks such as your own conscience and moral beliefs but also codes of conduct (Chapter 2) and ethical theories (Chapter 3). Together these frameworks form a means of thinking about how one can arrive at what is good, and how one can act in the right way.

Causal contribution

A second criterion is that the person who is held responsible must have made a causal contribution to the consequences for which he or she is held responsible. Two things should be kept in mind when judging whether someone made a causal contribution to a certain consequence. First, not only an action but also a failure to act may be considered a causal contribution, such as, in the case of the Challenger, the failure to stop the launch. Second, a causal contribution is usually not a sufficient condition for the occurrence of the consequence under consideration. Often, a range of causal contributions will have to be present for the consequence to occur. A causal contribution will often be a necessary ingredient in the actual chain of events that led to the consequence, that is, without the causal contribution the consequence would not have occurred.

Both the NASA project team and the Morton Thiokol management team made a causal contribution to the disaster because both could have averted the disaster by postponing the launch. In fact, before the Challenger could be launched, both teams needed to make a positive decision. The engineer, Boisjoly, maintained that he no longer had the chance to take action. Internally he had done everything in his power to prevent the consequences but he did not have enough influence. In retrospect he could possibly have gone public by informing the press. He should also possibly have intervened earlier on in the process – before the telephone conference – to ensure that the O-ring problem had been tackled more successfully.

Foreseeability

A person who is held responsible for something must have been able to know the consequences of his or her actions. The consequences are the harm actually arising from transgressing a norm. People cannot be held responsible if it is totally unreasonable to expect that they could possibly have been aware of the consequences. What we do expect is that people do everything that is reasonably possible to become acquainted with the possible consequences.

In the Challenger case, engineer Boisjoly, the Morton Thiokol management team, and the NASA representatives (the project team) could all have expected the Challenger disaster, because all three were aware of the risk of erosion when the O-rings are exposed to low temperatures, which meant that a safe launch could not be guaranteed under such conditions. Though there was no conclusive scientific evidence that the launch was unsafe, all parties were certainly aware of the danger of a possible disaster, which means that the condition of foreseeability was fulfilled.

Freedom of action

Finally, the one who is held responsible must have had freedom of action, that is, he or she must not have acted under compulsion. Individuals are either not responsible or are responsible to a lesser degree if they are, for instance, coerced to take certain decisions. The question is, however, what exactly counts as coercion. A person can, for example, be “forced” or manipulated into working on the development of a particular technology under the threat that if he does not cooperate he will sacrifice his chances of promotion. In this case, the person is strictly speaking not coerced to work on the development of the particular technology; he can still act differently. Therefore the person remains responsible for his actions. However, since he is also not entirely free, we could say that his responsibility is somewhat smaller than in the case where he had freely chosen to be involved in the development of this technology.

The NASA project team was under pressure. The launch had already been postponed several times, which meant that the time available for other space missions was becoming very limited. There was also the pressure of the eager public, largely because of the presence of McAuliffe. Morton Thiokol might also have felt the pressure of NASA because negative recommendations could well have prevented further cooperation with NASA and that would have had its financial consequences. The possibilities open to the engineer Boisjoly were limited. The only thing he could have possibly done to prevent the disaster was inform the press but that would have had negative consequences (e.g., dismissal) for him and his family. In all three cases, the pressure was probably not strong enough to say that NASA, Morton Thiokol, or Boisjoly lacked freedom of action; they could have done other things than they actually did, they were not compelled to act as they did. Nevertheless, especially in the case of Boisjoly you could argue that the negative personal consequences he could expect diminished his responsibility.

1.4 Active Responsibility and the Ideals of Engineers

We considered above questions of responsibility when something has gone wrong. Responsibility, however, also comes into play beforehand, when nothing has yet gone wrong or when there is the chance to realize something good. We will refer to this as active responsibility. If someone is actively responsible for something, he/she is expected to act in such a way that undesired consequences are avoided as much as possible and positive consequences are realized. Active responsibility is not primarily about blame but requires a certain positive attitude or character trait for dealing with matters. Philosophers call such positive attitudes or character traits virtues (see Chapter 3). Active responsibility, moreover, is not so much about preventing the negative effects of technology as about realizing certain positive effects.

Active responsibility Responsibility before something has happened, referring to a duty or task to care for certain states of affairs or persons.

Active Responsibility

Mark Bovens mentions the following features of active responsibility:

Adequate perception of threatened violations of norms;
Consideration of the consequences;
Autonomy, i.e., the ability to make one’s own independent moral decisions;
Displaying conduct that is based on a verifiable and consistent code; and
Taking role obligations seriously. (Bovens, 1998)

One way in which the active responsibility of engineers can be understood is by looking at the ideals of engineers. Ideals, as we will understand the notion here, have two specific characteristics. First, ideals are ideas or strivings which are particularly motivating and inspiring for the person having them. Second, it is typical of ideals that they aim at achieving an optimum or maximum. Often, therefore, ideals cannot be entirely achieved but are striven for. In the course of practicing their profession, engineers can be driven by several ideals. These can be personal ideals, such as the desire to earn a lot of money or to satisfy one’s curiosity, but they can also be social or moral ideals, such as wanting to use technology to improve the world. These are also the types of ideals that can spur people to opt for an engineering field of study and career. Some of these ideals are directly linked to professional practice because they are closely allied to the engineering profession or can only be aspired to by carrying out the profession of engineer. We call such ideals professional ideals. As professional ideals, these ideals are part of professional responsibility in so far as they stay within the limits of what is morally allowed. Below, we shall therefore discuss three different professional ideals of engineers and establish whether these ideals are also morally commendable.

Ideals Ideas or strivings which are particularly motivating and inspiring for the person having them, and which aim at achieving an optimum or maximum.

Professional ideals Ideals that are closely allied to a profession or can only be aspired to by carrying out the profession.

1.4.1 Technological enthusiasm

Technological enthusiasm pertains to the ideal of wanting to develop new technological possibilities and take up technological challenges. This is an ideal that motivates many engineers. It is fitting that Samuel Florman refers to this as “the existential pleasures of engineering” (Florman, 1976). One good example of technological enthusiasm is the development of Google Earth, a program with which, via the Internet, it is possible to zoom in on the earth’s surface. It is a beautiful concept, but it gives rise to all kinds of moral questions, for instance in the area of privacy (you can study your neighbor’s garden in great detail) and in the field of security (terrorists could use it to plan attacks). In a recent documentary on the subject of Google Earth, one of the program developers admitted that these are important questions.1 Nevertheless, when developing the program, these were matters that the developers had failed to consider because they were so driven by the challenge of making it technologically possible for everyone to study the earth from behind his or her PC.

Technological enthusiasm The ideal of wanting to develop new technological possibilities and taking up technological challenges.

Technological enthusiasm in itself is not morally improper; it is in fact positive for engineers to be intrinsically motivated in their work. The inherent danger of technological enthusiasm lies in the possible negative effects of technology and the relevant social constraints being easily overlooked. The Google Earth case illustrates this; the career of Wernher von Braun exemplifies it to an extreme extent (see box).

Wernher von Braun (1912–77)

Wernher von Braun is famous as the creator of the space program that made it possible to put the first person on the moon on July 20, 1969. A couple of days before, on July 16, the Apollo 11 spaceship used by the astronauts to travel from the earth had been launched with the help of a Saturn V rocket, and Von Braun had been the main designer of that rocket. Sam Phillips, the director of the American Apollo program, was reported to have said that without Von Braun the Americans would never have been able to reach the moon as soon as they did. Later, after having spoken to colleagues, he revised his comment by claiming that without Von Braun the Americans would never have landed on the moon at all.

Figure 1.2 Wernher von Braun. Photo: NASA Archives.

Von Braun grew up in Germany. From an early age he was fascinated by rocket technology. According to one anecdote, Von Braun was not particularly brilliant in physics and mathematics until he read a book entitled Die Rakete zu den Planetenräumen by Hermann Oberth and realized that those were the subjects he would have to get to grips with if he was later going to be able to construct rockets. In the 1930s Von Braun was involved in developing rockets for the German army. In 1937 he joined Hitler’s National Socialist Party, and in 1940 he became a member of the SS. Later he explained that he had been forced to join the party and that he had never participated in any political activities, a claim that is historically disputed. What is in any case striking is the argument that he gave in retrospect for joining the National Socialist Party: “My refusal to join the party would have meant that I would have had to abandon the work of my life. Therefore, I decided to join” (Piszkiewicz, 1995, p. 43). His life’s work was, of course, rocket technology, and devotion to that cause was a constant feature of Von Braun’s life.

During World War II Von Braun played a major role in the development of the V2 rocket, which was deployed from 1944 onwards to bomb, amongst other targets, the city of London. Incidentally, more people were killed during the V2 rocket’s development and production – an estimated 10 000 – than during the actual bombings (Neufeld, 1995, p. 264). The Germans had deployed prisoners from the Mittelbau-Dora concentration camp to help in the production of the V2 rockets. Von Braun was probably aware of those people’s abominable working conditions.

There is, therefore, much to indicate that Von Braun’s main reason for wanting to join the SS was carefully calculated: in that way he would be able to continue his important work in the field of rocket technology. In 1943 he was arrested by the Nazis and later released; it was claimed that he had sabotaged the V2 program. One of the pieces of evidence used against him was that he had apparently said that after the war the V2 technology should be further developed in the interests of space travel – and that is indeed what ultimately happened when he later started to work for the Americans. When, in 1945, Von Braun realized that the Germans were going to lose the war, he arranged for his team to be handed over to the Americans.

In the United States Von Braun originally worked on the development of rockets for military purposes but later he fulfilled a key role in the space travel program, a program that was ultimately to culminate in man’s first steps on the moon. Von Braun’s big dream did therefore ultimately come true.

Source: Based on Stuhlinger and Ordway (1994), Neufeld (1995), and Piszkiewicz (1995).

1.4.2 Effectiveness and efficiency

Engineers tend to strive for effectiveness and efficiency. Effectiveness can be defined as the extent to which an established goal is achieved; efficiency as the ratio between the goal achieved and the effort required. Striving for effectiveness and efficiency is an attractive ideal for engineers because it is – apparently – so neutral and objective. It does not seem to involve any political or moral choices, which is something that many engineers experience as subjective and therefore wish to avoid. Efficiency is also something that, in contrast to, for example, human welfare, can be defined by engineers and is often quantifiable. Engineers are, for example, able to define the efficiency of the energy production in an electrical power station, and they can also measure and compare that efficiency. An example of an engineer who saw efficiency as an ideal was Frederick W. Taylor (see box).
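As a minimal worked illustration of this ratio (with hypothetical figures, not taken from the text), the efficiency of energy production in a power station can be expressed as:

\[
\text{efficiency} = \frac{\text{goal achieved}}{\text{effort required}}, \qquad
\eta_{\text{plant}} = \frac{400\ \text{MW electrical output}}{1000\ \text{MW fuel input}} = 0.40\ (40\%)
\]

Effectiveness, by contrast, would ask only whether the plant delivers the output that was targeted, regardless of the fuel consumed.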

Effectiveness The extent to which an established goal is achieved.

Efficiency The ratio between the goal achieved and the effort required.
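
To make the definition of efficiency concrete, here is a minimal worked example; the figures are hypothetical and chosen only for illustration. A power station that draws fuel energy at a rate of 1000 MW and delivers 400 MW of electricity has an energy efficiency of

\[ \eta = \frac{\text{useful output}}{\text{input}} = \frac{400\ \text{MW}}{1000\ \text{MW}} = 0.40 = 40\% \]

Because output and input are here expressed in the same physical unit, two power stations can be compared directly on this measure, which is precisely what makes efficiency seem such an objective yardstick.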

Frederick W. Taylor (1856–1915)

Frederick Taylor was an American mechanical engineer. He became known as the founder of the efficiency movement and was specifically renowned for developing scientific management, also known as Taylorism.

Of all his research, Taylor became best known for his time-and-motion studies. In these studies he endeavored to establish scientifically which actions – movements – workers were required to carry out during the production process and how much time they took. He divided the relevant actions into separate movements, eliminated all that was superfluous, and endeavored, with the aid of a stopwatch, to establish precisely how long the necessary movements took. His aim was to use such insight to make the whole production process as efficient as possible. Taylorism is often seen as an attempt to squeeze as much as possible out of workers, and in practice that was often what it amounted to, but that had probably not been Taylor’s primary goal. He believed it was possible to determine, in a scientific fashion, the best way of carrying out production processes: by organizing those processes so that optimal use was made of workers’ capacities without demanding too much of them. He maintained that his approach would put an end to the ongoing conflict between the trade unions and the managerial echelons, thus making trade unions redundant. He was also critical of management, which he found unscientific and inefficient. To his mind, the insight of engineers and their approach to things would culminate in a better and more efficient form of management.

Figure 1.3 Frederick Taylor. Photo: Bettmann Archive/Corbis.

In 1911 Taylor published The Principles of Scientific Management, in which he explained the four principles of scientific management:

1. Replace the present rules of thumb for working methods with methods based on a scientific study of the work process.
2. Select, train, and develop every worker in a scientific fashion instead of allowing workers to do that themselves.
3. Really work together with the workers so that the work can be completed according to the developed scientific principles.
4. Work and responsibility are virtually equally divided between management and workers: the management does the work for which it is best equipped, applying scientific management principles to plan the work, and the workers actually perform the tasks.

Though Taylor was a prominent engineer – for a time he was, for instance, president of the influential American Society of Mechanical Engineers (ASME) – he had only limited success in conveying his ideas. They were not embraced by all engineers, but, thanks to a number of followers, they ultimately became very influential. They fitted in well with the mood of the age. In the United States the first two decades of the twentieth century were known as the “Progressive Era.” It was a time when engineers clearly manifested themselves as a professional group capable of promoting the interests of industry and society, and it was frequently implied that the engineering approach to social problems was somehow superior. Taylor’s endeavors to achieve a form of management that was efficient and scientific fitted perfectly into that picture.

Source: Based on Taylor (1911), Layton (1971), and Nelson (1980).

Though many engineers would probably not have taken things as far as Taylor did, his attempt to design the whole production process – and ultimately society as a whole – efficiently constituted a typical engineering approach. Efficiency is an ideal that endows engineers with authority because it is something that – at least at first sight – one can hardly oppose and that can seemingly be measured objectively. The aspiration among engineers to achieve authority played an important part in Taylor’s time. In the United States the efficiency movement was an answer to the rise of large capitalist companies in which managers ruled and engineers were mere subordinate implementers; it constituted an effort to improve the position of the engineer in relation to the manager. What Taylor was in effect arguing was that engineers were the only truly capable managers.

From a moral point of view, however, effectiveness and efficiency are not always worth pursuing. That is because effectiveness and efficiency presuppose an external goal against which they are measured. That external goal can be generating energy from a minimum amount of non-renewable natural resources, but it can also be war or even genocide. It was no coincidence that Nazi functionaries like Eichmann were proud of the efficient way in which they contributed to the so-called “solution of the Jewish question” in Europe, which led to the murder of six million Jews as well as members of other groups that the Nazis considered inferior, such as Roma and psychiatric patients (Arendt, 1965). Whether effectiveness or efficiency is morally worth pursuing therefore depends very much on the ends for which they are employed. So, although some engineers have maintained the opposite, the measurement of the effectiveness and efficiency of a technology is value-laden. It presupposes a certain goal for which the technology is to be employed, and that goal can be value-laden. Moreover, to measure efficiency one needs to calculate the ratio between the output (the external goal) and the input, and the choice of the input may also be value-laden. A technology may, for example, be efficient in terms of costs but not in terms of energy consumption.
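
This last point can be illustrated with a small worked comparison; the figures are hypothetical and serve only to show how the choice of input changes the verdict. Suppose plant A converts fuel to electricity at 40% energy efficiency but, because its fuel is expensive, at a cost of €0.06 per kWh, while plant B reaches only 35% energy efficiency but produces at €0.04 per kWh. Measured per unit of energy input, A is the more efficient plant; measured per euro of input,

\[ \frac{1\ \text{kWh}}{€0.06} \approx 16.7\ \text{kWh per euro} \quad \text{versus} \quad \frac{1\ \text{kWh}}{€0.04} = 25\ \text{kWh per euro}, \]

B is the more efficient one. Which measure is the relevant one is not a purely technical question but a choice about which input matters, money or scarce energy resources, and that choice is value-laden.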

1.4.3 Human welfare

A third ideal of engineers is that of contributing to or augmenting human welfare. The professional codes of the American Society of Mechanical Engineers (ASME) and of the American Society of Civil Engineers (ASCE) state that “engineers shall use their knowledge and skill for the enhancement of human welfare.” This also includes values such as health, the environment, and sustainability. According to many professional codes it also means that: “Engineers shall hold paramount the safety, health and welfare of the public” (as stated, for example, by the code of the National Society of Professional Engineers, see Chapter 2). It is worth noting that the relevant values will differ somewhat depending on the particular engineering specialization. For software engineers, for instance, values such as the environment and health will be less relevant, whilst matters such as the privacy and reliability of systems will be more important. One of the most important values that falls under the pursuit of human welfare is safety. A great proponent of safety was the Dutch civil engineer Johan van Veen (see box).

Johan van Veen (1893–1959)

Johan van Veen is known as the father of the Delta Works, a massive plan to protect the coasts of the south-western part of the Netherlands, which materialized after the flood disaster of 1953. During the disaster 1835 people died and more than 72 000 were forced to evacuate their homes.

Figure 1.4 Netherlands. Viewed from a US Army helicopter, a Zuid Beveland town gives a hint of the tremendous damage wrought by the 1953 flood to Dutch islands. Photo: Agency for International Development/National Archives, Washington (ARC Identifier 541705).

Before the disaster occurred there were indications that the dykes were not up to standard. In 1934 it was discovered that a number of dykes were probably too low. In 1939 Wemelsfelder, a Public Works Agency employee working for the Research Service for the Estuaries, Lower River Reaches and Coasts sector, was able to support that assumption with a series of models. Even before the big disaster of 1953 Johan van Veen had emphasized the need to close off certain estuaries.

Van Veen studied civil engineering in Delft before going on, in 1929, to work for the Research Service, which he was later to head. On the basis of his interest in the history of hydraulic engineering and his activities with the Public Works Agency, he gradually became convinced that the danger posed by storm-driven flooding had been vastly underestimated and that the dykes were indeed too low. Van Veen was quite adamant about his beliefs, which soon earned him, within the service, the nickname “the new Cassandra,” after the Trojan priestess whose repeated predictions of the fall of Troy went unheeded. He even adopted the pseudonym Cassandra in the epilogue to the fourth edition of his book Dredge, Drain, Reclaim.