A one-stop reference guide to design for safety principles and applications
Design for Safety (DfSa) provides design engineers and engineering managers with a range of tools and techniques for incorporating safety into the design process for complex systems. It explains how to design for maximum safe conditions and minimum risk of accidents. The book covers safety design practices that result in improved safety, fewer accidents, and substantial savings in life cycle costs for producers and users. Readers who apply DfSa principles can expect a dramatic improvement in their ability to compete in global markets. They will also find a wealth of design practices not covered in typical engineering books, allowing them to think outside the box when developing safety requirements.
Design for safety is already in high demand because of its importance to system design, and it will become even more vital for engineers across design disciplines as systems grow more complex and liabilities increase. Risk mitigation methods for designing safety features into systems are therefore becoming more important. Designing systems for safety has long been a high priority for safety-critical systems, especially in the aerospace and military industries. However, as technological innovations expand into other marketplaces, industries that had not previously considered safety design requirements are now applying these technologies.
Design for Safety is an ideal book for new and experienced engineers and managers who are involved with the design, testing, and maintenance of safety-critical applications. It is also helpful for advanced undergraduate and postgraduate students in engineering.
Design for Safety is the second in a series of “Design for” books. Design for Reliability was the first in the series, with more planned for the future.
Page count: 847
Year of publication: 2017
Cover
Title Page
Preface
Reference
Acknowledgments
Introduction: What You Will Learn
1 Design for Safety Paradigms
1.1 Why Design for System Safety?
1.2 Reflections on the Current State of the Art
1.3 Paradigms for Design for Safety
1.4 Create Your Own Paradigms
1.5 Summary
References
2 The History of System Safety
2.1 Introduction
2.2 Origins of System Safety
2.3 Tools of the Trade
2.4 Benefits of System Safety
2.5 System Safety Management
2.6 Integrating System Safety into the Business Process
References
Suggestions for Additional Reading
3 System Safety Program Planning and Management
3.1 Management of the System Safety Program
3.2 Engineering Viewpoint
3.3 Safety Integrated in Systems Engineering
3.4 Key Interfaces
3.5 Planning, Execution, and Documentation
3.6 System Safety Tasks
References
Suggestions for Additional Reading
4 Managing Risks and Product Liabilities
4.1 Introduction
4.2 Risk
4.3 Risk Management
4.4 What Happens When the Paradigms for Design for Safety Are Not Followed?
4.5 Tort Liability
4.6 An Introduction to Product Liability Law
4.7 Famous Legal Court Cases Involving Product Liability Law
4.8 Negligence
4.9 Warnings
4.10 The Rush to Market and the Risk of Unknown Hazards
4.11 Warranty
4.12 The Government Contractor Defense
4.13 Legal Conclusions Involving Defective and Unsafe Products
References
Suggestions for Additional Reading
5 Developing System Safety Requirements
5.1 Why Do We Need Safety Requirements?
5.2 Design for Safety Paradigm 3 Revisited
5.3 How Do We Drive System Safety Requirements?
5.4 What Is a System Requirement?
5.5 Hazard Control Requirements
5.6 Developing Good Requirements
5.7 Example of Certification and Validation Requirements for a PSDI
5.8 Examples of Requirements from STANAG 4404
5.9 Summary
References
6 System Safety Design Checklists
6.1 Background
6.2 Types of Checklists
6.3 Use of Checklists
References
Suggestions for Additional Reading
Additional Sources of Checklists
7 System Safety Hazard Analysis
7.1 Introduction to Hazard Analyses
7.2 Risk
7.3 Design Risk
7.4 Design Risk Management Methods and Hazard Analyses
7.5 Hazard Analysis Tools
7.6 Hazard Tracking
7.7 Summary
References
Suggestions for Additional Reading
8 Failure Modes, Effects, and Criticality Analysis for System Safety
8.1 Introduction
8.2 The Design FMECA (D‐FMECA)
8.3 How Are Single Point Failures Eliminated or Avoided in the Design?
8.4 Software Design FMECA
8.5 What Is a PFMECA?
8.6 Conclusion
Acknowledgments
References
Suggestions for Additional Reading
9 Fault Tree Analysis for System Safety
9.1 Background
9.2 What Is a Fault Tree?
9.3 Methodology
9.4 Cut Sets
9.5 Quantitative Analysis of Fault Trees
9.6 Automated Fault Tree Analysis
9.7 Advantages and Disadvantages
9.8 Example
9.9 Conclusion
References
Suggestions for Additional Reading
10 Complementary Design Analysis Techniques
10.1 Background
10.2 Discussion of Less Used Techniques
10.3 Other Analysis Techniques
References
Suggestions for Additional Reading
11 Process Safety Management and Analysis
11.1 Background
11.2 Elements of Process Safety Management
11.3 Process Hazard Analyses
11.4 Other Related Regulations
11.5 Inherently Safer Design
11.6 Summary
References
Suggestions for Additional Reading
12 System Safety Testing
12.1 Purpose of System Safety Testing
12.2 Test Strategy and Test Architecture
12.3 Develop System Safety Test Plans
12.4 Regulatory Compliance Testing
12.5 The Value of PHM for System Safety Testing
12.6 Leveraging Reliability Test Approaches for Safety Testing
12.7 Safety Test Data Collection
12.8 Test Results and What to Do with the Results
12.9 Design for Testability
12.10 Test Modeling
12.11 Summary
References
13 Integrating Safety with Other Functional Disciplines
13.1 Introduction
13.2 Raytheon’s Code of Conduct
13.3 Effective Use of the Paradigms for Design for Safety
13.4 How to Influence People
13.5 Practice Emotional Intelligence
13.6 Practice Positive Deviance to Influence People
13.7 Practice “Pay It Forward”
13.8 Interfaces with Customers
13.9 Interfaces with Suppliers
13.10 Five Hats for Multi‐Disciplined Engineers (A Path Forward)
13.11 Conclusions
References
14 Design for Reliability Integrated with System Safety
14.1 Introduction
14.2 What Is Reliability?
14.3 System Safety Design with Reliability Data
14.4 How Is Reliability Data Translated to Probability of Occurrence?
14.5 Verification of Design for Safety Including Reliability Results
14.6 Examples of Design for Safety with Reliability Data
14.7 Conclusions
Acknowledgment
References
15 Design for Human Factors Integrated with System Safety
15.1 Introduction
15.2 Human Factors Engineering
15.3 Human‐Centered Design
15.4 Role of Human Factors in Design
15.5 Human Factors Analysis Process
15.6 Human Factors and Risk
15.7 Checklists
15.8 Testing to Validate Human Factors in Design
Acknowledgment
References
Suggestions for Additional Reading
16 Software Safety and Security
16.1 Introduction
16.2 Definitions of Cybersecurity and Software Assurance
16.3 Software Safety and Cybersecurity Development Tasks
16.4 Software FMECA
16.5 Examples of Requirements for Software Safety
16.6 Example of Numerical Accuracy Where 2 + 2 = 5
16.7 Conclusions
Acknowledgments
References
17 Lessons Learned
17.1 Introduction
17.2 Capturing Lessons Learned Is Important
17.3 Analyzing Failure
17.4 Learn from Success and from Failure
17.5 Near Misses
17.6 Continuous Improvement
17.7 Lessons Learned Process
17.8 Lessons Learned Examples
17.9 Summary
References
Suggestions for Additional Reading
18 Special Topics on System Safety
18.1 Introduction
18.2 Airworthiness and Flight Safety
18.3 Statistical Data Comparison Between Commercial Air Travel and Motor Vehicle Travel
18.4 Safer Ground Transportation Through Autonomous Vehicles
18.5 The Future of Commercial Space Travel
18.6 Summary
References
Appendix A: Hazards Checklist
Reference
Appendix B: System Safety Design Verification Checklist
Reference
Index
End User License Agreement
Chapter 01
Table 1.1 Paradigm locations
Chapter 02
Table 2.1 Evolution of MIL‐STD‐882
Table 2.2 A sampling of IEC 61508 standards
Table 2.3 Common hazard analysis techniques
Chapter 03
Table 3.1 Types of system safety analysis
Table 3.2 Task application matrix
Chapter 04
Table 4.1 Product liability in federal court
Table 4.2 Product liability in Missouri 2014
Table 4.3 Product liability in Missouri: 10‐year summary 2005–2014
Table 4.4 Ten largest vehicle recalls
Chapter 05
Table 5.1 Qualities of a well‐written performance specification
Chapter 06
Table 6.1 Example of requirement‐type checklist
Table 6.2 Example of energy source‐type checklist
Table 6.3 Example of generic hazard‐type checklist
Table 6.4 Example of similar system‐type checklist
Table 6.5 Example of hazardous operation‐type checklist
Chapter 07
Table 7.1 Hazard probability levels
Table 7.2 Hazard severity levels
Table 7.3 Risk assessment matrix
Table 7.4 Risk levels
Chapter 08
Table 8.1 Possible software elements for FMECA
Table 8.2 Priority factors
Table 8.3 Severity (SEV) factors
Table 8.4 Occurrence (OCC) factors
Table 8.5 Detection (DET) factors
Chapter 09
Table 9.1 Failure probabilities for pressure tank example
Chapter 11
Table 11.1 Examples of guide word–parameter–deviations in HAZOP
Chapter 14
Table 14.1 Hazard probability levels (excerpt from Table 7.1)
Table 14.2 Quantitative probability of occurrence ranges
Table 14.3 Risk assessment matrix
Table 14.4 Example types of distributions used in reliability
Table 14.5 Hazard severity levels (from Table 7.2)
Table 14.6 Severity definitions
Chapter 15
Table 15.1 Considerations in human‐centered design
Table 15.2 Perceptions across media
Table 15.3 Human factors analysis tools
Chapter 16
Table 16.1 Priority classifications
Table 16.2 Reprinted from Software System Safety Handbook from G‐48
Table 16.3 Potential software failure mode effects for software elements of a navigation system
Table 16.4 Example causes of potential failure modes
Chapter 02
Figure 2.1 Maslow’s hierarchy
Figure 2.2 The system safety process
Chapter 03
Figure 3.1 Phases of development
Figure 3.2 Systems engineering V‐model
Figure 3.3 System safety V‐Chart
Chapter 04
Figure 4.1 Risk management process
Chapter 05
Figure 5.1 Top 10 emerging systemic issues
Chapter 06
Figure 6.1 MD‐80 preflight checklist
Chapter 07
Figure 7.1 Hazard reduction precedence
Figure 7.2 Preliminary hazard list format
Figure 7.3 Preliminary hazard analysis format
Figure 7.4 Subsystem hazard analysis format
Figure 7.5 System hazard analysis format
Figure 7.6 Operating and support hazard analysis format
Figure 7.7 Health hazard analysis format
Figure 7.8 Typical closed‐loop hazard tracking process
Chapter 08
Figure 8.1 Failure modes and effects relationship diagram
Figure 8.2 Failure cause Pareto diagram
Figure 8.3 Top‐down system hazard/failure analysis flow example
Figure 8.4 Example PFMECA form
Chapter 09
Figure 9.1 Fault tree symbols
Figure 9.2 Simple fault trees
Figure 9.3 Fault tree construction process
Figure 9.4 Primary, secondary, and command approach
Figure 9.5 Simple cut set determination
Figure 9.6 Minimal cut set determination
Figure 9.7 Simple fault tree for illustrating MOCUS algorithm
Figure 9.8 Pressure tank system
Figure 9.9 Pressure tank system operational modes
Figure 9.10 Pressure tank rupture fault tree example
Figure 9.11 Simplified (reduced) fault tree for pressure tank example
Chapter 10
Figure 10.1 Generic event tree
Figure 10.2 Generic event tree with probabilities
Figure 10.3 Sneak circuit analysis topological patterns
Figure 10.4 Functional hazard analysis format
Figure 10.5 Barrier separates energy source from target
Figure 10.6 Barrier analysis format
Figure 10.7 Connector configuration and matrix of possible pin‐to‐pin combinations
Figure 10.8 Bent pin analysis format
Figure 10.9 Markov state transition model for one‐component system with repair
Figure 10.10 MORT top events
Chapter 11
Figure 11.1 Example process flowchart
Figure 11.2 What‐if analysis worksheet
Figure 11.3 HAZOP analysis worksheet
Chapter 12
Figure 12.1 Test failure data by test position
Figure 12.2 Test failure data by TTF
Figure 12.3 Example of a CDF
Figure 12.4 Example of a PDF
Chapter 13
Figure 13.1 SSE interfaces to other functional disciplines
Figure 13.2 Raytheon’s values
Chapter 14
Figure 14.1 Reliability and system safety data interfaces
Figure 14.2 Graph of reliability over time
Figure 14.3 Probability of success example
Chapter 16
Figure 16.1 CISQ comparison of SwA definitions from ARiSE
Chapter 17
Figure 17.1 Foreshadowing of Toyota problems: percentage of customer complaints having to do with speed control
Figure 17.2 GM models and years affected by the ignition switch recall
Chapter 18
Figure 18.1 Percent change in motor vehicle fatalities from 2006 to 2015
Figure 18.2 Annual percentage change in Vehicle Miles Traveled (VMT) between 1975 and 2015
Dr. Andre Kleyner
Series Editor
The Wiley Series in Quality & Reliability Engineering aims to provide a solid educational foundation for both practitioners and researchers in the Q&R field and to expand the reader’s knowledge base to include the latest developments in this field. The series will provide a lasting and positive contribution to the teaching and practice of engineering.
Series coverage includes, but is not limited to:
statistical methods;
physics of failure;
reliability modeling;
functional safety;
six‐sigma methods;
lead‐free electronics;
warranty analysis/management; and
risk and safety analysis.
Wiley Series in Quality & Reliability Engineering
Next Generation HALT and HASS: Robust Design of Electronics and Systems by Kirk A. Gray and John J. Paschkewitz, May 2016
Reliability and Risk Models: Setting Reliability Requirements, 2nd Edition by Michael Todinov, September 2015
Applied Reliability Engineering and Risk Analysis: Probabilistic Models and Statistical Inference by Ilia B. Frenkel, Alex Karagrigoriou, Anatoly Lisnianski, and Andre V. Kleyner, September 2013
Design for Reliability by Dev G. Raheja and Louis J. Gullo (Editors), July 2012
Effective FMEAs: Achieving Safe, Reliable, and Economical Products and Processes Using Failure Modes and Effects Analysis by Carl Carlson, April 2012
Failure Analysis: A Practical Guide for Manufacturers of Electronic Components and Systems by Marius Bazu and Titu Bajenescu, April 2011
Reliability Technology: Principles and Practice of Failure Prevention in Electronic Systems by Norman Pascoe, April 2011
Improving Product Reliability: Strategies and Implementation by Mark A. Levin and Ted T. Kalal, March 2003
Test Engineering: A Concise Guide to Cost‐Effective Design, Development and Manufacture by Patrick O’Connor, April 2001
Integrated Circuit Failure Analysis: A Guide to Preparation Techniques by Friedrich Beck, January 1998
Measurement and Calibration Requirements for Quality Assurance to ISO 9000 by Alan S. Morris, October 1997
Electronic Component Reliability: Fundamentals, Modelling, Evaluation, and Assurance by Finn Jensen, November 1995
Edited by
Louis J. Gullo
Raytheon Missile Systems, Arizona, USA
Jack Dixon
JAMAR International, Inc., Florida, USA
This edition first published 2018
© 2018 John Wiley & Sons Ltd
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
The right of Louis J. Gullo and Jack Dixon to be identified as the authors of the editorial material in this work has been asserted in accordance with law.
Registered Office(s)
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK
Editorial Office
The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.
Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
Library of Congress Cataloging‐in‐Publication data applied for
ISBN: 9781118974292
Cover Design: Wiley
Cover Images: (Left to right) © 3DSculptor/Gettyimages; © ms. Octopus/Shutterstock; © prosot‐photography/iStock; © gali estrange/Shutterstock
To my wife, Diane, and my children, Louis, Jr., Stephanie, Catherine, Christina, and Nicholas.
Louis J. Gullo
To my wife, Margo.
Jack Dixon
And to all the heroes of the world, especially all the safety heroes that make the world a safer place.
Louis J. Gullo and Jack Dixon
The Wiley Series in Quality and Reliability Engineering aims to provide a solid educational foundation for researchers and practitioners in the field of dependability, which includes quality, reliability, and safety, and to expand the knowledge base by including the latest developments in these disciplines.
It is hard to overstate the effect of quality and reliability on system safety. A safety‐critical system is a system whose failure or malfunction may result in death or serious injury to people. According to the Federal Aviation Administration (FAA), system safety is the application of engineering and management principles, criteria, and techniques to optimize safety by identifying safety‐related risks and eliminating or controlling them by design and/or procedures, based on acceptable system safety precedence.
Along with the continuously increasing electronics content in vehicles, airplanes, trains, appliances, and other devices, electronic and mechanical systems are becoming more complex with added functions and capabilities. Needless to say, this trend is making the jobs of design engineers increasingly challenging, as confirmed by the growing number of safety recalls. These recalls are prompting further strengthening of reliability and safety requirements and a rapid development of functional safety standards, such as IEC 61508 for electrical/electronic/programmable electronic systems and ISO 26262 for road vehicles, which have increased the pressure to improve design processes and achieve ever higher reliability as it applies to system safety.
There are no do‐overs in safety. You cannot undo the damage to a human from an accident caused by an unsafe system; therefore it is extremely important to design a safe system the first time. This book, Design for Safety, written by Louis J. Gullo and Jack Dixon, explores safety engineering and takes the concept of design for system safety to a new level. The book takes you step by step through the process of designing for safety. These steps include the development of system requirements, design for safety checklists, and the application of critical design tools, such as fault tree analysis, hazard analysis, FMEA, system integration, testing, and many others.
Both authors have lifelong experience in product design, safety, and reliability, and sharing their knowledge will be a big help to the new generation of design engineers as well as to the seasoned practitioners. This book offers an excellent mix of theory, practice, useful applications, and commonsense engineering, making it a perfect addition to the Wiley Series in Quality and Reliability Engineering.
Despite its obvious importance, education in quality, reliability, and safety is paradoxically lacking in today’s engineering curriculum. Very few engineering schools offer degree programs, or even a sufficient variety of courses, in quality or reliability methods, and the topic of safety receives only minimal coverage in today’s engineering student curriculum. Therefore, the majority of quality, reliability, and safety practitioners receive their professional training from colleagues, professional seminars, publications, and technical books. The lack of opportunities for formal education in these fields underscores the importance of technical publications like this one for professional development.
We are confident that this book, as well as this entire book series, will continue Wiley’s tradition of excellence in technical publishing and provide a lasting and positive contribution to the teaching and practice of quality, reliability, and safety engineering.
Dr. Andre Kleyner,
Editor of the Wiley Series in Quality and Reliability Engineering
Anyone who designs a product or system involving hardware and/or software needs to ask, and seek answers to, the following questions:
Will my designs be safe for the users of the product or system that I design for them?
Will my designs be safe for people affected by the users of the product or system that I design for them?
Are there applications for which my designs may be used that are not safe, even though they were not the original intent of my design?
Can anyone die or be harmed by my designs?
The designers and engineers who fully answer these questions and take action to improve the safety features of a design are heroes. These engineering heroes are usually unsung heroes who neither receive nor seek any reward or recognition.
When you think of heroes, you might conjure up the image of a US Army Medal of Honor recipient, a brave firefighter willing to sacrifice his or her life to rescue people from a towering inferno, or a police officer cited for courage in the line of duty, but you probably won’t imagine an engineer willing to sacrifice his or her job or career to prevent a potential catastrophic hazard from occurring within a product or system. Every day and throughout the world, multitudes of engineers working in numerous development and production engineering career fields within a global marketplace discover and analyze safety‐critical failure modes and assess risks of hazards to the user or customer that may cause loss of life or severe personal injury. These engineers display a passion for their work, with consideration for the safety aspects of their products or systems, realizing the ultimate impacts on the health and well‐being of their user community. The passion of these engineers usually goes unnoticed, except by other engineers or managers who work closely with them. It may be recognized in extreme or unusual circumstances with an individual or team achievement award, but these engineers most certainly would not be hailed as heroes. Why not? Does our engineering society place value on those willing to display courage in managing challenging technical problems? Of course, there is value in this characteristic of an engineer, but only when it results in making the organization or company more money, not in reducing or eliminating the potential of dangerous hazards that could harm the user community. The engineers demonstrating courage in tackling challenging technical problems to keep people safe are just doing their jobs as system safety engineers or in some other related job function, but they would not be considered heroes.
When you think of heroes in engineering, you might say Nikola Tesla or Thomas Edison made significant contributions to the advancement of a safe world in terms of developing commercial power to light homes at night and prevent fires due to lit candles igniting window dressings or draperies. We are sure you will agree that commercial power saves lives, indirectly. As a result of commercial power, most home fires caused by candles lighting a home at night have been prevented, but fires at home will still occur regardless of the use of commercial power replacing candles. There are other mitigating factors that have a direct correlation to causes of fires at home, such as smoking cigarettes in bed or poor insulation of electrical wiring or overloaded electrical circuits.
A direct way of saving lives is the preventive action of designing out an explosive hazard posed by an automobile fuel tank during a collision. As a result of an engineer’s diligence, persistence, and commitment to mitigate the risk of a fuel tank explosion in a car during normal operation or during a catastrophic accident, it is clear that the engineer’s actions would save lives, directly. This direct application of design improvements that result in no deaths or personal injuries caused by automobile fuel tank explosions should warrant the title of “Engineering Hero” for those worthy of such distinction.
Engineering needs more heroes [1]. Today, engineers with the biggest paychecks get the widest acclaim: to be a hero, you must be considered financially successful and ahead of your peers. There must be other ways to recognize engineering heroes on a broad scale, but how? There is no Nobel Prize for engineering, and no engineering award with similar global status and prestige. Engineers cannot routinely recognize their heroes in the fashion of physicists, economists, and novelists. To be fair, in circles less known than the Nobel, engineers are recognized by their peers through the Kyoto Prize, the Charles Stark Draper Prize of the US National Academy of Engineering, and the IEEE’s own Medal of Honor, to name a few engineering honors. We agree with G. Pascal Zachary when he states that a valid criterion for an engineer to be considered an engineering hero is that one overcomes adversity. Engineering heroism appears when an engineer overcomes personal, institutional, or technological adversity to do the best job possible while realizing what is ethically or morally right, contributing to the social and cultural well‐being of all humanity.
Anyone who convinces a product manufacturer to install a safety feature on an existing product should be praised as a hero. One example of a design for safety feature installed on an existing product is the “safety mechanism” designed for firearms. A safety catch mechanism, or safety switch, used in pistol and rifle designs is intended to prevent the accidental discharge of a firearm, helping to ensure safe handling during normal use. The safety switch on firearms has two positions: one is “safe” mode and the other is “fire” mode. The two‐position safety toggle switch was designed into the military‐grade M16 automatic rifle. In “safe” mode, the trigger cannot be engaged to discharge the projectile in the firing assembly. Other types of safety mechanisms include the manual safety, grip safety, decocker mechanism, firing pin block, hammer block, transfer bar, safety notch, bolt interlock, trigger interlock, trigger disconnect, magazine disconnect, integrated trigger safety mechanism, loaded chamber indicator, and stiff double‐action trigger pull. “Drop safety mechanisms” and “trigger guards” are passive safety features designed to reduce the chance of an accidental discharge when the firearm is dropped or handled roughly. Drop safeties generally provide an obstacle in the firing mechanism that can only be removed when the trigger is pulled, so that the firearm cannot otherwise discharge. Trigger guards provide a material barrier to prevent inadvertent trigger pulls. Many firearms manufactured in the late 1990s were designed with mandatory integral locking mechanisms that had to be deactivated by a unique key before the firearm could be fired. These are intended as child‐safety devices during unattended storage of firearms, not as safety mechanisms while carrying. Other devices in this category are muzzle plugs, trigger locks, bore locks, and firearm safes.
Accidents have decreased tremendously over the years as a result of safety features. Accidental discharges were commonplace in the days of the “Old West,” circa 1850–1880, before safety switches were designed into rifles and pistols. Now accidental discharges occur only when a loaded firearm is handled with the safety off. Since the implementation of the safety switch design, gunshots caused by accidental firing have been significantly reduced. There was a designer behind this safety switch design who thought about saving lives. In our minds, this designer was an unsung hero, one of many heroes in the development of safe firearms.
We propose that these unsung heroes deserve immense credit for preventing unnecessary injury or death from the accidental discharge of firearms. There are many more examples like this.
The idea for this book was conceived as a result of publishing our first book, Design for Reliability. We saw the need for additional books discussing various topics associated with the design process. As a result, we are planning a series of “Design for X” books, with this one, Design for Safety, being the second in the series. Our book fills the gap between the published body of knowledge and current industry practices by communicating the advantages of designing for safety during the earliest phases of product or system development. This volume fulfills the needs of entry‐level design engineers, experienced design engineers, engineering managers, and system safety engineers/managers who are looking for hands‐on knowledge of how to work collaboratively on design engineering teams.
[1] Zachary, G. P. (2014), “Engineering Needs More Heroes,” IEEE Spectrum, 51, 42–46.
Louis J. Gullo
Jack Dixon
We would like to thank Dev Raheja for his contributions to this book and for his co‐editing of Design for Reliability, the first book in our planned Design for X series. Without the inspiration from Dev Raheja, only a few of these words would have been written. We have been humbled by his knowledge and grateful for his contributions to this book in offering us a cohesive framework using the ten paradigms in which to tie the pages together. We also are indebted to Nancy Leveson and her publishers. Her contributions to the field of system software safety are immense and greatly appreciated. There are many others who have made this work possible, adding to the body of knowledge from which we have drawn on. Among them, we especially want to thank Mike Allocco, Brian Moriarty, Robert Stoddard, Joseph Childs, and Denis W. Stearns.
Louis J. Gullo
Jack Dixon
Chapter 1: Design for Safety Paradigms (Raheja, Gullo, and Dixon)
This chapter introduces the concept of design for safety. It describes the technical gaps between the current state of the art and what it takes to design safety into new products. This chapter introduces ten paradigms for safe design that help you do the right things at the right times. These paradigms will be used throughout the book as guiding themes.
Chapter 2: The History of System Safety (Dixon)
This chapter provides a brief history of system safety: from the original “fly‐fix‐fly” approach to safety, to the 1940s’ hints at a better way of doing aircraft safety, to the 1950s’ introduction of the term “system safety,” and to the Minuteman program that brought the systematic approach to safety into the mainstream. Next, the development and history of MIL‐STD‐882 is discussed. The growth of system safety and of various hazard analysis techniques over the years is covered in detail. The expansion of system safety into the nonmilitary, commercial arena is discussed along with numerous industry standards. Tools of the trade, management of system safety, and integration of system safety into the business process are summarized.
Chapter 3: System Safety Program Planning and Management (Gullo and Dixon)
This chapter discusses the management of system safety in detail. It describes how system safety fits into the development cycle, how it is integrated into the systems engineering process, and what the key interfaces are between system safety and other disciplines. The System Safety Program Plan is described in detail as well as how it is related to other management plans. Another important document, the Safety Assessment Report, is also outlined in detail.
Chapter 4: Managing Risks and Product Liabilities (Gullo and Dixon)
In this chapter, the importance of product liability is emphasized beginning with some financial statistics and numerous examples of major losses due to bad design. The importance of risk and risk management is described. This chapter includes a brief summary of product liability law and what it means to the safety engineer and the organization developing the product or system.
Chapter 5: Developing System Safety Requirements (Gullo)
This chapter’s main emphasis is on developing safety requirements including why we need them and why they are so important. We discuss what requirements are and how they enter into various types of specifications. This chapter covers in detail how to develop good safety requirements and provides examples of both good and bad requirements.
Chapter 6: System Safety Design Checklists (Dixon)
This chapter introduces various types of checklists and why they are an important tool for the safety engineer. It covers procedural, observational, and design checklists and provides examples of each type. The uses of checklists are also discussed, and several detailed checklists are provided in the appendices of the book.
Chapter 7: System Safety Hazard Analysis (Dixon)
This chapter introduces some terminology and discusses risk in detail as an introduction to hazard analyses. It then covers several of the most widely used hazard analysis techniques, including the preliminary hazard list, preliminary hazard analysis, subsystem hazard analysis, system hazard analysis, operating and support hazard analysis, and health hazard analysis. The chapter ends with a discussion of hazard tracking and its importance.
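As a preview of the mechanics behind the risk tables in Chapter 7 (Tables 7.1 through 7.3), the sketch below shows how a risk assessment matrix lookup might be implemented. It is a minimal illustration assuming MIL‐STD‐882‐style severity and probability category names; the specific matrix cells are placeholders for illustration, not the book’s tables.

```python
# Illustrative sketch of a risk assessment matrix lookup.
# The category names follow the MIL-STD-882 style the book discusses, but
# the matrix cells below are assumed for illustration only.

SEVERITY = ["Catastrophic", "Critical", "Marginal", "Negligible"]
PROBABILITY = ["Frequent", "Probable", "Occasional", "Remote", "Improbable"]

# One hypothetical mapping of (severity, probability) -> risk level.
RISK_MATRIX = {
    "Catastrophic": ["High", "High", "High", "Serious", "Medium"],
    "Critical":     ["High", "High", "Serious", "Medium", "Low"],
    "Marginal":     ["Serious", "Medium", "Medium", "Low", "Low"],
    "Negligible":   ["Medium", "Low", "Low", "Low", "Low"],
}

def assess_risk(severity: str, probability: str) -> str:
    """Return the risk level for a hazard given its severity and probability."""
    return RISK_MATRIX[severity][PROBABILITY.index(probability)]

# Example: a hazard assessed as Critical severity with Remote probability.
print(assess_risk("Critical", "Remote"))  # -> "Medium"
```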
Chapter 8: Failure Modes, Effects, and Criticality Analysis for System Safety (Gullo)
This chapter describes how the Failure Modes and Effects Analysis (FMEA) and Failure Modes, Effects, and Criticality Analysis (FMECA) are useful for system safety analysis. It discusses various types of FMEAs including Design FMECA, Software Design FMECA, and Process Failure Modes, Effects, and Criticality Analysis (PFMECA) and how they may be applied in a number of flexible ways at different points in the system, hardware, and software development life cycle.
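One common criticality‐ranking convention multiplies severity (SEV), occurrence (OCC), and detection (DET) factors, the kinds of factors tabulated in Tables 8.2 through 8.5, into a Risk Priority Number (RPN). The sketch below illustrates that general convention only; the failure modes and factor values are hypothetical, not examples from the chapter.

```python
# Minimal sketch of Risk Priority Number (RPN) ranking for a FMECA.
# RPN = SEV x OCC x DET is a widely used convention; the failure modes and
# factor values below are hypothetical, not data from the book.

from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    sev: int  # severity factor, e.g., 1 (no effect) .. 10 (hazardous)
    occ: int  # occurrence factor, e.g., 1 (remote) .. 10 (very high)
    det: int  # detection factor, e.g., 1 (almost certain) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        return self.sev * self.occ * self.det

modes = [
    FailureMode("Sensor output stuck high", sev=9, occ=3, det=4),
    FailureMode("Connector pin corrosion",  sev=6, occ=5, det=7),
    FailureMode("Software watchdog missed", sev=8, occ=2, det=9),
]

# Rank failure modes so the highest-risk items get corrective action first.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name}: RPN = {m.rpn}")
```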
Chapter 9: Fault Tree Analysis for System Safety (Dixon)
Fault Tree Analysis (FTA), a very popular analysis technique in system safety, is covered in this chapter. A fault tree is a tree‐form representation, built with symbolic logic, of the combinations of causes (failures, faults, errors, etc.) that can lead to a particular undesirable event. The purpose of FTA is to identify the combinations of failures and errors that can result in the undesirable event being analyzed. This chapter provides a brief history of the development of FTA and a detailed description of how the analyst creates and applies it.
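To make the gate logic concrete, here is a minimal sketch that evaluates the top‐event probability of a small fault tree, assuming independent basic events. The tree structure and the probabilities are illustrative inventions, not the chapter’s pressure tank example.

```python
# Minimal fault tree evaluation sketch, assuming independent basic events.
# AND gate: product of input probabilities; OR gate: 1 - product of (1 - p).

def and_gate(*probs: float) -> float:
    out = 1.0
    for p in probs:
        out *= p
    return out

def or_gate(*probs: float) -> float:
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical basic-event probabilities (per demand), for illustration only.
relief_valve_fails = 1e-4
pressure_switch_fails = 1e-3
operator_misses_alarm = 1e-2

# Top event: tank over-pressurizes if the relief valve fails AND the
# monitoring channel fails (switch fails OR operator misses the alarm).
monitoring_fails = or_gate(pressure_switch_fails, operator_misses_alarm)
top_event = and_gate(relief_valve_fails, monitoring_fails)
print(f"P(top event) ~= {top_event:.2e}")  # ~1.1e-06
```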
Chapter 10: Complementary Design Analysis Techniques (Dixon)
This chapter covers several additional popular hazard analysis techniques including event trees, sneak circuit analysis, functional hazard analysis, barrier analysis, and bent pin analysis. It also provides brief introductions to a few additional techniques that are less often used including Petri nets, Markov analysis, management oversight risk tree, and system‐theoretic process analysis.
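As a taste of the Markov analysis mentioned above, the classic one‐component system with repair (the subject of Figure 10.9) has the standard steady‐state availability μ/(λ + μ). The sketch below computes it with assumed failure and repair rates; it is a textbook illustration, not the book’s worked example.

```python
# Sketch of the classic one-component Markov model with repair:
# state "up" = operating, state "down" = failed; failure rate lam, repair
# rate mu. Steady-state availability is mu / (lam + mu). Rates are assumed.

lam = 1e-3   # failures per hour (assumed)
mu = 1e-1    # repairs per hour (assumed)

availability = mu / (lam + mu)
print(f"Steady-state availability ~= {availability:.5f}")  # ~0.99010

# The same result by stepping the state probability forward in time
# (Euler integration of dp_up/dt = -lam * p_up + mu * (1 - p_up)):
dt, p_up = 0.01, 1.0
for _ in range(100_000):  # simulate 1000 hours, far past the transient
    p_up += (-lam * p_up + mu * (1.0 - p_up)) * dt
print(f"Simulated availability   ~= {p_up:.5f}")
```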
Chapter 11: Process Safety Management and Analysis (Dixon)
This chapter introduces Process Safety Management (PSM), an effort to prevent catastrophic accidents in hazardous processes involving dangerous chemicals and energies. PSM applies management principles and analytic techniques to reduce process risks during the manufacture, use, handling, storage, and transportation of chemicals. Its primary focus is on hazards related to the materials and energetic processes present in chemical production facilities, but it can also be applied to facilities that handle flammable materials, high voltage devices, high current load devices, and energetic materials, such as rocket motor propellants. In this chapter we discuss the regulatory requirement for PSM, the elements of PSM, hazard analysis techniques, and related regulations, and we end with a discussion of inherently safer design.
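To illustrate the HAZOP style of analysis referenced here (see Table 11.1), deviations are generated by crossing guide words with process parameters. The sketch below shows the idea with assumed word lists; a real study would use the full guide‐word set and plant‐specific parameters.

```python
# Sketch of HAZOP deviation generation: guide words crossed with process
# parameters, after the style of Table 11.1. The word lists are illustrative.

guide_words = ["NO", "MORE", "LESS", "REVERSE"]
parameters = ["flow", "pressure", "temperature"]

for word in guide_words:
    for param in parameters:
        # Each combination is a candidate deviation to examine for causes
        # and consequences, e.g., "NO flow" or "MORE pressure".
        print(f"Deviation: {word} {param}")
```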
Chapter 12: System Safety Testing (Gullo)
In this chapter we discuss the purpose and importance of safety testing. The different types of safety tests are described along with the test strategy and test architecture. The development of safety test plans is covered. This chapter contains a section on testing for regulatory compliance and discusses numerous national and international standards. The topic of Prognostics and Health Monitoring (PHM) is introduced along with a discussion of the return on investment associated with PHM. We also discuss how to leverage reliability test approaches for safety testing. Safety test data collection is covered along with what to do with test results. The chapter ends with a discussion of designing for testability and test modeling.
Chapter 13: Integrating Safety with Other Functional Disciplines (Gullo)
In this chapter, we cover several ways of integrating safety with other engineering and functional disciplines. We discuss the many key interfaces to system safety engineering, and we define cross‐functional teams. We touch on modern decision‐making in a digital world and on knowing who your friends and foes are. The importance of constant communication is emphasized. We talk about a code of conduct and values. This chapter introduces paradigms from several different sources and explains how they relate to system safety and how their application can make you a better engineer and help make you and your organization more successful.
Chapter 14: Design for Reliability Integrated with System Safety (Gullo)
The integration with all functional disciplines is very important for effectively and efficiently practicing system safety engineering, but the most important of these functional discipline interfaces is the interface to reliability engineering. This chapter builds on and applies the lessons from Chapter 13 to establish a key interface with reliability engineering. In this chapter we discuss what reliability is and how it is intertwined with system safety. Specifically we discuss how system safety uses reliability data and how this data is used to help determine risk. We conclude the chapter with examples of using reliability data to design for safety.
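As an illustration of that translation, a constant failure rate λ over an exposure time t gives a probability of failure of 1 − e^(−λt), which can then be binned into a qualitative probability‐of‐occurrence level. In the sketch below, the failure rate, exposure time, and band boundaries are all assumed for illustration; they are not the values in Tables 14.1 and 14.2.

```python
# Sketch: translating a constant failure rate into a hazard probability level.
# Assumes the exponential reliability model R(t) = exp(-lam * t); the failure
# rate, exposure time, and band boundaries here are illustrative assumptions.

import math

failure_rate = 2e-6       # failures per operating hour (assumed)
exposure_hours = 10_000   # system life exposure considered (assumed)

p_failure = 1.0 - math.exp(-failure_rate * exposure_hours)
print(f"P(failure over exposure) ~= {p_failure:.4f}")  # ~0.0198

# Hypothetical quantitative bands for the qualitative probability levels.
bands = [(1e-1, "Frequent"), (1e-2, "Probable"), (1e-3, "Occasional"),
         (1e-6, "Remote")]
level = next((name for floor, name in bands if p_failure >= floor),
             "Improbable")
print(f"Probability of occurrence level: {level}")  # -> "Probable"
```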
Chapter 15: Design for Human Factors Integrated with System Safety (Dixon and Gullo)
In starting this chapter, we refer back to the previous two chapters where we discussed the ways system safety engineers should integrate and interface with other types of engineers and functional disciplines and, in particular, with reliability engineering. Another important engineering interface for a system safety engineer is Human Factors Engineering (HFE). System safety benefits greatly from a well‐established and reinforced interface to HFE. In this chapter we define HFE and its role in design of both hardware and software. We discuss the Human–Machine Interface (HMI), the determination of manpower and workload requirements, and how they influence personnel selection and training. We detail how human factors analysis is performed and how the various tools are used. Also discussed is how the human in the system influences risk, human error and its mitigation, and testing to validate human factors in design.
Chapter 16: Software Safety and Security (Gullo)
This chapter introduces the subjects of software safety and security. Many of today’s systems are software‐intensive, and it is necessary to analyze, test, and understand the software thoroughly to ensure a safe and secure system and to build trust that the system always works as intended, without fear of disruptions or undesirable outcomes. This chapter provides a detailed discussion of cybersecurity and software assurance. Next we discuss the basic software system safety tasks and how software safety and cybersecurity are related. Software hazard analysis tools are discussed along with a detailed discussion of the Software FMECA. This chapter ends with a discussion of software safety requirements.
Chapter 17: Lessons Learned (Dixon, Gullo, and Raheja)
A lesson learned is knowledge derived from the study of data from past events, captured to prevent traumatic recurrences or to enable great successes. This chapter focuses on the importance of using lessons learned to prevent future accidents. It discusses the importance of capturing lessons learned and how to analyze failures in order to learn from them. We also discuss the importance of learning from successes and near misses. Throughout this chapter, pertinent examples are provided along with analysis of why lessons learned are so important. We also cover the process of continuous improvement.
Chapter 18: Special Topics on System Safety (Gullo and Dixon)
This final chapter delves into several special topics and applications to consider in relation to the future of system safety. There is no better industry marketplace to address the future of system safety features than the commercial aviation and automobile industries. We examine the historical and current safety data of both industries to see what it tells us about the historical trends and the future probabilities of fatal accidents. We explore the safety design benefits from commercial air travel that could be leveraged by automobile manufacturers and developers of new ground transportation systems. This chapter also discusses future improvements in commercial space travel.
Dev Raheja, Louis J. Gullo, and Jack Dixon
Only through knowledge of a specific system’s performance can a person understand how to design for safety for that particular system. Anyone designing for safety should realize that there is no substitute for first‐hand knowledge of a system’s operating characteristics, architecture, and design topology. The most important part of this knowledge is understanding the system: learning how it performs when functioning as designed, verifying how it performs under worst‐case conditions (including required environmental stress conditions), and experiencing faulty conditions (including mission‐critical failures and safety‐critical failures).
A system is defined as a network or group of interdependent components and operational processes that work together to accomplish the objectives and requirements of the system. Safety is a very important aim of a system while it executes and accomplishes its objectives and requirements. The design process of any system should ensure that everybody involved in using or developing the system gains something they need, avoiding the temptation to sacrifice one critical part of the system design in favor of another. This context includes customers, system operators, maintenance personnel, suppliers, system developers, system safety engineers, the community, and the environment.
System safety is the engineering discipline that drives toward preventing hazards and accidents in complex systems. It is a system‐based risk management approach that focuses on the identification of system hazards, analysis of these system hazards, and the application of system design improvements, corrective actions, risk mitigation steps, compensating provisions, and system controls. This system‐based risk management approach to safety requires the coordinated and combined applications of system management, systems engineering, and diverse technical skills to hazard identification, hazard analysis, and the elimination or reduction of hazards throughout the system life cycle.
Taking a systems approach enables management to view its organization in terms of many interrelated internal and external business connections and interactions, as opposed to discrete and independent functional departments or processes managed by various chains of command within an organization. (Note: The term “organization” will be used throughout the book to refer to all system developer and customer entities, including businesses, companies, suppliers, operators, maintainers, and users of systems.) When all the connections and interactions are properly working together to accomplish a shared aim, an organization can achieve tremendous results, from improving the safety of its systems, products, and services to raising its creativity and increasing its ability to develop innovative solutions that help mankind progress.
System safety is defined as the application of engineering and management principles, criteria, and techniques to achieve acceptable risk within the constraints of operational effectiveness and suitability, time, and cost throughout all phases of the system life cycle [1]. We have come a long way since the early days of system safety in the 1960s. System safety in many organizations has been successfully integrated into the mainstream of systems engineering and is vigorously supported by management as a discipline that adds value to the product development process. Many analysis techniques have been created and revised numerous times to make them more effective and/or efficient. The application of system safety in product design and development has proven valuable in reducing accidents and product liability.
However, there are still many challenges facing system safety engineers. First and foremost, even after over 50 years, system safety is still a small and somewhat obscure discipline. It needs more visibility. While many organizations successfully implement system safety, many continue to ignore its benefits and suffer the consequences of delivering inferior, unsafe products.
Other challenges include the continually increasing complexity of systems being developed. Now, instead of only worrying about one system at a time, we must worry about building safe systems of systems. This additional complexity has introduced new challenges of how to address the interactions of all the systems that might make up a system‐of‐systems.
Inadequate specifications and requirements continue to plague the discipline. Too often, weak, generic specifications are provided to designers, leading to faulty designs because the requirements were vague or ill defined.
The management of change is often another weakness in the product life cycle. As changes are made to the product or system, system safety must be involved to ensure that the changes themselves are safe and that they do not cause unintended consequences that could lead to accidents.
The human often causes safety problems by the way he uses, or abuses, the product. All too often the user can be confused by the complexity of a product or system or by the user interface provided by the software that operates it. Taking the human into consideration during the design process is paramount to its successful deployment.
The goal of this book is to help remedy some of these problems and build upon the many years of success experienced by system safety. In this chapter we present 10 paradigms we believe will lead to better and safer product designs. Throughout this chapter, and the book, we provide both good and bad examples so the reader can identify with real‐world cases from which to learn.
Forming an ideal systems approach to designing new systems involves developing paradigms, standards, and design process models for a developer to follow and use as a pattern in future design efforts. These paradigms are often called “words of wisdom” or “rules of thumb.” The word “paradigm,” which originated from the Greek language, is used throughout this book to describe a way of thinking, a framework, and a model for how to conduct yourself in your daily life as a system safety engineer, or any type of engineer. A paradigm becomes the way you view the world, perceiving, understanding, and interpreting your environment and helping you formulate a response to what you see and understand.
This book starts by focusing on 10 paradigms for managing and designing systems for safety. These 10 paradigms are the most important criteria for designing for safety. Each paradigm is listed next and explained in detail in a separate section of this chapter following the list:
Paradigm 1: Always aim for zero accidents.
Paradigm 2: Be courageous and “Just say no.”
Paradigm 3: Spend significant effort on systems requirements analysis.
Paradigm 4: Prevent accidents from single as well as multiple causes.
Paradigm 5: If the solution costs too much money, develop a cheaper solution.
Paradigm 6: Design for Prognostics and Health Monitoring (PHM) to minimize the number of surprise disastrous events or preventable mishaps.
Paradigm 7: Always analyze structure and architecture for safety of complex systems.
Paradigm 8: Develop a comprehensive safety training program to include handling of systems by operators and maintainers.
Paradigm 9: Taking no action is usually not an acceptable option.
Paradigm 10: If you stop using wrong practices, you are likely to discover the right practices.
These paradigms, which are referenced here, are cited throughout the course of this book. Table 1.1, at the back of this chapter, provides a guide to where in this book the various paradigms are addressed.
Table 1.1 Paradigm locations
(A matrix cross‐referencing each of the ten paradigms, by number and title, against Chapters 1–18, with an X marking each chapter in which that paradigm is addressed.)
Philip Crosby, the former Senior Vice President (VP) at ITT and author of the famous book Quality is Free, pioneered the zero defects standard. Crosby considered “zero defects” the only standard you needed, and it applies even more to safety. Zero defects is a practice that aims to prevent defects and errors and to do things right the first time, with the ultimate aim of reducing the level of defects to zero. The overall effect of achieving zero defects is the maximization of profitability.
To experience high profitability, an organization has to compare the life cycle costs of designing the product using current methods versus improving the design for zero accidents using creative solutions. Such creative solutions are usually simple; because of this, they are often called elegant solutions, and it may be cheaper to develop an elegant solution than a complex one. As Jack Welch, the former CEO of General Electric (GE), is quoted as saying in Get Better or Get Beaten [2], doing things simply is the most elegant thing one can do. In a company in Michigan, a shaft/key assembly for a heavy‐duty truck transmission was designed for zero failures for at least 20 years by changing the heat‐treating method. Heat treatment (e.g., annealing) is a process in which heating and cooling a metallic item alters its material properties for the purpose of improving design strength and reducing the risk of hazards or failures. In this case, the temperature range and the heating/cooling rates were varied to achieve the optimum design strength. The cost of the new heat treatment method was the same as the previous method; the only additional cost was that of running a few experiments to determine the temperature range and heating/cooling rates needed to get the desired strength. The new heat treatment method became the cheaper method once the company eliminated warranty costs, the risk of safety‐related lawsuit costs, and projected maintenance costs. The company also received more business as a result of customer satisfaction with the high quality and safety it achieved. The Return on Investment (ROI) was at least 1000%. This was an elegant solution, and it was much cheaper than paying legal penalties and maintenance costs for accidents.
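The ROI arithmetic behind the heat‐treatment example is straightforward: ROI = (net benefit − investment) / investment. The dollar figures in the sketch below are hypothetical placeholders (the text reports only the resulting ROI of at least 1000%), shown just to make the calculation explicit.

```python
# Worked ROI sketch for a design-for-safety investment. All dollar figures
# are hypothetical; the text reports only the resulting ROI of >= 1000%.

experiment_cost = 50_000        # one-time cost of heat-treatment experiments
avoided_warranty = 300_000      # warranty costs eliminated
avoided_lawsuits = 200_000      # expected safety-lawsuit exposure removed
avoided_maintenance = 100_000   # projected maintenance costs eliminated

net_benefit = avoided_warranty + avoided_lawsuits + avoided_maintenance
roi_percent = (net_benefit - experiment_cost) / experiment_cost * 100
print(f"ROI ~= {roi_percent:.0f}%")  # -> 1100%
```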
Similarly, consider another example involving the de Havilland DH 106 Comet aircraft. A redesign of the Comet resulted from the first three fuselage failures, in 1953 and 1954. The failures originated around the perimeter of the large square windows in the fuselage, which were manufactured without increased fuselage thickness in that area. Metal fatigue from the high stress concentration at the sharp window corners was causing the failures. The sharp corners were eliminated by designing oval‐shaped windows. This redesign was much cheaper than paying for the accidents that would have resulted if the design had not been changed [3].
Paradigm 2 is to be courageous and “just say no” to those who want to rush designs through the design review process without exercising due diligence and without taking steps to prevent catastrophic events. Saying “no” at certain times during the system development process prevents possible future catastrophic events as they are discovered. Many organizations have a Final Design Review; some call it the Critical Design Review. This is the last chance to speak up if anyone is concerned about anything in the design. A very important heuristic to remember is “Be courageous and just say no.” The context here is that if the final design is presented with known safety design issues, and everyone votes “yes” to the design approval without seriously challenging it, then your answer should be “no.” Why? Because there are almost always new problems lingering in the minds of the team members who don’t speak up at the appropriate time. They are probably thinking that it is too late to interfere, or they want to be part of the groupthink process where everyone thinks alike. No matter how good the design is, an independent facilitator can find many issues with it. Ford Motor Company hired a new VP during the design of the 1995 model of the Lincoln Continental car. The company had been making this car for years, and everyone on the team had at least 10 years of experience. The design was already approved, but the new VP insisted on questioning every detail of the design with a cross‐functional team composed of engineers from each subsystem.
Though its redesign began four months later than had been intended, the 1995 Lincoln Continental was available on the market one month ahead of schedule. The team made over 700 design changes. Since they made these improvements while the design was still on paper, the team completed their project using only a third of their budgeted 90 million dollars, resulting in savings of 60 million dollars [4].
It takes courage to be a real change agent and a true believer that a change has critical importance. Jack Welch stated in his book Winning [5] that real change agents comprise less than 10% of all business people. They have courage—a certain fearlessness about the unknown. As Jack says, “Change agents usually make themselves known. They’re typically brash, high energy, and more than a little bit paranoid about the future. Very often, they invent change initiatives on their own or ask to lead them. Invariably, they are curious and forward looking. They ask a lot of questions that start with the phrase: ‘Why don’t we…?’”
To recommend design for safety changes at critical points in the system design cycle, and be successful in implementing the design changes, you will need to win people to your way of thinking. As Dale Carnegie stated in his book How to Win Friends and Influence People [6], you need to exercise 12 principles to win people to your way of thinking. Paraphrasing these principles here, we have:
Principle 1: Get the best of an argument by avoiding it.
Principle 2: Respect the other person’s opinions and avoid saying, “You are wrong.”
Principle 3: If you realize that you are wrong, admit you are wrong immediately.
Principle 4: Begin a discussion in a friendly, nonconfrontational way.
Principle 5: Provoke “yes, yes” responses from the other person immediately.
Principle 6: Allow the other person to do the most talking.
Principle 7: Let the other person think your idea is also their idea.
Principle 8: See things from the other person’s point of view and perspective.
