Risk Management in Life-Critical Systems

Description

Risk management deals with prevention, decision-making, action taking, crisis management and recovery, taking into account the consequences of unexpected events. The authors of this book are interested in ecological processes, human behavior, as well as the control and management of life-critical systems, which are potentially highly automated. Three main attributes define life-critical systems, i.e. safety, efficiency and comfort. They typically lead to complex and time-critical issues and can belong to domains such as transportation (trains, cars, aircraft), energy (nuclear, chemical engineering), health, telecommunications, manufacturing and services. The topics covered relate to risk management principles, methods and tools, and reliability assessment; human errors as well as system failures; socio-organizational issues of crisis occurrence and management; co-operative work, including human–machine cooperation and CSCW (computer-supported cooperative work); task and function allocation, authority sharing, interactivity, situation awareness, networking and management evolution; and lessons learned from Human-Centered Design.


Page count: 570

Publication year: 2014




First published 2014 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd, 27-37 St George’s Road, London SW19 4EU, UK

www.iste.co.uk

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

www.wiley.com

© ISTE Ltd 2014

The rights of Patrick Millot to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2014947879

Contents

List of Figures

List of Tables

Foreword

Introduction

PART 1 General Approaches for Crisis Management

1 Dealing with the Unexpected

1.1. Introduction

1.2. From mechanics to software to computer network

1.3. Handling complexity: looking for new models

1.4. Risk taking: dealing with nonlinear dynamic systems

1.5. Discussion

1.6. Conclusion

1.7. Bibliography

2 Vulnerability and Resilience Assessment of Infrastructures and Networks: Concepts and Methodologies

2.1. Introduction

2.2. Risk and vulnerability

2.3. Vulnerability analysis and assessment

2.4. Resilience and main associated concepts

2.5. Paradigms as consequence of risk analysis extension

2.6. Resilience analysis and assessment

2.7. Conclusion: new challenges

2.8. Bibliography

3 The Golden Hour Challenge: Applying Systems Engineering to Life-Critical System of Systems

3.1. Introduction

3.2. The Golden hour: toward a resilient life-critical system of systems

3.3. Systems of systems engineering

3.4. Next steps forward

3.5. Bibliography

4 Situated Risk Visualization in Crisis Management

4.1. Introduction

4.2. Crisis management, emergency management and business continuity

4.3. Risk management in critical operations

4.4. Situated risk visualization in critical operations

4.5. Conclusions and perspectives

4.6. Bibliography

5 Safety Critical Elements of the Railway System: Most Advanced Technologies and Process to Demonstrate and Maintain Highest Safety Performance

5.1. Railways demonstrate the highest safety performance for public transportation

5.2. Key success factors

5.3. The European very high-speed rail technology: a safety concept with more than 30 years of experience and continuous innovation in the technology

5.4. Project management and system integration

5.5. Procedure for risk management

5.6. Conclusion

6 Functional Modeling of Complex Systems

6.1. Introduction

6.2. The modeling paradigm of MFM

6.3. Uses of functional modeling

6.4. Multilevel flow modeling

6.5. Conclusions

6.6. Bibliography

PART 2 Risk Management and Human Factors

7 Designing Driver Assistance Systems in a Risk-based Process

7.1. Risk-based design in perspective

7.2. Human factors in risk-based design

7.3. A quasi-static methodology

7.4. Implementation on board vehicles for driver assistance

7.5. A case study

7.6. Conclusions

7.7. Bibliography

8 Dissonance Engineering for Risk Analysis: A Theoretical Framework

8.1. Introduction

8.2. The concept of dissonance

8.3. A theoretical framework for risk analysis

8.4. Examples of application of the theoretical framework

8.5. Conclusion

8.6. Bibliography

9 The Fading Line between Self and System

9.1. Introduction

9.2. Four events

9.3. Development, drama

9.4. Views on human error

9.5. Peirce’s triadic semiotic system

9.6. Abduction, or how do humans form conclusions

9.7. Heidegger and Descartes

9.8. Designing the signs

9.9. Consequences

9.10. Conclusions

9.11. Bibliography

10 Risk Management: A Model for Procedure Use Analysis

10.1. Introduction

10.2. Procedures in nuclear power

10.3. Description of the model

10.4. Application of the model

10.5. Significance

10.6. Conclusions

10.7. Acknowledgements

10.8. Bibliography

11 Driver-assistance Systems for Road Safety Improvement

11.1. Introduction

11.2. Driver’s vigilance diagnostic

11.3. Driver distraction diagnostic

11.4. Human–machine interaction concept

11.5. Conclusions

11.6. Bibliography

PART 3 Managing Risk via Human–Machine Cooperation

12 Human–Machine Cooperation Principles to Support Life-Critical Systems Management

12.1. Context

12.2. Human–machine cooperation model

12.3. Common work space

12.4. Multilevel cooperation

12.5. Towards a generic modeling of human–machine cooperation

12.6. Conclusion and perspectives

12.7. Bibliography

13 Cooperative Organization for Enhancing Situation Awareness

13.1. Introduction

13.2. Procedure-based behavior versus innovative behavior

13.3. Situation awareness: between usefulness and controversy

13.4. Collective SA: how to take the agent’s organization into account?

13.5. Enhancing collective SA with a support tool issued of cooperation concepts: the common work space

13.6. Conclusion

13.7. Bibliography

14 A Cooperative Assistant for Deep Space Exploration

14.1. Introduction

14.2. The virtual camera

14.3. Evaluation

14.4. Future work

14.5. Conclusion

14.6. Bibliography

15 Managing the Risks of Automobile Accidents via Human–Machine Collaboration

15.1. Introduction

15.2. Trust as human understanding of machine

15.3. Machine understanding of humans

15.4. Design of attention arousal and warning systems

15.5. Trading of authority for control from the driver to the machine under time-critical situations

15.6. Conclusions

15.7. Bibliography

16 Human–Machine Interaction in Automated Vehicles: The ABV Project

16.1. Introduction

16.2. The ABV project

16.3. Specifications of the human–machine cooperation

16.4. Cooperation realization

16.5. Results

16.6. Conclusion

16.7. Bibliography

17 Interactive Surfaces, Tangible Interaction: Perspectives for Risk Management

17.1. Introduction

17.2. State of the art

17.3. Proposition: distributed UI on interactive tables and other surfaces for risk management

17.4. Case studies

17.5. Conclusion

17.6. Acknowledgments

17.7. Bibliography

Conclusion

C.1. A large range of life-critical systems

C.2. Evolution of risk management methods

C.3. Risk management and human factors

C.4. Bibliography

List of Authors

Index

List of Figures

1.1. Expected and actual situation showing small and bigger variations

2.1. Factors shaping the risks faced by critical infrastructures [KRO 08]

2.2. A proposition of risk situations and relevant risk assessment strategies

3.1. eCall: the crashed car calls 112! [EC 13e]

3.2. N² matrix of pairings of different systems within the system of systems [RUA 11]

3.3. Functional model of the accident detection system architecture [RUA 11]

4.1. RTO and maximum tolerable period of disruption [COR 07]

4.2. Global view of the 3D interactive scene – Unity 3D [STE 13] (For a color version of this figure, see www.iste.co.uk/millot/riskmanagement)

5.1. Range of orders observed over the last decade

5.2. Main components of a railway system

5.3. The bogie integrating six safety critical functions

5.4. Classical development V-cycle

5.5. Risk management organization in European Union

5.6. Techniques for identification and evaluation of hazards and their subsequent risks

5.7. European safety management system

5.8. Safety authorization and safety management system

6.1. The means-end relation

6.2. Means-end structure showing the possible combinations of means-end relations

6.3. MFM concepts

6.4. A heat transfer loop

6.5. MFM of heat transfer loop without control

6.6. MFM of heat transfer loop with flow and temperature control

6.7. MFM model of heat transfer loop with a protection system suppressing high temperature in HE2

7.1. Risk-based design methodology flowchart

7.2. Sheridan’s five levels of “supervisory control” (adapted from [SHE 97])

7.3. A generic operator model (adapted from [CAR 07])

7.4. Essential nature of human–machine interaction

7.5. Error propensity (EP) and dynamic generation of sequences

7.6. General structure of the quasi-static methodology for RBD

7.7. Expanded human performance event tree (adapted from [CAC 12])

7.8. Generic risk matrix. For a color version of this figure, see www.iste.co.uk/millot/riskmanagement.zip

7.9. ADAS at level of driving task a) and temporal sequence of intervention b)

7.10. EHPET for the case study with ADAS

8.1. The DIMAGE model

8.2. Stable and unstable level of a dissonance dimension

8.3. The theoretical framework based on human–machine learning to control dissonances

8.4. The reverse comic strip-based approach to identify dissonances

8.5. Examples of emotion and sound variation images

8.6. The knowledge analysis algorithm

8.7. The dissonance evaluation algorithm

8.8. The generic reinforcement-based learning process

8.9. A reinforcement algorithm by case-based reasoning

8.10. The interpretation of pictures from rail platform signaling systems

8.11. The associated reverse comic strip for dissonance identification

8.12. The associated rule analysis for dissonance identification and evaluation

8.13. A prediction process based on the knowledge reinforcement

8.14. The correct prediction rate by reinforcing the knowledge base

9.1. Depiction of Peirce’s triadic relationship between object, sign and interpretation

9.2. Diagram illustrating the problems of determining causes and control actions in an uncertain system. An unknown disturbance might be acting on the system, a shift in its parameter may have happened, leading to a qualitative change in dynamics, or a structural change might have occurred, leading to a significantly different system. The innovation or surprise i is the difference between observation and expectation, and may lead to adjustment. Whether control is based on observation or on expectation is uncertain, and probably variable

10.1. A model for procedure analysis

11.1. Examples of driver-assistance systems

11.2. Vehicle/driver/environment system

11.3. The involuntary transition from waking to sleeping (from Alain Muzet)

11.4. Algorithmic principle for the hypovigilance diagnostic of the driver and results of this analysis on a subject in real driving conditions

11.5. Classification principles for visual distraction detection

11.6. DrivEasy concept

12.1. Attributes of cooperative agent

12.2. Cooperative activity through agents’ know-how (Agi KH), agents’ know-how-to-cooperate (Agi KHC), agents’ situation awareness (Agi SA), common frame of reference (COFOR), team situation awareness (Team SA) and common work space

12.3. Fighter aircraft CWS (example of the tactical situation SITAC). For a color version of this figure, see www.iste.co.uk/millot/riskmanagement

12.4. Multilevel cooperation

12.5. Cooperative tasks 1-KH; 2-CWS; 3-KHC (current task); 4-KHC (intention); 5-KHC (authority); 6-KHC (model)

12.6. Robotics CWS. For a color version of this figure, see www.iste.co.uk/millot/riskmanagement

12.7. Example of agents’ abilities identification for task sharing and authority management (red arrows). For a color version of this figure, see www.iste.co.uk/millot/riskmanagement.zip

13.1. Allocation of functions among humans and machines (adapted from [BOY 11])

13.2. SA three-level model adapted from [END 95a]

13.3. Team-SA adapted from [SAL 08]

13.4. The three forms of task distribution according to agents’ KH and related tasks to share

13.5. Task distribution and related SA distribution, in the augmentative and integrative forms

13.6. Task distribution and related SA distribution, in the debative form

13.7. CWS principle for team SA [MIL 13]

14.1. The model of cooperation between astronauts and ground-based experts and how it is changing for deep space exploration

14.2. Virtual camera data feedback loop

14.3. The human-centered design process for the development of the virtual camera

14.4. Riding in the NASA Lunar Electric Rover vehicle at DesertRATS, collecting user requirements for the development of the VC

14.5. Horizontal prototype for the VC showing icons and interface. For a color version of this figure, see www.iste.co.uk/millot/riskmanagement

14.6. The VC vertical prototype with icons labeled. For a color version of this figure, see www.iste.co.uk/millot/riskmanagement

15.1. The structure of trust

15.2. Deceleration meter

15.3. a) Pressure distribution sensors and b) the obtained data. For a color version of this figure, see www.iste.co.uk/millot/riskmanagement

15.4. Pressure distribution sensors and the obtained data [ISH 13]. For a color version of this figure, see www.iste.co.uk/millot/riskmanagement

15.5. Model of driver lane change intent emergence [ZHO 09]

15.6. a) The attention arousing display and b) its effects on THW [ITO 13a]. For a color version of this figure, see www.iste.co.uk/millot/riskmanagement

15.7. Driver reaction against the rapid deceleration of the forward vehicle [ITO 08b]

15.8. A situation where machine protective action is needed. In this example, the left lane is the cruising lane and the right lane is the passing lane. The vehicle in the right lane is in the blind spot of the side-view mirror of the host vehicle

16.1. Structure of the ABV project

16.2. Graph of the different modes of the ABV system

16.3. Graph of the different modes of the ABV system. For a color version of this figure, see www.iste.co.uk/millot/riskmanagement

16.4. Driver monitoring system from Continental. For a color version of this figure, see www.iste.co.uk/millot/riskmanagement

16.5. Shared driving control architecture

16.6. Experimental results on the SHERPA simulator. For a color version of this figure, see www.iste.co.uk/millot/riskmanagement

16.7. Evaluation of the sharing quality. For a color version of this figure, see www.iste.co.uk/millot/riskmanagement

17.1. Two configurations for risk management UI: a) centralized distribution of UI; b) network of distributed UI [LEP 11]. For a color version of this figure, see www.iste.co.uk/millot/riskmanagement.zip

17.2. Crisis unit using TangiSense and other platforms (adapted from [LEP 11]). For a color version of this figure, see www.iste.co.uk/millot/riskmanagement.zip

17.3. A road traffic simulation on two TangiSense interactive tables. For a color version of this figure, see www.iste.co.uk/millot/riskmanagement.zip

17.4. Use of zoom tangible object, without effect on the other table. For a color version of this figure, see www.iste.co.uk/millot/riskmanagement.zip

17.5. Tangiget synchronization with effect on TangiSense 2. For a color version of this figure, see www.iste.co.uk/millot/riskmanagement.zip

17.6. The TangiSense table as equipped for the risk game with ground map display, tangible objects and virtual feedback shown. For a color version of this figure, see www.iste.co.uk/millot/riskmanagement.zip

17.7. Functional view showing the various types of agents, filters and traces. For a color version of this figure, see www.iste.co.uk/millot/riskmanagement.zip

List of Tables

2.1. Classification of initiating events

2.2. Site/building inherent vulnerability assessment matrix (partial risk assessment) [FEM 03]

3.1. Major problems and respective drivers that eCall can improve [EC 11a]

4.1. Approaches to crisis management

4.2. Crisis features, types and questions to be answered [STE 13]

7.1. Possible data for the traffic light scenario

10.1. Have you ever witnessed a scenario where?

10.2. Solutions table for Case 15

10.3. Decision point metrics

14.1. A use case for surface exploration

15.1. Scale of degrees of automation [SHE 92, INA 98]

Foreword

The theme “Risk Management in Life-Critical Systems” resulted from cooperative work between LAMIH (French acronym for Laboratory of Industrial and Human Automation, Mechanics and Computer Science) at the University of Valenciennes (France) and the Human-Centered Design Institute (HCDi) at the Florida Institute of Technology (USA), within the framework of the Partner University Funds (PUF) Joint Research Lab on Risk Management in Life-Critical Systems, co-chaired by me and Dr Guy A. Boy.

A summer school on this theme was held at Valenciennes on 1–5 July 2013, gathering more than 20 specialists in the domain from seven countries (France, the USA, Italy, Germany, the Netherlands, Japan and Denmark), among the most developed countries, where “safety” assumes increasing importance. This book is the result of the contributions of most of these researchers.

This book relates to the management of risk. Another book, focusing on risk taking, will be edited by my colleague Dr Guy A. Boy and published by Springer, UK.

Patrick MILLOT

September 2014

Introduction

Introduction written by Patrick MILLOT.

Life-critical systems are characterized by three main attributes: safety, efficiency and comfort. They typically lead to complex and time-critical issues, and belong to domains such as transportation (trains, cars, aircraft, air traffic control), space exploration, energy (nuclear and chemical engineering), health and medical care, telecommunication networks, cooperative robot fleets, manufacturing, and services.

Risk management deals with prevention, decision-making, action taking, crisis management and recovery, taking into account the consequences of unexpected events. We are interested in ecological processes, human behavior, as well as the control and management of life-critical systems, which are potentially highly automated. Our approach focuses on “human(s) in the loop” systems and simulations, taking advantage of the human ability to cope with unexpected dangerous events on the one hand, and attempting to recover from human errors and system failures on the other. Our competences span both Human–Computer Interaction and Human–Machine Systems. Interactivity and human-centered automation are our main focuses.

The approach consists of three complementary steps: prevention, where any unexpected event should be blocked or managed before it propagates; recovery, when the event results in an accident and protective measures become mandatory to avoid damage; and, after an accident occurs, management of consequences to minimize or remove the most severe ones. Global crisis management methods and organizations are considered.

Prevention can be achieved by enhancing both system and human capabilities to guarantee optimal task execution:

– by defining procedures, system monitoring devices, control and management methods;
– by taking care of the socio-organizational context of human activities, in particular the adequacy between system demands and human resources. Where adequacy is lacking, assistance tools must be introduced, exploiting current developments in information technologies and engineering sciences.

The specialties of our community, and the originality of our approaches, lie in combining these technologies with cognitive science knowledge and skills in “human in the loop” systems. Our main related research topics are: the impact of new technology on human situation awareness (SA); cooperative work, including human–machine cooperation and computer-supported cooperative work (CSCW); and responsibility and accountability (task and function allocation, authority sharing).

Recovery can be enhanced:

– by providing technical protective measures, such as barriers, which prevent erroneous actions;
– by developing reliability assessment methods for detecting human errors as well as system failures;
– and by improving humans’ detection and recovery of their own errors; human–machine or human–human cooperation is one way to enhance system resilience.

Crisis management consists of:

– developing dedicated methods;
– coping with socio-organizational issues using a multiagent approach, for instance through an adaptive control organization.

The different themes developed in this book relate to complementary topics addressed through pluridisciplinary approaches: some relate more to prevention, others to recovery, and the rest to global crisis management, but all concern concrete application fields among life-critical systems.

Seventeen chapters address these important issues. We chose to gather them into three complementary parts: (1) general approaches for crisis management, (2) risk management and human factors and (3) managing risks via human–machine cooperation.

Part 1 is composed of the first six chapters, dedicated to general approaches for crisis management:

– Chapter 1, written by Guy A. Boy, criticizes theories, methods and tools developed several years ago, based on linear approaches to engineering systems that treat unexpected and rare events as exceptions, instead of including them in the flow of everyday events handled by well-trained and experienced domain experts. Consequently, regulations, clumsy automation and operational procedures still accumulate in the short term instead of integrating long-term experience feedback. This results in a focus on quality assurance and human–machine interfaces (HMI) instead of on human–system integration. The author promotes human-centered processes such as creativity, adaptability and problem solving, and the need to be better acquainted with risk taking, preparation, maturity management, complacency emerging from routine operations and educated common sense.
– Chapter 2, written by Eric Chatelet, starts with well-known concepts in risk analysis but introduces the emerging use of the resilience concept. The vulnerability concept is one of the starting points for extending risk analysis approaches. The author gives an overview of approaches dedicated to resilience assessment for critical or catastrophic events concerning infrastructures and/or networks.
– Chapter 3, written by Jean René Ruault, deals with a case study on emergency system management from an architectural and system of systems engineering perspective. It gives an overview of all the dimensions to take into account when providing a geographical area with the capacity to manage crisis situations – in the present case, road accidents – in order to reduce accidental mortality and morbidity and meet the golden hour challenge. This case study shows how operational, technical, economic and social dimensions are interlinked, both in the practical use of products and in service provision. Based on a reference operational scenario, the author shows how to define the perimeter and functions of a system of systems.
– Chapter 4, written by Lucas Stephane, provides an overview of state-of-the-art approaches to critical operations and proposes a solution based on the integration of several visual concepts within a single interactive 3D scene intended to support situated visualization of risk in crisis situations. The author first presents approaches to critical operations and synthesizes risk approaches. He then proposes the integrated 3D scene and discusses user-test results and feedback.
– Chapter 5, written by Stephane Romei, shows the high level of performance attained by the European railway system. It results from several success factors, among which three are of particular importance: (1) expertise and innovation in design, operation and maintenance of safety-critical technologies, (2) competences in project management and system integration and (3) procedures for risk management. Illustrations are taken from very high-speed train technology.
– Finally, Chapter 6, written by Morten Lind, deals with system complexity, another dimension that influences decisions made by system designers and that may affect the vulnerability of systems to disturbances, their efficiency, the safety of their operations and their maintainability. The author describes a modeling methodology capable of representing industrial processes and technical infrastructures from multiple perspectives. The methodology, called Multilevel Flow Modeling (MFM), has a particular focus on semantic complexity but also addresses syntactic complexity. MFM uses means-end and part-whole concepts to distinguish between different levels of abstraction representing selected aspects of a system. MFM is applied to process and automation design and to reasoning about fault management and supervision and control of complex plants.
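The means-end idea that MFM builds on can be illustrated, very loosely, with a toy model. The goal and function names below are illustrative assumptions for a heat transfer loop, not the actual MFM notation or content of the chapter:

```python
# Toy illustration of means-end decomposition (loosely inspired by MFM).
# Each entry maps a goal (end) to the functions (means) that achieve it;
# all names here are invented for the example.
means_end = {
    "maintain_temperature": ["transport_heat", "control_flow"],
    "transport_heat": ["pump_coolant", "heat_exchanger"],
}

def means_for(goal, model):
    """Recursively collect every function that serves a goal."""
    result = []
    for m in model.get(goal, []):
        result.append(m)
        result.extend(means_for(m, model))  # descend one abstraction level
    return result

print(means_for("maintain_temperature", means_end))
# -> ['transport_heat', 'pump_coolant', 'heat_exchanger', 'control_flow']
```

Walking the graph from ends to means is the kind of reasoning MFM supports for fault management: if a goal is violated, the candidate causes are the functions below it.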

Part 2 comprises the following five chapters and relates to human factors, the second dimension besides the technical and methodological aspects of risk management:

– Chapter 7, written by Pietro Carlo Cacciabue, presents a well-formalized and consolidated methodology called Risk-Based Design (RBD) that systematically integrates risk analysis into the design process, with the aim of preventing, reducing and/or containing the hazards and consequences embedded in the system as the design process evolves. Formally, it identifies the hazards of the system and continuously optimizes design decisions to mitigate them or limit the likelihood of the associated consequences, i.e. the associated risk. The author first discusses the specific theoretical problem of handling dynamic human–machine interactions in a safety- and risk-based design perspective. A development for the automotive domain is then considered and a case study complements the theoretical discussion.
– Chapter 8, written by Frederic Vanderhaegen, presents a new, original approach to risk analysis based on the dissonance concept. A dissonance occurs when individual or collective knowledge conflicts. A theoretical framework is then proposed to control dissonances, based on the Dissonance Management (DIMAGE) model and the human–machine learning concept. The dissonance identification, evaluation and reduction functions of DIMAGE are supported by automated tools that analyze human behavior and knowledge. Three examples illustrate the approach.
– Chapter 9, written by René Van Paassen, deals with the influence of human errors on the reliability of systems, illustrated by examples from aviation. While technical developments have increased the reliability of aircraft, it cannot be expected that the human component of a complex technical system has undergone similar advances in reliability. This chapter offers a designer’s view on the creation of combined human–machine systems that provide safe, reliable and flexible operation. A common approach in design is to break a complete system down into subsystems and to focus on the design of the individual components. This can, up to a point, be used in the design of safe systems. However, the adaptive nature of the “human” component, which is precisely the reason for having humans in complex systems, is such that it is not practical to isolate the human as a single component and assume that the synthesis of the human with the other components yields the complete system. Rather, humans “merge” with the complete system to a far greater extent than often imagined, and a designer needs to be aware of that. The author explores – through reflection on a number of incidents and accidents – the nature of mishaps in human–machine systems and the factors that might have influenced these events. The chapter begins with a brief introduction of the events and an overview of the different ways of analyzing them.
– Chapter 10, written by Kara Schmitt, challenges the US nuclear industry’s assumption that “strict adherence to procedure increases safety”, to see whether it is still valid. The author reviews what has changed within the industry and verifies that it does have strict adherence to procedures and a culture of rigid compliance. She then describes an application of an experimental protocol, drawing on expert judgment, to show that strict procedure adherence alone is not sufficient for overall system safety.
– Chapter 11, written by Serge Boverie, shows that in Organization for Economic Cooperation and Development (OECD) countries, about 90% of accidents are due to intentional or unintentional driver behavior: poor perception or knowledge of the driving environment (obstacles, etc.), physiological conditions (drowsiness and sleepiness), impaired physical condition (elderly drivers), etc. The author shows how the development of increasingly intelligent advanced driver assistance systems (ADASs) should partly solve these problems. New functions will improve the driver’s perception of the environment (night vision, blind spot detection and obstacle detection). In critical situations, they can substitute for the driver (e.g. autonomous emergency braking). The new ADAS generation will offer the driver the possibility to adapt the level of assistance to his or her comprehension, needs, aptitudes, capacities and availability. For instance, real-time diagnosis of the driver’s state (sleepiness, drowsiness, head orientation or extra-driving activity) is now under development.
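The notion of risk that Risk-Based Design (Chapter 7) manipulates, a combination of the likelihood of a hazard and the severity of its consequences classified through a risk matrix, can be sketched in a few lines. The 1–5 scales, thresholds and class names below are illustrative assumptions, not those used in the chapter:

```python
# Minimal generic risk-matrix sketch: a risk class derived from
# likelihood x severity. Scales and thresholds are invented for the example.
def risk_class(likelihood, severity):
    """Classify a hazard given likelihood and severity scores on 1-5 scales."""
    score = likelihood * severity
    if score >= 15:
        return "unacceptable"  # design change required to mitigate the hazard
    if score >= 6:
        return "tolerable"     # reduce as low as reasonably practicable
    return "acceptable"        # monitor only

print(risk_class(5, 4))  # frequent and critical -> "unacceptable"
print(risk_class(1, 2))  # rare and minor -> "acceptable"
```

In RBD terms, design decisions would iterate until every identified hazard falls out of the unacceptable region of such a matrix.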

Finally, Part 3 groups together the last six chapters, dedicated to managing risk via human–machine cooperation:

– Chapter 12, written by Marie Pierre Pacaux-Lemoine, presents a model of human–machine cooperation drawn from different disciplines: human engineering, automation science, computer science, and cognitive and social psychology. The model aims to enable humans and machines to work as partners while supporting interactions between them, i.e. easing the perception and understanding of other agents’ viewpoints and behavior. Such a support is called a common work space (CWS), which we will see again in the following chapters. These principles aim to evaluate the risk of a human–machine system reaching an unstable and unrecoverable state. Several application domains, including car driving, air traffic control, fighter aircraft and robotics, illustrate this framework.
– Chapter 13, written by Patrick Millot, shows how organizations that improve SA enhance human–machine safety. Indeed, people involved in the control and management of life-critical systems play two kinds of roles: negative, with their ability to make errors, and positive, with their unique involvement and capacity to deal with the unexpected. The human–machine system designer therefore faces a drastic dilemma: how to combine both roles, a procedure-based automated behavior versus an innovative behavior that allows humans to be “aware” and to cope with unknown situations. SA, which characterizes the human presence in the system, becomes a crucial concept for that purpose. The author reviews some of SA’s weaknesses and proposes several improvements, especially concerning the effect of the organization and of task distribution among the agents on constructing an SA distribution and a support for collective work. This issue derives from the human–machine cooperation framework, and the support for collective SA is once again the CWS.
– Chapter 14, written by Donald Platt, takes a human-centered design approach to develop a tool for improved SA and cooperation in a remote and possibly hostile environment. The application field is deep space exploration, whose associated risks include physical, mental, emotional and even organizational risks. Cooperation between astronauts on the planet surface and the mission operator and chief scientist on Earth takes the form of a virtual camera (VC). The VC displays the dialog between the human agents, but is also a database holding various useful information on the planet's geography, geology, etc., which can be recorded in its memory beforehand or downloaded online. It plays the role of a CWS. The author relates how its ability to improve individual astronaut SA as well as collective SA was tested experimentally.
– Chapter 15, written by Makoto Itoh, returns to ADASs. The human driver has to place appropriate trust in the ADAS, based on an appropriate understanding of the tool. For this purpose, system designers need to understand what trust is, what inappropriate trust is (i.e. overtrust and distrust), and how to design an ADAS that is appropriately trusted by human drivers. The ADAS also has to understand the physiological and/or cognitive state of the human driver in order to determine whether it is really necessary to provide assistive functions, especially safety control actions. The author presents a theoretical model of trust in ADASs, which is useful for understanding what overtrust and distrust are and what is needed to avoid inappropriate trust. This chapter also presents several driver-monitoring techniques, especially for detecting a driver's drowsiness or fatigue and a driver's lane-changing intent. Finally, he shows several examples of the design of attention arousal systems, warning systems and systems that perform safety control actions autonomously.
– Chapter 16, written by Chouki Sentouh and Jean Christophe Popieul, presents the ABV project (French acronym for low-speed automation). It focuses on the interaction between human and machine with continuous sharing of the driving task, considering the acceptability of the assistance as well as the driver's distraction and drowsiness. The main motivation of this project is that, in many situations, the driver must drive at a speed lower than 50 km/h (the speed limit in urban areas), or in congested traffic in the surrounding areas of big cities, for example. The authors describe the specification of the cooperation principles between the driver and the lane-keeping assistance system developed in the framework of the ABV project.
– Finally, Chapter 17, written by Christophe Kolski, Catherine Garbay, Yoann Lebrun, Fabien Badeig, Sophie Lepreux, René Mandiau and Emmanuel Adam, describes interactive tables (also called tabletops), which can be considered new interaction platforms: collaborative, co-located workspaces allowing several users to interact (work, play, etc.) simultaneously. The authors' goal is to share an application between several users, platforms (tabletops, mobile and tablet devices and other interactive supports) and types of interaction, allowing distributed human–computer interactions. Such an approach may open new perspectives for risk management; indeed, it may become possible to propose new types of remote and collaborative working in this domain.

PART 1

General Approaches for Crisis Management

1

Dealing with the Unexpected

Chapter written by Guy A. BOY.

1.1. Introduction

Sectors dealing with life-critical systems (LCSs), such as aerospace, nuclear energy and medicine, have developed safety cultures that attempt to frame operations within acceptable domains of risk. They have improved their systems engineering approaches and developed more appropriate regulations, operational procedures and training programs. System reliability has been extensively studied, and related methods have been developed to improve safety [NIL 03]. Human reliability is a more difficult endeavor; human factors specialists have developed approaches based on human error analysis and management [HOL 98]. Despite this extensive framework, we still have to face unexpected situations that people must manage in order to minimize their consequences.
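As a generic illustration of the probabilistic building blocks behind such system reliability methods (a textbook sketch, not material taken from this chapter), the reliability of series and redundant arrangements of independent components can be computed as follows:

```python
def series_reliability(components):
    """A series system works only if every component works:
    R = r1 * r2 * ... * rn (independent components)."""
    r = 1.0
    for c in components:
        r *= c
    return r

def parallel_reliability(components):
    """A parallel (redundant) system fails only if all components fail:
    R = 1 - (1 - r1) * (1 - r2) * ... * (1 - rn)."""
    f = 1.0
    for c in components:
        f *= (1.0 - c)
    return 1.0 - f

# Three components, each 99% reliable:
rs = series_reliability([0.99, 0.99, 0.99])    # ~0.970299
rp = parallel_reliability([0.99, 0.99, 0.99])  # ~0.999999
```

The contrast between the two arrangements is the quantitative argument for redundancy in life-critical systems: chaining components degrades reliability, while duplicating them improves it.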

Continue reading in the full edition!
