This book, on the ergonomics of human–machine systems, is aimed at engineers specializing in informatics, automation, production or robotics, who are faced with a significant dilemma during the design of human–machine systems. On the one hand, the human operator guarantees the reliability of the system and has been known to salvage numerous critical situations through an ability to reason in unplanned, imprecise and uncertain situations; on the other hand, the human operator can be unpredictable and create disturbances in the automated system. The first part of the book is dedicated to the methods of human-centered design from three points of view: models developed by human engineers and functional models that explain human behavior in its environment, models from cognitive psychology, and models from the domain of automobile driving. Part 2 develops the methods of evaluation of human–machine systems, looking at the evaluation of the activity of the human operator at work and at human error analysis methods. Finally, Part 3 is dedicated to human–machine cooperation, where the authors show that a cooperative agent possesses both a know-how and a so-called know-how-to-cooperate, and show how to design and evaluate that cooperation in real industrial contexts.
Foreword
Introduction: Human–Machine Systems and Ergonomics
I.1. What has ergonomics got to do with human–machine systems?
I.2. Increasing level of automation?
I.3. Bibliography
Part 1 Design of Human–Machine Systems
1 Human-Centered Design
1.1. Introduction
1.2. The task–system–operator triangle
1.3. Organization of the human–machine system
1.4. Human-centered design methodology
1.5. Conclusion
1.6. Bibliography
2 Integration of Ergonomics in the Design of Human–Machine Systems
2.1. Introduction
2.2. Classic and partial approaches of the system
2.3. The central notion of performance (Long, Dowell and Timmer)
2.4. An integrated approach: cognitive work analysis
2.5. Conclusion
2.6. Bibliography
3 The Use of Accidents in Design: The Case of Road Accidents
3.1. Accidents, correction and prevention
3.2. Analysis of accidents specific to the road
3.3. Need-driven approach
3.4. A priori analyses
3.5. What assistance for which needs?
3.6. Case of cooperative systems
3.7. Using results in design
3.8. Conclusion
3.9. Bibliography
Part 2 Evaluation Models of Human–Machine Systems
4 Models Based on the Analysis of Human Behavior: Example of the Detection of Hypo-Vigilance in Automobile Driving
4.1. Introduction
4.2. The different models used in detection and diagnosis
4.3. The case of human–machine systems
4.4. Example of application: automobile driving
4.5. Conclusion
4.6. Bibliography
5 Evaluation of Human Reliability in Systems Engineering
5.1. Introduction
5.2. Principles of evaluating human reliability
5.3. Analysis of dynamic reliability
5.4. Analysis of altered or added tasks
5.5. Perspectives for the design of a safe system
5.6. Conclusion
5.7. Bibliography
Part 3 Human–Machine Cooperation
6 Causal Reasoning: A Tool for Human–Machine Cooperation
6.1. Introduction
6.2. Supervision
6.3. Qualitative model
6.4. Causal graphs and event-based simulation
6.5. Hierarchy of behavior models
6.6. Fault filtering
6.7. Discussion and conclusion
6.8. Bibliography
7 Human–Machine Cooperation: A Functional Approach
7.1. Introduction
7.2. A functional approach to cooperation
7.3. Cooperation in actions
7.4. Cooperation in planning
7.5. Meta-cooperation
7.6. Conclusion
7.7. Bibliography
8 The Common Work Space for the Support of Supervision and Human–Machine Cooperation
8.1. Introduction
8.2. Human–machine cooperation
8.3. Application in air traffic control
8.4. Application to the process of nuclear fuel reprocessing
8.5. Conclusion
8.6. Acronyms
8.7. Bibliography
9 Human–Machine Cooperation and Situation Awareness
9.1. Introduction
9.2. Collective situation awareness
9.3. Structural approaches of human–machine cooperation
9.4. Human–machine cooperation: a functional approach
9.5. Common work space for team-SA
9.6. Conclusion
9.7. Bibliography
Conclusion
List of Authors
Index
First published 2014 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
www.iste.co.uk
John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com
© ISTE Ltd 2014
The rights of Patrick Millot to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2014939767
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-84821-685-3
There are three central words in this book: “ergonomics”, “human being” and “machine”. This book is not a book on ergonomics, although the topic is duly covered; neither is it a book on human science, even though human beings play a key role in it. It is what I like to call a book of interfaces, a book that results from research conducted at the junction of several topics. By “topics” I mean the scientific domains into which research, in France and elsewhere, is organized. Classically, one belongs to a single topic and conducts research within it. However, this categorization does not give a true picture of all the problems of research, including those found at the boundary between several topics.
Separation into different topics satisfies a certain element of Cartesianism in its presentation, but can cause confusion with regard to the work of researchers focusing on these interfaces. Indeed, they are not focused on the difficult problems of topic A or topic B; they are rather focused on the difficult problems that involve both topics A and B. This structure therefore presents shortcomings in appreciating the quality of much important research. Simply by looking at the history of scientific progress, it would appear that a lot of breakthroughs happen at these interfaces.
For this reason, among others, a growing number of countries have organized research into projects funded through agencies (such as the National Science Foundation (NSF) and the Defense Advanced Research Projects Agency (DARPA) in the United States, and the French National Research Agency (ANR) in France). A project is presented as a scientific objective and, therefore, is not restricted to one topic. It is run by a consortium consisting of members of the different topics required to reach this research objective over a certain timescale. The evaluation of a project allows us to determine the quality of the research conducted, and this evaluation is no longer limited to one topic. This organization into projects allows a greater focus on various difficult problems, which the one-topic limit did not.
To conclude this point, this does not mean that research must only be conducted through various projects: actually, depending on the goal of the planned research, two types of presentation may be necessary, and these must therefore coexist. However, this also means that research at interfaces is just as important as purely topic-based research and that both must be evaluated using relevant scientific criteria.
The work put together by Patrick Millot sits at the junction of several topics, since it covers work on human–machine systems and their conception. It can therefore be defined as a project whose objective is to assemble the results of the most recent research in this field; this project is run by a consortium of acclaimed scientists from a variety of different backgrounds. Let us not mention the timescale of this project, as it equates to the maturing time of a book: a time that is usually underestimated.
Human–machine systems are as present in our working world and everyday life as they are in the technological world. These systems are therefore very important, as poor choices during design can have very serious consequences, especially in terms of safety, as recent examples have certainly shown. I have chosen one example in particular, as it is universally known. One system, the automobile, is itself coupled to another, the road system with its other drivers, and has a single pilot: the driver of the vehicle itself. To this collective, a significant limitation must be added: the driver is not a professional. The human–machine system must therefore be simple (the driver has not received any specialized training other than that required to obtain a driving license) but very informative, without being overbearing (there is also a lot of information coming from outside the vehicle), with the goal of making driving as safe and enjoyable as possible. We can see the difficulty and complexity of such a human–machine system and therefore the necessity of ongoing research on this topic. The present work offers solid lines of thought and solutions, especially as it comes at the topic from an original angle: putting the person at the center of human–machine systems.
From this point of view, the book is organized into three complementary parts that enable the different aspects of the problem to be addressed: part 1 focuses on the methods of conception, part 2 focuses on the methods of evaluation and, finally, part 3 focuses on human–machine cooperation. Undoubtedly, the readers will find in this book an idea of the state of research in this area, and hopefully the answers to many of their questions.
Finally, this book introduces us to a selection of authors from very different disciplines: specialists in “human engineering”, cognitive psychology, artificial intelligence, etc. This fits in well with the requirement of uniting acclaimed specialists from different topics so as to conduct or even understand research at the interfaces.
To conclude, I will reiterate something that I often say, which is that one cannot be precise at the interface of topics: in other words, one cannot be a specialist in inter-disciplinarity. On the contrary, one can be excellent in one's own discipline and know how to cooperate with other specialists, and from this cooperation new advances in knowledge arise. However, this cooperation is only fruitful if the different players are excellent in their respective domains.
This book is the perfect illustration of this concept, and I am convinced that the readers will take great pleasure and interest in reading a book that offers a complete vision of the conception of a human–machine system that is centered on the “human fully involved in the loop”.
Bernard DUBUISSON
Professor Emeritus, UMR Heudiasyc
University of Technology of Compiègne
Patrick MILLOT
This book on the ergonomics of human–machine1 systems is aimed at engineers specializing in informatics, automation, production or robotics, who are confronted with an important dilemma during the design of human–machine systems:
– on the one hand, the human operator guarantees the reliability of the system and has been known to salvage numerous critical situations through an ability to reason in unplanned, imprecise and uncertain situations: the Apollo 13 space mission is a legendary example of this2, where the three astronauts owed their survival to their own genius and innovative capabilities, as well as to those of the engineers on the ground;
– on the other hand, the human operator can be unpredictable and create disturbances in the automated system; the nuclear industry is an “interesting”3 example of this in that it has provided three dramatic examples in a little over 30 years: Three Mile Island in 1979, Chernobyl in 1986 and Fukushima in 2011. The Mont Sainte-Odile crash is another significant example, from the aeronautical field.
At the beginning of the 1990s, a well-known researcher in the French control community said to me: “human–machine systems are interesting, but I don’t see what they’ve got to do with automation!” On the contrary, the three nuclear accidents mentioned show what the consequences of badly designed human–machine interaction can be. Kara Schmitt accurately summarizes the problems that can be encountered with human–machine interaction. The Three Mile Island accident was the result of a misunderstanding of the automation: the operators did not understand the function of the automatic safety system that would have avoided the accident, and disconnected it. The major Chernobyl accident was characterized by a lack of confidence in automation, associated with a poor understanding of nuclear physics and the lack of a culture of automated safety in Eastern European countries at the time. These combined factors led the operators to conduct tests that pushed the reactor to its limits after having turned off the safety systems. Finally, the Fukushima accident, which took place after a tsunami damaged the nuclear plant, was the result of a lack of appropriate automation associated with an under-estimate of the risks during design: the anti-tsunami wall was only 5.7 m high while the waves reached 10 m, and the emergency generators located underground were flooded, their batteries no longer having enough power to feed the cooling systems and to secure the reactors after shutdown. Moreover, the various emergency stop systems were not automated, and the safety principles were active rather than passive, and therefore required energy to operate [SCH 12].
As a matter of fact, automation does not compete with keeping a human in the control and supervisory loops, but the human operator must not be reduced to an emergency device for handling non-automated activities. On the contrary, teams of human operators must be fully integrated into the command, control and supervisory loops of human–machine systems, so as to get as much as possible out of their capabilities without suffering from the disadvantages. This book therefore focuses on these problems of human-centered automation and the factors they involve.
The approaches to the different solutions rest on models, in the sense of a “better understanding”, of the human operators as much as of the systems themselves and their environment. Human modeling has united the human factors community for the last 70 years, since the end of World War II. Given the limitations of the systems of that time, which had relatively low levels of automation and therefore required a human presence for piloting, control and regulation tasks, researchers sought unified approaches linking the modeling (for control purposes) of the technical component of the human–machine system and the modeling of the human component. These approaches were inspired by “engineering” theories, first information theory and then control theory [SHE 74]. Human engineering research belongs to this movement and was mainly brought to France by Noël Malvache [MAL 73]. The reader can find a history of the approaches used in the human factors research field in [MIL 03] and [SHE 85].
Since the end of the 1990s, the application domains studied have strongly evolved toward large, complex systems, whether discrete, continuous or hybrid. These are designated as systems of systems, networked systems and multi-agent systems. The level of automation has greatly increased, which has brought about an increase in the performance of the production or service system.
Nevertheless, other objectives must be taken into account, particularly safety and security. Interest in life-critical systems has grown steadily. At the beginning of the 2000s, Amalberti proposed the following categories of risky systems [AMA 05]:
– the riskiest systems involve amateur individuals, for example alpine mountaineering, with a risk level of around 10^-2;
– next, he places systems available to the public in which the safety culture is poorly developed (or not consistent) and the selection of operators is not very discriminating, such as car driving, with a level of 10^-3;
– the chemical industry is next, with a risk level of 10^-4;
– charter flights, with a level of 10^-5;
– finally come systems that are said to be ultra-safe, such as commercial aviation, the nuclear industry and rail transport, with a risk level of 10^-6.
In these systems, the human is seen as an unreliable factor: in 1950, out of a hundred accidents, seventy were due to a technical problem and thirty to human causes. Since 2000, this proportion has globally been reversed, with seventy human causes for thirty technical causes. This is largely due to a great increase in technical reliability, while human causes have not changed. It explains the designer’s natural reflex to minimize the role of the human in systems by increasing the level of automation. Aside from the technical difficulties surrounding complete automation, increasing automation levels is not actually that simple, in that it involves aspects other than ergonomic ones, i.e. contextual and organizational ones. This book attempts to show all the dimensions of this problem.
The level of automation determines the role and the involvement of human operators in guaranteeing these objectives: performance, safety and security. In highly automated systems, operators have migrated toward control rooms to carry out supervisory functions, i.e. monitoring and failure management: diagnosis, followed by repair, accommodation or reconfiguration of the automated system. Human tasks become decision-based at the expense of (reactive) action tasks, and their outcomes can be very important for the integrity of the system, but also for its safety. In these systems, operators are usually professionals, trained and supervised within an organization that is often hierarchical, where they must follow procedures to respond to known situations, whether normal or incidental. However, the main difficulties relate to unexpected and new situations, for which the operators are not prepared and where they must “invent” a solution. The designer’s dilemma can be summarized here as follows: on the one hand, he may be tempted to aid, or even limit, human activity in known situations so as to avoid possible mistakes; on the other hand, he can only rely on human inventiveness to deal with the unexpected. However, to try to understand these unexpected situations, the human operator needs information about how the system operates in known situations, the very information that is being taken away from him!
These problem-solving tasks are cognitive in nature, and the theories that support their modeling are mainly found in the vast spectrum of the cognitive sciences, which include artificial intelligence, cognitive psychology, sociology and ergonomics. The approaches are multi-disciplinary and participative: each discipline contributes to the model and to proposing solutions. This book develops these different multi-disciplinary approaches of analysis and modeling for the design of modern human–machine systems and attempts to answer the designer’s dilemma mentioned above.
In large transport systems (airplanes, high-speed trains, metros), the operators can still remain directly involved in the driving or piloting loop while also carrying out a supervisory role. The domain of automobile driving, however, is atypical. It is currently the object of considerable effort to increase its safety, but the problem is difficult because the population of car drivers has very heterogeneous capabilities, practice and training, and the organization is hardly controlled, except in an open-loop manner by traffic laws and, in a closed-loop manner, through occasional police checks. Its level of automation is low and efforts are focused on the automation of certain safety functions rather than on driving in general [INA 06]. Several chapters of this book focus on this field of research.
Organization itself plays an important role. From an informatics point of view, Guy Boy describes a human–machine system using a pyramid made up of five vertices and their relations, which he names AUTOS: A for artifact, U for user, T for task, O for organization and S for situation (see Figure I.1) [BOY 11]. Transposed onto dynamic systems, the artifact becomes the system and the user becomes the operator.
This figure therefore contains the human engineer’s well-known classic triangle O–S–T: the operator, trained on the system, carries out tasks according to the needs of the system by applying procedures (or by trying to innovate when faced with a new problem); these tasks must be supported by the ergonomic quality of the human–machine interaction, and of the interface in particular.
Figure I.1. Human–machine system environment (adapted from [BOY 11])
The fourth vertex, organization, introduces the level of automation, involving the role of the operator and the sharing of tasks (and functions) between humans within the control or supervision team, but also between humans and automatic systems (of control or of decision). Task sharing (or function sharing) between humans and machines gives humans a level of responsibility for managing performance and risks, and a level of authority that determines this responsibility. The socio-organizational context of the system must then make these two levels of authority and responsibility compatible: this is part of the designer’s dilemma mentioned above. To this end, human–human and human–machine task sharing cannot be static and fixed at design time, but must instead evolve dynamically according to criteria that integrate the performance of the global system and/or the human workload [MIL 88]. This can be taken even further by establishing cooperation between human and machine. These advanced aspects are covered in the last part of this book.
The fifth vertex concerns the situation of the task, which can introduce new constraints requiring an evolution of the human operator’s situation awareness to detect an unusual situation, an evolution of the decisions and of the competences being used, or even a dynamic evolution of the organization, as mentioned previously.
The connection of these five dimensions shows that the successful automation of a system goes well beyond the problem of making it automatic, and that it needs to be a part of a process of human-centered design of the human–machine system. Indeed, this approach is developed in this book, in three parts.
Part 1 is dedicated to the methods of human-centered design, from three points of view:
– Chapter 1, written by Patrick Millot, presents the models developed by human engineers and relies on functional models to explain human behavior in its environment. It looks at approaches for positioning levels of automation, notably through principles of task and/or function distribution between human and machine, and extends these to the sharing of authority and responsibility. To attempt to resolve the apparent ambiguity of the role of the operator, this chapter also introduces the mastery of operator situation awareness, which is widely studied today.
– Chapter 2 by Christine Chauvin and Jean-Michel Hoc develops models of cognitive psychology and proposes a methodology of design derived from the works of Rasmussen and Vicente called Cognitive Work Analysis [VIC 99].
– Chapter 3, written by Gilles Malaterre, Hélène Fontaine and Marine Millot, can be situated in the domain of automobile driving, which unfortunately is the victim of numerous real accidents. The approach the authors use is to analyze these cases to deduce the need for adjustments or assistance tools for the design of new vehicles and the improvement of infrastructure.
Part 2 develops the methods of evaluation of human–machine systems:
– Chapter 4, by Jean-Christophe Popieul, Pierre Loslever and Philippe Simon, evaluates the activity of the human operator at work using automatic classification methods to define different classes of behavior. The data come from sensors that give the parameters of the task and of its environment, but also from sensors placed on the human body, which record signals characteristic of the human state, such as the electroencephalogram (EEG), or characteristics of the person’s decision and action strategies, through eye movements. The methods are illustrated by experimental examples obtained in an automobile driving simulator during studies on the detection of hypo-vigilance.
– Chapter 5, written by Frédéric Vanderhaegen, Pietro Carlo Cacciabue and Peter Wieringa, presents human error analysis methods that are inspired by and adapted from technical reliability analysis methods and which in a sense form the dual approach of modeling methods based on “normal” human behavior. This chapter concludes with the results of the integration of such methods in the design process of human–machine systems.
Finally, Part 3 is dedicated to human–machine cooperation, through four mutually complementary chapters. We shall see that a cooperative agent possesses both a know-how and a so-called know-how-to-cooperate. The organization of the cooperative system is defined according to a structure in which the inputs and outputs of each of the agents are connected to their environment and to the system that they must control or manage. The functioning of cooperation relates to more functional aspects. Finally, operational aspects, such as the parameters called cooperation catalysts, also play a role:
– Chapter 6, by Jacky Montmain, contributes to the know-how of the cooperative agent. It develops the causal reasoning that enables human–machine cooperation by creating tools founded on artificial intelligence (AI) to support the control-room operator confronted with situations requiring complex decisions. The author starts from the observation that human reasoning is based neither on a mathematical model of the process nor on the detail of the numerical data presented, but on their symbolic interpretation, which is the key to the explanations that a support system should give. The principal quality expected of the models is no longer precision, but pertinence and compatibility between the representation in use and the cognitive modes of the operator. Examples from the supervision of a chemical process in a nuclear reprocessing plant illustrate these principles.
– Chapter 7, written by Jean-Michel Hoc, contributes to the functional aspects of cooperation. In particular, it presents models of cooperative activity and the concept of the COmmon Frame Of Reference (COFOR), and draws lessons for the design of cooperative human–machine systems. It then describes cooperative activities according to three levels of abstraction corresponding to three temporal horizons, deriving some implications for design: cooperation in action, where the agents manage the interferences between their goals; cooperation in planning, where the agents negotiate to come up with a common plan or to maintain a common frame of reference; and meta-cooperation, which establishes the knowledge structures of cooperation, such as models of partners or models of oneself.
– Chapter 8, by Serge Debernard, Bernard Riera and Thierry Poulain, describes the development of human–machine cooperation through the definition of cooperative structures and of the cooperative forms between human and machine, and the implications they have for human activities. They introduce the concept of the “common work space” (CWS), which is very important for encouraging cooperation between the agents. Two application processes with different levels of automation are detailed: the first, with a low level of automation, concerns air traffic control (ATC); the second, highly automated, concerns a nuclear waste reprocessing plant.
– Finally, Chapter 9, by Patrick Millot and Marie-Pierre Pacaux-Lemoine, widens the notion of the dynamic sharing of tasks or functions between human and machine toward human–machine cooperation by integrating two dimensions: the structural and organizational dimension, and the functional dimension linked to the know-how of the human and automated agents, but also (and especially) to their know-how-to-cooperate. The CWS is shown as a way to make the COFOR, which is mandatory for any cooperation, concrete. Three examples illustrate these ideas: human–machine cooperation in the cockpit of a fighter aircraft, cooperation between a human and a robot in a reconnaissance task, and human–machine cooperation in ATC. Finally, we show that, more than just being a useful tool facilitating cooperation, the CWS improves the situation awareness of the team. This is of major interest for keeping humans in the loop.
[AMA 05] Amalberti R., Auroy Y., Berwick D., et al., “Five system barriers to achieving ultrasafe health care”, Annals of Internal Medicine, vol. 142, no. 9, pp. 756–764, 2005.
[BOY 11] Boy G., “A human-centered design approach”, in Boy G. (ed.), The Handbook of Human-Machine Interaction: A Human-Centered Design Approach, Ashgate, Farnham, pp. 1–20, 2011.
[INA 06] Inagaki T., “Design of human-machine interactions in light of domain-dependence of human-centered automation”, Cognition, Technology and Work, vol. 8, no. 3, pp. 161–167, 2006.
[MAL 73] Malvache N., Analyse et identification des systèmes visuel et manuel en vision frontale et périphérique chez l’Homme, State Doctorate Thesis, University of Lille, April 1973.
[MIL 88] Millot P., Supervision des procédés automatisés et ergonomie, Hermès, Paris, 1988.
[MIL 03] Millot P., “Supervision et Coopération Homme-Machine: approche système” in Boy G., (ed.), Ingénierie Cognitive IHM et Cognition, Hermès, Lavoisier, Paris, Chapter 6, pp. 191–221, 2003.
[SCH 12] Schmitt K., “Automation’s influence on nuclear power plants: a look at the accidents and how automation played a role”, International Ergonomics Association World Conference, Recife, Brazil, February 2012.
[SHE 74] Sheridan T., Ferrell W., Man-Machine Systems, MIT Press, Cambridge, 1974.
[SHE 85] Sheridan T., “Forty-five years of man-machine systems: history and trends”, 2nd IFAC/IFIP/IFORS/IEA Conference Analysis, Design and Evaluation of Man-Machine Systems, Varese, Italy, September 1985.
[VIC 99] Vicente K.J., Cognitive Work Analysis: Toward Safe, Productive, and Healthy Computer-based Work, Erlbaum, Mahwah, 1999.
1 The word human is used here, without prejudice, as a synonym for a human being or a human operator. For this reason, the masculine form he is used throughout the text to avoid weighing down the syntax with the form he/she.
2 Apollo 13 (April 11, 1970, 13.13 CST – April 17, 1970) was a manned moon mission of the Apollo program that was cut short following the explosion of an oxygen tank in the Apollo service module during the flight to the Moon. As the vessel could not be turned around, the crew was forced to pursue their trajectory toward the Moon, and harness its gravitational pull during orbit so as to return to Earth. As the service module had become uninhabitable, the crew took refuge in the lunar module, Aquarius. Occupation of this module by the entire crew for an extended period of time had obviously not been anticipated. The astronauts and the control center on Earth had to find ways of recuperating energy, saving enough oxygen and getting rid of carbon dioxide. The crew eventually made it safely back to Earth. See http://fr.wikipedia.org/wiki/Apollo_13.
3 Beyond the terrible impact of these accidents, what matters most are the lessons learned, which can lead to increased safety levels.
Patrick Millot
The theme covered in this chapter is the design of dynamic systems, for production, transport or services, that integrate both human operators and decision or control algorithms. The main question when designing a human–machine system concerns the ways of integrating human operators into the system.
As mentioned in the Introduction, human-centered design of human–machine systems must take into account five dimensions and the relations between them: not only the operator, the system and the tasks to be carried out, but also the organization and the situation of the work. These five dimensions are tightly linked: the tasks differ depending on the type of system, particularly its level of automation, but also on the potential situation, the expected safety level and the organization of the agents in charge of operating it (operators and/or automatic control systems). The manner in which the tasks are carried out depends on the human operators themselves, who have different profiles depending on their training, aptitudes, etc.
In section 1.2, we cover the diversity of the tasks human operators are faced with, and the difficulties that they encounter in various situations. The models that explain the mechanisms of reasoning, error management and the maintenance of situation awareness (SA) are then explored. The creation of tools to support either action or decision in difficult situations leads to a modification of the level of automation and, as a result, of the global organization and of the sharing of tasks or functions between humans, or between humans and machines. The concepts of authority and responsibility are then introduced. All of these points are the topics of section 1.3. Section 1.4 draws these different concepts together into a design-evaluation method for human–machine systems.
First of all, we must make the distinction between the task, which corresponds to the work that is “to be done”, and the activity, which corresponds to the work carried out by a given operator, who has his own aptitudes and resources. Thus, to carry out the same task, the activity of operator 1 can be different from the activity of operator 2.
The tasks themselves depend on the system and the situation, as the latter can be either normal or abnormal, or even dangerous. The level of automation determines the level of human involvement in the interaction with the system: often in highly automated systems, humans rarely intervene during normal operation. However, they are often called upon during abnormal situations and for difficult tasks. The example of the supervision of nuclear power plants is given hereafter.
In systems with low levels of automation, such as the automobile, the operators are involved both in normal situations (driving on clear roads in normal weather) and in difficult situations such as during the sudden appearance of an object at night in snowy weather. The involvement of the driver is then different.
To be able to deal with the difficulty of a task, we can attempt to decompose it. For example, the task of driving an automobile can be functionally decomposed into three sub-tasks according to three objectives (see Figure 1.1):
– strategic, to determine the directions between the start point and the destination;
– tactical, to define the trajectory and the speed on the chosen road; and
– operational, to control the speed and the trajectory of the vehicle on the road.
Functionally, this is a hierarchical decomposition: the three sub-tasks performed by the driver have different temporal horizons, and the functions and resources necessary to execute each of them also differ. Assistance tools can be added that apply to specific sub-tasks: cruise control, the anti-lock braking system (ABS) and adaptive cruise control (ACC) apply to the operational sub-task, and GPS to the strategic sub-task. These additions are only assistance tools, i.e. they do not increase the level of automation, since the human operator remains the sole actor.
Figure 1.1. Diagram of the task of automobile driving according to three objectives
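To make this decomposition concrete, here is a minimal sketch in Python; the temporal horizons, goal wordings and tool-to-level mapping are illustrative assumptions of mine, not values taken from the book.

```python
# Illustrative sketch (not from the book): the three-level decomposition of the
# driving task and a hypothetical mapping of assistance tools to each sub-task.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubTask:
    level: str                 # "strategic", "tactical" or "operational"
    goal: str                  # objective pursued at this level
    horizon: str               # indicative temporal horizon (assumed)
    assistance: List[str] = field(default_factory=list)  # support tools only; the driver stays sole actor

driving_task = [
    SubTask("strategic",   "choose the route from start point to destination", "minutes to hours", ["GPS navigation"]),
    SubTask("tactical",    "define trajectory and speed on the chosen road",   "seconds to minutes"),
    SubTask("operational", "control speed and trajectory of the vehicle",      "sub-second to seconds",
            ["cruise control", "ABS", "ACC"]),
]

for st in driving_task:
    print(f"{st.level:>11}: {st.goal} "
          f"(horizon: {st.horizon}; assistance: {', '.join(st.assistance) or 'none'})")
```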
An increase in the level of automation could, however, be applied to one of the sub-tasks: for example, the ABV project (Automatisation à Basse Vitesse, or low-speed automation), which aims to make driving in peri-urban areas completely autonomous for speeds below 50 km/h [SEN 10], or the “Horse Mode” project, inspired by the horse metaphor, in which the horse can guide itself autonomously along a road while the rider deals with the tactical and strategic tasks. A related project is looking into sharing the tasks between the human pilot and the autopilot [FLE 12]; we will come back to it later on.
At the other end of the spectrum, a nuclear power plant is a highly automated system that is very large (around 5,000 instrumented variables), complex (many interconnections between the variables) and potentially very risky. The tasks of the operators in the control room have shifted from the direct command level to the supervision level (see Figure 1.2). These are therefore decision tasks: monitoring, i.e. fault detection; diagnosis, to determine the causes and the faulty elements involved; and decision-making, to define the solutions. The solutions can be of three types: a maintenance operation to replace or repair the faulty element; accommodation (adaptation) of the parameters to change the operating point; or, finally, reconfiguration of the objectives, for example to favor fallback objectives when the mission is abandoned or shortened. Planning consists, for example, of decomposing the solutions and organizing them hierarchically according to strategic, tactical or operational objectives like the ones shown above1.
Figure 1.2. Principles of supervision
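As an illustration of the decision types just listed, the following sketch shows a toy supervision step: detect a deviation, pick out the faulty variable and select one of the three decision types. The thresholds and the decision rule are invented for the example; they are not the book's procedure.

```python
# Illustrative sketch (assumed thresholds and decision rule, not the book's algorithm):
# monitoring, diagnosis, then one of the three decision types described above.
from enum import Enum, auto

class Decision(Enum):
    MAINTENANCE = auto()      # replace or repair the faulty element
    ACCOMMODATION = auto()    # adapt parameters to change the operating point
    RECONFIGURATION = auto()  # change objectives, e.g. fall back to a degraded mission

def supervise(measurements, thresholds):
    """Toy supervision step: detect faulty variables and pick a decision type."""
    faults = {name: value for name, value in measurements.items()
              if abs(value - thresholds[name][0]) > thresholds[name][1]}
    if not faults:
        return None, None  # normal operation: the operator rarely intervenes
    # Hypothetical decision rule, purely for illustration:
    variable = next(iter(faults))
    if len(faults) == 1:
        return variable, Decision.ACCOMMODATION
    elif len(faults) < 3:
        return variable, Decision.MAINTENANCE
    return variable, Decision.RECONFIGURATION

# Example: (nominal value, tolerance) for each instrumented variable
thresholds = {"pressure": (150.0, 10.0), "temperature": (300.0, 15.0)}
print(supervise({"pressure": 170.0, "temperature": 305.0}, thresholds))
```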
Highly automated systems are also characterized by differences in the difficulty of the tasks, i.e. in the difficulty of the problems to be solved during supervision, depending on whether the situation is normal or abnormal, for example during the supervision of critical systems where time pressure further increases stress: the nuclear industry, civil aviation, automatic metro systems. To deal with these difficulties, the operators require resources, in this case the knowledge needed to analyze and deal with the operation of the system in order to reach a diagnosis. This knowledge can then be used to write procedure guides or to build diagnostic support systems. A distinction can be made between the following:
– knowledge available to the designers: on the one hand, about the function of the components, and on the other hand, topological, i.e. related to the positioning and the interconnections between the components [CHI 93]; and
– knowledge acquired during usage by the supervision and/or maintenance teams during the resolution of the successive problems. These can be functional, i.e. related to the modes of functioning or of malfunctioning of these components, and behavioral or specific to a particular situation or context [JOU 01].
Figure 1.3. Knowledge requirements throughout the life cycle of a system
The difficulties of the tasks in the different situations of operation are modulated by the degree of maturity of the system (see Figure 1.3), which has an influence on the availability of this knowledge:
– The early period of youth is the one that requires the most adjustments and updates, and the one in which the system is most vulnerable to faults: the problem is that the operators have not yet accumulated enough operating experience (REX, retour d’expérience: experience feedback) to control and manage the system and deal effectively with its malfunctions. However, design knowledge of the process is available, although not always very explicit or well modeled, describing the structure and topology of the system, i.e. its components, their operating modes and the relations between them. This knowledge can form a strong basis for support during the operation phases of the process. In Chapter 6, Jacky Montmain describes a modeling method based on the causal relations between the variables. This is important for building diagnostic support systems based on a model of normal operation of the process when expert knowledge of possible malfunctions is not yet available [MAR 86].
– In the period of maturity, both operating knowledge and design knowledge are available; they are often transcribed in the form of operating and/or maintenance procedures. Moreover, the risks of fault due to youth imperfections are reduced, both for the system and for the operators.
– Finally, during the period of old age, the process presents an increasing number of faults due to wear of the components, but the operators have all the knowledge necessary to deal with this, or to apply a better maintenance policy.
Air traffic control is another example of a risky system with a low level of automation and a high number of variables, namely the airplanes and their flight information. Problems, called air conflicts, occur when two or more planes head toward each other and risk a collision. It is then up to air traffic controllers to detect these conflicts preventively and to resolve them before they take place, by ordering the pilot(s) to change their trajectory. Given the expertise of the controllers, the difficulty lies not so much in the complexity of the problems to be solved as in their sheer number, especially during periods of heavy traffic2. Thus, tens of minutes can pass between the moment a controller detects a conflict and the appropriate moment for transmitting the resolution order to the relevant pilot(s). The controller therefore risks forgetting the conflict and sending the order too late. Several practical cases in this book involve air traffic control.
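The sketch below illustrates one conventional way such a conflict could be anticipated, by computing the closest point of approach of two aircraft under straight-line, constant-speed flight. The positions, speeds and the 5 NM separation threshold are assumptions chosen for the example, not operational values or the controllers' actual method.

```python
# Illustrative sketch (a simplification of mine, not an operational ATC tool):
# predict a potential conflict via the closest point of approach (CPA).
import numpy as np

def closest_point_of_approach(p1, v1, p2, v2):
    """Return (time of CPA in hours, distance at CPA) for two aircraft with
    positions p (nautical miles) and velocities v (knots), rectilinear flight."""
    dp, dv = np.asarray(p2, float) - np.asarray(p1, float), np.asarray(v2, float) - np.asarray(v1, float)
    if np.dot(dv, dv) < 1e-9:                              # same velocity: separation is constant
        return 0.0, float(np.linalg.norm(dp))
    t_cpa = max(0.0, -np.dot(dp, dv) / np.dot(dv, dv))     # never in the past
    return t_cpa, float(np.linalg.norm(dp + dv * t_cpa))

t, d = closest_point_of_approach(p1=(0, 0), v1=(480, 0), p2=(60, 3), v2=(-480, 0))
if d < 5.0:   # hypothetical 5 NM separation minimum
    print(f"Conflict predicted in {t * 60:.1f} min, separation {d:.1f} NM")
```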
After this overview of the diversity of the tasks, depending on the types of systems and the situations, we now move on to look at the approaches to modeling the system itself, and the methods to be developed so as to attempt to make their understanding easier.
The large size of the technical system makes classical modeling and identification techniques extremely time consuming and leads to models that are not suited to real-time simulation. This has been the basis for work on hierarchical modeling in the systems movement led by Le Moigne [LEM 94], producing several methods of analysis and modeling, such as SAGACE [PEN 94]. SADT, which relies on a decomposition of the global system, follows the same idea. More recently, the multilevel flow modeling (MFM) method by Lind decomposes the system according to two axes: the means/ends axis and the whole/part axis (see Figure 1.4) [LIN 10, LIN 11a, LIN 11b, LIN 11c].
Figure 1.4. MFM multilevel decomposition of a large system according to Lind
According to the means/ends axis, there are four levels of model, from the most global (and least detailed) to the most fine-grained: the goals, the functions, the behaviors and the components. The models of one level are thus the ends of the models of the level below and the means of those of the level above. Note that control-theory models are found at the level of behaviors, and very technical models, for example electronic and mechanical models, at the level of components. These two levels belong to the engineering sciences. The two higher levels belong to the cognitive sciences and concern the nature and the realization of the more global functions (and their sequencing), ensured by the behaviors of the physical level. Among the possible modeling methods, we can cite qualitative models [GEN 04], Petri nets, etc. The decisions related to the implementation of the functions are often the result of optimization algorithms, or even of human expertise, which is therefore symbolic and can be expressed through rules. This starts to enter the domain of artificial intelligence (AI).
Decomposition according to the whole/part axis is the corollary of the decomposition imposed by the means/ends axis: the closer we are to the ends or goals, the more the system is considered in its entirety; the closer we are to the means, the more the model involves its different parts. This method of modeling sheds new light on the disciplines concerned and shows their complementarity. Most importantly, it shows that disciplines other than the physical sciences are involved in this vast issue. The modeling of the human operator has followed a similar evolution.
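As a rough illustration of the means/ends reading (a loose sketch, not Lind's MFM formalism), the four levels can be represented as an ordered list in which each level supplies the means for the level above; the example model names echo the text and are otherwise arbitrary.

```python
# Illustrative sketch (a loose reading of the means/ends axis, not the MFM notation):
# each entry names a level and the kinds of models found there.
levels = [
    {"level": "goals",      "example_models": ["keep production within safety limits"]},
    {"level": "functions",  "example_models": ["qualitative models", "Petri nets"]},
    {"level": "behaviors",  "example_models": ["control-theory models"]},
    {"level": "components", "example_models": ["electronic models", "mechanical models"]},
]

# Means/ends reading: each level is a means for the level above and an end for the level below.
for upper, lower in zip(levels, levels[1:]):
    print(f"'{lower['level']}' models are means for the '{upper['level']}' level "
          f"(e.g. {', '.join(lower['example_models'])})")
```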
A lot of multi-disciplinary research has been conducted on human factors since World War II, and a significant amount of methodological know-how has resulted. The goal here is not to produce a comprehensive review of it, but to provide the designer with well-established and understandable models that help produce constructive and accurate results, even if, in the eyes of the most meticulous specialist, they may appear incomplete or simplified. Three points appear most important, bearing in mind that in reality human behavior is far more complex; we will discuss other sociological aspects later in the chapter:
– the operator’s adaptive behavior to regulate his workload during the execution of a task;
– the reasoning and decision-making mechanisms that the operator uses during complex decision tasks, such as the supervision of large, risky automated systems (nuclear power plants, chemical plants, etc.), but also during reactive tasks (with a short response time) such as piloting an airplane or driving an automobile; and
– mechanisms of errors and suggestions of solutions to deal with them.
The system formed by the “human operator carrying out a task” can be considered a complex, adaptive system, made up of interconnected sub-systems, partially observable and partially controllable. A summary produced by Millot [MIL 88a] is presented in Figure 1.5. It brings together the inputs of the system, the disturbances and the internal state parameters affecting the outputs of the system. Human functioning is modeled by three regulation loops with three objectives: in the short term, the regulation of performance; in the medium term, the regulation of the workload caused by the task; and in the long term, the regulation of the global load, due to the global environment but also to the internal state of the operator.
Figure 1.5. Model of the regulation of human activity (from a detailed summary in [MIL 88])
The inputs are the demands of the task, i.e. the characteristics of the work to be accomplished, gathering the objectives to be reached. These are translated as timeframes to be respected and specific difficulties due to the task and/or the interface.
Nuisances that are not linked to the task can disturb the system. They are induced by the physical environment, and come in the form of vibrations, sound, light, heat, etc. Some of these disturbances increase the difficulties of the task, for example the reflection of lights on a screen, or vibrations of the work surface during manual control. One of the objectives of ergonomics is first to arrange the environment of workstations to reduce or even eliminate these nuisances.
The output is the performance obtained during the execution of the task. Observation of the performance is one of the possible methods for evaluating the ergonomic characteristics of a human–machine system. Obviously, one of the big methodological difficulties relates to the choice of performance evaluation criteria. Generally, performance is defined in terms of production indices, whether quantitative or qualitative, relating either directly to the procedures applied by the operator (response time, error rate, strategies, etc.) or to the output of the human–machine system, for example a product in the case of a production system. It also integrates criteria linked to safety and to security, particularly for critical systems [MIL 88]. Ten years later, ergonomists embraced this idea by underlining the necessity of taking the performance of the human–machine system into account as an evaluation criterion during design (see Chapter 2 by C. Chauvin and J.-M. Hoc).
To carry out the task, the operator chooses the operating modes, which, once applied, produce a certain performance. If the operator has some knowledge of his performance, he can refine it by modifying his operating modes. But the performance alone is not enough to characterize the state of mobilization of the operator induced by the task and thus to evaluate the difficulties really encountered during its execution. For this, ergonomists use a state variable, called workload, which corresponds to the fraction of work capacity that the operator invests in the task. Sperandio defines this as the “level of mental, sensorimotor and physiological activity required to carry out the task” [SPE 72].
The operator carrying out the task has a certain work capacity, which is limited, differs from one individual to another and is liable to vary with the individual’s state. The notion of a limited maximum work capacity corresponds, in the case of mental tasks, to the old notion of a channel with limited capacity. The processing and/or filtering of the disturbances uses up some of the work capacity, thus reducing the capacity available for the task.
Since the human operator is adaptable, he regulates his workload, within the limits of his available capacity, by modifying his operating modes so as to satisfy the task demands. The modification of the operating modes takes place through a different organization of work, in which the operator dynamically prioritizes the objectives assigned to the system. This behavior was demonstrated by Sperandio in a study on air traffic control tasks, in which the levels of task demand were defined by the number of airplanes to be handled simultaneously by the controller. When this number increases, the controller successively adopts strategies that are more economical with regard to his workload, by reducing the number of variables considered for each plane [SPE 78].
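The following sketch caricatures this regulation mechanism: as demand (number of aircraft) grows, a more economical strategy considering fewer variables per aircraft is selected. The strategy names, aircraft thresholds and "workload" proxy are invented for the example and are not Sperandio's data.

```python
# Illustrative sketch (hypothetical numbers): workload regulation by strategy switching.
STRATEGIES = [            # (name, max aircraft handled, variables considered per aircraft)
    ("full picture",     5, 6),
    ("reduced picture", 10, 3),
    ("minimal picture", 20, 1),
]

def choose_strategy(n_aircraft):
    """Pick the least economical strategy that still fits the current demand."""
    for name, max_planes, n_vars in STRATEGIES:
        if n_aircraft <= max_planes:
            return name, n_aircraft * n_vars   # crude proxy for the induced workload
    return "overload", None

for n in (3, 8, 15, 25):
    print(n, "aircraft ->", choose_strategy(n))
```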
The maximum work capacity depends on the state of the operator: physiological, psychological and cognitive. This state is influenced by the task carried out, notably by the performance obtained and the induced workload: a state of permanent stress can, for example, cause physiological problems in the operator (overworking, insomnia) as well as psychological ones (lack of motivation, depression, etc.).
This state is equally affected by other parameters linked to the individual himself, such as his physical and intellectual aptitudes, his lifestyle outside of work (quality of sleep, hobbies, trips, psychological issues, etc.), his training and motivation, these being both influenced by the individual himself and by the organization of work (circadian rhythms, psychosociological environment in the team or the company, level of automation, etc.).
The workload is therefore a variable that characterizes the state of the working operator, and its evaluation must make it possible to estimate the match between the tasks and human capabilities and resources. Many workload evaluation methods were studied in the 1980s [MIL 88a], based on the creation of an observer of the working operator, either physiological (electrocardiogram, electroencephalogram, pupil dilation) or psychological (the double-task method and questionnaire-based methods). Among the latter, two questionnaire methods were found to give the best results: the subjective workload assessment technique (SWAT) [RED 87] and the task load index (TLX) developed by NASA [HAR 88].
The first establishes a load index from the operator’s answers to a series of questions asked on-line, during the task, relating to three indicators: temporal demands, mental effort and stress. The questions are asked at a relatively low sampling frequency (a few minutes); the triplets of indicators obtained are accumulated to give the evolution of the load during the task.
The second gives a global index by administering the questionnaire after the end of the task, according to six indicators: temporal demands, mental demands, physical demands, satisfaction with regard to the performance, effort and frustration (stress). The indicators are combined into a weighted sum whose weights are also determined by the operator.
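The weighted-sum principle behind the TLX global index can be sketched as follows. The ratings and weights below are invented for the example; in the standard procedure the weights come from pairwise comparisons made by the operator (15 pairs in total), which is assumed here rather than taken from the chapter.

```python
# Illustrative sketch of the weighted-sum principle of the NASA-TLX global index
# (invented ratings and weights, not measured data).
ratings = {  # ratings from 0 (low) to 100 (high) on the six indicators
    "mental demand": 70, "physical demand": 20, "temporal demand": 80,
    "performance": 40, "effort": 65, "frustration": 55,
}
weights = {  # number of times each dimension was preferred in pairwise comparisons
    "mental demand": 4, "physical demand": 0, "temporal demand": 5,
    "performance": 2, "effort": 3, "frustration": 1,
}

total_weight = sum(weights.values())                       # 15 with the standard procedure
tlx = sum(ratings[d] * weights[d] for d in ratings) / total_weight
print(f"Weighted TLX index: {tlx:.1f} / 100")
```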
An extension targeting the real-time assessment of the load [MIL 88a] provided encouraging results in simulated car driving tasks [RIE 90]. This load indicator has also been used as a criterion for allocating tasks between human and machine in the supervision of a simulated continuous process [MIL 88b] and in an air traffic control simulator [CRE 93, MIL 93]. We will look at this again in Chapter 9, which focuses on the dynamic allocation of tasks between human and machine and, beyond that, on human–machine cooperation.
Today, these methods are enjoying renewed interest, because the ergonomic evaluation of human–machine systems requires measurements to compare several workstations, and they have become references, notably when testing new methods [MIT 05, PIC 10, RUB 04].
The human operator integrated in the control or supervision of a large system is no longer considered merely as reactive, but also as a problem-solver. A model is therefore needed to describe these different behaviors. The Rasmussen model [RAS 83, RAS 86] has been revolutionary in this respect; a more recent revision by Hoc [HOC 96] is presented in Figure 1.6.
The operator detects an abnormal event and evaluates the situation by observing the available information, by identifying the state of the system (diagnosis) or by anticipating its evolution (prognosis). He then elaborates a solution as a function of the constraints and the risks involved. This solution is planned into goals, sub-goals and implementation procedures, which make up the task to be conducted. If this task results in an action, the task is executed. This can be compared with the hierarchical decomposition into strategic, tactical and operational objectives mentioned earlier.
Figure 1.6. Problem-solving model (Rasmussen revised by Hoc and Reason)
Hoc’s revisions complete Rasmussen’s initial model, by detailing the cognitive mechanisms of the situation evaluation, inspired by Reason [REA 90]: diagnosis and/or prognosis by a method of hypothesis generation (data-driven reasoning), followed by tests of these hypotheses (goal-driven reasoning). It also introduces a temporal dimension (diagnosis: current state, prognosis: future state, expectations of the evolution of the system leading to a new evaluation of the situation).
The second strength of this model lies in the three levels of behavior it comprises:
– the lower level is reactive; the well-trained operator spontaneously carries out the appropriate action as soon as he detects abnormal conditions: this is called skill-based behavior and relates to control-theory models.
The two higher levels, on the other hand, are cognitive:
– Rule-based behavior, where the expert operator, after having identified the state of the system, directly applies a pre-defined task that he has learned: the corresponding models were very important in the knowledge-based AI systems in the 1980s and the 1990s; and
– Knowledge-based behavior, where the operator is faced with a problem he has never encountered before and must invent a solution.
We can therefore remark, just as in modeling the technical system, that modeling the human operator requires the cooperation of several sciences, notably the cognitive sciences: cognitive psychology to propose concepts that describe human reasoning, AI to put them into place, control theory to model the reactive behaviors, for example the guiding of a vehicle.
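A schematic way to picture these three levels (a sketch of mine, not Rasmussen's formal model) is a dispatcher that routes a detected situation to the skill-based, rule-based or knowledge-based level depending on what the operator has available; the example situations and responses are hypothetical.

```python
# Illustrative sketch: routing a situation to one of the three behavior levels.
def handle_situation(situation, trained_reflexes, known_rules):
    """situation: a hashable description of the detected abnormal condition."""
    if situation in trained_reflexes:
        return "skill-based", trained_reflexes[situation]   # immediate, reactive action
    if situation in known_rules:
        return "rule-based", known_rules[situation]         # identified state -> learned task
    return "knowledge-based", "invent a solution (diagnosis, prognosis, planning)"

reflexes = {"lane drift": "corrective steering"}
rules = {"high coolant temperature": "apply overheating procedure"}
for s in ("lane drift", "high coolant temperature", "unknown vibration"):
    print(s, "->", handle_situation(s, reflexes, rules))
```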
The human operator attempts to compensate for his errors, either in the short term, by correcting them as they occur, or in the longer term, by learning from them. Human error and human reliability have been studied over the last three decades. Rasmussen’s model presented above also served as a starting point for Reason in understanding the mechanisms of human error [REA 90] and in defining barriers to help prevent and/or deal with these errors. For example, an erroneous action can result from the incorrect application of a good decision, or from the correct application of an inappropriate decision. This erroneous decision can itself result from a bad solution, even one based on a correct evaluation of the situation.
Figure 1.7. Taxonomy of human errors according to Reason [REA 90]
Reason divides human error into two categories: non-intentional and intentional (see Figure 1.7). These are themselves sub-divided into slips and lapses for non-intentional actions, and into mistakes and violations for intentional actions or decisions. Violations fall into two further categories: those without malicious intent, for example breaking a procedure in order to prevent an accident, and those that are malicious (sabotage).
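Purely as an illustration, the taxonomy of Figure 1.7 can be transcribed as a small data structure; the Python sketch below adds nothing to Reason's categories, and the one-line glosses are only reminders of their usual meaning.

from enum import Enum

class Intent(Enum):
    NON_INTENTIONAL = "non-intentional"
    INTENTIONAL = "intentional"

# Reason's categories [REA 90]; the glosses are only reminders of their usual meaning.
ERROR_TAXONOMY = {
    "slip": Intent.NON_INTENTIONAL,       # action not carried out as planned
    "lapse": Intent.NON_INTENTIONAL,      # memory failure, e.g. an omitted step
    "mistake": Intent.INTENTIONAL,        # the plan or decision itself is inappropriate
    "violation": Intent.INTENTIONAL,      # deliberate deviation, with or without malice
}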
Rasmussen [RAS 97] proposes an explanation for the appearance of certain errors through the need for the operator to strike a compromise between three joint, and sometimes contradictory, objectives (see Figure 1.8):
– the performance objectives imposed either by the management of the company or by the operator himself, and which call for extra effort;
– the cognitive or physiological costs (workload, stress) incurred in achieving the above objectives, which the operator attempts to regulate;
– the efforts resulting from the precautions to be taken to uphold the safety of the system operation, of the environment and of the operators themselves.
Figure 1.8. The compromises that rule human action (adapted from [RAS 97])
If the pressure from management and the cognitive costs generated by the task increase, the operator will tend to “push back” the fixed safety limit, eventually crossing the “error margin” and even the ultimate limit, which can lead to a loss of control and, as a result, to an incident or an accident.
Several risk analysis methods have been defined to detect risky situations and propose ways of countering them [FAD 94, HOL 99, HOL 03, POL 03, VAN 03]. Barriers can then be designed to prevent these limits from being crossed [POL 02]. Technical, organizational or procedural defenses can aim to prevent or correct erroneous actions or decisions [ZHA 04].
Generally, risk management involves three complementary steps during the design of a system (a schematic sketch follows the list below):
– Prevention: this involves anticipating risky behavior at the design stage by putting technical and organizational defenses in place to avoid it (standards, procedures, maintenance policies, efficient supervision, etc.).
– Recovery: if prevention is not sufficient, the second step consists of attempting to detect these unexpected behaviors (through alarm systems, as in nuclear power plants, or through mutual control by a second operator, as in commercial aircraft) and to correct them; this can quite simply mean informing the operator of his error and allowing him to correct it himself.
– Management of consequences: if recovery through corrective actions is not effective, an accident can occur. The third step is therefore to anticipate the occurrence of the accident so as to minimize its negative consequences, for example through quick emergency response systems for road accidents, or the construction of a containment structure around a nuclear reactor to block any possible leak.
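As announced above, the following minimal Python sketch chains the three layers in order; the event names, defense sets and emergency plan are hypothetical and only serve to illustrate the prevention / recovery / consequence-management logic.

# Illustrative chaining of the three layers; all names and events are hypothetical.
def manage_risk(event, defenses, alarms, emergency_plan):
    if event in defenses:                 # 1. Prevention: a defense blocks the risky behavior
        return "prevented"
    if event in alarms:                   # 2. Recovery: detected (alarm, mutual control)
        return "recovered"                #    and corrected, e.g. by informing the operator
    return emergency_plan(event)          # 3. Management of consequences

result = manage_risk(
    event="unexpected_pressure_rise",
    defenses={"procedure_bypass"},
    alarms={"unexpected_pressure_rise"},
    emergency_plan=lambda e: "activate containment for " + e,
)
print(result)  # -> "recovered"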
Situation awareness (SA), according to Endsley, refers to the human capacity to develop an internal representation of the current situation and of the environment, and to predict the likely future states of this environment [END 95a].
Formally, SA is defined by three components: “the perception of the elements in time and space (SA1), the comprehension of their meaning (SA2) and the projection of their status into the near future (SA3)” (see Figure 1.9).
Figure 1.9. Situation awareness model in dynamic decision-making, according to [END 95a]
SA is therefore of major importance in the control and/or supervision of human–machine systems. Endsley has also proposed methods for measuring the three SA components [END 95b].
The best known is the situation awareness global assessment technique (SAGAT), but it only works in a simulated environment: the simulation is frozen at randomly chosen moments and the screens of the human–machine interface are blanked while the human operator is asked to quickly answer questions about his understanding of the current situation. The operator’s perception is then compared with the real situation on the basis of the information provided by a subject matter expert (SME), who answers the same questions but can see the screens. The method has been validated in an air combat simulation context, the temporary pausing of the simulation for five to six minutes apparently having no effect on the operator’s memory of the situation. The questions asked are very specific to the work situation, and the operator cannot answer them if he is not aware of the situation: for example, for SA1, “which routes present an urgent situation?” According to Jones and Endsley, this method constitutes an objective and “unbiased” evaluation of SA [JON 04], but it remains confined to simulated contexts.
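The scoring principle of SAGAT can be illustrated by comparing, query by query and per SA level, the operator's answers with those of the SME. The Python sketch below is only a schematic reading of the procedure described above; the question identifiers and answers are hypothetical.

# Schematic SAGAT-style scoring: operator answers (collected while the simulation
# is frozen and the screens blanked) are compared with the SME's answers.
def sagat_scores(operator_answers, sme_answers):
    """Return, per SA level, the proportion of answers matching the SME."""
    per_level = {1: [], 2: [], 3: []}
    for (level, question), truth in sme_answers.items():
        match = operator_answers.get((level, question)) == truth
        per_level[level].append(1.0 if match else 0.0)
    return {level: sum(v) / len(v) for level, v in per_level.items() if v}

sme = {(1, "urgent_routes"): {"R12"},
       (2, "conflict_severity"): "high",
       (3, "minutes_to_next_conflict"): 4}
operator = {(1, "urgent_routes"): {"R12"},
            (2, "conflict_severity"): "medium",
            (3, "minutes_to_next_conflict"): 4}
print(sagat_scores(operator, sme))  # -> {1: 1.0, 2: 0.0, 3: 1.0}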
Another well-known method is the situation awareness rating technique (SART), for use in real task situations [TAY 90]. It consists of a 10-item scale on which the operator indicates, once the task is finished, the amount of SA he had during the task. The 10 items are then combined into a measurement of the main factors: the supply of attentional resources, the demand on attentional resources and understanding. This method is criticized by [JON 04] because it is administered at the end of the task, which can introduce bias due to changes in the operator’s memory. In our opinion, this criticism is valid, and situation awareness should be estimated during the execution of the task. However, it is true that SAGAT is hard to put into practice. In its place, Jones and Endsley have proposed and tested the real-time probes method, a variant of the situation present assessment method (SPAM) of Durso [DUR 98], which consists of periodically asking well-targeted questions related to the three levels of SA (without blanking the screens), while also measuring the delay between question and answer as an additional indicator of the quality of SA. This method has been partially validated experimentally against SAGAT.
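The real-time probes principle, querying during the task and logging the response latency as an additional indicator, can likewise be sketched in a few lines of Python; the probe wording and the way the answer is collected are hypothetical.

import time

def ask_probe(level, question, get_answer):
    """Ask one probe during the task and return (SA level, answer, latency in seconds)."""
    t_asked = time.monotonic()
    answer = get_answer(question)          # spoken or typed reply in a real study
    latency = time.monotonic() - t_asked   # a longer latency suggests degraded SA
    return level, answer, latency

# Toy usage: the "operator" is simulated by a function that answers instantly.
print(ask_probe(2, "What is the severity of the current conflict?", lambda q: "high"))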
Other methods can be cited that rely on different theoretical bases, such as Neisser’s theory of perception [NEI 76], or on very empirical principles using metrics for each of the operator’s tasks, as in the man–machine integration design and analysis system (MIDAS) simulator used in the NextGen project studying “new generation” American air traffic control. However, Endsley’s definition based on three levels of SA remains the most commonly adopted because it is the easiest to understand [SAL 08].
It also serves as a basis for extending these studies to the evaluation of SA in teams or during cooperative work [END 00, SAL 08], and to its interrelation with the level of automation of the human–machine system. We will return to this in Chapter 9.
