Modeling Human–System Interaction

Thomas B. Sheridan

Description

This book presents theories and models to examine how humans interact with complex automated systems, including both empirical and theoretical methods.

  • Provides examples of models appropriate to the four stages of human-system interaction
  • Examines in detail the philosophical underpinnings and assumptions of modeling
  • Discusses how a model fits into "doing science" and the considerations in garnering evidence and arriving at beliefs for the modeled phenomena

Modeling Human-System Interaction is a reference for professionals in industry, academia, and government who are researching, designing, and implementing human-technology systems in the transportation, communication, manufacturing, energy, and health care sectors.


Page count: 281

Year of publication: 2016




Table of Contents

COVER

TITLE PAGE

PREFACE

INTRODUCTION

1 KNOWLEDGE

GAINING NEW KNOWLEDGE

SCIENTIFIC METHOD: WHAT IS IT?

FURTHER OBSERVATIONS ON THE SCIENTIFIC METHOD

REASONING LOGICALLY

PUBLIC (OBJECTIVE) AND PRIVATE (SUBJECTIVE) KNOWLEDGE

THE ROLE OF DOUBT IN DOING SCIENCE

EVIDENCE: ITS USE AND AVOIDANCE

METAPHYSICS AND ITS RELATION TO SCIENCE

OBJECTIVITY, ADVOCACY, AND BIAS

ANALOGY AND METAPHOR

2 WHAT IS A MODEL?

DEFINING “MODEL”

MODEL ATTRIBUTES: A NEW TAXONOMY

EXAMPLES OF MODELS IN TERMS OF THE ATTRIBUTES

WHY MAKE THE EFFORT TO MODEL?

ATTRIBUTE CONSIDERATIONS IN MAKING MODELS USEFUL

SOCIAL CHOICE

WHAT MODELS ARE NOT

3 IMPORTANT DISTINCTIONS IN MODELING

OBJECTIVE AND SUBJECTIVE MODELS

SIMPLE AND COMPLEX MODELS

DESCRIPTIVE AND PRESCRIPTIVE (NORMATIVE) MODELS

STATIC AND DYNAMIC MODELS

DETERMINISTIC AND PROBABILISTIC MODELS

HIERARCHY OF ABSTRACTION

SOME PHILOSOPHICAL PERSPECTIVES

4 FORMS OF REPRESENTATION

VERBAL MODELS

GRAPHS

MAPS

SCHEMATIC DIAGRAMS

LOGIC DIAGRAMS

CRISP VERSUS FUZZY LOGIC (SEE ALSO APPENDIX, SECTION “MATHEMATICS OF FUZZY LOGIC”)

SYMBOLIC STATEMENTS AND STATISTICAL INFERENCE (SEE ALSO APPENDIX, SECTION “MATHEMATICS OF STATISTICAL INFERENCE FROM EVIDENCE”)

5 ACQUIRING INFORMATION

INFORMATION COMMUNICATION (SEE ALSO APPENDIX, SECTION “MATHEMATICS OF INFORMATION COMMUNICATION”)

INFORMATION VALUE (SEE ALSO APPENDIX, SECTION “MATHEMATICS OF INFORMATION VALUE”)

LOGARITHMIC‐LIKE PSYCHOPHYSICAL SCALES

PERCEPTION PROCESS (SEE ALSO APPENDIX, SECTION “MATHEMATICS OF THE BRUNSWIK/KIRLIK PERCEPTION MODEL”)

ATTENTION

VISUAL SAMPLING (SEE ALSO APPENDIX, SECTION “MATHEMATICS OF HOW OFTEN TO SAMPLE”)

SIGNAL DETECTION (SEE ALSO APPENDIX, SECTION “MATHEMATICS OF SIGNAL DETECTION”)

SITUATION AWARENESS

MENTAL WORKLOAD (SEE ALSO APPENDIX, SECTION “RESEARCH QUESTIONS CONCERNING MENTAL WORKLOAD”)

EXPERIENCING WHAT IS VIRTUAL: NEW DEMANDS FOR HUMAN–SYSTEM MODELING (SEE ALSO APPENDIX, SECTION “BEHAVIOR RESEARCH ISSUES IN VIRTUAL REALITY”)

6 ANALYZING THE INFORMATION

TASK ANALYSIS

JUDGMENT CALIBRATION

VALUATION/UTILITY (SEE ALSO APPENDIX, SECTION “MATHEMATICS OF HUMAN JUDGMENT OF UTILITY”)

RISK AND RESILIENCE

TRUST

7 DECIDING ON ACTION

WHAT IS ACHIEVABLE

DECISION UNDER CONDITION OF CERTAINTY (SEE ALSO APPENDIX, SECTION “MATHEMATICS OF DECISIONS UNDER CERTAINTY”)

DECISION UNDER CONDITION OF UNCERTAINTY (SEE ALSO APPENDIX, SECTION “MATHEMATICS OF DECISIONS UNDER UNCERTAINTY”)

COMPETITIVE DECISIONS: GAME MODELS (SEE ALSO APPENDIX “MATHEMATICS OF GAME MODELS”)

ORDER OF SUBTASK EXECUTION

8 IMPLEMENTING AND EVALUATING THE ACTION

TIME TO MAKE A SELECTION

TIME TO MAKE AN ACCURATE MOVEMENT

CONTINUOUS FEEDBACK CONTROL (SEE ALSO APPENDIX, SECTION “MATHEMATICS OF CONTINUOUS FEEDBACK CONTROL”)

LOOKING AHEAD (PREVIEW CONTROL) (SEE ALSO APPENDIX, SECTION “MATHEMATICS OF PREVIEW CONTROL”)

DELAYED FEEDBACK

CONTROL BY CONTINUOUSLY UPDATING AN INTERNAL MODEL (SEE ALSO APPENDIX, SECTION “STEPPING THROUGH THE KALMAN FILTER SYSTEM”)

EXPECTATION OF TEAM RESPONSE TIME

HUMAN ERROR

9 HUMAN–AUTOMATION INTERACTION

HUMAN–AUTOMATION ALLOCATION

SUPERVISORY CONTROL

TRADING AND SHARING

ADAPTIVE/ADAPTABLE CONTROL

MODEL‐BASED FAILURE DETECTION

10 MENTAL MODELS

WHAT IS A MENTAL MODEL?

BACKGROUND OF RESEARCH ON MENTAL MODELS

ACT‐R

LATTICE CHARACTERIZATION OF A MENTAL MODEL

NEURONAL PACKET NETWORK AS A MODEL OF UNDERSTANDING

MODELING OF AIRCRAFT PILOT DECISION‐MAKING UNDER TIME STRESS

MUTUAL COMPATIBILITY OF MENTAL, DISPLAY, CONTROL, AND COMPUTER MODELS

11 CAN COGNITIVE ENGINEERING MODELING CONTRIBUTE TO MODELING LARGE‐SCALE SOCIO‐TECHNICAL SYSTEMS?

BASIC QUESTIONS

WHAT LARGE‐SCALE SOCIAL SYSTEMS ARE WE TALKING ABOUT?

WHAT MODELS?

POTENTIAL OF FEEDBACK CONTROL MODELING OF LARGE‐SCALE SOCIETAL SYSTEMS

THE STAMP MODEL FOR ASSESSING ERRORS IN LARGE‐SCALE SYSTEMS

PAST WORLD MODELING EFFORTS

TOWARD BROADER PARTICIPATION

APPENDIX

MATHEMATICS OF FUZZY LOGIC (CHAPTER 4, SECTION “CRISP VERSUS FUZZY LOGIC”)

MATHEMATICS OF STATISTICAL INFERENCE FROM EVIDENCE (CHAPTER 4, SECTION “SYMBOLIC STATEMENTS AND STATISTICAL INFERENCE”)

MATHEMATICS OF INFORMATION COMMUNICATION (CHAPTER 5, SECTION “INFORMATION COMMUNICATION”)

MATHEMATICS OF INFORMATION VALUE (CHAPTER 5, SECTION “INFORMATION VALUE”)

MATHEMATICS OF THE BRUNSWIK/KIRLIK PERCEPTION MODEL (CHAPTER 5, SECTION “PERCEPTION PROCESS”)

MATHEMATICS OF HOW OFTEN TO SAMPLE (CHAPTER 5, SECTION “VISUAL SAMPLING”)

MATHEMATICS OF SIGNAL DETECTION (CHAPTER 5, SECTION “SIGNAL DETECTION”)

RESEARCH QUESTIONS CONCERNING MENTAL WORKLOAD (CHAPTER 5, SECTION “MENTAL WORKLOAD”)

BEHAVIOR RESEARCH ISSUES IN VIRTUAL REALITY (CHAPTER 5, SECTION “EXPERIENCING WHAT IS VIRTUAL; NEW DEMANDS FOR MODELING”)

MATHEMATICS OF HUMAN JUDGMENT OF UTILITY (CHAPTER 6, SECTION “VALUATION/UTILITY”)

MATHEMATICS OF DECISIONS UNDER CERTAINTY (CHAPTER 7, SECTION “DECISION UNDER CONDITION OF CERTAINTY”)

MATHEMATICS OF DECISIONS UNDER UNCERTAINTY (CHAPTER 7, SECTION “DECISION UNDER CONDITION OF UNCERTAINTY”)

MATHEMATICS OF GAME MODELS (CHAPTER 7, SECTION “COMPETITIVE DECISIONS: GAME MODELS”)

MATHEMATICS OF CONTINUOUS FEEDBACK CONTROL (CHAPTER 8, SECTION “CONTINUOUS FEEDBACK CONTROL”)

MATHEMATICS OF PREVIEW CONTROL (CHAPTER 8, SECTION “LOOKING AHEAD (PREVIEW CONTROL)”)

STEPPING THROUGH THE KALMAN FILTER SYSTEM (CHAPTER 8, SECTION “CONTROL BY CONTINUOUSLY UPDATING AN INTERNAL MODEL”)

REFERENCES

INDEX

END USER LICENSE AGREEMENT

List of Tables

Chapter 02

TABLE 2.1 A taxonomy of model attributes

Chapter 09

TABLE 9.1 Fitts’ list

TABLE 9.2 The original levels of automation scale

List of Illustrations

Chapter 04

FIGURE 4.1 Trends in telephone company data (hypothetical).

FIGURE 4.2 Gaussian probability density function.

FIGURE 4.3 Hypothetical supply–demand curves.

FIGURE 4.4 Map of the United States.

FIGURE 4.5 Rasmussen’s schematic diagram depicting levels of behavior.

FIGURE 4.6 Wickens’ (1984) model of human multiple resources (modified by author).

FIGURE 4.7 Forward chaining tree.

FIGURE 4.8 Backward chaining tree, where AND indicates necessity and OR indicates sufficiency.

FIGURE 4.9 Kanizsa square illusion.

Chapter 05

FIGURE 5.1 The complexity of communication with a person or a machine.

FIGURE 5.2 Interpretation of Brunswik lens model (after a diagram by Kirlik, 2006).

FIGURE 5.3 Wickens’ SEEV model of attention.

FIGURE 5.4 Senders’ model: sampling matches the Nyquist criterion.

FIGURE 5.5 Properties of mental workload (effects of very low workload not shown).

FIGURE 5.6 Regions of workload accommodation.

FIGURE 5.7 Two images of a video showing superposition of computerized truck images on actual driver view in a test drive on a country road. White objects on trees along the roadway are fiduciary markers to enable continuous geometric correspondence of the AR image to the real world.

FIGURE 5.8 Variables contributing to “presence” in VR.

FIGURE 5.9 Relationship of VR created by computer and telepresence resulting from high‐quality sensing and display of events at an actual remote location. The dashed line around the remote manipulator arm suggests that the remote arm can be either real or virtual, and that if the visual and/or tactile feedback are good enough, there will be no difference in the human operator’s perception (mental model, shown in the cloud) of the (real or virtual) reality.

Chapter 06

FIGURE 6.1 A hypothetical form for performing a task analysis.

FIGURE 6.2 An example of calibration for a three‐dimensional problem space.

FIGURE 6.3 Stress–strain analogy to resilience.

FIGURE 6.4 Variables affecting trust (after Lee and See, 2004).

Chapter 07

FIGURE 7.1 Example of determining the space of what is achievable within the space defined by what is aspired to and what is acceptable (in a simple two‐dimensional problem space).

FIGURE 7.2 Tulga’s task for deciding where to attend and act.

Chapter 08

FIGURE 8.1 Fitts’ index of difficulty test.

FIGURE 8.2 Classical feedback control system.

FIGURE 8.3 Ferrell (1965) results for time to make accurate positioning movements with delayed feedback.

FIGURE 8.4 Response times of nuclear plant operator teams to properly respond to a major accident alarm. For the particular mathematical function used (log normal), using specialized graph paper (logarithm of response time on the y‐axis, Gaussian percentiles on the x‐axis) reduces that function to a straight line. The 95th percentile mark is seen to be roughly 100 s.

FIGURE 8.5 Reason’s taxonomy of human error.

FIGURE 8.6 Capture error.

FIGURE 8.7 The Swiss Cheese model of accident occurrence as a result of penetrating multiple barriers. After Reason (1991).

Chapter 09

FIGURE 9.1 Four stages of human operator activity.

FIGURE 9.2 Supervisory control, as originally proposed for lunar rover operations (Ferrell and Sheridan, 1967).

FIGURE 9.3 Functions of the supervisor in relation to elements of the local human‐interactive computer (Figure 9.2) and multiple remote task‐interactive computers.

FIGURE 9.4 Supervisory control in relation to degree of automation and task entropy.

FIGURE 9.5 Distinctions within and between trading and sharing control.

FIGURE 9.6 Adaptable control (from Sheridan, 2011).

FIGURE 9.7 Model‐based failure detection.

Chapter 10

FIGURE 10.1 The ACT‐R cognitive architecture (after Byrne et al., 2008).

FIGURE 10.2 An example of Moray’s 1990 lattice model of the operation of a pump: (a) causality relations and (b) purpose relations.

FIGURE 10.3 Formation of neuronal packets in Yufik’s model of understanding.

FIGURE 10.4 Multiple model representations in teleoperation.

Chapter 11

FIGURE 11.1 The Leveson STAMP model.

FIGURE 11.2 An example of system dynamics.

FIGURE 11.3 Relationships in a policy flight simulator.

Appendix

FIGURE A.1 Hypothetical fuzzy membership functions for basketball players.

FIGURE A.2 Information relationships.

FIGURE A.3 How often to sample.

FIGURE A.4 Payoff matrix for signal detection.

FIGURE A.5 Probability densities for evidence in signal detection.

FIGURE A.6 Receiver operating characteristic (ROC).

FIGURE A.7 The definition and experimental elicitation of a person’s utility function.

FIGURE A.8 Pareto frontier and utility curve intersection determine optimal choice.

FIGURE A.9 Sample payoff matrix for decisions under probabilistic contingencies.

FIGURE A.10 Dominating and nondominating strategies (at left) and prisoner’s dilemma (right).

FIGURE A.11 Dynamic programming model of preview control.

FIGURE A.12 Kalman model of control.

Guide

Cover

Table of Contents

Begin Reading


STEVENS INSTITUTE SERIES ON COMPLEX SYSTEMS AND ENTERPRISES

William B. Rouse, Series Editor

WILLIAM B. ROUSE, Modeling and Visualization of Complex Systems and Enterprises

 

ELISABETH PATE‐CORNELL, WILLIAM B. ROUSE, AND CHARLES M. VEST, Perspectives on Complex Global Challenges: Education, Energy, Healthcare, Security, and Resilience

 

WILLIAM B. ROUSE, Universities as Complex Enterprises: How Academia Works, Why It Works These Ways, and Where the University Enterprise Is Headed

 

THOMAS B. SHERIDAN, Modeling Human–System Interaction: Philosophical and Methodological Considerations, with Examples

MODELING HUMAN–SYSTEM INTERACTION

Philosophical and Methodological Considerations, with Examples

 

THOMAS B. SHERIDAN

 

 

 

 

 

 

 

 

Copyright © 2017 by John Wiley & Sons, Inc. All rights reserved

Published by John Wiley & Sons, Inc., Hoboken, New JerseyPublished simultaneously in Canada

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per‐copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750‐8400, fax (978) 750‐4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748‐6011, fax (201) 748‐6008, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762‐2974, outside the United States at (317) 572‐3993 or fax (317) 572‐4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging‐in‐Publication Data:

Names: Sheridan, Thomas B., author.
Title: Modeling human-system interaction : philosophical and methodological considerations, with examples / Thomas B. Sheridan.
Description: Hoboken, New Jersey : John Wiley & Sons, [2017] | Series: Stevens Institute series on complex systems and enterprises | Includes bibliographical references and index.
Identifiers: LCCN 2016038455 (print) | LCCN 2016051718 (ebook) | ISBN 9781119275268 (cloth) | ISBN 9781119275299 (pdf) | ISBN 9781119275282 (epub)
Subjects: LCSH: Human-computer interaction. | User-centered system design.
Classification: LCC QA76.9.H85 S515 2017 (print) | LCC QA76.9.H85 (ebook) | DDC 004.01/9–dc23
LC record available at https://lccn.loc.gov/2016038455

Cover Image: Andrey Prokhorov/Gettyimages

PREFACE

This book has evolved from a professional lifetime of thinking about models and, more generally, thinking about thinking. I have previously written seven books over a span of 42 years, and they have all talked about models, except for one privately published as a memoir for my family. One even dealt with the concept of God and whether God is amenable to modeling (mostly no). So what is new or different in the present book?

The book includes quite a bit of the philosophy of science and the scientific method as a precursor to discussing human–system models. Many aspects of modeling are discussed: the purpose and uses of models for doing science and thinking about the world, and examples of different kinds of models in what has come to be called human–system interaction or cognitive engineering. Along with new material, the book also includes many modeling ideas previously discussed by the author. When not otherwise cited, illustrations were drawn by the author for the book, were original works under the author’s copyright, or were previously declared by the author to be in the public domain prior to publication.

I gratefully acknowledge contributions to these ideas from many colleagues I have worked with, especially Neville Moray, who has been my friend and invaluable critic over the years, and Bill Rouse, who shepherded the book as Wiley series editor. Modeling contributions of past coauthors Russ Ferrell, Bill Verplank, Gunnar Johannsen, Toshi Inagaki, Raja Parasuraman, Chris Wickens, Peter Hancock, Joachim Meyer, and many other colleagues and former graduate students are gratefully acknowledged.

Finally, I dedicate this effort to Rachel Sheridan, my inspiration and life partner for 63 years.

INTRODUCTION

This is a book about models, scientific models, of the interaction of individual people with technical environments, which has come to be called human–system interaction or cognitive engineering. The latter term emphasizes the role of the human intelligence in perceiving, analyzing, deciding, and acting rather than the biomechanical or energetic interactions with the physical environment.

Alphonse Chapanis (1917–2002) is widely considered to be one of the founders of the field of human factors, cognitive engineering, or whatever term one wishes to use. He coauthored one of the (if not THE) first textbooks in the field (Chapanis et al., 1949). I had the pleasure of working with him on the original National Research Council Committee in our field (nowadays called Board on Human Systems Integration, originally chaired by Richard Pew). I recall that Chapanis, while a psychologist by training, repeatedly emphasized the point that our field is ultimately applied to designing technology to serve human needs; in other words it is about engineering. Models are inherent to doing engineering.

More generally, models are the summaries of ideas we hang on to in order to think, communicate to others, and refine in order to make progress in the world. They are cognitive handles. Models come in two varieties: (1) those couched in language we call connotative (metaphor, myth, and other linguistic forms intended to motivate a person to make his or her own interpretation of meaning based on life experience) and (2) those couched in language we call denotative (where forms of language are explicitly selected to minimize the variability of meaning across peoples and cultures). Concise and explicit verbal statements, graphs, and mathematics are examples of denotative language. There is no doubt that connotative language plays a huge role in life, but science depends on denotative expression and models couched in denotative language, so that we can agree on what we're talking about.

The book focuses on the interaction between humans and systems in the human environment of physical things and other people. The models that are discussed are representations of events that are observable and measurable. In experiments, these necessarily include the causative factors (inputs, independent variables), the properties of the human operator (experimental subject), the assigned task, and the task environment. They also include the effects (outputs, dependent variables), the measures of human response correlated to the inputs.

Chapters 1–3 of the book are philosophical and apply to science and scientific models quite generally, models in human–system interaction being no exception. Chapter 1 begins with a discussion of what knowledge is and what the scientific method is, including the philosophical distinction between private (subjective) knowledge and public (objective) knowledge, the importance of doubt, using and avoiding evidence, objectivity and advocacy, bias, analogy, and metaphor.

Chapter 2 defines the meaning of “model” and offers a six‐factor taxonomy of model attributes. It poses the question of what is to be gained by modeling and the issue of social (group) behavior (choice) as related to that of the individual.

Chapter 3 discusses various distinctions in modeling: those between objective and subjective data, simple and complex models, descriptive and prescriptive models, static and dynamic models, deterministic and probabilistic models, the level of abstraction relative to the target things or events being characterized, and so on.

Chapter 4 describes various forms of model representation and provides examples: modeling in words, graphs, maps, diagrams, logic diagrams, symbols (mathematical equations), and statistics.

Chapters 5–8 offer specific examples of cognitive engineering models applicable to humans interacting with systems. In each case, a brief discussion is provided to explain the gist of the model. These chapters are in the sequence popularly ascribed to what are known as the four sequential stages of how humans do tasks (and the chapters are so titled): acquiring information, analyzing the information, deciding on action, and implementing and evaluating the action. There is no intention to be comprehensive in the selection of these models. Many of the models discussed are those that I have had a hand in developing, so that admittedly there is a bias. I have tried to include other authors’ models so as to provide a variety and to populate the four model categories representing the accepted taxonomy of four sequential stages for a human operator doing a given task.

Chapter 9 deals with the many aspects of human–automation interaction, wherein some of the functions of information acquisition, analysis, decision‐making, and action implementation are performed by machine, and the human is likely to be operating as a supervisor of the machine (computer) rather than being the only intelligence in the system.

Chapter 10 takes up the issue of mental modeling: representing what a person is thinking, which is not directly observable and measurable, and so must be inferred from what a subject says or does in a contrived experiment. A mental model is surely not a scientific model in the same sense as those covered in previous chapters, yet the cognitive scientists working on mental modeling would surely claim that they are doing science by inferential measurement of what is in the head. The chapter provides four very different and contrasting types of mental model.

Chapter 11 deals with modeling of large‐scale societal issues including health care, climate change, transportation, jobs and automation, wealth inequity, privacy and security, population growth, and governance. Such issues and the associated macromodels involve a large number of people. The book cannot possibly cope with reviewing the many such models that exist. Instead, the purpose of the chapter is to ask whether and how our cognitive engineering micro‐models relate to the societal macro‐models. Is there a connection that can be useful? Do or can the human performance models scale up to predict how larger groups of people, for example, at the family, tribe, region, nation, or world, interact with their technology and physical environments? Is our modeling community remiss in not contributing more to these larger‐scale modeling efforts? Are there some specific human factors or cognitive engineering aspects that we can help with?

Finally, the Appendix gives the mathematical particulars of selected models for which there is a sound mathematical basis. Thus the reader of the main text can skip the mathematical details or refer to them as convenient. However, I add the caveat that insofar as quantification is appropriate and warranted based on relevant empirical data, such quantitative models are predictive and thus more useful for engineering purposes.

I must reiterate that the models I have selected are necessarily ones that I am familiar with. I make no pretense that the selection is even handed, and it certainly is not comprehensive. The reader may feel that some models are dated and no longer in fashion (yes models can have runs of fashion), though I would maintain that all those included have passed the test of time and continue to have relevance.

As noted before, the book focuses on what has come to be called human–system interaction or cognitive engineering models. It does not include biomechanical models or the kinematic and energetic aspects of humans performing tasks, a subfield commonly associated with “ergonomics.” This is surely a shortcoming for a book purporting to be about “human–system interaction.” I would simply assert that for most of the tasks implied by the example models, the constraints of biomechanics are not major factors in task performance. Further, I have barely mentioned artificial intelligence, robotics, computer science, control theory, or other engineering theories that are emerging as companion fields to human–automation interaction.

1 KNOWLEDGE

GAINING NEW KNOWLEDGE

Knowledge can be acquired by humans in many ways. Surely, there are also many ways to classify the means to acquire knowledge. Here are just a few ways.

One’s brain can acquire knowledge during the evolutionary process by successive modifications to the genes. That finally results in fertilization of egg by sperm and the gestation process in the mother. Certainly, all this depends on the sensory–motor “operating system” software that makes the sense organs and muscles work together. But evolution also operates at the level of higher cognitive function. As Noam Chomsky has shown us (Chomsky, 1957), much of the syntactic structure of grammar is evidently built in at birth. What knowledge we acquire after birth is a function of what we attend to, and what we attend to is a function of our motivation for allocating our attention, which ultimately is a function of what we know, so knowledge acquisition after birth is a causal circle.

Learning has to do with how we respond to the stimuli we observe. Perhaps, the oldest theory of learning is the process of Pavlovian (classical) conditioning, where a stimulus, originally neutral in its effect, becomes a signal that an inherently significant (reward or punishment) unconditioned stimulus is about to occur. This results only after multiple pairings, and the brain somehow remembers the association. The originally neutral stimulus becomes conditioned, meaning that the person (or animal) responds reflexively to the conditioned stimulus the same as the person would respond to the unconditioned stimulus (e.g., the dog salivates with the light or bell).

A different kind of learning is Skinnerian or operant conditioning (Skinner, 1938). This is where a voluntary random action (called a free operant) is rewarded (reinforced), that association is remembered, and after sufficient repetitions, the voluntary actions occur more often (if previously rewarded). Operant learning can be maintained even when rewards are infrequently paired with the conditioned action.

There are many classifications of learning (http://www.washington.edu/doit/types‐learning). Bloom et al. (1956) developed a classification scheme for types of learning which includes three overlapping domains: cognitive, psychomotor, and affective. Skills in the cognitive domain include knowledge (remembering information), comprehension (explaining the meaning of information), application (using abstractions in concrete situations), analysis (breaking down a whole into component parts), and synthesis (putting parts together to form a new and integrated whole).

Gardner (2011) developed a theory of multiple intelligences based upon research in the biological sciences, logistical analysis, and psychology. He breaks down knowledge into seven types: logical–mathematical intelligence (the ability to detect patterns, think logically, reason and analyze, and compute mathematical equations), linguistic intelligence (the mastery of oral and written language in self‐expression and memory), spatial intelligence (the ability to recognize and manipulate patterns in spatial relationships), musical intelligence (the ability to recognize and compose music), kinesthetic intelligence (the ability to use the body or parts of the body to create products or solve problems), interpersonal intelligence (the ability to recognize another’s intentions and feelings), and intrapersonal intelligence (the ability to understand oneself and use the information to self‐manage).

Knowledge can be public, where two or more people agree on some perception or interpretation and others can access the same information. Or it can be private, where it has not or cannot be shared. The issue is tricky, and that is why modelability is proposed as a criterion for what can be called public knowledge. Two people can look at what we call a red rose, and agree that it is red, because they have learned to respond with the word red upon observing that stimulus. But ultimately exactly what they experienced cannot be shared, hence is not public.

We can posit that some learning is simply accepting, unquestioningly, information from some source because that source is trusted or because the learner is compelled in some way to learn. We finally contrast the aforementioned models to learning by means of the scientific method, which is detailed in the following text. Critical observation and hypothesizing are followed by collection of evidence, analysis, logical conclusions, and modeling to serve one’s own use or to communicate to others.

SCIENTIFIC METHOD: WHAT IS IT?

How to determine the truth? Science has its own formal model for this. The scientific method is usually stated as consisting of nine steps as follows:

1. Gather information and resources (informal observation).

2. Question the relationships between aspects of some objects or events, based on observation and contemplation. An incipient mental model may already form in the observer’s head.

3. Hypothesize a conjecture resulting from the act of questioning. This can be either a predictive or an explanatory hypothesis. In either case, it should be stated explicitly in terms of independent and dependent variables (causes and effects).

4. Predict the logical consequences of the hypothesis. (A model will begin to take shape.)

5. Test the hypothesis by doing formal data collection and experiments to determine whether the world behaves according to the prediction. This includes taking pains to design the data‐taking and the experiment to minimize risks of experimental error. It is critical that the tests be recorded in enough detail so as to be observable and repeatable by others. The experimental design will have a large effect on what model might emerge.

6. Analyze the results of the experiment and draw tentative conclusions. This often involves a secondary hypothesis step, namely exercising the statistical null hypothesis. The null hypothesis is that some conjecture about a population of related objects or events is false, namely that observed differences have occurred by chance, for example, that some disease is not affected by some drug. Normally, the effort is to show a degree of statistical confidence in the failure and thus rejection of the null hypothesis. In other words, if there is enough confidence that the differences did not occur by chance, then the conjectured relationship exists. (A brief worked illustration of such a test follows this list.)

7. Draw formal conclusions and model as appropriate.

8. Communicate the results, conclusions, and model to colleagues in publication or verbal presentation, rendering the model in a form that best summarizes and communicates the determined relationships.

9. Retest and refine the model (frequently done based on review and critique by other scientists).
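To make step 6 concrete, the short Python sketch below runs a conventional null-hypothesis test on two small, entirely hypothetical samples, say task completion times measured with a baseline display and with a revised display. The sample values, the choice of a two-sample t-test, and the 0.05 significance level are assumptions made only for illustration; they are not prescribed by the scientific method itself.

```python
# A minimal, purely illustrative sketch of step 6: testing a null hypothesis.
# All data below are hypothetical; the test and threshold are assumptions for the example.
from scipy import stats

# Hypothetical task completion times (seconds) under two experimental conditions.
baseline_display = [12.1, 11.4, 13.0, 12.6, 11.9, 12.8, 13.2, 12.2]
revised_display = [10.9, 11.2, 10.4, 11.6, 10.8, 11.0, 11.5, 10.7]

# Null hypothesis: the two conditions do not differ; any observed difference arose by chance.
t_statistic, p_value = stats.ttest_ind(baseline_display, revised_display)

alpha = 0.05  # conventional significance level (an assumption, not a law)
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject the null hypothesis: the difference is unlikely to be due to chance alone.")
else:
    print("Fail to reject the null hypothesis: the evidence does not exceed chance expectation.")
```

Even when the null hypothesis is rejected, the conclusion carries the small probability of error noted in the discussion of falsifiability below; the p-value quantifies, rather than eliminates, that residual doubt.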

FURTHER OBSERVATIONS ON THE SCIENTIFIC METHOD

The scientific method described earlier is also called the hypothetico‐deductive method. As stated, it is an idealization of the way science really works, as the given scientific steps are seldom cleanly separated and the process is typically messy. Often experimentation is done in order to make observations that provoke additional observations, questions, hypotheses, predictions, and rejections or refinements of the starting hypothesis. Especially at the early observation stage, the process can be very informal. One of the writer’s students used to say that what we did in the lab was “piddling with a purpose.” Einstein is said to have remarked that the most important tool of the scientist is the wastebasket.

Philosopher statesman Francis Bacon (1620) asserted that observations must be collected “without prejudice.” But as scientists are real people, there is no way they can operate free of some prejudice. They start with some bias as to their initial knowledge and interests, their social status and physical location, and their available tools of observation. They are initially prejudiced as to what is of interest, what observations are made, and what questions are asked.

Philosopher Karl Popper (1997) believed that all science begins with a prejudiced hypothesis. He further asserted that a theory can never actually be proven correct by observation; it can only be proven incorrect by disagreement with observation. The scientific method is about falsifiability. That is the basis of the null hypothesis test in statistics. (But, of course, falsification is itself subject to statistical error; one can only reject the null hypothesis with some small chance of being wrong.) The American Association for the Advancement of Science asserted in a legal brief to the U.S. Supreme Court (1993) that “Science is not an encyclopedic body of knowledge about the universe. Instead, it represents a process for proposing and refining theoretical explanations about the world that are subject to further testing and refinement.”

Historian Thomas Kuhn (1962) offered a different perspective on how science works, namely, in terms of paradigm shifts. Whether in psychology or cosmology, researchers seem to make small and gradual refinements of accepted models, until new evidence and an accompanying model provoke a radical shift in paradigm, to which scientists then adhere for a time. When a new paradigm is in the process of emerging, the competition between models and their proponents can be fierce, even personal (who discovered X first, who published first, whose model offers the best explanation). We also must admit that the search for truth is not the only thing that motivates us as scientists and modelers. We are driven by ambition for recognition from our peers as well as by money.

The idea of reproducible observability deserves emphasis. Having to deal with observables is the most critical factor in an epistemological sense (what we know). This is because it distinguishes what may be called truth based on scientific evidence that is openly observable from experiences that are not observable by others (e.g., personal testimony and anecdotal evidence). Observability also comes into play for what are called mental models.