Explains multi-level models of enterprise systems and covers modeling methodology.
This book addresses the essential phenomena underlying the overall behaviors of complex systems and enterprises. Understanding these phenomena can enable improving these systems. The phenomena range from physical, behavioral, and organizational to economic and social, all of which involve significant human components. The specific phenomena of interest, and how they are represented, depend on the questions of interest and the relevant domains or contexts. Modeling and Visualization of Complex Systems and Enterprises examines visualization of phenomena and how understanding the relationships among phenomena can provide the basis for deciding where deeper exploration is warranted. The author also reviews mathematical and computational models, defined very broadly across disciplines, which can enable deeper understanding.
* Presents a ten-step methodology for addressing questions associated with the design or operation of complex systems and enterprises
* Examines six archetypal enterprise problems, including two from healthcare, two from urban systems, and one each from financial systems and defense systems
* Provides an introduction to the nature of complex systems, historical perspectives on complexity and complex adaptive systems, and the evolution of systems practice
Modeling and Visualization of Complex Systems and Enterprises is written for graduate students and professionals in systems science and engineering, as well as those involved in complex systems such as healthcare delivery, urban systems, sustainable energy, financial systems, and national security.
Page count: 460
Publication year: 2015
Cover
Series
Title Page
Copyright
Preface
CHAPTER 1: INTRODUCTION AND OVERVIEW
SYSTEMS PERSPECTIVES
COMPLEXITY AND COMPLEX SYSTEMS
COMPLEX VERSUS COMPLICATED SYSTEMS
SYSTEMS PRACTICE
PHENOMENA AS THE STARTING POINT
OVERVIEW OF BOOK
REFERENCES
CHAPTER 2: OVERALL METHODOLOGY
INTRODUCTION
PROBLEM ARCHETYPES
METHODOLOGY
AN EXAMPLE
SUPPORTING THE METHODOLOGY
CONCLUSIONS
REFERENCES
CHAPTER 3: PERSPECTIVES ON PHENOMENA
INTRODUCTION
DEFINITIONS
HISTORICAL PERSPECTIVES
CONTEMPORARY PERSPECTIVES
TAXONOMY OF PHENOMENA
VISUALIZING PHENOMENA
CONCLUSIONS
REFERENCES
CHAPTER 4: PHYSICAL PHENOMENA
INTRODUCTION
NATURAL PHENOMENA
DESIGNED PHENOMENA
DETERRING OR IDENTIFYING COUNTERFEIT PARTS
CONCLUSIONS
REFERENCES
CHAPTER 5: HUMAN PHENOMENA
DESCRIPTIVE VERSUS PRESCRIPTIVE APPROACHES
MODELS OF HUMAN BEHAVIOR AND PERFORMANCE
TRAFFIC CONTROL VIA CONGESTION PRICING
MENTAL MODELS
FUNDAMENTAL LIMITS
CONCLUSIONS
REFERENCES
CHAPTER 6: ECONOMIC PHENOMENA
INTRODUCTION
MICROECONOMICS
MACROECONOMICS
BEHAVIORAL ECONOMICS
ECONOMICS OF HEALTHCARE DELIVERY
CONCLUSIONS
REFERENCES
CHAPTER 7: SOCIAL PHENOMENA
INTRODUCTION
PHYSICS-BASED FORMULATIONS
NETWORK THEORY
GAME THEORY
SIMULATION
URBAN RESILIENCE
CONCLUSIONS
REFERENCES
CHAPTER 8: VISUALIZATION OF PHENOMENA
INTRODUCTION
HUMAN VISION AS A PHENOMENON
BASICS OF VISUALIZATION
PURPOSES OF VISUALIZATIONS
DESIGN METHODOLOGY
EXAMPLE – BIG GRAPHICS AND LITTLE SCREENS
VISUALIZATION TOOLS
IMMERSION LAB
POLICY FLIGHT SIMULATORS
CONCLUSIONS
REFERENCES
CHAPTER 9: COMPUTATIONAL METHODS AND TOOLS
INTRODUCTION
MODELING PARADIGMS
LEVELS OF MODELING
REPRESENTATION TO COMPUTATION
MODEL COMPOSITION
COMPUTATIONAL TOOLS
CONCLUSIONS
REFERENCES
CHAPTER 10: PERSPECTIVES ON PROBLEM SOLVING
INTRODUCTION
WHAT IS? VERSUS WHAT IF?
CASE STUDIES
OBSERVATIONS ON PROBLEM SOLVING
RESEARCH ISSUES
CONCLUSIONS
REFERENCES
Index
End User License Agreement
CHAPTER 1: INTRODUCTION AND OVERVIEW
Figure 1.1 Relationship of Complexity and Intentions
Figure 1.2 Hierarchy of Phenomena
CHAPTER 2: OVERALL METHODOLOGY
Figure 2.1 Hierarchical Visualization of Congestion Pricing Problem
CHAPTER 3: PERSPECTIVES ON PHENOMENA
Figure 3.1 Hierarchy of Phenomena
CHAPTER 4: PHYSICAL PHENOMENA
Figure 4.1 Multilevel Representation of Human Biological Phenomena
Figure 4.2 Urban Oceanography Model
Figure 4.3 Vehicle Powertrain
Figure 4.4 Vehicle Manufacturing
Figure 4.5 Multilevel Model of the Context of Counterfeiting
CHAPTER 5: HUMAN PHENOMENA
Figure 5.1 Block Diagram of Manual Control
Figure 5.2 Multitask Decision Making
Figure 5.3 Functions of Mental Models
Figure 5.4 Relationships among Key Variables
CHAPTER 6: ECONOMIC PHENOMENA
Figure 6.1 and VC Models
Figure 6.2 Price Surface
Figure 6.3 Profit Surface
Figure 6.4 Elements of Gross Domestic Product (GDP)
Figure 6.5 Klein's Recognition-Primed Decision-Making Model (Source: Reproduced with permission from Klein (2003). Copyright © 2003, Doubleday.)
Figure 6.6 The Dynamics of Escalating Healthcare Costs
CHAPTER 7: SOCIAL PHENOMENA
Figure 7.1 Earth as a System
Figure 7.2 Castes Make Outcastes and Outcastes Make Castes
Figure 7.3 Needs–Beliefs–Perceptions Model
CHAPTER 8: VISUALIZATION OF PHENOMENA
Figure 8.1 Visualization of Hoboken Being Flooded (Source: Reproduced with permission from Blumberg (2013). Copyright © 2013, Stevens Institute of Technology.)
Figure 8.2 Hospital Acquisition Game (Yu, 2014)
Figure 8.3 Dashboard for Emory Simulator (Park et al., 2012)
Figure 8.4 Product Planning Advisor
Figure 8.5 S-Curve Projections and Option Valuation
Figure 8.6 Virtual Antarctica
Figure 8.7 Architecture of Public–Private Enterprises
CHAPTER 9: COMPUTATIONAL METHODS AND TOOLS
Figure 9.1 Basic Dynamic System, with Control, and with Estimator
Figure 9.2 Basic Queuing System
Figure 9.3 Example Service Network
Figure 9.4 Modeling Framework Underlying Product Planning Advisor
Figure 9.5 Models Underlying the Technology Investment Advisor
CHAPTER 1: INTRODUCTION AND OVERVIEW
Table 1.1 Hard versus Soft Systems Thinking (Pidd, 2004)
Table 1.2 Systems Approaches (Jackson, 2003)
Table 1.3 Methodologies versus Problems (Jackson & Keys, 1984)
Table 1.4 Levels of Systems Practice (Ulrich, 1988)
Table 1.5 Critical Systems Practice (Jackson, 2003)
Table 1.6 Eight Classes of Phenomena
CHAPTER 3: PERSPECTIVES ON PHENOMENA
Table 3.1 Class of Phenomena versus Example Phenomena of Interest
Table 3.2 Hierarchy of Complexity versus Approaches (Adapted from Harvey & Reed, 1997)
Table 3.3 Phenomena Associated with Archetypal Problems
CHAPTER 4: PHYSICAL PHENOMENA
Table 4.1 Physical Phenomena Associated with Archetypal Problems
Table 4.2 Phenomena Associated with the Natural Sciences
Table 4.3 Phenomena Associated with Engineering
CHAPTER 5: HUMAN PHENOMENA
Table 5.1 Human Phenomena Associated with Archetypal Problems
Table 5.2 Problem-Solving Decisions and Responses
Table 5.3 Knowledge Content of Mental Models for Teamwork
Table 5.4 Implications and Consequences of Modeling Limits
CHAPTER 6: ECONOMIC PHENOMENA
Table 6.1 Economic Phenomena Associated with Archetypal Problems
Table 6.2 Theories of Human Decision Making
Table 6.3 Intuition versus Analysis for Different Situations
CHAPTER 7: SOCIAL PHENOMENA
Table 7.1 Social Phenomena Associated with Archetypal Problems
Table 7.2 Hierarchy of Complexity versus Approaches (Adapted from Harvey & Reed, 1997)
Table 7.3 Payoff Matrix for Prisoners' Dilemma
CHAPTER 8: VISUALIZATION OF PHENOMENA
Table 8.1 Effects of Users' Tasks on Use of Abstraction/Aggregation Levels (Based on Frey et al., 1992)
Table 8.2 Abstraction–Aggregation Space (Based on Frey et al., 1993)
Table 8.3 Experimental Findings (Based on Frey et al., 1993)
CHAPTER 9: COMPUTATIONAL METHODS AND TOOLS
Table 9.1 Archetypal Phenomena and Modeling Paradigms
Table 9.2 Modeling Paradigms, Typical Assumptions, and Phenomena Predicted
Table 9.3 Levels of Modeling, Example Issues, and Potential Models
CHAPTER 10: PERSPECTIVES ON PROBLEM SOLVING
Table 10.1 Observations on Problem Solving
WILLIAM B. ROUSE
Copyright © 2015 by John Wiley & Sons, Inc. All rights reserved
Published by John Wiley & Sons, Inc., Hoboken, New Jersey
Published simultaneously in Canada
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data:
Rouse, William B.
Modeling and visualization of complex systems and enterprises : explorations of physical, human, economic, and social phenomena / William B. Rouse.
pages cm. – (Stevens Institute Series on Complex Systems and Enterprises)
Includes bibliographical references and index.
ISBN 978-1-118-95413-3 (cloth)
1. System theory–Mathematical models. 2. System analysis. 3. System theory–Philosophy. I. Title.
Q295.R68 2015
003–dc23
2015007877
The seeds for this book were sown almost 50 years ago when, as a budding engineer at Raytheon, I was given the assignment of determining the optimal number and types of spare parts to take on a submarine for the sonar system. I had to integrate reliability, maintainability, and availability models into an overall mission simulation to assess the effectiveness of alternative spare parts plans.
I have been immersed in mathematical and computational modeling ever since. For several years, I focused on operation and maintenance of complex vehicle and process systems, with particular emphasis on model-based decision aiding and training of personnel in these systems.
I next addressed design of new products and strategic business management. We developed a suite of software tools to support these processes and worked with over 100 companies and thousands of executives and senior managers. These intense experiences led to a fascination with enterprises and the great difficulties they had in recognizing needs to change and, especially, in accomplishing change.
Over much of the past decade, I have been immersed in transformation of the healthcare industry, often through collaborations with the National Academy of Engineering and the Institute of Medicine. It quickly became apparent that the poor performance of the US system could only be coherently understood by looking at the interactions of the multiple levels of the system.
Multi-level models of enterprise systems soon became a paradigm I applied to a range of types of enterprises. The need to better understand these types of models and better inform modeling methodology motivated this book. The material presented in this book is intended to help the many stakeholders in such modeling endeavors to understand the intricacies of this approach and achieve greater success.
I am grateful to many people who strongly influenced my thinking in planning and preparing this book. Mike Pennock has been an amazing sounding board and source of critiques and ideas. He and I are currently pursuing several of the research issues outlined later in this book. Conversations with John Casti, as well as several of his books, also have been very helpful.
Ken Boff, Alec Garnham, Bill Kessler, and Hal Sorenson helped me to understand the aerospace and defense industry. Bill Beckenbaugh, Jim Prendergast, and Dennis Roberson were my guides in the electronics and semiconductor industry. Denis Cortese, Mike Johns, and Bill Stead provided insights into healthcare delivery. Brainstorming with Alan Blumberg and Alex Washburn, as well as Michael Bruno and Dinesh Verma, regarding urban resilience broadened my perspective substantially.
The research of Rob Cloutier, Babak Heydari, Jose Ramirez-Marquez, and Steve Yang at Stevens provided interesting insights. Rahul Basole, Doug Bodner, Leon McGinnis, and Nicoleta Serban at Georgia Tech were kindred spirits in modeling pursuits. Research sponsors Kristin Baldwin, Judith Dahmann, and Scott Lucero often asked insightful questions and offered suggestions that shaped my thinking.
William B. Rouse
Hoboken, NJ
August 2014
Addressing complex systems such as healthcare delivery, sustainable energy, financial systems, urban infrastructures, and national security requires knowledge and skills from many disciplines, including systems science and engineering, behavioral and social science, policy and political science, economics and finance, and so on. These disciplines have a wide variety of views of the essential phenomena underlying such complex systems. Great difficulties are frequently encountered when interdisciplinary teams attempt to bridge and integrate these often-disparate views.
This book is intended to be a valuable guide to all the disciplines involved in such endeavors. The central construct in this guide is the notion of phenomena, particularly the essential phenomena that different disciplines address in complex systems. Phenomena are observed or observable events or chains of events. Examples include the weather, climate change, traffic congestion, aggressive behaviors, and cultural compliance. A team asked to propose policies to address the problem of overly aggressive motorist behaviors during inclement weather in the evening rush hour might have to consider the full range of these phenomena.
Traditionally, such problems would be decomposed into their constituent phenomena, appropriate disciplines would each be assigned one piece of the puzzle, and each disciplinary team would return from their deliberations with insights into their assigned phenomena and possibly elements of solutions. This reductionist approach often leads to inferior solutions compared to what might be achieved with a more holistic approach that explicitly addresses the interactions among phenomena and central trade-offs underlying truly creative solutions. This book is intended to enable such holistic problem solving.
Five themes are woven throughout this book.
Understanding the essential phenomena underlying the overall behaviors of complex systems and enterprises can enable improving these systems.
These phenomena range from physical, behavioral, and organizational, to economic and social, all of which involve significant human components.
Specific phenomena of interest and how they are represented depend on the questions of interest and the relevant domains or contexts.
Visualization of phenomena and relationships among phenomena can provide the basis for understanding where deeper exploration is warranted.
Mathematical and computational models, defined very broadly across disciplines, can enable the necessary deeper understanding.
This chapter proceeds as follows. We first consider the nature of a range of perspectives on systems. This begins with an exploration of historical perspectives, drawing upon several disciplines. We then consider the nature of complexity and complex systems. This leads to elaboration of the contrast between complex and complicated systems and the notion of complex adaptive systems. We then consider systems practice over the past century. This background is intended to provide a well-informed foundation that will enable digesting the material discussed in later chapters.
It is useful to reflect on the roots of systems thinking. This section begins with a discussion of the systems movement. We then elaborate the philosophical underpinnings of systems thinking. Finally, we review a range of seminal concepts. Brief sketches of these concepts are presented here; they are elaborated in greater depth in later chapters.
The systems movement emerged from the formalization of systems theory as an area of study during and following World War II, although it can be argued that the physicists and chemists of the 19th century contributed to the foundations of this area. Before delving into the ideas emerging in the 1940s and beyond, it is important to distinguish four aspects of the systems movement:
Systems Thinking is the process of understanding how things influence one another within a whole and represents an approach to problem solving that views "problems" as components of an overall system.
Systems Philosophy is the study of systems, with an emphasis on causality and design. The most fundamental property of any system is the arbitrary boundary that humans create to suit their own purposes.
Systems Science is an interdisciplinary field that studies the characteristics of complex systems in nature and society, to develop interdisciplinary foundations, which are applicable in a variety of areas, such as engineering, biology, medicine, and economics.
Systems Engineering is an interdisciplinary field focused on identifying how complex engineering undertakings should be designed, developed, and managed over their life cycles.
Contrasting these four aspects of systems, it is important to recognize that different disciplines tend to see “systems” quite differently, for the most part due to the varying contexts of interest (Adams et al., 2014). Thus, a systems scientist studying marsh ecosystems and a systems engineer designing and developing the next fighter aircraft will, from a practical perspective at least, have much less in common than the term “system” might lead one to expect. The key point is that systems exist in contexts and different contexts may (and do) involve quite disparate phenomena.
There are many interpretations of what systems thinking means and of the nature of systems thinkers. Some are inclined toward model-based deduction, while others are oriented toward data-driven inference. The former extol the deductive powers of Newton and Einstein, while the latter are enamored with the inferential capabilities of Darwin. These different perspectives reflect different epistemologies.
The study of epistemology involves the questions of what knowledge is, how it can be acquired, and what can be known. The empiricism branch of epistemology emphasizes the value of experience. The idealism branch sees knowledge as innate. The rationalism branch relies on reason. The constructivism branch seeks knowledge in terms of creation. These branches differ in how they represent knowledge, in particular how this knowledge is best modeled and simulated (Tolk, 2013).
There are many possible ways of thinking about complex systems and enterprises (Rouse, 2005, 2007). Systems paradigms for representation of knowledge include hierarchical mappings, state equations, nonlinear mechanisms, and autonomous agents (Rouse, 2003). For hierarchical mappings, complexity is typically due to large numbers of interacting elements. With uncertain state equations, complexity is due to large numbers of interacting state variables and significant levels of uncertainty. Discontinuous, nonlinear mechanisms attribute complexity to departures from the expectations stemming from continuous, linear phenomena. Finally, autonomous agents generate complexity via the reactions of agents to each other's behavior and lead to emergent phenomena. The most appropriate choice among these representations depends on how the boundaries of the system of interest are defined (Robinson et al., 2011).
Horst Rittel argued that the choice of representation is particularly difficult for "wicked problems" (Rittel & Webber, 1973). There is no definitive formulation of a wicked problem. Wicked problems have no stopping rule – there is always a better solution, for example, "fair" taxation and "just" legal systems. Solutions to wicked problems are not true or false, but good or bad. There is no immediate or ultimate test of a solution to a wicked problem. Wicked problems are not amenable to trial-and-error solutions. There is no enumerable (or exhaustively describable) set of potential solutions and permissible operations. Every wicked problem is essentially unique. Every wicked problem can be considered a symptom of another problem. Discrepancies in representations can be explained in numerous ways – the choice of explanation determines the nature of the problem's resolution. Problem solvers are liable for the consequences of the actions their solutions generate. Many real-world problems have the aforementioned characteristics.
The notion of wicked problems raises the possibility of system paradoxes (Baldwin et al., 2010). Classic paradoxes include whether light is a particle or a wave. Contemporary paradoxes include both collaborating and competing with the same organization. The conjunction paradox relates to the system including element A and element not A. The biconditional paradox holds if A implies B and B implies A. For the equivalence paradox, system elements have contradictory qualities. With the implication paradox, one or more system elements lead to its own contradiction. The disjunction paradox involves systems that are more than the sum of their parts. Finally, the perceptual paradox reflects perceptions of a system that are other than reality.
Finally, there are fundamental theoretical limits as to what we can know about a system and its properties (Rouse and Morris, 1986; Rouse et al., 1989; Rouse and Hammer, 1991). There are limits of system information processing capabilities (Chaitin, 1974), limits to identifying signal processing and symbol processing models, limits of validating knowledge bases underlying intelligent systems, and limits of accessibility of mental models in terms of forms and content of representations. The implication is that models are inherently approximations of reality and may be biased and limited in significant ways. This topic is pursued in more depth in Chapter 5.
This broad – and very brief – review of the philosophical underpinnings of the systems arena leads to two very important observations. First, the range of disciplines involved and the variety of formalisms they employ has led to a lack of crispness in the nature of the field. Second, this state of affairs can, to a great extent, be attributed to the very wide range of phenomena of interest, for example, biological cells to urban infrastructures to macroeconomic policies. Chapters 4–7 address this variety by partitioning it into classes of phenomena, then recombining these elements in Chapters 8–10.
The experiences of the problem-driven research in World War II led many now-notable researchers to develop new concepts, principles, models, methods, and tools for specific military problems that they later generalized to broader classes of phenomena. The systems theorists included Norbert Wiener (1961), who generalized control theory into the concept of cybernetics. Wiener defined cybernetics as the study of control and communication in the animal and the machine. Studies in this area focus on understanding and defining the functions and processes of systems that have goals and that participate in circular, causal chains that move from action to sensing to comparison with desired goals and back again to action. Concepts studied include, but are not limited to, learning, cognition, adaptation, emergence, communication, efficiency, and effectiveness. Later extensions of control theory include optimal state filtering (Kalman, 1960) and optimal control (Bellman, 1957; Pontryagin et al., 1962).
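Wiener's circular causal chain – act, sense, compare with the goal, act again – can be made concrete in a few lines of code. The sketch below is purely illustrative (the gain, setpoint, and first-order dynamics are assumptions, not an example from this book): a proportional controller repeatedly senses the system state, compares it with the desired goal, and feeds the difference back as a corrective action.

```python
# Minimal sketch of a cybernetic feedback loop: action -> sensing ->
# comparison with the goal -> action. All constants are illustrative.

def closed_loop(setpoint=10.0, gain=0.5, steps=20):
    state = 0.0                    # initial state of the system
    for _ in range(steps):
        error = setpoint - state   # compare sensed state with the goal
        action = gain * error      # decide on a corrective action
        state += action            # the action changes the system
    return state

print(closed_loop())  # the state converges toward the setpoint of 10.0
```

With a gain of 0.5, the error halves on every pass around the loop; too high a gain would instead produce the oscillatory or unstable responses that control theory characterizes.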
Shannon (1948) developed information theory to address the engineering problem of the transmission of information over a noisy channel. The most important result of this theory is Shannon's coding theorem, which establishes that, on average, the number of bits needed to represent the result of an uncertain event is given by its entropy, where entropy is a measure of the uncertainty associated with a random variable. In the context of information theory, the term refers to Shannon entropy, which quantifies the expected value of the information contained in a message, typically measured in binary digits or bits. Shannon's noisy-channel coding theorem states that reliable communication is possible over noisy channels provided that the rate of communication is below a certain threshold, called the channel capacity. The channel capacity can be approached in practice by using appropriate encoding and decoding systems.
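These ideas reduce to short formulas. The following sketch uses the standard textbook expressions (it is not code from this book) to compute the entropy of a discrete distribution and the capacity of a binary symmetric channel with crossover probability p.

```python
import math

# Shannon entropy of a discrete distribution, in bits.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Capacity of a binary symmetric channel that flips each bit with
# probability p: one bit per use, less the uncertainty the noise adds.
def bsc_capacity(p):
    return 1.0 - entropy([p, 1.0 - p])

print(entropy([0.5, 0.5]))  # 1.0 bit: a fair coin is maximally uncertain
print(bsc_capacity(0.1))    # ~0.53 bits per use survive 10% bit flips
```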
Ashby (1952, 1956) added the Law of Requisite Variety to the canon. Put succinctly, only variety can destroy variety. More specifically, if a system is to be fully regulated, the number of states of its control mechanism must be greater than or equal to the number of states in the system being controlled. Thus, in order for an enterprise to reduce the variety manifested by its environment to yield less varied products and services, it must have sufficient variety in its business processes.
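Ashby's law has a compact information-theoretic form. As a hedged illustration (this follows the entropy-based statement of the law often attributed to Ashby; the notation is ours, not the book's), with disturbances D, regulator responses R, and essential outcomes E:

```latex
% Entropy form of the Law of Requisite Variety: the uncertainty
% remaining in the essential outcomes E is bounded below by the
% disturbance entropy minus the regulator entropy.
\[
  H(E) \;\geq\; H(D) - H(R)
\]
```

Driving down the variety of outcomes thus requires at least a matching amount of variety in the regulator's repertoire of responses, which is exactly the point about enterprises and their business processes.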
Bertalanffy (1968) developed General Systems Theory over several decades, with particular interest in biological and open systems, that is, those that continuously interact with their environments. The areas of systems science that he included in his overall framework encompass cybernetics; theory of automata; control theory; information theory; set, graph, and network theory; decision and game theory; modeling and simulation; and dynamical systems theory – in other words, virtually all of systems science. Bertalanffy includes consideration of systems technology including control technology, automation, computerization, and communications. Had the field of artificial intelligence existed in his time, that area would have surely been included as well. As is often the case with grand generalizations, it is often difficult to argue with the broad assertions but sometimes not easy to see the leverage gained.
Ackoff (1971) coined the term "system of systems," which has gained great currency of late. He recognized that organizations could be seen as systems. In this context, he outlined a classification of systems (self-maintaining, goal-seeking, multigoal-seeking, purposive), and elaborated the notions of system state, system changes, and system outcomes, where outcomes are seen as the consequences of system responses, not just the response variables in themselves. He further characterized organizational systems as being variety increasing or variety decreasing, and also discussed adaptation and learning.
It may seem odd to group economics with cognition. However, much seminal thinking arose from people who studied behavioral and social phenomena associated with economic processes. Nobel Prize winner Kenneth Arrow (Arrow, 1951; Arrow and Debreu, 1954) developed social choice theory, the associated impossibility theorem, equilibrium theory, and the economics of information. Nobel Prize winner Herbert Simon (1957, 1962) studied bounded rationality, satisficing versus optimizing, behavioral complexity as a reflection of environmental complexity, human information processing, and artificial intelligence. Nobel Prize winner Daniel Kahneman (2011), with his colleague Amos Tversky, studied human decision-making biases and heuristics for several decades. Finally, George Miller (1956) contributed to cognitive psychology, cognitive science, psycholinguistics (which links language and cognition), and studies of short-term memory – coming up with the oft-cited "magic number seven."
This body of work provides important insights into complex systems laced with behavioral and social phenomena (as well as into how to win a Nobel Prize in Economics). Put simply, the classical notion of "economic man" as a completely rational decision maker who can be counted on to make optimal choices is often a wildly idealistic assumption. The phenomena studied by Arrow, Simon, Kahneman, and Miller make classical mathematical economics quite difficult. On the other hand, these phenomena can make agent-based simulations quite important. In Chapters 5, 7, and 9, human decision making and problem solving are considered in some depth, with many concepts traceable back to the seminal thinkers discussed in this section.
Operations research (OR) emerged from World War II and efforts to study and improve military operations. Philip Morse was a pioneer in the research philosophy of immersing problem solvers in the complex domains where solutions are sought. The key element was the emphasis on research in operational contexts rather than just study of mathematical formalisms. Morse and Kimball (1951) and Morse (1958) authored the first books in the United States in this area, and went on to publish an award-winning book on the application of OR to libraries (Morse, 1968).
C. West Churchman was internationally known for his pioneering work in OR, system analysis, and ethics. He was recognized for his then radical concept of incorporating ethical values into operating systems (Churchman, 1971). Ackoff received his doctorate in philosophy of science in 1947 as Churchman's first doctoral student (Ackoff et al., 1957). He became one of the most important critics of the so-called “technique-dominated Operations Research” and proposed more participative approaches. He argued that any human-created system can be characterized as a “purposeful system” when its “members are also purposeful individuals who intentionally and collectively formulate objectives and are parts of larger purposeful systems” (Ackoff & Emery, 1972).
More recently, OR has come to be dominated by applied mathematicians who pursue mathematical techniques as ends in themselves. The quest for provably optimal solutions of problems has resulted in problems being scaled down, often dramatically, to enable analytical proofs of optimality. The constructs of theorems and proofs have often displaced the intention to actually solve realistically complex problems. The value of immersing researchers in complex operational domains has often come to be discounted as impractical by the researchers themselves.
Talcott Parsons was one of the first social scientists to become interested in systems approaches. He developed action theory, the principle of voluntarism, understanding of the motivation of social behavior, the nature of social evolution, and the concept of open systems (Parsons, 1937, 1951a, 1951b; Parsons and Smelser, 1956). This very much set the stage for the emergence of socio-technical systems as an area of study in its own right.
The idea of work systems and the socio-technical systems approach to work design was originated by Trist, Emery, and colleagues (Trist & Bamforth, 1951; Emery & Trist, 1965, 1973). This included research on participative work design structures and self-managing teams. It also led to a deep appreciation of the roles of behavioral and social phenomena in organizational outcomes and performance.
The six archetypal problems that are introduced in Chapter 2 can be viewed as complex systems problems. This raises the question of what is meant by complexity and complex systems. There is a range of differing perspectives on the nature of complex systems (Rouse, 2003, 2007; Rouse & Serban, 2011). In particular, different disciplines, in part due to the contexts in which they work, can have significantly varying views of complexity and complex systems (Rouse et al., 2012).
Several concepts are quite basic to understanding complex systems. One key concept is the dynamic response of a system as a function of structural and parametric properties of the system. The nature of the response of a system, as well as the stability and controllability of this response, is a central concern. Many OR studies focus on steady-state behavior, while economics research addresses equilibrium behavior. However, transient behaviors – whether of the weather or the financial system – are often the most interesting and sometimes the most damaging.
Another basic concept is uncertainty about a system's state. The state of a system is the set of quantities or properties whose current values, together with future inputs, enable prediction of the future values of those same variables. Uncertainty about the system state limits the effectiveness of control strategies in assuring system performance. State estimation – filtering, smoothing, and prediction – is an important mechanism for obtaining the best information for controlling a complex system. Related topics include the value of information and performance risks, for example, the consequences of poor performance.
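A minimal sketch of such state estimation appears below, assuming a fixed scalar state observed through additive Gaussian noise; the process and measurement variances are illustrative assumptions, not values from this book.

```python
import random

# One-dimensional Kalman-style filter: predict, then correct toward
# each new measurement in proportion to the current uncertainty.

def kalman_1d(measurements, q=0.01, r=1.0):
    x, p = 0.0, 1.0              # initial estimate and its variance
    for z in measurements:
        p = p + q                # predict: uncertainty grows over time
        k = p / (p + r)          # gain: how much to trust the measurement
        x = x + k * (z - x)      # correct the estimate toward z
        p = (1.0 - k) * p        # corrected uncertainty shrinks
    return x

true_state = 5.0
noisy = [true_state + random.gauss(0.0, 1.0) for _ in range(50)]
print(kalman_1d(noisy))  # approaches 5.0 as evidence accumulates
```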
It is useful to differentiate the notions of “system” and “complex system” (Rouse, 2003). A system is a group or combination of interrelated, interdependent, or interacting elements that form a collective entity. Elements may include physical, behavioral, or symbolic entities. Elements may interact physically, computationally, and/or by exchange of information. Systems tend to have goals/purposes, although in some cases, the observer ascribes such purposes to the system from the outside so to speak.
Note that a control system could be argued to have elements that interact computationally in terms of feedback control laws, although, one might also argue that the interaction takes place in terms of the information that embodies the control laws. One could also describe the control function in terms of physical entities such as voltages and displacements. Thus, there are (at least) three different representations of the same functionality – hence, the “and/or” in the definition.
A complex system is one whose perceived complicated behaviors can be attributed to one or more of the following characteristics: large numbers of elements, large numbers of relationships among elements, nonlinear and discontinuous relationships, and uncertain characteristics of elements and relationships. From a functional perspective, the presence of complicated behaviors, independent of underlying structural features, may be sufficient to judge a system to be complex. Complexity is perceived rather than absolute, because apparent complexity can decrease with learning.
More specifically, system complexity tends to increase with:
* Number of elements
* Number of relationships
* Nature of relationships
  * Logical: AND versus OR and NAND
  * Functional: linear versus nonlinear
  * Spatial: lumped versus distributed
  * Structural: for example, feedforward versus feedback
  * Response: static versus dynamic
  * Time constant: (not too) fast versus (very) slow
  * Uncertainty: known properties versus unknown properties
* Knowledge, experience, and skills
  * Relative to all of the above
  * Relative to the observer's intentions
Of course, the preceding list raises the question of whether the elements of a system are knowable. For example, this list is of limited use in describing a city, except perhaps for the utility infrastructures. Thus, as elaborated later, we have to differentiate complex and complicated systems.
The issue of intentions is summarized in Figure 1.1 (Rouse, 2007). If one's intention is simply to classify an observed object as an airplane, the object is not particularly complex. If one wanted to explain why it is an airplane, the complexity of the explanation would certainly be greater than that of a classification. For these two intentions, one is simply describing an observed object.
Figure 1.1 Relationship of Complexity and Intentions
If one's intention is to predict the future state of the airplane, complexity increases substantially as one would have to understand the dynamic nature of the object, at least at a functional level but perhaps also at a structural level. Control requires a higher level of knowledge and skill concerning input–output relationships. Intentions related to detection and diagnosis require an even greater level of knowledge and skill concerning normal and off-normal behaviors in terms of symptoms, patterns, and structural characteristics of system relationships. The overall conclusion is that the complexity of a system cannot be addressed without considering the intentions associated with addressing the system.
This observation is fundamental, and often hotly debated. It argues that complexity involves a relationship between an observer and an object or system. The implication is that there is no absolute complexity independent of the observer. In other words, the complexity of a system depends on why you asked the question, as well as your knowledge and skill relative to the system of interest.
Snowden and Boone (2007) have argued that there are important distinctions that go beyond those outlined earlier. Their Cynefin Framework includes simple, complicated, complex, and chaotic systems. Simple systems can be addressed with best practices. Complicated systems are the realm of experts. Complex systems represent the domain of emergence. Finally, chaotic systems require rapid responses to stabilize potential negative consequences. The key distinction with regard to the types of contexts discussed in this book is complex versus complicated systems. There is a tendency, they contend, for experts in complicated systems to perceive that their expertise, methods, and tools are much more applicable to complex systems than is generally warranted.
Poli (2013) also elaborates the distinctions between complicated and complex systems. Complicated systems can be structurally decomposed. Relationships such as listed earlier can be identified, either by decomposition or, in some cases, via blueprints. “Complicated systems can be, at least in principle, fully understood and modeled.” Complex systems, in contrast, cannot be completely understood or definitively modeled. He argues that biology and all the human and social sciences address complex systems.
Poli also notes that problems in complicated systems can, in principle, be solved. The blueprints, or equivalent, allow one to troubleshoot problems in complicated systems. In contrast, problems in complex systems cannot be solved in the same way. Instead, problems can be influenced so that unacceptable situations are at least partially ameliorated.
Alderson and Doyle (2010) also discuss contrasting views of complexity and distinguish the constructs of simplicity, disorganized complexity, and organized complexity, drawing upon several of the post-World War II seminal thinkers discussed earlier. With simplicity, "Questions of interest can be posed using models that are readily manageable and easy to describe, theorem statements are short, experiments are elegant and easy to describe, and require minimal interpretation. Theorems have simple counterexamples or short proofs, algorithms scale, and simulations and experiments are reproducible with predictable results."
Disorganized complexity “Focuses on problems with asymptotically infinite dimensions and develops powerful techniques of probability theory and statistical mechanics to deal with problems. As the size of the problem and the number of entities become very large, specific problems involving ensemble average properties become easier and more robust and statistical methods apply.”
They illustrate this view with a discussion of the new science of complex networks, "which emphasizes emergent fragilities in disorganized systems. Proponents of this paradigm view architecture as graph topology." Their view of the Internet, for example, is "random router and web graphs without system-level functions other than graph connectivity, architecture solely in terms of graph topology, and components as homogeneous functionless links and nodes."
In contrast, organized complexity “Addresses problems where organization is an essential feature, which include biological systems, urban systems, and technological systems. Organized complexity manages the fragility–complexity spiral.” For example, it views architecture as involving layering and protocols, rather than just the random connections of disorganized complexity.
The distinctions articulated by these authors are well taken. Complicated systems have often been designed or engineered. There are plans and blueprints. There may be many humans in these systems, but they are typically playing prescribed roles. In contrast, complex systems, as they define them, typically emerge from years of practice and precedent. There are no plans and blueprints. Indeed, much research is often focused on figuring out how such systems work. A good example is human biology.
The nature of human and social phenomena within such systems is a central consideration. Systems where such phenomena play substantial roles are often considered to belong to a class of systems termed complex adaptive systems (Rouse, 2000, 2008). Systems of this type have the following characteristics:
* They tend to be nonlinear and dynamic and do not inherently reach fixed equilibrium points. The resulting system behaviors may appear to be random or chaotic.
* They are composed of independent agents whose behaviors can be described as based on physical, psychological, or social rules, rather than being completely dictated by the physical dynamics of the system.
* Agents' needs or desires, reflected in their rules, are not homogeneous and, therefore, their goals and behaviors are likely to differ or even conflict – these conflicts or competitions tend to lead agents to adapt to each other's behaviors.
* Agents are intelligent and learn as they experiment and gain experience, perhaps via "meta" rules, and consequently change behaviors. Thus, overall system properties inherently change over time.
* Adaptation and learning tend to result in self-organization and patterns of behavior that emerge rather than being designed into the system. The nature of such emergent behaviors may range from valuable innovations to unfortunate accidents.
* There is no single point of control – system behaviors are often unpredictable and uncontrollable, and no one is "in charge." Consequently, the behaviors of complex adaptive systems usually can be influenced more than they can be controlled.
As might be expected, understanding and influencing systems having these characteristics creates significant complications. For example, the simulation of such models often does not yield the same results each time. Random variation may lead to varying “tipping points” among stakeholders for different simulation runs. These models can be useful in the exploration of leading indicators of the different tipping points and in assessing potential mitigations for undesirable outcomes. This topic is addressed in more detail later.
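A toy model makes the point; the sketch below uses hypothetical parameters rather than any model from this book. Agents adopt a behavior once the adoption level they perceive among peers exceeds a personal threshold, and identical settings produce different outcomes on different runs.

```python
import random

# Minimal threshold-adoption sketch of a complex adaptive system.
# Each agent has its own threshold; noisy perception of the overall
# adoption level makes every run of the simulation unfold differently.

def run(n=200, seeds=5, steps=40):
    thresholds = [random.uniform(0.05, 0.5) for _ in range(n)]
    adopted = [i < seeds for i in range(n)]     # a few initial adopters
    for _ in range(steps):
        level = sum(adopted) / n                # current adoption level
        for i in range(n):
            if not adopted[i]:
                perceived = level + random.gauss(0.0, 0.03)
                if perceived > thresholds[i]:   # simple adaptation rule
                    adopted[i] = True
    return sum(adopted) / n

# Five runs with identical parameters; final adoption fractions differ,
# and watching when runs take off is one way to explore tipping points.
print([round(run(), 2) for _ in range(5)])
```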
The evolving collection of approaches to understanding and influencing complex systems and enterprises can be termed systems practice. The development of systems practice has a rich history.
During the 1900–1920s, Henry Gantt (1861–1919), Frederick Taylor (1856–1919), and Frank Gilbreth (1868–1924) pioneered scientific management.
Quality assurance and quality control emerged in the 1920–1930s, led by Walter Shewhart (1891–1967).
Peter Drucker (1909–2005) and Chester Barnard (1886–1961) formalized corporate operations management in the 1940–1950s.
During and following World War II, Philip Morse (1903–1985), C. West Churchman (1913–2004), George Dantzig (1914–2005), and Russell Ackoff (1919–2009) were leading thinkers in OR.
Stafford Beer (1926–2002) articulated the foundations of management cybernetics in the 1960–1970s.
W. Edwards Deming (1900–1993) and Joseph Juran (1904–2008) brought total quality management to the United States in the 1970–1980s.
Michael Hammer (1948–2008) and James Champy led the wave of business process reengineering in the 1990s.
Taiichi Ohno's (1912–1990) innovations in lean production, along with six sigma methods, gained traction in the United States in the 1990–2000s.
Most recently, Daniel Kahneman has led the way for behavioral economics in the 2010s.
The cornerstone of systems practice is usually considered to be systems thinking, which has been characterized in a variety of ways, depending on the analytic paradigms of interest, for example (Checkland, 1993; Weinberg, 2001; Jackson, 2003; Meadows, 2008; Gharajedaghi, 2011). Over more than a century, systems thinking tried to become increasingly rigorous, focusing on mathematics, statistics, and computation. During the 1960–1970s, many thought leaders began to recognize that forcing all phenomena into this mold tended to result in many central phenomena being assumed away to allow for the much-sought theorems and proofs to be obtained. In particular, behavioral and social phenomena associated with complex systems were simplified by viewing humans as constrained but rational decision makers who always made choices that optimized the objective performance criteria – which were linear, if lucky.
The reaction, particularly in the United Kingdom, to such obviously tenuous assumptions was the emergence of the notion of hard versus soft systems thinking (Pidd, 2004). Table 1.1 contrasts these two points of view. Hard systems thinking seeks quantitative solutions of mathematical models that are assumed to be valid representations of the real world and, consequently, will inherently be embraced once they are calculated. Soft systems thinking sees modeling as a means for exploration and learning via intellectual and inherently approximate constructs open to discussion and debate.
Table 1.1 Hard versus Soft Systems Thinking (Pidd, 2004)
Hard Systems Thinking | Soft Systems Thinking
Oriented to goal seeking | Oriented to learning
Assumes the world contains systems that can be "engineered" | Assumes the world is problematical but can be explored using models of purposeful activity
Assumes systems models to be models of the world | Assumes systems models to be intellectual constructs to help debate
Talks the language of problems and solutions | Talks the language of issues and accommodations
Philosophically positivistic | Philosophically phenomenological
Sociologically functionalist | Sociologically interpretative
Systematicity lies in the world | Systematicity lies in the process of inquiry into the world
Table 1.2 contrasts systems approaches (Jackson, 2003). Hard systems thinking represents but one cell in this table. Other methods are much less “closed form” in orientation, relying more on simulation as well as participative mechanisms. The keys for these latter mechanisms are insights and consensus building.
Table 1.2 Systems Approaches (Jackson, 2003)
Systems \ Participants | Unitary | Pluralist | Coercive
Simple | Hard systems thinking | Soft systems approaches | Emancipatory systems thinking
Complex | System dynamics, organizational cybernetics, complexity theory | Soft systems approaches | Postmodern systems thinking
Table 1.3 contrasts methodologies and problems (Jackson & Keys, 1984). Again, only one cell of the table includes traditional OR and systems analysis. For other than mechanical problems with a single decision maker, much more participative approaches are warranted, at least if the goal is solving the problem of interest rather than just modeling the “physics” of the context.
Table 1.3 Methodologies versus Problems (Jackson & Keys, 1984)
Participants \ Problems | Mechanical | Systemic
Unitary – one decision maker | Operations research, systems engineering, systems analysis | Organizational cybernetics, socio-technical systems
Pluralist – multiple independent decision makers | Singerian inquiry systems, strategic assumption methods, wicked problem formulations | General systems theory, complex adaptive systems, soft systems methodology
Table 1.4 summarizes Ulrich's (1988) levels of systems practice. He differentiates hard versus soft in terms of three categories – one hard and two versions of soft. One class of soft management addresses change while the other addresses conflict. The key disciplines and tools vary substantially across these three categories.
Table 1.4 Levels of Systems Practice (Ulrich, 1988)
Aspect | Operational Systems Management | Strategic Systems Management | Normative Systems Management
Dominating interpretation | Systematic | Systemic | Critical idea of reason
Strand of systems thinking | Hard – mechanistic paradigm | Soft – evolutionary paradigm | Soft – normative paradigm
Dimension of rationalization | Instrumental | Strategic | Communicative
Main object of rationalization | Resources – means of production | Policies – steering principles | Norms – collective preferences
Task of the expert | Management of scarceness | Management of complexity | Management of conflict
Type of pressure | Costs | Change | Conflict
Basic approach | Optimization | Steering capacity | Consensus
Goodness criterion | Efficient | Effective | Ethical
Theory–practice mediation | Decisionistic | Technocratic | Pragmatistic
Key disciplines | Decision theory, economics, engineering | Game theory, ecology, social sciences | Discourse theory, ethics, critical theory
Example tools | Cost–benefit analysis, linear optimization | Sensitivity analysis, large-scale simulation | Systems assessment, ideal planning
Trap to avoid | Suboptimization | Social technology | Excluding the affected
Table 1.5 summarizes Jackson's (2003) Critical Systems Practice. The most important aspect of his guidance is to remain open to the range of possibilities in Tables 1.1–1.4. From the perspective of understanding complex systems, this means that the nature of models entertained should be driven by the issues of interest, the phenomena underlying these issues, and the orientations of the key stakeholders in the problem framing and solving processes.
Table 1.5 Critical Systems Practice (Jackson, 2003)
Creativity
Task: To highlight significant concerns, issues, and problems
Tools: Creativity-enhancing devices employing multiple perspectives
Outcome: Dominant and dependent concerns, issues, and problems
Choice
Task: To choose an appropriate generic systems methodology
Tools: Methods for revealing methodological strengths and weaknesses
Outcome: Dominant and dependent generic systems methodologies
Implementation
Task: To arrive at and implement specific positive change proposals
Tools: Generic systems methodologies
Outcome: Highly relevant and coordinated change yielding improvements
Reflection
Task: To produce learning about the problem and solution
Tools: Clear understanding about the current state of knowledge
Outcome: Research findings that feed back into practice
Pidd (2004) offers the notion of complementarity as a way of rationalizing the relationship between hard and soft approaches. He argues that hard and soft approaches are complementary to each other, but their complementarity is asymmetric. He asserts that any problem situation in human affairs will always at some level entail differences in world views that the “soft” approaches can be used to explore. Within that exploration, any or all of the hard approaches can be adopted as a conscious strategy. The reverse strategy is not available because it entails abandoning the ontological stance of hard approaches. In other words, hard approaches are often inextricably tied to paradigms and assumptions that are central to their problem-solving power.
There is a wealth of formal methods that can play a role in systems practice. Approaches to systems modeling, from a range of disciplinary perspectives, are discussed in Haskins (2006) and in Sage and Rouse (2009). A variety of paradigm-specific treatments are also available, such as Forrester's (1961) classic on systems dynamics modeling and Sterman's (2000) contemporary treatment of system dynamics modeling. Chapter 9 elaborates a variety of formal theories in terms of typical assumptions and outcomes predicted, along with brief expositions of the basic mathematics.
Gharajedaghi (2011) articulates a system methodology for supporting complex adaptive systems. The methodology focuses on functions, structure, and processes. To define functions, he argues that one should clarify which products solve which problems for which customers. To define structure, he advances the idea of a modular design that defines complementary relationships among relatively autonomous units. Finally, design of processes involves using a multidimensional modular design based on the triplet input (technology), output (products), and environments (markets).
This brief discussion of systems approaches serves to set the stage for alternative approaches to understanding complex systems and enterprises. The nature of these systems usually precludes fully modeling them with first-principles physics models. These systems are, by no means, as mechanistic and predictable as purely physical systems like bouncing balls or gear trains. Yet, there are well-developed approaches for addressing problem solving in complex systems and enterprises. Valid predictions, and occasionally optimization, are certainly of interest. However, insights into phenomena, sensitivities to key parameters, and consensus building are often the overarching goals.
The material discussed in this section sets the stage for the methodological discussions in Chapter 2. The emphases on problem formulation and on hard versus soft approaches are highly relevant. The first four steps of our ten-step methodology are very much focused on problem formulation. The great emphasis placed on visualization first and computation later enables taking advantage of "soft" approaches early and only resorting to "hard" approaches for aspects of problems that warrant such investments.
The construct of “phenomena” is central to this book. Problem solving should not begin with the selection of mathematical or computational models, but instead should commence with consideration of the phenomena that must be understood to successfully answer the questions that motivated the modeling effort in the first place.
Figure 1.2 provides a framework for thinking about phenomena and relationships among phenomena. There are four levels – physical, human, economic, and social – as well as typical relationships among phenomena. Of course, there are many subtleties that are not reflected in Figure 1.2, but will be elaborated in Chapters 3–7.
Figure 1.2 Hierarchy of Phenomena
Table 1.6 provides a glimpse into the eight classes of phenomena addressed in this book. The overall taxonomy of phenomena is elaborated in Chapter 3, while physical, human, economic, and social phenomena are addressed in Chapters 4–7, respectively. We also discuss in Chapter 3 the phenomena associated with the six archetypal problems that are introduced in Chapter 2.
Table 1.6 Eight Classes of Phenomena
Class of Phenomena | Example Phenomena of Interest
Physical, natural | Temporal and spatial relationships and responses
Physical, designed | Input–output relationships, responses, stability
Human, individuals | Task behaviors and performance, mental models
Human, teams or groups | Team and group behavior and performance
Economic, micro | Consumer value, pricing, production economics
Economic, macro | Gross production, employment, inflation, taxation
Social, organizational | Structures, roles, information, resources
Social, societal | Castes, constituencies, coalitions, negotiations
This final section of this introductory chapter provides synopses of the chapters in this book and the lines of reasoning that connect them.
CHAPTER 1: INTRODUCTION AND OVERVIEW
This chapter begins by placing modeling and visualization of complex systems in the context of the evolution of the systems movement, its philosophical background, and a wide range of seminal concepts. Constructs associated with complexity and complex systems are discussed. The important contrast between complex and complicated systems is elaborated. The last century of systems practice is briefly reviewed to provide a foundation for the methodology advocated in this book. The use of phenomena as a starting point is then argued. Finally, an overview of the book is provided.
CHAPTER 2: OVERALL METHODOLOGY
This chapter begins with a discussion of human-centered methods and tools. The emphasis is on assuring that a methodology is both useful and usable. We then discuss six problem archetypes that are addressed throughout this book. The choice of this set of problems was motivated by the desire to assure that the methodology not be problem specific. Attention then turns to the overall ten-step methodology. An example is used to illustrate application of the methodology to congestion pricing of urban traffic. The chapter concludes with discussion of an environment that supports use of the methodology.
CHAPTER 3: PERSPECTIVES ON PHENOMENA
In this chapter, we explore the fundamental nature of phenomena and the role that this construct plays in technology development and innovation. The construct is discussed from both historical and contemporary perspectives. Numerous historical and contemporary examples are used to illustrate the evolution of technology as well as the increasing scope of its application. A taxonomy of phenomena is introduced, with particular attention paid to its behavioral and social components. Use of the taxonomy is illustrated using the six archetypal problems introduced in Chapter 2. Finally, visualization of phenomena is discussed in the context of the examples used throughout Chapters 4–7.
CHAPTER 4: PHYSICAL PHENOMENA
This chapter considers two types of physical phenomena. First, we discuss naturally occurring phenomena such as weather and water flow. We consider two examples – human biology and urban oceanography. Then we address designed phenomena such as systems engineered to move people and goods. Examples of interest here include vehicle powertrains and manufacturing processes. We also discuss the intersection of designed and natural physical phenomena, which is a central issue in urban oceanography. The chapter concludes with an elaboration of the archetypal example of deterring or identifying counterfeit parts.
CHAPTER 5: HUMAN PHENOMENA
This chapter begins by contrasting descriptive and prescriptive approaches. Descriptive approaches focus on data from past instances of the phenomena of interest. Prescriptive approaches attempt to calculate what humans should do given the constraints within which they have to operate. A wide range of models of human behavior and performance are outlined. Examples discussed include manual control, problem solving, and multitask decision making. This exposition is followed by a detailed look at the traffic control problem from our set of archetypal problems. Many of the models discussed make assumptions about what humans know relative to the tasks at hand. Some of this knowledge is characterized using the notion of "mental models." The nature of this construct is discussed in terms of approaches to assessing mental models and use of the outcomes of such assessments. Finally, fundamental limits in modeling human behavior and performance are addressed.
