A comprehensive guide to the theory, methodology, and development for modeling systems of systems
Modeling and Managing Interdependent Complex Systems of Systems examines the complexity of, and the risk to, emergent interconnected and interdependent complex systems of systems in the natural and the constructed environment, and in their critical infrastructures. For systems modelers, this book focuses on what constitutes complexity and how to understand, model, and manage it. Previous modeling methods for complex systems of systems were aimed at developing theory and methodologies for uncoupling the interdependencies and interconnections that characterize them. In this book, the author extends that work by drawing on public- and private-sector case studies; identifies, explores, and exploits the core of interdependencies; and seeks to understand their essence via the states of the system and their dominant contributions to the complexity of systems of systems.
The book proposes a reevaluation of fundamental and practical systems engineering and risk analysis concepts for complex systems of systems developed over the past 40 years.
Page count: 1488
Year of publication: 2018
Cover
Foreword
Philosophical and Historical Perspectives on Understanding Commonalities Characterizing Complexity
Complexity, Interdependency, Interconnectedness, and Reinvention of Fault Trees
References
1 Modeling and Managing Interdependent Complex Systems of Systems: Fundamentals, Theory and Methodology
Part I: An Overview
1.I.1 Introduction
1.I.2 Capturing the Essence of a System via Modeling
1.I.3 A Brief History of Modern Systems Engineering
1.I.4 Building Blocks of Mathematical Models and the Centrality of State Variables in Systems Modeling
1.I.5 The Centrality of the States in Modeling Complex Systems of Systems
1.I.6 The Centrality of Time in Modeling Multidimensional Risk and Uncertainty
1.I.7 Systems Modeling and Integration
1.I.8 Structure, States, and Functions of Complex Systems of Systems
1.I.9 The Multifarious Perspectives and Dimensions of Complex Systems of Systems
1.I.10 What Have We Learned from Other Contributors
1.I.11 Conclusions
1.I.12 Modeling and Managing Interdependent Complex Systems of Systems: Book Overview
Part II: On The Resilience and Vulnerability of Complex Systems of Systems
1.II.1 Introduction
1.II.2 Relating the Centrality of State Variables to the Definitions of Risk, Vulnerability, and Resilience
1.II.3 Systems Engineering and Relating Vulnerability and Resilience to the Risk Function
1.II.4 Modeling and Quantifying the Consequences and Risks to Threatened Complex Systems of Systems and Their Vulnerability and Resilience
1.II.5 On the Relationship Among Vulnerability, Resilience, and Preparedness
1.II.6 Infrastructure Interdependencies and the Tragedy of the Commons
1.II.7 Epilogue
References
2 Modeling, Decomposition, and Multilevel Coordination of Complex Systems of Systems
Part I: Decomposition and Hierarchical Modeling
2.I.1 Introduction
2.I.2 Attributes of Decomposition and Multilevel Modeling of Complex Systems of Systems
2.I.3 General Hierarchical Structures
2.I.4 Decomposition and Coordination of Complex Systems of Systems
2.I.5 Hierarchical Structures in Water Resources Complex Systems of Systems
2.I.6 Overlapping Decompositions
2.I.7 Case Studies
2.I.8 Solved Example Problems
Part II: Incorporating Probability Distribution, and Uncertainty Analysis in Modeling Complex Systems of Systems
2.II.1 An Overview
2.II.2 Risk of Extreme Events
2.II.3 Bayesian Methods and Risk Analysis
2.II.4 Risk and Uncertainty
2.II.5 Stakeholders and Risk Perceptions
2.II.6 How to Address Tails of Distributions
2.II.7 Overconfidence in Measurements Involving Uncertainty and Variability
2.II.8 Facilitate the Two‐way Process Between Assessors and Decision Makers
2.II.9 Value of Information as a Strategy for Effective Use of Resources in Decision Making
2.II.10 Sources of Uncertainty
2.II.11 Disincentives for Decision Makers in Making Decisions
2.II.12 Organizational Problems in Using Uncertainty and Risk Analysis
2.II.13 Dealing with Divergent Information
2.II.14 Compounding Margins of Safety
2.II.15 Relevancy of Bringing Types of Uncertainty into Decisions
2.II.16 Uncertainty and Variability in Assuring the Quality of Uncertainty Analysis
2.II.17 Conclusions
References
3 Hierarchical Holographic Modeling and Multilevel Coordination of Complex Systems of Systems
3.1 Introduction
3.2 Hierarchical Holographic Modeling
3.3 Definition and Literature Review
3.4 Relevance of HHM to Complex Systems of Systems
3.5 Matrix Organization Illustration
3.6 Theoretical and Practical Contributions of HHM to Modeling Complex Systems of Systems
3.7 Decomposition and Multilevel Coordination of Complex Systems of Systems
3.8 Attributes of Decomposition and Hierarchical‐Multilevel Coordination
3.9 The Role of Policy Formulation in the Management of Complex Systems of Systems
3.10 The Role of Hierarchical Holographic Modeling in Conflict Management of Complex Systems of Systems
References
4 Modeling Complex Systems of Systems with Phantom System Models
4.1 Introduction
4.2 Complex Interdependencies that Characterize Systems of Systems
4.3 Studying and Modeling the Multiple Dimensions of Complex Systems of Systems
4.4 Historical Perspectives of Modeling Complex Systems of Systems
4.5 Risk Modeling of Interdependent and Interconnected Complex Systems of Systems
4.6 Risk and Uncertainty Analysis of Complex Systems of Systems
4.7 The Multifarious Perspectives of the Maumee River Basin Complex Systems of Systems
4.8 Reflections on Risk to Complex Systems of Systems
4.9 Phantom Systems Models (PSM) and Metamodels
4.10 Summary
References
5 Complex Systems of Systems: Multiple Goals and Objectives
5.1 Uniqueness of Multiple Goals and Objectives to Complex Systems of Systems
5.2 The Surrogate Worth Tradeoff (SWT) Method
5.3 Characterizing Noninferior Solutions
5.4 Examples of Complex Systems of Systems with Multiobjectives
5.5 Sequential Pareto‐Optimal Decisions Made during Emergent Complex Systems of Systems: A Case Study
5.6 Summary
References
6 Hierarchical Coordinated Bayesian Modeling of Complex Systems of Systems
6.1 Hierarchical Coordinated Bayesian Modeling: Theory and Methodology
6.2 Hierarchical Coordinated Bayesian Modeling (HCBM) for Complex Systems of Systems
6.3 Integrating HCBM with the Partitioning Multiobjective Risk Method (PMRM)
6.4 Modeling Complex Systems of Systems by Integrating Bayes’ Theorem with Dynamic Programming
References
7 Hierarchical Multiobjective Modeling and Decision Making for Complex Systems of Systems
Part I: Exploring Systemic Risk to Systems of Systems with Multiple Objectives
7.I.1 Introduction
7.I.2 Modeling Physical Infrastructure Complex Systems of Systems
7.I.3 Systemic Risks in Complex Systems of Systems
Part II: Risk Modeling of Cyber–Physical Infrastructure with Precursor Analysis
7.II.1 Precursor Analysis for Physical Infrastructure Complex Systems of Systems
7.II.2 Precursor Analysis Framework Using an Example of a Bridge Complex Systems of Systems
7.II.3 Subsystems of a Highway Bridge Complex Systems of Systems and Its Failure Modes
7.II.4 Modeling and Control Infrastructure Systems of Systems
7.II.5 Identifying Causal Factors of System Failure
7.II.6 Precursor Prioritization
7.II.7 Precursor Evaluation: Detecting Uncertainty
Part III: Metamodeling of Interdependent Systems: An Application to Bridge Infrastructure Management
7.III.1 Introduction
7.III.2 Modeling Internal and External Interdependencies and Interconnectedness Characterizing Bridge Complex Systems of Systems
7.III.3 The Role of State Space in the Modeling of Bridge Complex Systems of Systems
7.III.4 Metamodeling Bridge Complex Systems of Systems
7.III.5 Ten Systems‐Based Guiding Principles for the Next Generation of the Design, Construction, Operation, Maintenance and Repair of Bridge Complex Systems of Systems
7.III.6 A Case Study on Maintenance of Bridge Complex Systems of Systems
7.III.7 Conclusion
References
8 Modeling Economic Interdependencies among Complex Systems of Systems
Overview
8.1 Inoperability Input–Output Model (IIM)
8.2 The Original Leontief I–O Model
8.3 Inoperability Input–Output Model (IIM)
8.4 Regimes of Recovery
8.5 Supporting Databases for IIM Analysis
8.6 National and Regional Databases for IIM Analysis
8.7 Regional Input–Output Multiplier System (RIMS II)
8.8 Development of IIM and its Extensions
8.9 Dynamic IIM
8.10 Practical Uses of the IIM
8.11 Example Problems
8.12 Summary of the Inoperability Input–Output Model (IIM)
8.13 Communications, Electricity, Water, and Supply Chain as Interdependent and Interconnected Complex Systems of Systems
References
9 Guiding Principles for Modeling and Managing Complex Systems of Systems
Part I: Risk Modeling of Complex Systems of Systems: An Overview
9.I.1 Introduction
9.I.2 Fundamental Differences Between Risk Analyses of Single Systems and Complex Systems of Systems
9.I.3 Updating the Basic Questions Leading to Risk Assessment and Management
9.I.4 Risk of Low Probability and Extreme Consequences to Complex Systems of Systems
Part II: Systems‐based Guiding Principles for Risk to Complex Systems of Systems
9.II.1 The Journey
9.II.2 The Journey: Guiding Principles in the Broader Context of the FAA’s NextGen
9.II.3 The Compass – Fundamental Guiding Principles for Analysis of Complex Systems of Systems
9.II.4 The Continuous Process of Risk Assessment, Management, and Communication
9.II.5 Epilogue
References
10 Modeling Cyber–Physical Complex Systems of Systems: Four Case Studies
Overview
Part I: Modeling of Interdependent and Interconnected Aviation Complex Systems of Systems
10.I.1 Introduction and Overview
10.I.2 Theoretical and Methodological Approach
10.I.3 Modeling Shared States and Essential Entities in Complex Systems of Systems: Specific Lessons Learned
10.I.4 Reinventing the Use of Fault Trees for Modeling Complexity via Shared States and Other Essential Entities
10.I.5 Risk Modeling of the Cyber–Physical CNS Complex System of Systems
10.I.6 Summary and Conclusions
Part II: Quantitative Modeling of Interdependent Cyber–Physical Complex Systems of Systems
Overview
10.II.1 Introduction
10.II.2 Theoretical and Methodological Contributions
10.II.3 Reinventing the Use of Fault‐Tree Analysis
10.II.4 What Have We Learned from the Literature on GPS
10.II.5 Computational Results: Validation of the Theoretical and Methodological Contributions
10.II.6 Conclusions
Part III: Regional Infrastructures as Complex Systems of Systems: A Shared State Model for Regional Resilience
Overview
10.III.1 Introduction: Regional and Community Resilience
10.III.2 Innovative Use of Fault‐Tree Analysis
10.III.3 Shared States and Essential Entities
10.III.4 Centrality of States in Modeling Complex Systems of Systems
10.III.5 Regional Infrastructure Subsystems
10.III.6 Methodological Approach
10.III.7 Shared State Model
10.III.8 A Sample of Behavioral Failures
10.III.9 Summary and Conclusions
Part IV: Assessing Systemic Risk to Cloud‐Computing Technology as Complex Systems of Systems
10.IV.1 Introduction and Overview
10.IV.2 Cloud‐Computing Technology as Complex Systems of Systems
10.IV.3 Cloud‐Computing Technology as an Interdependent and Interconnected Complex Systems of Systems
10.IV.4 Higher Risk to Cloud‐Computing Technology and to Its Users as Complex Systems of Systems
10.IV.5 Economic Analysis of the Security of CCT Complex SoS
10.IV.6 Conclusions and Lessons Learned
References
11 Global Supply Chain as Complex Systems of Systems
11.1 Introduction
11.2 Importance of Leontief Input–Output Model to the Supply Chain
11.3 Modeling Supply Chain Interdependencies via Leontief Input–Output Model
11.4 Inoperability Input–Output Model (IIM)
11.5 The Centrality of the Supply Chain in the Global Economy
11.6 Centrality of Organizational Infrastructure to Effective Performance of the Supply Chain Complex SoS
11.7 Hierarchical‐Multilevel Coordination among Subsupply Chain Commodities
11.8 The Role of Shared States in Risk Modeling of Supply Chain Complex Systems of Systems
11.9 The Role of Organizational Management of the Supply Chain Complex Systems of Systems
11.10 Risk Analysis of the Supply Chain Complex Systems of Systems
11.11 Analytical Method for Modeling and Managing the Supply Chain as Complex Systems of Systems
11.12 Inventory Control of Supply Chain Complex Systems of Systems
11.13 Summary
References
12 Understanding and Managing the Organizational Dimension of Complex Systems of Systems
Part I: Organizational Culture, Vision, and Quality of Leadership as Critical Drivers to Effective and Successful Performance of Complex Systems of Systems
12.I.1 Introduction
12.I.2 Philosophical Perspectives on Organizational Behavior of Complex Systems of Systems
12.I.3 Multifarious Perspectives on Organizational Behavior of Complex Systems of Systems
12.I.4 Successful Habits of Visionary Organizations
12.I.5 Harmony Versus Disharmony Among Subsystems That Constitute Complex Systems of Systems
12.I.6 Multifarious Perspectives on Organizational Behavior of Complex Systems of Systems: Revisited
12.I.7 What Have We Learned from Philosophers About Systems
12.I.8 The Role of Policy Formulation in Organizational Complex Systems of Systems
12.I.9 Organizational Role in Planning and Management of Water Resources Complex Systems of Systems
12.I.10 Final Philosophical Reflections on Part I
Part II: Modeling the Role of Organizations in the Resilience of Cyber–Physical Complex Systems of Systems
Overview
12.II.1 Introduction
12.II.2 The Complexity of the Security of Cyber–Physical Complex Systems of Systems
12.II.3 Organizational and Cyber–Physical Resilience
12.II.4 Critical Factors Affecting the Resilience of Cyber–Physical Complex Systems of Systems
12.II.5 The Art and Science of Modeling Cyber–Physical Complex Systems of Systems
12.II.6 Intrusion Detection
12.II.7 Building Deception into Cyber–Physical Complex Systems of Systems
12.II.8 Epilogue
References
13 Software Engineering
Overview
Part I: Systems Integration via Software Risk Management
13.I.1 Introduction
13.I.2 Role of Risk Assessment and Management in Systems Integration
13.I.3 Shift from Hardware to Software
13.I.4 Software as Integrator of Complex Systems of Systems
13.I.5 The Interface Between Users and Buyers
13.I.6 Systems Integration: Software Engineering and the Software Engineer Integrator
13.I.7 Hierarchical Holographic Modeling and the Complexity of Systems Integration
13.I.8 Acquisition as a Precursor to Successful Systems Integration
13.I.9 The Need for Metrics
13.I.10 Epilogue
Part II: High Performance Computing Technology (HPC) with Complex Systems of Systems in Computational Science and Engineering
13.II.1 Introduction
13.II.2 The Interplay Between Complex Systems and HPC Software Development Environment Technology
13.II.3 Risks Associated with the Intra‐ and Interdependencies Between HPC Technology and Complex Systems of Systems
13.II.4 Systems Integration in High Performance Computing (HPC)
13.II.5 The Role(s) of Systems Engineers, Software Engineers, and Scientists in HPC Complex Systems of Systems Technology
13.II.6 The Role of Models in HPC Complex Systems of Systems
13.II.7 Conclusions
Part III: Assessment and Management of Software Technical Risk
13.III.1 Introduction
13.III.2 A Conceptual Framework
13.III.3 Assessing Software Technical Risk
13.III.4 Software Technical and Nontechnical Risks
References
14 Infrastructure Preparedness for Communities as Complex Systems of Systems
Part I: Infrastructure Preparedness: Primer
14.I.1 Introduction
14.I.2 Developing a Preparedness Roadmap Using the Adaptive Multiplayer Hierarchical Holographic Model (AMP‐HHM)
14.I.3 On the Relationship between Preparedness, Resilience, and Risk to Complex Systems of Systems
14.I.4 Impact Analysis and the Efficacy of Risk Management Plans for Complex SoS
14.I.5 Epilogue
Part II: Balancing Hurricane Protection and Resilience to Complex Systems of Systems
Overview
14.II.1 Calculating Forecast Transition Probabilities
14.II.2 Model Integration for Calculating Resilience Measures
14.II.3 Epilogue
Part III: Insights into Decentralizing Risk Management for Strategic Preparedness Through Decomposition and the Inoperability Input–Output Model
14.III.1 Introduction
14.III.2 Multiobjective Strategic Preparedness
14.III.3 Application of Decomposition for Strategic Preparedness
14.III.4 Conclusions
References
15 Modeling Safety of Transportation Complex Systems of Systems via Fault Trees
Part I: Modeling, Understanding, and Managing the Risk to Transportation Infrastructure as a Complex System of Systems
15.I.1 Introduction
15.I.2 Modeling and Managing Transportation Infrastructure Complex Systems of Systems
15.I.3 Maintenance of the Bridge Complex Systems of Systems
15.I.4 Modeling the Transportation Complex Systems of Systems
15.I.5 Summary
Part II: Modeling Transportation Complex Systems of Systems via Fault Trees
Overview
15.II.1 Introduction
15.II.2 A Functional Decomposition of the Highway Transportation Complex Systems of Systems
15.II.3 Accident Causation Fault Tree
15.II.4 Parameterization of the Model
15.II.5 Unconditional Probability Assessment
15.II.6 An Applied Quantification of the Fault Tree
15.II.7 Conclusions
References
Appendix
A.1 Introduction to Modeling and Optimization
A.2 Fault Trees
A.3 The Partitioned Multiobjective Risk Method
References
Author Index
Subject Index
End User License Agreement
Chapter 05
Table 5.1 Pareto‐optimal solutions.
Table 5.2 Noninferior solutions and tradeoff values for Example Problem 5.2.
Table 5.3 Noninferior solutions and tradeoff values for Example Problem 5.2.
Table 5.4 Shared states among the major NextGen objectives.
Chapter 06
Table 6.1 Cyber attack on current SCADA of city XYZ.
Table 6.2 Cyber attack on current SCADA.
Table 6.3 Simulation results for the posterior distributions of city decomposition.
Table 6.4 Complete SCADA data of eight cities with three attacker types.
Table 6.5 Simulation results for the posterior distributions of attacker type decomposition.
Table 6.6 Expected value and conditional expected value of the coordinated distribution of city XYZ.
Table 6.7 Cost of SCADA system risk management alternatives.
Table 6.8 Costs of risk management alternatives determined by unconditional and conditional expectations.
Table 6.9 Evidence ratio and likelihood of attack.
Table 6.10 Subscenarios for the food poisoning example.
Table 6.11 Parameters of scenario A.
Table 6.12 Abridged description of return (scenario A).
Table 6.13 Abridged description of return (scenario B).
Table 6.14 Optimal allocation results for scenario A.
Table 6.15 Optimal allocation results for scenario B.
Table 6.16 Optimal allocations for scenario A resulting from reordering of stages: sa = s4, sb = s3, sc = s2, and sd = s1.
Table 6.17 Pareto‐optimal frontier for integrated scenarios A and B.
Chapter 07
Table 7.II.1 Example precursors resulting from HHM for a bridge system.
Table 7.III.1 Maintenance decisions are made every 2 years, following a bridge inspection.
Chapter 08
Table 8.1 Sample make matrix for 1992 US economy.
Table 8.2 Sample use matrix for 1992 US economy.
Table 8.3 Multiregional origin–destination table for commodity i.
Table 8.4 Interdependency matrix (A).
Table 8.5 Inoperabilities resulting from 20% degraded functionality of the electric infrastructure.
Chapter 09
Table 9.I.1 Shared states (system view)–dynamic RNP configuration.
Table 9.I.2 Shared decisions–dynamic RNP configuration–approach.
Chapter 10
Table 10.I.1 Shared states (system view) – required navigation performance (RNP) configuration.
Table 10.I.2 Shared decision–dynamic RNP configuration–approach.
Table 10.I.3 Shared decision makers–dynamic RNP configuration–approach phase.
Table 10.I.4 Shared resources–dynamic RNP configuration–approach phase.
Table 10.II.1 Compilation of shared states.
Table 10.IV.1 Net profit margins, IT expenditure as a percent of revenues, and security expenditures as a percent of IT for five economic sectors in the United States.
Chapter 11
Table 11.1 Summary of results for all five stages.
Chapter 12
Table 12.I.1 Pearl Harbor planning deficiencies.
Chapter 14
Table 14.II.1 Transition probabilities for 72‐h forecast of a 200‐year hurricane.
Table 14.II.2 Transition probabilities for various 24‐h forecasts.
Chapter 15
Table 15.II.1 Standard vehicle maneuver events.
Table 15.II.2 Roadway parameters.
Table 15.II.3 Weather and lighting parameters.
Table 15.II.4 Driver condition parameters.
Table 15.II.5 Basic events and relevant Virginia database fields.
Table 15.II.6 Roadway parameters and relevant Virginia database fields.
Table 15.II.7 Basic event probability quantifications.
Table 15.II.8 Intermediate and top‐level event probability quantifications.
Appendix
Table A.1 Laws of the algebra of sets.
Chapter 02
Figure 2.1 Two‐level structure.
Figure 2.2 Subsystem i.
Figure 2.3 Types of subsystems.
Figure 2.4 Feasible decomposition.
Figure 2.5 Nonfeasible decomposition.
Chapter 03
Figure 3.1 Matrix organization of a production system.
Figure 3.2 Product–plant decomposition.
Figure 3.3 Plant–product decomposition.
Chapter 04
Figure 4.1 Political–geographic decomposition of the Maumee River Basin.
Figure 4.2 Hydrological decomposition of the Maumee River Basin.
Figure 4.3 Extrinsic input–output submodel coordination and integration.
Figure 4.4 Intrinsic submodel, coordination, and integration via system state variables.
Figure 4.5 PSM‐based metasystem intrinsic coordination via the shared and nonshared state variables of the system.
Figure 4.6 (a) Structure of HBM. (b) Structure of CHBM.
Chapter 05
Figure 5.1 Relationships between proper noninferiority and Kuhn–Tucker multipliers.
Figure 5.2 Graphical illustration of relationships between positivity of λ’s and proper noninferiority.
Figure 5.3 Flood damage and hydroelectric power loss in the decision space.
Figure 5.4 Flood damage versus hydroelectric power loss in the functional space.
Figure 5.5 Noninferior solution in the functional space.
Figure 5.6 Noninferior solution in the decision space.
Figure 5.7 Tradeoff function λ12(f2) versus f2(x).
Figure 5.8 Noninferior solution in the decision space.
Figure 5.9 Airborne reroute and airborne trajectory options capabilities.
Figure 5.10 A dynamic Pareto‐optimal frontier for two objective functions: access versus capacity. (a) t = k and (b) t = k + 1.
Figure 5.11 A dynamic Pareto‐optimal frontier for three objective functions: access versus capacity and access versus efficiency. (a) t = k and (b) t = k + 1.
Figure 5.12 The envelope of the combined Pareto‐optimal frontier of policies A, B, C, and D for the (k + 1)st period.
Figure 5.13 State‐space depiction of the National Airspace System.
Chapter 06
Figure 6.1 Posteriors with informative priors. The solid lines are the posteriors with the prior provided by analyst 1, and the dotted lines are the posteriors with the prior provided by analyst 2.
Figure 6.2 Posteriors with noninformative priors.
Figure 6.3 Hierarchical Bayesian model.
Figure 6.4 City decomposition of the SCADA cyber attack database.
Figure 6.5 Posterior distribution of θi, city decomposition.
Figure 6.6 Predictive distribution of “time to recovery” after a cyber attack for each city.
Figure 6.7 Decomposition from two perspectives.
Figure 6.8 Posterior distribution, “attacker type” decomposition.
Figure 6.9 Predictive distribution of “time to recovery” after a cyber attack for each attacker type.
Figure 6.10 Coordinated distribution of the two decomposition perspectives.
Figure 6.11 Plot of tradeoff between time to recovery and cost.
Figure 6.12 Embedding Bayes’ theorem in Bellman’s principle of optimality.
Figure 6.13 Pareto‐optimal frontier in objective functional space for integrated scenarios A and B.
Chapter 07
Figure 7.I.1 Two subsystems sharing one state variable.
Figure 7.I.2 Decomposition of systems sharing one state variable.
Figure 7.I.3 Theoretical trajectory of two stationary points as a function of decreasing θ.
Figure 7.I.4 The theoretical trajectories of system states under different levels of perturbation, with black dots indicating system states immediately after different levels’ perturbation. Solid lines show system trajectories returning to stationary point (indicated by the hollow circle at the right), and dashed lines show system trajectory leaving stationary point.
Figure 7.II.1 An overview of a bridge SoS.
Figure 7.II.2 A bridge SoS with interdependent maintenance and traffic engineering subsystems.
Figure 7.II.3 Example HHM for the bridge SoS.
Figure 7.II.4 Comparison of simulation results of the estimated system failure probability with 90% confidence interval between the baseline and precursor inspection error scenarios.
Figure 7.II.5 Evaluation of failure probabilities of each failure mode with multiple detected precursors.
Figure 7.III.1 System dynamics (SD) diagram for the engineering perspective enables us to envision the dynamic relationships among the different factors under consideration.
Figure 7.III.2 SD diagram for the social perspective.
Figure 7.III.3 SD diagram for the economic perspective.
Figure 7.III.4 SD diagram for integrated considerations. The shared state variable connecting the three modeling perspectives (bridge traffic capacity) is circled.
Figure 7.III.5 End‐of‐planning‐horizon superstructure condition rating versus NPV for five maintenance policies (discounted at 10%). Conditional expected superstructure condition rating is calculated as mean + 1.525σ.
Figure 7.III.6 Consistency of superstructure condition rating versus NPV for the five maintenance policies. Consistency is calculated as the ratio between the maximum and minimum superstructure condition rating achievable under a certain policy over the entire planning time frame. We prefer values closer to 1.
Figure 7.III.7 Aggregate values of the loss of bridge traffic capacity at period k = T. Bridge traffic capacity is compared at each k to bridge demand, and additional capacity is recorded. This graph shows the sum of the additional bridge traffic capacity. We see that all five maintenance policies result in a lack of traffic capacity over the entire planning time frame.
Chapter 08
Figure 8.1 Three temporal regimes of recovery that are considered in IIM analysis of impacts resulting from EFCs.
Figure 8.2 Summary of economic input–output accounts.
Figure 8.3 Economic input–output accounts reconfigured for workforce analysis.
Figure 8.4 Sample interpretation of RIMS II multipliers.
Figure 8.5 Individual recovery trajectory of power sector.
Figure 8.6 Dynamic inoperability and equivalent static inoperability.
Figure 8.7 Framework for the infrastructure inoperability I–O model.
Figure 8.8 Communication, electricity, water, and supply chain fault tree.
Chapter 09
Figure 9.II.1 Dynamic Roadmap for risk modeling, assessment, management, and communication.
Figure 9.II.2 Impact of policies at time t = k on future options at time k + 1.
Figure 9.II.3 (a) and (b) The envelope of the combined Pareto‐optimal frontier of policies A, B, C, and D for the (k + 1)st period.
Chapter 10
Figure 10.I.1 Data link fault tree: expanded fault tree with failure modes caused by data link failure.
Figure 10.I.2 Positioning system fault tree: expanded fault tree with failure modes caused by data link failure.
Figure 10.I.3 Simple dynamic RNP fault tree.
Figure 10.I.4 The FMC fault tree.
Figure 10.I.5 Data link failure fault tree.
Figure 10.I.6 Positioning system fault tree.
Figure 10.II.1 Interdependencies and interconnectedness among GPS timing, electricity, communications, and SCADA/PMUs.
Figure 10.II.2 Fault tree of smart electrical power grid.
Figure 10.II.3 State variable model for GPS timing‐dependent communications CI used by the electrical sector.
Figure 10.II.4 State variable model for the electrical sector.
Figure 10.II.5 State variable model for SCADA/PMU systems.
Figure 10.III.1 Representative regional water distribution.
Figure 10.III.2 Representative regional electric power distribution.
Figure 10.III.3 Representative regional communications.
Figure 10.III.4 Power–water–communications fault tree.
Figure 10.III.5 Prime water supply fault tree.
Figure 10.III.6 Minimal cut set of power–water–communications fault tree.
Figure 10.IV.1 Essential components and functionalities of a simplified IaaS public cloud Complex SoS.
Figure 10.IV.2 Identified subsystems and components of a simplified IaaS public cloud SoS.
Figure 10.IV.3 Fault tree top events and three potential failure modes.
Figure 10.IV.4 Expanded fault tree with failure modes caused by real‐time intrusion.
Figure 10.IV.5 Expanded fault tree with failure modes caused by Trojan Horse.
Figure 10.IV.6 Expanded fault tree with failure modes caused by residual traces.
Figure 10.IV.7 IT expenditures as % of revenues for US industries.
Figure 10.IV.8 Illustration of a Pareto‐optimal curve for different companies in the cloud.
Chapter 11
Figure 11.1 The supply chain process.
Figure 11.2 A sample of sources of risk to the global supply chain complex systems of systems.
Chapter 12
Figure 12.I.1 Common information sharing failures, grouped in four categories.
Figure 12.I.2 Political–geographic decomposition of the Maumee River Basin.
Figure 12.I.3 Hydrological decomposition of the Maumee River Basin.
Chapter 13
Figure 13.I.1 The cost of no risk mitigation.
Figure 13.I.2 Hierarchical Holographic Modeling framework for the identification of sources of risk in systems integration.
Figure 13.I.3 Risk assessment quality‐based HHM structure.
Figure 13.III.1 A roadmap of this chapter’s conceptual framework.
Chapter 14
Figure 14.I.1 An HHM for DHS preparedness – perspective A.
Figure 14.I.2 HHM for DHS preparedness – perspective A (Part (i)).
Figure 14.I.3 HHM for DHS preparedness – perspective A (Part (ii)).
Figure 14.I.4 HHM for government preparedness – perspective B.
Figure 14.I.5 Risk management of interdependencies: considering resilience measures and preventive measures.
Figure 14.I.6 An integrated approach to risk management.
Figure 14.II.1 Transition probabilities for 24‐h forecast of a 200‐year storm.
Figure 14.II.2 Result of MODT analysis with Pareto‐optimal frontier.
Figure 14.II.3 Pareto‐optimal frontier shifts as a result of analysis‐responsive preparedness activities, such as protective infrastructure hardening.
Figure 14.II.4 Example charts representing model results with resilience measures graphed against responsive and protective action costs.
Figure 14.II.5 Decision strategy given tradeoffs between protection and responsive costs.
Figure 14.III.1 Diagram of product flows in the two‐sector IIM.
Figure 14.III.2 Diagram of decomposition points in the two‐sector IIM.
Figure 14.III.3 Decomposed two‐sector IIM coordinated by shadow prices (λ and μ).
Chapter 15
Figure 15.I.1 Modeling the transportation Complex SoS.
Figure 15.II.1 Methodological framework.
Figure 15.II.2 Top‐level fault tree.
Figure 15.II.3 Vehicle leaves roadway subtree.
Figure 15.II.4 Vehicle‐directed off‐road subtree.
Figure 15.II.5 Driver action error subtree.
Figure 15.II.6 Poor understanding of situation subtree.
Figure 15.II.7 Poor understanding of environment subtree.
Figure 15.II.8 Correct natural information not provided subtree.
Figure 15.II.9 Correct visual information not received subtree.
Figure 15.II.10 Correct artificial information not provided subtree.
Figure 15.II.11 Vehicle leaves roadway truncated subtree.
Figure 15.II.12 Unconditional probability of poor driver steering decision.
Figure 15.II.13 Overall probability of accident causation.
Figure 15.II.14 Accident probability profile.
Appendix
Figure A.1 Basic components of a fault tree.
Figure A.2 Components in series.
Figure A.3 (a) OR gate. (b) OR gate for the water pumping system (components in series).
Figure A.4 Water pumping system.
Figure A.5 Schematic diagram for the two pumps in parallel.
Figure A.6 (a) AND Gate. (b) AND Gate for water pumping system (components in parallel).
Figure A.7 Venn diagram representation.
Figure A.8 A five‐component fault tree.
Figure A.9 Minimal cut sets.
Figure A.10 Example fault tree.
Figure A.11 Basic components of a fault tree.
Figure A.12 PDF of failure rate distributions for four designs.
Yacov Y. Haimes
This edition first published 2019. © 2019 John Wiley & Sons, Inc.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
The right of Yacov Y. Haimes to be identified as the author of this work has been asserted in accordance with law.
Registered Office: John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
Editorial Office: 111 River Street, Hoboken, NJ 07030, USA
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.
Limit of Liability/Disclaimer of Warranty: The publisher and the authors make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of fitness for a particular purpose. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for every situation. In view of on‐going research, equipment modifications, changes in governmental regulations, and the constant flow of information relating to the use of experimental reagents, equipment, and devices, the reader is urged to review and evaluate the information provided in the package insert or instructions for each chemical, piece of equipment, reagent, or device for, among other things, any changes in the instructions or indication of usage and for added warnings and precautions. The fact that an organization or website is referred to in this work as a citation and/or potential source of further information does not mean that the author or the publisher endorses the information the organization or website may provide or recommendations it may make. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. No warranty may be created or extended by any promotional statements for this work. Neither the publisher nor the author shall be liable for any damages arising herefrom.
Library of Congress Cataloging‐in‐Publication Data
Names: Haimes, Yacov Y., author.
Title: Modeling and managing interdependent complex systems of systems / by Yacov Y. Haimes.
Description: Hoboken, NJ : John Wiley & Sons, 2018. | Includes bibliographical references and index.
Identifiers: LCCN 2018000550 (print) | LCCN 2018009974 (ebook) | ISBN 9781119173700 (pdf) | ISBN 9781119173694 (epub) | ISBN 9781119173656 (cloth)
Subjects: LCSH: Systems engineering. | System analysis.
Classification: LCC TA168 (ebook) | LCC TA168 .H28 2018 (print) | DDC 003–dc23
LC record available at https://lccn.loc.gov/2018000550
Cover Design: Wiley. Cover Image: © Digital_Art/Shutterstock
The growing interest by the systems modeling community in the concept of complexity, and in the literature on it, deserves a fresh reflection on its essence and on its evolving definitions and characterizations. For systems modelers, the starting point is what constitutes complexity and how to understand, model, and manage it. The English language fails to provide a succinct definition of the term complexity, whether in a short sentence or a long one. This is because each of the two words in the title of this book – "modeling" and "managing" – carries multiple connotations, interpretations, and associations of the term complexity, depending on who uses the terms and the specific context in which they are used.
We define and model complexity in this book via the interdependencies and interconnectedness (I‐I) characterizing complex systems of systems (SoS) (Complex SoS). We further model and quantify the I‐I by building on the shared/common states and other essential entities (shared decisions, resources, functions, policies, decision makers, stakeholders, and organizational setups) within and among the subsystems that, in their totality, constitute Complex SoS. Indeed, the above, along with hierarchical decomposition and higher‐level coordination, encompass the essence of the modeling, theory, methodology, and practice espoused in this book. We build on the fact that all outputs from a system are functions of the states of that system and thus also of the decisions and all other inputs to the system. This fact is of particular significance to modeling Complex SoS. For example, Chen (2012) offers the following succinct definition of state variable: "The state x(t₀) of a system at time t₀ is the information at t₀ that, together with the input u(t), for t ≥ t₀, determines uniquely the output y(t) for all t ≥ t₀."
Indeed, the states of a system commonly form a multidimensional vector that characterizes the system as a whole and plays a major role in estimating its future behavior for any given input. Thus, (i) the behavior of the states of the system as a function of time enables modelers to determine, under certain conditions, the system's future behavior for any given input or initiating event; and (ii) the shared states and other essential entities within and among the subsystems and systems constitute the essence of the multifarious attributes of the I‐I characterizing Complex SoS.
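Chen's definition can be illustrated with a minimal discrete‐time linear state‐space sketch (the matrices A, B, C, D below are illustrative numbers, not taken from this book): once the state at an initial time and the subsequent inputs are fixed, every future output is uniquely determined.

```python
import numpy as np

# Illustrative discrete-time linear state-space model:
#   x[k+1] = A x[k] + B u[k]
#   y[k]   = C x[k] + D u[k]
# The state x[k], together with the inputs u[k], u[k+1], ...,
# uniquely determines all future outputs y[k].
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])   # state transition
B = np.array([[0.0],
              [1.0]])        # input coupling
C = np.array([[1.0, 0.0]])   # output map
D = np.array([[0.0]])        # direct feedthrough

def simulate(x0, inputs):
    """Propagate the state and collect outputs y[k] = C x[k] + D u[k]."""
    x = np.asarray(x0, dtype=float).reshape(-1, 1)
    outputs = []
    for u in inputs:
        u = np.array([[float(u)]])
        outputs.append(float(C @ x + D @ u))
        x = A @ x + B @ u        # advance the state
    return outputs

# Two runs from the same initial state with the same input sequence
# must yield identical output sequences: state + input -> unique output.
u_seq = [1.0, 0.5, 0.0, 0.0]
assert simulate([1.0, 0.0], u_seq) == simulate([1.0, 0.0], u_seq)
```

The same determinism is what breaks down in practice for Complex SoS: when states are shared among subsystems, no single subsystem model holds all the information needed to predict its own outputs.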
Thus, in modeling Complex SoS, we exploit the I‐I characterizing Complex SoS that are manifested via shared states and other essential entities in multiple ways. The following sample of modeling methodologies beyond Chapter 1 includes (i) decomposition and multilevel‐hierarchical coordination (Chapters 2 and 4) with a primer on modeling risk and uncertainty in Part II of Chapter 2; (ii) hierarchical holographic modeling (HHM) (Chapter 3); (iii) multiple conflicting, competing, and noncommensurate goals and objectives and the associated tradeoffs (Chapter 5); (iv) hierarchical coordinated Bayesian modeling of Complex SoS (Chapter 6); (v) hierarchical‐multiobjective modeling and decision making of Complex SoS (Chapter 7); (vi) modeling economic interdependencies among Complex SoS (Chapter 8); (vii) guiding principles for modeling and managing Complex SoS (Chapter 9); (viii) modeling cyber–physical Complex SoS – four case studies (Chapter 10); (ix) global supply chain as Complex SoS (Chapter 11); (x) understanding and managing the organizational dimension of Complex SoS (Chapter 12); (xi) software engineering – the driver of cyber–physical Complex SoS (Chapter 13); (xii) infrastructure preparedness for communities as Complex SoS (Chapter 14); and (xiii) modeling safety of highway Complex SoS via fault trees (Chapter 15).
Throughout this book, we introduce the reader, via examples and case studies, to decomposition, hierarchical modeling, multilevel decision making and optimization, and multiobjective tradeoff analyses. Decomposition is employed to decouple the I‐I characterizing Complex SoS: decisions at the lower, subsystem levels of the hierarchy are made under the working pretext that the subsystems are "independent." The resulting discrepancies, conflicts, fundamental differences, and associated tradeoffs are then harmonized at the highest levels of the model's hierarchical decision‐making process.
Starting in the 1960s, many scholars aimed at identifying the fundamental commonalities that characterize modeling and managing Complex SoS. Most of the theory and methodology that were developed employed decomposition using pseudo‐variables at the lower levels of the hierarchical models and were ultimately harmonized at a higher level of the hierarchy. Over the years, we continued to study and improve our modeling perspectives supported by new tools and methodologies that led to a better understanding and more useful modeling of the I‐I that constitute Complex SoS. In the past, modeling the I‐I was directed at the coupled decisions and decision makers that characterized Complex SoS. This was mostly achieved by the deployment of pseudo‐variables, which enabled the reliance on decomposition at lower levels of the hierarchy, and a higher‐level hierarchical coordination of tightly interdependent and interconnected systems and subsystems.
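The higher‐level coordination described above can be sketched with a toy example (my own illustration, not taken from the book, and with made‐up quadratic costs and resource budget): two subsystems sharing one resource each solve a local problem parameterized by a shadow price, and a coordinator adjusts the price until the shared constraint is satisfied.

```python
# Illustrative sketch of lower-level decomposition with higher-level
# coordination via a shadow price (Lagrange multiplier). All numbers
# (quadratic costs, budget R) are hypothetical.

def subsystem_1(price):
    # argmin over x of (x - 3)^2 + price * x  ->  x = 3 - price / 2
    return 3.0 - price / 2.0

def subsystem_2(price):
    # argmin over x of (x - 2)^2 + price * x  ->  x = 2 - price / 2
    return 2.0 - price / 2.0

R = 4.0          # shared resource budget: x1 + x2 must equal R
price = 0.0      # shadow price announced by the higher-level coordinator
for _ in range(100):
    x1, x2 = subsystem_1(price), subsystem_2(price)
    # Coordinator raises the price when total demand exceeds the budget,
    # lowers it when demand falls short.
    price += 0.5 * (x1 + x2 - R)

x1, x2 = subsystem_1(price), subsystem_2(price)
# At equilibrium the price settles at 1.0, with x1 = 2.5 and x2 = 1.5,
# so the subsystems' "independent" decisions jointly satisfy x1 + x2 = R.
```

Each subsystem optimizes as if independent; only the scalar price carries the coupling, which is the essence of coordinating decomposed hierarchical models.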
Previous methods developed for modeling Complex SoS were aimed at advancing theory and methodology for uncoupling the I‐I that characterize them. In this book, we will also study and identify interdependencies and interconnections by seeking a better comprehension of their essence and their dominant contributions to the complexity of SoS. We address this challenge by identifying the I‐I of Complex SoS manifested via shared states and other essential entities. We also embrace the fact that all outputs from a system are functions of the states of that system and the latter are functions of all decisions and all inputs to the system. This notion is also of particular significance and central to modeling Complex SoS. For example, to determine the reliability and functionality of a car, one must know the states of the fuel, oil, tire pressure, and other mechanical and electrical systems. All systems are characterized at any moment by their respective states and the conditions thereof, and these conditions are subject to continuous variation and fluctuation. Similarly, the states of health of a human are multifaceted, including blood composition and pressure, among myriad others, and the I‐I that exist among the states of biological systems.
The time frame has always been recognized as a major driver of what we term complexity. This is because all systems continue to evolve, emerge, and thus change, while the capability of our modeling tools to keep pace with these changes continues to lag behind. Our inability to model these dynamic changes remains an impediment that impairs our modeling and managing of the I‐I characterizing Complex SoS. We embrace the fact that complexities cannot, by their essence and definition, be compounded, packaged, understood, or modeled via one "straitjacket" modeling schema. Rather, we have to keep building on what we have learned from past contributions by other scholars, researchers, and practitioners, and augment this past knowledge into our current thinking, thereby creating new and improved theories and methodologies. Furthermore, seeking to discover what makes the I‐I of Complex SoS so difficult to model will ultimately help us better manage them. This is not a fatalistic view of modeling complexity, but rather a sober understanding of the reality characterizing Complex SoS.
For decades engineers and scientists have explored the modeling power of fault trees in their quest to study and discover connections between two or among several systems that may lead to catastrophic failure of safety‐critical systems. The fundamental difference characterizing the previous use of fault trees and our present reinvention stems from the basic characteristics of the two approaches. In this book we investigate and identify the genesis of the I‐I by exploring the shared/common states and other essential entities within the systems and subsystems that comprise Complex SoS. By doing so, we also discover and quantify the genesis of potential failure of the entire Complex SoS, whether the interdependencies and interconnections are manifested by connections in series and/or in parallel. In this book we also benefit from decades of experience that engineers and scientists have gained from the intrinsic power of fault trees. Furthermore, to model and improve our understanding of the I‐I that characterize Complex SoS, we have reinvented the use of fault trees via an innovative interpretation of the contributions that they offer systems modelers. We further exploit the I‐I characterizing Complex SoS by tracing (via fault trees) prospective and inevitable failures due to their inherent specific connections via shared states and/or other shared essential entities. This process enables us to determine early in the modeling cycle “what not to do” during planning, design, and future decision making. By investigating the essence of the I‐I characterizing Complex SoS, we can discover future failures that could be avoided. In the parlance of fault‐tree analysis, the shared states and other essential entities are translated into systems connected in series or in parallel, rather than being seen as completely independent.
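The series/parallel distinction at the heart of fault‐tree analysis can be made concrete with a small sketch (my own illustration; the failure probabilities are made‐up numbers): an OR gate models components in series, where any single failure fails the system, while an AND gate models components in parallel, where all must fail.

```python
# Illustrative fault-tree gate arithmetic for independent basic events.
# Probabilities below are hypothetical.

def or_gate(probs):
    """OR gate (components in series): the top event occurs if ANY
    basic event occurs. P = 1 - prod(1 - p_i)."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(probs):
    """AND gate (components in parallel): the top event occurs only if
    ALL basic events occur. P = prod(p_i)."""
    p_all = 1.0
    for p in probs:
        p_all *= p
    return p_all

# Two pumps, each failing with probability 0.1:
series_failure = or_gate([0.1, 0.1])     # series connection is fragile
parallel_failure = and_gate([0.1, 0.1])  # redundancy sharply lowers risk
```

The AND‐gate formula assumes the two failures are independent. A shared state or other shared essential entity violates that assumption, which is precisely why nominally parallel (redundant) subsystems in a Complex SoS can fail together as if they were connected in series.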
There exists an insightful correlation and lesson to be learned from the spread of disease in the human body due to the I‐I that are enabled by the continuous flow of blood nourishing every cell (subsystem) of every organ (system) and ultimately of the entire body as Complex SoS. Similarly, all cyber–physical infrastructures are, in their essence, Complex SoS, and their modeling, understanding, and management can be characterized by using their shared states and other essential entities (e.g. communication channels, decisions, decision makers, resources, and organizational setups). Our ability to observe, study, and learn from the behavior of the animal kingdom as Complex SoS and develop knowledge based on lessons learned have been central to the insight from which we benefit today.
Although the above observations, as well as the theoretical discoveries, seem obvious to us now, they do shed light on, and provide insightful understanding of, the genesis of the I‐I characterizing both living entities and cyber–physical Complex SoS. This finding constitutes another building block in the repertoire of the theory, methodologies, and tools that enable modelers of Complex SoS to gain invaluable insight into deciphering the genesis of the I‐I that characterize Complex SoS.
Consider the nearly two‐decade‐old perspectives on complexity offered by scholars in the 1999 Special Issue of the journal Science:
Goldenfeld and Kadanoff (Science, 1999, p. 87) state:
“To us, complexity means that we have structure and variations. Thus, a living organism is complex because it has many different working parts, each formed by variations in the working out of the same genetic coding…a complex world is interesting because it is highly structured. A chaotic world is interesting because we do not know what is coming next.”
Whitesides and Ismagilov (Science, 1999, p. 89) state:
“Complexity is a word rich with ambiguity and highly dependent on context (citing Mainzer, 1977). Chemistry has its own understanding of this word. In one characterization, a complex system is one whose evolution is very sensitive to initial conditions or to small perturbations, one in which the number of independent interacting components is large, or one in which there are multiple pathways by which the system can evolve.”
Weng et al. (Science, 1999, p. 92) state:
“Biological signaling pathways interact with one another to form complex networks. Complexity arises from the large number of components, many with isoforms that have partially overlapping functions; from the connections among components; and from the spatial relationship between components. The origins of the complex behavior of signaling networks and analytical approaches to deal with the emergent complexity are discussed here.”
Koch and Laurent (Science, 1999, p. 96) state:
“Advances in the neurosciences have revealed the staggering complexity of even ‘simple’ nervous systems. This is reflected in their function, their evolutionary history, their structure, and the coding schemes they use to represent information. These four viewpoints need all play a role in any future science of ‘brain complexity.’”
Parrish and Edelstein‐Keshet (Science, 1999, p. 99) state:
“One of the most striking patterns in biology is the formation of animal aggregations. Classically, aggregation has been viewed as an evolutionarily advantageous state, in which members derive the benefits of protection, mate choice, and centralized information, balanced by the costs of limiting resources. Consisting of individual members, aggregations nevertheless function as an integrated whole, displaying a complex set of behaviors not possible at the level of the individual organism. Complexity theory indicates that large populations of [biological] units can self‐organize into aggregations that generate pattern, store information, and engage in collective decision making. This begs the question, are all emergent properties of animal aggregations functional or are some simply pattern? Solutions to this dilemma will necessitate a closer marriage of theoretical and modeling studies linked to empirical work addressing the choices, and trajectories, of individuals constrained by membership in the group.”
In September 1999, the author of this book organized a three‐month‐long seminar series and invited nine experts on complexity and complex systems to participate in and contribute to it. The following themes were presented and discussed:
Modeling Risk in Infrastructures of Large‐Scale Complex Systems
Yacov Y. Haimes
Adaptive Complexity Theory and the Engineering and Management of Large Systems
Andrew P. Sage
What is Complexity and What Can Models Tell Us About It?
Mitch Waldrop
Life Beyond Chaos: Non‐linear Dynamics in Ecology
Carl Zimmer
Autonomous Control of Complex Systems
Mohammed Jamshidi
Origins of Complexity in Cell Signaling Networks
Ravi Iyengar
Complexity and Critical Infrastructures
Steven Rinaldi
Complexity in Optimization
Leon Lasdon
Understanding and Managing Complex Systems
Mihajlo Mesarovic
Patterns in Nature: The Epiphenomenology of Aggregation
Julia Parrish
Epilogue
In sum, the multifarious interpretations of complexity by the many scholars who have studied it during the last several decades attest to its intricacy and to the challenges it poses to modeling. This book is dedicated to understanding complexity through the discovery of its specific attributes, thereby enhancing our ability to manage complexity effectively with new analytical models and improving our understanding and management of Complex SoS. For pedagogical purposes, concepts, theories, and methodologies are introduced throughout the book via case studies, including transportation, cyber‐physical infrastructure, bridges, software engineering, electricity, communications, water resources, and others.
Chen, C. (2012). Linear System Theory and Design, 4e. New York: Oxford University Press.
Goldenfeld, N. and Kadanoff, L. (1999). Simple lessons from complexity. Science 284 (5411): 87–89.
Koch, C. and Laurent, G. (1999). Complexity and the nervous system. Science 284 (5411): 96–98.
Parrish, J. and Edelstein‐Keshet, L. (1999). Complexity, pattern, and evolutionary trade‐offs in animal aggregation. Science 284 (5411): 99–101.
Weng, G., Bhalla, U., and Iyengar, R. (1999). Complexity in biological signaling systems. Science 284 (5411): 92–96.
Whitesides, G. and Ismagilov, R. (1999). Complexity in chemistry. Science 284 (5411): 89–92.
Writing this acknowledgment is probably one of the most rewarding moments in the preparation of this book on complexity, because each of the individuals cited here played some significant role during what might be viewed as the "life cycle" of this project. Even with a careful accounting, there will likely be some individuals who have been inadvertently missed. A great sage once said: "From all my teachers I learned and became an educated person, but my students contributed the most to my knowledge and wisdom." This statement epitomizes the gratitude that I owe to more than 120 of my doctoral and master's students, whom I have had the privilege of serving as thesis advisor and from whom I learned the most.
This book on complexity was made possible through the generous support and technical help of many individuals to whom I owe heartfelt gratitude. My long‐term professional collaboration with Duan Li, Kenneth Crowther, Zhenyu Guo, Eva Andrijcic, Joost Santos, Vira Chankong, Zhenyu Yan, Joshua Bogdanor, Bryan Lewis, and numerous other graduate students, and the many papers that we published together over more than four decades, have had a major impact on the scope and contents of this book. I will always cherish their contributions to my professional growth. I also want to acknowledge my current colleagues at the University of Virginia, Jim Lambert and Barry Horowitz, for our daily conversations and association.
The painstaking and judicious technical editorial work of Pat Levine is most appreciated and heartily acknowledged. I would like to thank undergraduate students Madeleine Fleshman, Claire Trevisan, and Tyler Brown, who labored long hours converting and retyping the text and modifying the figures and tables. Material from papers published jointly with several of my colleagues and graduate students has been incorporated into this book. These colleagues are Clyde Chittister, Duan Li, Kenneth Crowther, Barry Horowitz, Jim Lambert, Zhenyu Guo, Joost Santos, Kash Barker, Steve Chase, Andy Anderegg, and Keith Hipel. The seminal works by Professor Hipel on conflict resolution associated with complex systems of systems have made their mark in the systems engineering field as well as in this book.
I am especially grateful to Rosemary Shaw, who, in addition to managing the Center for Risk Management of Engineering Systems, has worked tirelessly by my side with an abundance of grace and enthusiasm to bring this book to publication.
I am most appreciative and grateful to my editor Brett Kurzman and to Victoria Bradshaw at Wiley, USA, for their continued support and encouragement. Special thanks to the Wiley UK and India production team of Kshitija Iyer and Vishnu Priya for their expert production and tireless dedication. I am also thankful to the publishers who granted Wiley and me permission to reproduce material published in their journal articles.
I thank my wife Sonia for her constant encouragement and loving support throughout the demanding time and commitment to bringing this book to fruition.
I dedicate this book to my wife Sonia and to Rosemary, the Center's Manager.
