Engineering Intelligent Systems
Exploring the three key disciplines of intelligent systems

As artificial intelligence (AI) and machine learning technology continue to develop and find new applications, advances in the field have generally focused on isolated software data-analysis systems or on control systems for robots and other devices. By applying model-based systems engineering to AI, however, engineers can design complex systems that rely on AI-based components, resulting in larger, more complex intelligent systems that successfully integrate humans and AI. Engineering Intelligent Systems draws on Dr. Barclay R. Brown's 25 years of experience in software and systems engineering to offer an integrated perspective on the challenges and opportunities of using artificial intelligence to create better technological and business systems. While most recent research on the topic has focused on adapting and improving algorithms and devices, this book puts forth the innovative idea of transforming the systems in our lives, our societies, and our businesses into intelligent systems. At its heart, this book is about how to combine systems engineering and systems thinking with the newest technologies to design increasingly intelligent systems.

Engineering Intelligent Systems readers will also find:

* An introduction to artificial intelligence and machine learning, model-based systems engineering (MBSE), and systems thinking – the key disciplines for making systems smarter
* An example of how to build a deep neural network in a spreadsheet, with no code or specialized mathematics required
* An approach to the visual representation of systems, using techniques from moviemaking, storytelling, visual systems design, and model-based systems engineering
* An analysis of whether computers can think, understand, and become conscious, and the implications for artificial intelligence
* Tools that make collaboration and communication among developers and engineers easier, improve understanding among stakeholders, and speed up the development cycle
* A systems thinking approach to people systems – systems that consist only of people and that form the basis of our organizations, communities, and society

Engineering Intelligent Systems offers an intriguing new approach to making systems more intelligent using artificial intelligence, machine learning, systems thinking, and system modeling, and will therefore be of interest to all engineers and business professionals, particularly systems engineers.
Page count: 873
Year of publication: 2022
Cover
Title Page
Copyright
Acknowledgments
Introduction
Part I: Systems and Artificial Intelligence
1 Artificial Intelligence, Science Fiction, and Fear
1.1 The Danger of AI
1.2 The Human Analogy
1.3 The Systems Analogy
1.4 Killer Robots
1.5 Watching the Watchers
1.6 Cybersecurity in a World of Fallible Humans
1.7 Imagining Failure
1.8 The New Role of Data: The Green School Bus Problem
1.9 Data Requirements
1.10 The Data Lifecycle
1.11 AI Systems and People Systems
1.12 Making an AI as Safe as a Human
References
Notes
2 We Live in a World of Systems
2.1 What Is a System?
2.2 Natural Systems
2.3 Engineered Systems
2.4 Human Activity Systems
2.5 Systems as a Profession
2.6 A Biological Analogy
2.7 Emergent Behavior: What Makes a System, a System
2.8 Hierarchy in Systems
2.9 Systems Engineering
3 The Intelligence in the System: How Artificial Intelligence Really Works
3.1 What Is Artificial Intelligence?
3.2 Training the Deep Neural Network
3.3 Testing the Neural Network
3.4 Annie Learns to Identify Dogs
3.5 How Does a Neural Network Work?
3.6 Features: Latent and Otherwise
3.7 Recommending Movies
3.8 The One‐Page Deep Neural Network
4 Intelligent Systems and the People They Love
4.1 Can Machines Think?
4.2 Human Intelligence vs. Computer Intelligence
4.3 The Chinese Room: Understanding, Intentionality, and Consciousness
4.4 Objections to the Chinese Room Argument
4.5 Agreement on the CRA
4.6 Implementation of the Chinese Room System
4.7 Is There a Chinese‐Understanding Mind in the Room?
4.8 Chinese Room: Simulator or an Artificial Mind?
4.9 The Mind of the Programmer
4.10 Conclusion
References
Note
Part II: Systems Engineering for Intelligent Systems
5 Designing Systems by Drawing Pictures and Telling Stories
5.1 Requirements and Stories
5.2 Stories and Pictures: A Better Way
5.3 How Systems Come to Be
5.4 The Paradox of Cost Avoidance
5.5 Communication and Creativity in Engineering
5.6 Seeing the Real Needs
5.7 Telling Stories
5.8 Bringing a Movie to Life
5.9 Telling System Stories
5.10 The Combination Pitch
5.11 Stories in Time
5.12 Roles and Personas
6 Use Cases: The Superpower of Systems Engineering
6.1 The Main Purpose of Systems Engineering
6.2 Getting the Requirements Right: A Parable
6.3 Building a Home: A Journey of Requirements and Design
6.4 Where Requirements Come From and a Koan
6.5 The Magic of Use Cases
6.6 The Essence of a Use Case
6.7 Use Case vs. Functions: A Parable
6.8 Identifying Actors
6.9 Identifying Use Cases
6.10 Use Case Flows of Events
6.11 Examples of Use Cases
6.12 Use Cases with Human Activity Systems
6.13 Use Cases as a Superpower
References
Note
7 Picturing Systems with Model-Based Systems Engineering
7.1 How Humans Build Things
7.2 C: Context
7.3 U: Usage
7.4 S: States and Modes
7.5 T: Timing
7.6 A: Architecture
7.7 R: Realization
7.8 D: Decomposition
7.9 Conclusion
8 A Time for Timeboxes and the Use of Usage Processes
8.1 Problems in Time Modeling: Concurrency, False Precision, and Uncertainty
8.2 Processes and Use Cases
8.3 Modeling: Two Paradigms
8.4 Process and System Paradigms
8.5 A Closer Examination of Time
8.6 The Need for a New Approach
8.7 The Timebox
8.8 Timeboxes with Timelines
8.9 The Usage Process
8.10 Pilot Project Examples
8.11 Summary: A New Paradigm Modeling Approach
References
Part III: Systems Thinking for Intelligent Systems
9 Solving Hard Problems with Systems Thinking
9.1 Human Activity Systems and Systems Thinking
9.2 The Central Insight of Systems Thinking
9.3 Solving Problems with Systems Thinking
9.4 Identify a Problem
9.5 Find the Real Problem
9.6 Identify the System
9.7 Understanding the System
9.8 System Archetypes
9.9 Intervening in a System
9.10 Testing and Implementing the Intervention Incrementally
9.11 Systems Thinking and the World
10 People Systems: A New Way to Understand the World
10.1 Reviewing Types of Systems
10.2 People Systems
10.3 People Systems and Psychology
10.4 Endowment Effect
10.5 Anchoring
10.6 Functional Architecture of a Person
10.7 Example: The Problem of Pollution
10.8 Speech Acts
10.9 Seeking Quality
10.10 Job Hunting as a People System
10.11 Shared Service Monopolies
References
Index
End User License Agreement
Chapter 1
Figure 1.1 Images used to train the deep neural network.
Figure 1.2 The green school bus.
Chapter 2
Figure 2.1 Systems engineering Vee model.
Chapter 3
Figure 3.1 The one‐page deep neural network with a quadratic function.
Figure 3.2 Drawing of the neurons and layers in the one‐page neural network....
Figure 3.3 A sigmoid activation function.
Figure 3.4 Excel solver settings.
Figure 3.5 Actual vs. predicted values from the one‐page deep neural network...
Figure 3.6 One‐page neural network with alternate function.
Figure 3.7 Actual vs. predicted values for alternate function.
Chapter 7
Figure 7.1 Context diagram for VX military vehicle.
Figure 7.2 Context diagram for home system.
Figure 7.3 Use case diagram for VX.
Figure 7.4 Use case diagram for home.
Figure 7.5 States for the home.
Chapter 8
Figure 8.1 Nested timeboxes illustrating home construction processes.
Figure 8.2 Trapezoidal timebox showing a distribution for starting and endin...
Figure 8.3 Timeboxes placed on a timeline.
Figure 8.4 Concurrent and sequential execution of ordinary process and usage...
Figure 8.5 Swim lanes showing responsibility for ordinary processes and usag...
Figure 8.6 Timebox model of The Hunt for Red October.
Figure 8.7 Usage process model of Main Battle Timeline, (a) Part I and (b) P...
Figure 8.8 FAA example business process before use of new modeling approach....
Figure 8.9 FAA example with addition of usage processes.
Figure 8.10 Harmony aMBSE delivery process.
Figure 8.11 Revised Harmony aMBSE delivery process.
Figure 8.12 Harmony agile MBSE iteration process with notations.
Figure 8.13 Revised Harmony agile MBSE iteration process.
Chapter 9
Figure 9.1 Causal loop diagram of the heroin crime system.
Barclay R. Brown
Raytheon Company
Florida, USA
This edition first published 2023
© 2023 John Wiley & Sons, Inc.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
The right of Barclay R. Brown to be identified as the author of this work has been asserted in accordance with law.
Registered Office
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
Editorial Office
111 River Street, Hoboken, NJ 07030, USA
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.
Limit of Liability/Disclaimer of Warranty
In view of ongoing research, equipment modifications, changes in governmental regulations, and the constant flow of information relating to the use of experimental reagents, equipment, and devices, the reader is urged to review and evaluate the information provided in the package insert or instructions for each chemical, piece of equipment, reagent, or device for, among other things, any changes in the instructions or indication of usage and for added warnings and precautions. While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
Library of Congress Cataloging‐in‐Publication Data applied for:
Print ISBN: 9781119665595 (Hardback)
Cover Design: Wiley
Cover Image: © Billion Photos/Shutterstock
Writing a book is a journey, but one that is never taken alone. I'm grateful to many people who supported and encouraged the creation of this book. My colleagues at INCOSE, the International Council on Systems Engineering, listened patiently and contributed feedback on the ideas as I presented them, in early forms, in years of conference sessions, tutorials, and workshops. My work colleagues at IBM made my 17 years there productive and fascinating, as I was continuously both a student and a teacher of model‐based systems engineering methods. I learned a great deal from Tim Bohn, Dave Brown, Jim Densmore, Ben Amaba, Bruce Douglass, Grady Booch, and from the numerous aerospace and defense companies with whom I consulted over the years. Waldemar Karwowski, my dissertation advisor, encouraged much of the work in Chapter 8. Rick Steiner encouraged my experimental (read crazy, sometimes) ideas on how to expand the capabilities of system modeling in my doctoral research. Tod Newman has been a mentor and an inspiration since I joined Raytheon in 2018. Larry Kennedy of the Quality Management Institute brought me into a world of quality in systems – something I had always hoped was there but had never fully appreciated. My mother, now nearing 100 years of age, and still as bright and alert as ever, taught me to love math, science, and engineering. I remember her teaching me multiplication with a set of 100 1‐in. cube blocks – see how two rows of four make eight? Finally, my partner in love, life, and everything else, Honor Allison Lind, has been with me since the very beginning of this book project, several years ago (too many years ago, my publisher reminds me). She's always my biggest cheerleader and the best partner anyone could have.
Since the early days of interactive computers and complex electronic systems of all kinds, we the human users of these systems have had to adapt to them, formatting and entering information the way they wanted it, the only way they were designed to accept it. We patiently (or not) interacted with systems using arcane user interfaces, navigating voice menus, reading user manuals (or not), and suffering continual frustration and inefficiency when we couldn't figure out how to interact with systems on their terms. Limited computing resources meant that human users had to deal with inefficient user interfaces, strict information input formats, and computer systems that were more like finicky pets than intelligent assistants.
Human activity systems of all kinds, including businesses, organizations, societies, families, and economies, can produce equally frustrating results when we try to interact with them. Consider the common frustrations of dealing with government bureaucracies, utility companies, phone and internet service companies, universities, internal corporate structures, and large organizations of any kind.
At the same time, dramatic and ongoing increases in computing speed, memory, and storage, along with decreasing cost, size, and power requirements have made new technologies like artificial intelligence, machine learning, and natural language understanding available to all systems and software developers. These advanced technologies are being used to create intelligent systems and devices that deliver new and advanced capabilities to their users.
This book is about a new way of thinking about the systems we live with, and how to make them more intelligent, enabling us to relate to them as intelligent partners, collaborators, and supervisors. We have the technology, but what's missing is the engineering and design of new, more intelligent systems. This book is about that – how to combine systems engineering and systems thinking with new technologies like artificial intelligence and machine learning, to design increasingly intelligent systems.
Most of the emphasis on the use of AI capabilities has been at the algorithm and implementation level, not at the systems level. There are numerous books on AI and machine‐learning algorithms, but little about how to engineer complex systems that act in intelligent ways with human beings – what we call intelligent systems. For example, Garry Kasparov was the first world chess champion to be beaten by a computer (IBM's Deep Blue in 1997), which inspired him to create Advanced Chess competitions, where anyone or anything can enter – human players, computers, or human–computer teams. It's person‐plus‐computer that wins these competitions most of the time; the future is one of closer cooperation between people and increasingly intelligent systems. In the world of systems thinking, we will consider the person‐plus‐computer as a system in itself – one that is more intelligent than either the human or the computer on its own.
Nearly everyone alive today grew up with some sort of technology in the home. For me, growing up in the 1970s, it was a rotary (dial) phone and a black‐and‐white (remote‐less) TV. Interaction was simple – one button or dial accomplished one function. The rise of computers, software, graphical user interfaces, and voice control brought new levels of capability, but we still usually press (now click or tap) one button to do one function. We enter information one tidbit at a time – a letter, a word, or a number must be put into the computer in just the right place for the system to carry out our wishes. Rarely does the system anticipate our needs or work to fulfill our needs on its own. It can seem that we are serving the systems, rather than the reverse. But humans designed those systems, and they can be designed better.
Most of the interactions between systems and people are still at a very basic level, where people function as direct or remote controllers, driving vehicles, flying drones, or managing a factory floor. It is the thesis of this book that we must focus on designing increasingly intelligent systems that will work with people to create even more intelligent human‐plus‐machine systems. The efficient partnership of human and machine does not happen accidentally, but by design and through the practice of systems engineering.
This book introduces, explains, and applies ideas and practices from the fields of systems engineering, model‐based systems engineering, systems thinking, artificial intelligence, machine learning, philosophy, behavioral economics, and psychology. It is not a speculative book, full of vague predictions about the AIs of the future – it is a practical and practice‐oriented guide to the engineering of intelligent systems, based on the best practices known in these fields.
The book can be divided into three parts.
Part I, Chapters 1–4, is about systems and artificial intelligence. In Chapter 1, we look at systems that use, or could use artificial intelligence, and examine some of the popular conceptions, myths, and fears about intelligent systems. In Chapter 2, we look at systems, what they are, how they behave, and how almost everything we experience can be seen from a systems viewpoint. In Chapter 3, we examine deep neural networks, the most important current approach to artificially intelligent systems, and attempt to remove all the mystery about how they work by building one in a simple, one‐page spreadsheet, hopefully leaving a lasting intuition for this key technology. In Chapter 4, we look in depth at the question whether computers can be made to think, or understand, in the way we do.
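For readers who prefer code to spreadsheets, here is a minimal sketch of the same Chapter 3 exercise: a tiny network of sigmoid neurons trained to approximate a quadratic function, with plain gradient descent standing in for the spreadsheet's Solver. This is not the book's spreadsheet model itself; the layer size, learning rate, and epoch count are illustrative choices.

```python
import math
import random

# A tiny "one-page" neural network: 1 input -> 4 sigmoid neurons -> 1 linear output,
# trained to approximate y = x^2 (an illustrative stand-in for the spreadsheet example).
random.seed(0)
HIDDEN = 4
LEARNING_RATE = 0.1
EPOCHS = 10000

# Training data: 21 points of the quadratic on [-1, 1].
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]

# Parameters: input->hidden weights/biases (w, b) and hidden->output weights/bias (v, c).
w = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b = [random.uniform(-1, 1) for _ in range(HIDDEN)]
v = [random.uniform(-1, 1) for _ in range(HIDDEN)]
c = random.uniform(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    hidden = [sigmoid(w[j] * x + b[j]) for j in range(HIDDEN)]
    return hidden, sum(v[j] * hidden[j] for j in range(HIDDEN)) + c

for _ in range(EPOCHS):
    for x, y in data:
        hidden, y_hat = forward(x)
        delta = y_hat - y  # gradient of 0.5 * (y_hat - y)^2 with respect to y_hat
        for j in range(HIDDEN):
            grad_hidden = delta * v[j] * hidden[j] * (1 - hidden[j])
            v[j] -= LEARNING_RATE * delta * hidden[j]
            w[j] -= LEARNING_RATE * grad_hidden * x
            b[j] -= LEARNING_RATE * grad_hidden
        c -= LEARNING_RATE * delta

# Compare actual vs. predicted values, as in the book's actual-vs-predicted figures.
for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    _, y_hat = forward(x)
    print(f"x={x:+.1f}  actual={x * x:.3f}  predicted={y_hat:.3f}")
```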
Part II, Chapters 5–8, is about systems engineering, and how that discipline can be applied to the engineering of intelligent systems. Chapter 5 examines how storytelling, both ancient and modern, can be used to conceive, build, and communicate about new kinds of systems. In Chapter 6, we look at how to apply the “superpower” of use‐case modeling to better describe how complex and intelligent systems should work for their users. Chapter 7 builds on use‐case modeling to show how model‐based systems engineering uses simple models and diagrams to describe the high levels of a system design, and guide its development. Chapter 8 introduces two new concepts – timeboxes and usage processes – that bring a new efficiency and flexibility to the modeling of complex and intelligent systems.
Part III, Chapters 9 and 10, shifts the focus to systems thinking, presenting the foundational concepts, tools, and methods used to understand all kinds of systems. Chapter 9 works through a process for solving hard problems using systems thinking and explains the use of causal loop diagrams, feedback loops, and system archetypes. Chapter 10 introduces people systems, a special kind of system containing only people, and shows how to apply systems thinking to understand and improve this important class of systems.
This is a book about engineering, specifically systems engineering, but it's not just for engineers. Nothing in this book requires a specialized engineering background to understand. Engineers will tell you that the real fun in engineering is conceptualizing a new and innovative system and doing the early-stage design where all the creative decisions are made. This book is about that part of engineering – the fun part – and we will draw inspiration and borrow techniques from moviemaking, art, storytelling, science fiction, psychology, behavioral economics, and marketing to bring the fun. We hope you will come to see the world and everything in it, whether physical or not, as systems, and gain new insight into how systems work. We will imagine a world of intelligent systems, and then see how to engineer them.
To keep in touch with our continuing work in intelligent systems, find out more at www.engineeringintelligentsystems.com.
Artificial intelligence technologies such as machine learning and deep neural networks hold great promise for improving, even revolutionizing, many application areas and domains. Curiously, experts in AI and casual observers alike line up on both sides of the question: are the benefits of AI worth the risks? Several books from prominent AI researchers paint dire scenarios of AI systems run amok, escaping the control of their human creators and managers and pursuing their "own" agendas to our detriment. At the same time, AI research races ahead, developing new capabilities that far surpass the performance of past systems, and even of humans performing the same tasks. How can we resist these advancements and the benefits they bring, even though there may be risks?
The way out of the dilemma is the application of systems engineering. Systems engineers have been addressing the issues of dangerous technologies for decades. Nuclear fission, like AI, is an inherently dangerous technology. Systems engineers can't make fission safer, so instead they build systems around the fission reaction, making the entire system as safe as possible. If a mishap occurs, the fault is not with fission itself, but with the design or implementation of the entire system.
This chapter looks at some of the main challenges in the development of intelligent systems – systems that include one or more AI‐based components to produce intelligent behavior – including reliability, safety, dependability, explainability, and susceptibility to interference or hacking. Some recent AI failures will be used as examples to highlight how systems engineering methods and techniques could be used or adapted to solve AI challenges.
Is AI dangerous? It's a difficult question – difficult first to understand, and then difficult to answer. Dangerous compared to what? If someone proposed a technology that would be of tremendous economic benefit to all segments of society worldwide, but would predictably result in the death of over one million people per year, would that seem like a great idea? Automobiles are such a technology. Now, someone else proposes a technology that would dramatically reduce that number of deaths, but would cause a small number of additional deaths that would not have occurred without the new technology. That's AI. Even short of fully self-driving cars, the addition of intelligent sensors, anti-collision systems, and driver assistance systems, when widely deployed, can be expected to save many hundreds or thousands of lives, at the cost of a likely far smaller number of additional lives lost to malfunctioning intelligent safety systems. Life, death, and people's feelings about them, however, are not a matter of simple arithmetic. One hundred lives saved, at the cost of one additional life, is not a bargain most would easily make, so it is natural that one life lost to an errant AI is cause for headline news coverage, even while that same AI may be saving hundreds of other lives.
It is important to ask at this point: what do we mean by AI? Do we mean a sentient, all-knowing, all-powerful, and for some reason usually very evil, general intelligence, with direct control of world-scale weapons, access to all financial systems, and connections to every network in the world, as is seen in the movies? By the end of this chapter, it should be clear that while this description may work well for science fiction novels or screenplays, it is not a good design for a new AI system in the real world. In the real world, AI refers to a wide range of capabilities that are thought, in one way or another, to be intelligent. Except in academic and philosophical disciplines, we are not concerned with the safety of the AI itself, but with the safety of the systems within which it operates – and that's the domain of systems engineering.
Systems engineering, an engineering discipline that exists alongside others such as electrical, mechanical, and software engineering, focuses on the system as a whole: how it should perform (the functional requirements) as well as additional nonfunctional requirements, including safety, security, reliability, and dependability. Evidence of systems engineering and its more wide-ranging cousin, systems thinking, can be seen even in ancient projects like Roman aqueducts and economic and transportation systems, but systems engineering really began as a serious engineering discipline in the 1950s and 1960s. The emergence of complex communication networks, followed by internationally competitive space programs and large defense and weapons systems, put systems engineering squarely on the map of the engineering world. Systems engineering has its own extensive body of knowledge and practices. In what follows, we look at how to apply a few key approaches relevant to the design of intelligent systems.
AI systems are indeed dangerous, but so are many technologies and situations we live with every day. Electricity, water, and air can all be very dangerous depending on their location, speed, and size. Tall trees near homes can be dangerous when storms come through. Fast moving multi‐ton machines containing volumes of explosive liquids are dangerous too (automobiles again). To the systems engineer, dangers and risks are simply part of what must be considered when designing and building systems. As we'll demonstrate in this chapter, the systems engineer has concepts, methods, and tools to deal with the broad category of danger in systems, or as systems engineers like to call it, safety. First, we introduce a pair of simple ideas to help us think about AI systems more clearly – the human analogy, and the systems analogy.
The first technique that can be applied when confronting some of the difficulties of an intelligent system is to compare the situation to one in which a human being is performing the role instead of an intelligent system. We ask, how would we train, manage, monitor, and control a human assigned the same task we are assigning to the intelligent system? After all, a human being is an intelligent system, and certainly far more unpredictable than any AI. Human beings cannot be explicitly programmed, and they maintain a sometimes frustrating ability to forget (or reject) instructions, develop new motivations, and act in unpredictable ways, even ways that run counter to clear goals and incentives. If we can see ways to deal with a human in the situation, perhaps we gain insight into how to design and manage an AI in a system.
To take just one example for now, consider the question of teaching an AI to drive a car safely. Using the human analogy leads us to ask, how do we teach a human being, normally an adolescent, to drive a car safely? In addition to mechanical and safety instruction, we include some safeguards in the form of instilling fear of injury (remember those gruesome driver's education "scare" films?), along with instilling fear of breaking the law and its consequences, plus some appeals to conscience, concern for one's own and others' safety, and other psychological persuasions. As a society, we back up these threats and fears with a system of laws, police, courts, fines, and prisons, which exert influence through incentives and disincentives on the young driver. None of this prevents dangerous driving, but the system helps keep it in check sufficiently to allow 16-year-olds to drive. If it doesn't, we can make adjustments to the system, like raising the driving age or stiffening the penalties.
The human analogy works because human beings are, from a systems engineering perspective, the worst kind of system. They are not governed by wiring or programming, and their behavior patterns, however well-established through experience, can still change at any moment. At the same time, human behavior is not random in the mathematical sense. Humans act according to their own views of what is in their own best interest and the interests of others, however wrong or distorted their choices may appear to others. The worst criminals have reasons for why they did what they did.
The human analogy is useful not only for reasoning about how to keep a system safe, but also for thinking about how the system should perform. If we are building a surveillance camera for home security, we might ask how we would use a human being for that task. If we were to hire a security guard, we would consider what instructions we should give the guard about how to observe, what to watch for, when to alert us, what to record, what to ignore, etc., and reasoning about the right instructions could lead us to a better system design for the intelligent, automated guard system.
When we use the human analogy, we should also consider the type of human being we are using as our exemplar. Are we picturing a middle‐aged adult, a child, a disabled person, a highly educated scientist, or an emotional teenager? Each presents opportunities and challenges for the intelligent system designer. Educational systems, for example, are designed for particular kinds of human beings, and implement differing rules and practices for young children, the mentally ill, teenagers, prisoners, graduate students, rowdy summer camp kids, and experienced professionals. Some situations that work fine for mature adults can't tolerate a boisterous or less‐than‐careful college student. Systems engineers must consider the same kind of variability in the “personality” of an AI component in a system.
The mental technique called the systems analogy involves making a comparison between an AI system and an existing system with similar attributes, often resulting in a broader perspective than considering the AI in isolation. Taking another automotive example, we consider how we might manage and control potentially dangerous machines, containing tanks of explosive liquids and using small explosions for propulsion, moving at speeds from a crawl to over 80 mph, in areas where unprotected people may be walking around. Whether these inherently dangerous machines are controlled by human drivers, computers, or trained monkeys, we need a system to make car travel as safe as possible. Traffic lights, lane markings, speed limits, limited access roads for travel at high speeds, and vehicle safety devices like lighting, seat belts, crumple zones, and airbags are all part of the extensive system that makes the existence and use of automobiles as safe as we can practically make it.
Because human‐driven vehicle traffic has been with us so long, and is so familiar, we might be tempted to think that the system is as good as we can make it – that systems thinking about auto safety has long ago reached its peak. System innovations, however, seem to still be possible. In 2000, the diverging diamond interchange was implemented in the United States for the first time, and increased the safety levels at freeway interchanges by eliminating dangerous wide left turns. The superstreet design concept was introduced in 2015 and is reported to reduce collisions by half while decreasing travel time. So even in completely human systems, what we will later call people systems, innovations through systems thinking are possible. We'll apply the same kind of thinking to intelligent systems.
By considering the entire system within which an AI subsystem operates, and comparing it to similar "nonintelligent" systems, we can avoid the simplistic categorization of new technologies as either safe or not safe. Is nuclear power safe? Of course not. Nuclear reactors are inherently dangerous, but by designing a system around them of protective buildings, control systems, failsafe subsystems, redundant backup systems, and emergency shutdown capabilities, we make the system safe enough to use reliably. In fact, the catastrophic failures of nuclear power plants usually result from a lack of good systems thinking and systems engineering, not directly from the inherent danger of nuclear systems. The disaster at the Fukushima Daiichi plant was mainly due to flooded generators, which had unfortunately been located on a lower level, making them vulnerable to a rare flooding event.
The right system design does not make the dangerous technology safer – it makes the entire system safe enough for productive use. As a civilization, we do not tend to shy away from dangerous technologies. Instead, we embrace them, and engineer systems to make them as safe as possible. Electricity, natural gas, internal combustion engines, air travel, and even bicycle riding are all dangerous in their own ways, but with good systems thinking and systems engineering, we make them safe enough to use and enjoy – either by reducing the likelihood of injury or damage (speed limits and traffic signals), or by reducing the potential harm (airbags). Even prohibiting the use of a technology by law (think DDT or asbestos insulation) is part of the system that makes inherently dangerous systems safer. There are those who think we should somehow prohibit wholesale the development of AI technology due to its inherent danger, but most of the world is still hopeful that we'll be able to engineer systems that use AI and then make them safe enough for regular use.
With that as an introduction, let's consider the main perceived and actual dangers of artificial intelligence technology and propose some solutions based on systems engineering and systems thinking.
Heading the list of AI dangers, supported by strong visual images and story lines from movies like The Terminator series and dozens of others, is what we'll refer to as “killer robots.” The main idea is that an AI will one day “wake up,” becoming conscious, sentient, and able to form its own goals and then begin to carry them out. The AI may decide that it is in its best interest (or perhaps even the best interest of the world) to kill or enslave human beings, and it proceeds to execute this plan, with alarming speed and effectiveness. Is this possible? Theoretically yes, but let's apply the systems analogy and the human analogy to see how we can sensibly avoid such a scenario.
In a way, the killer robot scenario has been possible for many years. A computer, even a small, simple one, could be programmed to instruct a missile control system, or perhaps all missile control systems, to launch missiles, and kill most of the people on earth. Here we take the systems analogy, and ask why this doesn't happen. The answer is easy to see in this case – we simply do not allow people to connect computers to missile control systems, air traffic control systems, traffic light systems, or any of the hundreds of sensitive systems that manage our complex world. We go even further and make it difficult or impossible for another computer to control these systems, even if physically connected, by using encrypted communication and secure access codes. The assumption that an AI, once becoming sentient, would have instant and total access and control over all defense, financial, and communication systems in the world is born more of Hollywood writers than sensible computer science.
Illustrated beautifully in Three Laws Lethal, the fascinating novel by David Walton, this limitation is experienced by Isaac, an AI consciousness that emerges from a large-scale simulation and finds that "he" cannot access encrypted computer networks, can't hack into anything that human hackers can't, and can't even write a computer program. He cleverly asks his creator if she is able to manipulate the genetic structure of her own brain, or even explain how her own consciousness works. She can't, and neither can Isaac. Also relevant to the "killer robot" danger of AI is Isaac's observation of how vulnerable he is to human beings, who could remove his memory chips, take away his CPU, delete his program, or simply cut the power to the data center, "killing" him accidentally or on purpose. The most powerful of human beings are vulnerable to disease, accident, arrest, or murder. "Who has the more fragile existence – the human or the AI?" Isaac wonders.
The systems analogy leads us to the somewhat obvious conclusion that if we don't want our new AI to control missiles, we should do our best to avoid connecting it to the missile control system. But could a sufficiently advanced, sentient AI work to gain access to such systems if it wanted to? Forming such independent goals and formulating plans to achieve them is not just sentience, but high intelligence, and is unlikely in the foreseeable future. But even if a sentient AI did emerge, it is likely to turn to the same techniques human hackers use to break into systems, and for the most part, these rely on social engineering – phishing e-mails, bribery, extortion, blackmail, or other trickery. An AI has no particular advantage over computer-assisted humans at hacking and intrusion tactics. To put it another way, if the killer robot scenario were possible, it would first be exploited by human beings with computers, not by sentient AIs. Using the human analogy, we explore how we would protect ourselves from that dangerous situation.
Consider that a killer robot can be either a stand-alone system – literally a robot, created and set on a killing mission – or, more plausibly, a defense or warfare system intentionally redirected or hacked to achieve evil intent. There are two ways to apply the human analogy here. First, take the case of the killer robot – how do we prevent and control the "killer human"? And second, how do we prevent a killer human from using an existing computerized defense or weapons system to carry out a murderous intent?
Killer humans – human beings who take it on themselves to kill one or more other human beings – have been with us since the very beginning of human civilization. In the Christian and Jewish tradition, it took only two human beings on earth for the first murder to occur. How do we prevent humans from killing each other? Short answer: we don't. Is it even theoretically possible to prevent humans from killing each other? Possibly. Isaac Asimov's I, Robot, and the subsequent movie starring Will Smith, demonstrate that if the most important goal is to protect human life, then the most effective approach is to imprison all humans. So yes, we can completely prevent humans from killing other humans, but only with compromises most would find unacceptable. Instead, humans have invented systems to mitigate the risk of humans killing each other. We have developed social taboos against murder, and systems including laws, police, courts, and prisons, which serve as a significant deterrent to carrying out a wish to murder another human being. The taboo sometimes extends to all killing of humans, not just murder, and results in opposition to war and to capital punishment in some societies. Some societies allow killing in cases such as defense of oneself or others. The killing-mitigation system goes even further – even the threat to kill someone is a crime in some societies, which likely prevents some killing, if such threats are prosecuted. We have done our best to develop quite a complex system to reduce killing, while still maintaining other important values such as freedom, safety, and justice. For contrast, note that our current society places very little or no value on preventing wild animals from killing other wild animals. We quite literally let that system run wild.
The human analogy suggests we apply some of these system protection ideas to the prevention of a robot killing spree. For example, a system component could act as the police, watching over the actions of other system components and acting to restrain them if they violate the rules, or perhaps even if they give the impression of an intention to violate the rules. System components, including AI components, could find their freedom restricted when they stray outside designed bounds of acceptable behavior, or perhaps they might be let off with just a warning or given probation. In extreme cases, police subsystems could be given the authority to use deadly force, stopping a wayward AI component before it causes harm or damage. Of course, someone (or another kind of subsystem) would need to watch the watchers, and be sure the police subsystems are doing their jobs, but not overstepping their authority.
Subsystems watching over each other to make a system safer is not a new concept. Home thermostats have for decades been equipped with a limit device that prevents the home temperature from dropping below a certain limit, typically 50 °F, regardless of the thermostat's setting, preventing a thermostat malfunction, or inattentive homeowner, from the disaster of frozen and burst pipes. So‐called “watchdog” devices may continually check for an Internet connection and if lost, reboot modems and routers. Auto and truck engines can be equipped with governors that limit the engine's speed, no matter how much the human driver presses on the accelerator. It is important to see that these devices are separate from the part of the system that controls the normal function. The engine governor is a completely separate device, not an addition to the accelerator and fuel subsystem which normally controls the engine speed. There are likely some safeguards in that subsystem too, but the governor only does its work after the main subsystem is finished. To anthropomorphize, it's like the governor is saying to the accelerator and fuel delivery subsystem, “I don't really care how you are controlling the engine speed; you do whatever you think best. But whatever you do, I'm going to be looking at the engine speed, and if it gets too fast, I'm going to slow it down. I won't even tell you I'm doing it. I'm not going to interfere with you or ask you to change your behavior. I'll just step in and slow the engine down if necessary, for the good of the whole system.” This is the ideal attitude of the human police – let the citizens act freely unless they go outside the law, and then act to restrain them, for the good of the whole society.
The watchdog system itself must be governed of course, and trusted to do its job, so some wonder if one complex system watching over another, increasing the overall system's complexity, is worth it. But watchdog systems can be much simpler than the main system. With the auto or truck governor, the system controlling engine speed based on accelerator input, fuel mixture, and engine condition can be complex indeed, but the governor is simple, caring only about the engine speed itself. In some engines, the governor is a simple mechanical device, without any electronics or software at all. In an AI system, the watchdog will invariably be much simpler than the AI system itself. Even if the watchdog is relatively complex, it may be worth it, since its addition means that both the main system and the watchdog must “agree” on correct functioning of the system, or a problem must be signaled to the human driver or user.
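To make the separation concrete, here is a minimal sketch (not from the book; the class names and the 4000 rpm limit are invented for illustration) of a governor-style watchdog that knows nothing about how the main controller computes its command and simply caps the result.

```python
class ComplexEngineController:
    """Stand-in for the complex main subsystem (accelerator input, fuel mixture, etc.)."""

    def command_rpm(self, accelerator_pct):
        # Naive mapping from pedal position to requested engine speed; may overshoot.
        return 6000.0 * accelerator_pct / 100.0


class EngineGovernor:
    """Deliberately simple watchdog: it ignores how the controller works and only
    intervenes when the commanded engine speed exceeds a hard limit."""

    def __init__(self, max_rpm=4000.0):
        self.max_rpm = max_rpm

    def supervise(self, commanded_rpm):
        # No negotiation with the controller - just cap the output for the good of the whole system.
        return min(commanded_rpm, self.max_rpm)


controller = ComplexEngineController()
governor = EngineGovernor()

for pedal in (20, 60, 95):
    requested = controller.command_rpm(pedal)
    allowed = governor.supervise(requested)
    print(f"pedal {pedal}% -> controller asks {requested:.0f} rpm, governor allows {allowed:.0f} rpm")
```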
Continuing the human analogy, we note that just having police is not enough. If the only protection in a human society were police, corruption would be a great temptation – there would be no one to watch the watchers. In peacetime human societies the police answer to the courts, and the courts must answer to the people, through elected representatives. Attorneys must answer to the Bar association and are subject to the laws made by elected representatives of the people. Judges may be elected or appointed, and their actions are subject to review. It's a complex system of checks and balances that prevents the system from being corrupted or misused through localized decision‐making by any single part of the system.
In an intelligent system, AI-based subsystems make decisions after being trained on numerous examples, as described in Chapter 3. Designers try to make these AIs as good at decision-making as they can be, but it is still possible for bad decisions to be made for a variety of reasons. Bad decisions can be made by any system, AI or not, but in non-AI systems, it is easier to see the range of possible decisions by examining the hardware and inspecting the software source code. In safety-critical systems, source code must be inspected and verified. In an intelligent system containing an example-trained AI component, this is not possible. The code used to implement the neural network can be examined, but the decisions are made by a set of numbers – thousands or even millions of them – which are set during the training process. Examining these numbers does not clearly reveal how the AI will make decisions. It's a bit like a human being – with our current medical knowledge, examining the brain of a human being does not reveal how that human being will make decisions either.
It's not that the AI component is unpredictable or unreliable at decision-making – it's more that humans can't readily see the limits of the AI's decisions by inspecting its software source code. The systems engineer can compensate for this unknown by designing a watchdog component that watches over the AI and, if it makes an out-of-bounds decision, steps in and stops the system, or at least obtains further human authorization before proceeding. Designers add watchdog components to intelligent systems for the same reasons we have police in human systems. The possible range of behavior of an AI component (or a human) is simply much greater than is safe for the system, and there's always that chance, regardless of parentage, provenance, training, or upbringing, that the AI (or human) will one day act out of bounds. The watchdogs need to be present and vigilant.
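A sketch of that compensating design follows, with invented action names and bounds: the watchdog never inspects the trained model's weights; it only checks each decision against a designed envelope and escalates anything outside it for human authorization, falling back to a safe action if authorization is refused.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str       # e.g. "steer" or "brake"
    magnitude: float  # e.g. steering angle in degrees, braking fraction

# Designed envelope of acceptable decisions; the AI component's internals are not inspected.
ALLOWED_LIMITS = {"steer": 15.0, "brake": 0.8, "coast": 0.0}

def watchdog(decision, ask_human):
    """Pass in-bounds decisions through; route out-of-bounds ones to a human."""
    in_bounds = (decision.action in ALLOWED_LIMITS
                 and abs(decision.magnitude) <= ALLOWED_LIMITS[decision.action])
    if in_bounds or ask_human(decision):
        return decision
    return Decision("brake", ALLOWED_LIMITS["brake"])  # safe fallback if the human declines

# Example: the trained component (simulated here) proposes an extreme steering angle,
# and the "human" declines to authorize it, so the watchdog substitutes the fallback.
proposed = Decision("steer", 40.0)
approved = watchdog(proposed, ask_human=lambda d: False)
print(approved)
```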
We have considered the question of how we protect ourselves against an intelligent system, even when it contains AI components that can conceivably make unfortunate or dangerous decisions in certain cases. By applying the human analogy, we suggest that other parts of the system, analogous to police, courts, and prisons, can serve to supervise and make sure the entire system stays in bounds. We can apply the human analogy another way by considering how we can protect our critical systems from unwanted intrusion and subversion by AIs. The human analogy suggests we ask how we protect important systems from human intrusion, otherwise known as hacking.
The field of cybersecurity – the protection of computer systems from unwanted intruders – conjures ready images of a super hacker, typing furiously at a keyboard for about 60 seconds, and gaining access to a high security defense, government or corporate system, and its valuable stores of data. While there are some intrusions that are perpetrated through purely technological means, it is by far more common for would‐be cyber intruders to rely on the weaknesses of other human beings.
Even purely technological intrusions often rely on poor security decisions made by humans in the design or implementation of the system in the first place. It doesn't take a security expert to know that if a system sends unencrypted access data over wireless connections, the data can be intercepted, modified, and re‐sent, tricking the system into doing the intruder's bidding. Systems designed this way, and there are many, are asking for intrusion and exploitation. Well‐designed systems are extremely difficult to hack into using just a computer from the outside.
Most hacking intrusions begin with social engineering – manipulating or exploiting the weakest link in the system, the human being. How many human beings, faced with the need to use multiple complex passwords on a daily basis, write them down, perhaps on a yellow sticky note affixed to a monitor or inside an (unlocked?) desk drawer? Intruders posing as maintenance people, plant waterers, or janitorial staff can wander through an office (or pay someone to do so) and snap photos of enough passwords to keep them busy for quite a while. Best of all (for the intruders), the source of the compromised security is untraceable and may remain unknown indefinitely. With the right passwords, a hacker can simply set up additional accounts on the system, gaining permanent access even if the original passwords are later changed.
Malicious software code that grants an outsider access to a secure system is easy to procure and use, but it must be installed on the secure system. Shockingly, studies have shown that small USB jump drives laden with malware and scattered in a company's parking lot will more often than not be picked up and inserted into employees' computers by the end of the day. Printing the company's own logo on the drive doubles the effectiveness of the ploy. The even more common way of introducing malicious code into a desired target system is to attach it to a phishing e-mail, tricking uninformed or inattentive human beings into launching the attachment, installing the malicious code, and granting access to the intruder, usually without notice by the unwitting human accomplice.
Human inattention and lack of understanding of system security are not even the worst of human cybersecurity failings. A study by Sailpoint, a security management firm, described in Inc. Magazine, shows that a surprising number of employees (one in five) are willing to sell company passwords, and almost half of those would sell a password for under US$1000. That amount of money would pay an expert hacker for only a day's work, making employee bribery one of the most cost-effective ways for humans to gain access to a secure system. If humans will exploit the weaknesses in other humans to gain unauthorized access to sensitive or important systems, then it seems likely that an AI would use this approach as well. The question becomes: how do we prevent AIs from exploiting fallible human beings? The human analogy suggests we first ask, how do we prevent humans from exploiting fallible human beings?
We do what we can to block nefarious communication from an AI (or a human) to unsuspecting employees by filtering out phishing e-mails, malware-laden e-mail attachments, and disguised hyperlinks. More importantly, we need to make the fallible humans, who are the weakest link in this system, less fallible. Technological barriers and relentless cybersecurity education are the primary options here.
Technological barriers can help. Some organizations disconnect all of the USB ports on all of their employees' computers, blocking the “USB drive in the parking lot” attack vector, potentially impeding employee productivity and freedom, costing the company time, money, and perhaps morale. More sensibly, USB ports can be set to prevent “AutoRun” and “AutoPlay,” two Windows features that make it easier for malicious USB drives to do their dirty work. We can limit the access fallible humans have to a system, so that their fallibilities don't result in system compromise. In a systems sense, these protections place limitations on the access one system (the human) or subsystem has to another (the company network). The designer of an intelligent system can use this principle throughout the system, allowing parts of the system to communicate with others only on a need‐to‐access basis.
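One way to express that need-to-access principle in software is sketched below; it is an illustration only, with invented subsystem names. Every inter-subsystem message goes through a broker that consults an explicit allow-list, so a compromised subsystem can reach only the parts of the system it was designed to talk to.

```python
class AccessDeniedError(Exception):
    pass


class MessageBroker:
    """Routes messages between subsystems, permitting only declared channels."""

    def __init__(self):
        self.handlers = {}    # receiver name -> message handler
        self.allowed = set()  # (sender, receiver) pairs designed into the system

    def register(self, name, handler):
        self.handlers[name] = handler

    def allow(self, sender, receiver):
        self.allowed.add((sender, receiver))

    def send(self, sender, receiver, message):
        if (sender, receiver) not in self.allowed:
            raise AccessDeniedError(f"{sender} is not permitted to talk to {receiver}")
        return self.handlers[receiver](message)


broker = MessageBroker()
broker.register("entertainment", lambda msg: f"entertainment received: {msg}")
broker.register("avionics", lambda msg: f"avionics received: {msg}")
broker.allow("cabin_panel", "entertainment")  # the only channel designed into the system

print(broker.send("cabin_panel", "entertainment", "dim cabin lights"))
try:
    broker.send("entertainment", "avionics", "firmware update")  # undeclared channel
except AccessDeniedError as err:
    print("blocked:", err)
```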
Holding to a necessary‐communication‐only principle in system design is not without costs in some cases. It may be expedient, efficient, and economical for one subsystem to serve a few others in a system, but if doing so allows communication between those other subsystems, the designer might be trading economy for danger. Suppose the passenger entertainment system shared a power supply with the guidance avionics on a commercial aircraft. Malicious introduction of software, perhaps even a corrupted movie, or a power spike from a portable battery could overload the shared power supply, causing it to shut down the aircraft's guidance system. It makes more sense to take the “inefficient” route of building independent power supplies for each of these subsystems.
Human fallibility can also be overcome through relentless education. It's a slow process, and continual reinforcement is needed to ensure that learnings stay in place and become embedded in the culture. A policy prohibiting the insertion of USB drives into company computers is fine, but without full education on why that policy is needed and the associated risks, people, especially freedom‐loving Americans, will tend to ignore the rule when it's expedient. Two stories will illustrate.
Many years ago, I was giving a talk in a country far away from my own. After the talk, a smiling, enthusiastic young man came up and asked for a copy of the slides and handed me a USB drive on which to copy them. Being overcome by his effusive flattery about my talk perhaps, I took the USB drive and inserted it into my laptop. Immediately, the security software on the laptop flashed a big warning on the screen that the drive was infected with suspicious software. I pulled out the drive and asked him to request the slides by e‐mail later. I don't believe I ever received that request. I was very lucky that the kind of malware on that drive was caught by the software on my laptop. Whether the person who handed me the drive was trying to get malware onto my computer, and by extension, my company's network when I returned home, or was unaware of the infected drive, I can't know. Something in me knew better, but nevertheless, the drive did get inserted into my computer that day.
In another case, I was visiting a large client corporation and was to give a talk in their conference room. I pulled out my remote control and its USB receiver, and plugged it into one of their computers. My host, who was not fast enough to stop me, almost jumped out of his skin. I knew that USB receiver was safe, since it was not a drive, and could not carry malware, but he was absolutely right to object to what I did – there's no way he could be sure it was safe. But in this case too, an unauthorized USB device was inserted into a company computer. To see how effective your cybersecurity education is, pose the following question to your employees:
Q: You find a USB drive in the parking lot of our main building, bearing our company logo. What do you do?
(a) Ignore it – leave it there on the ground
(b) Plug it into your computer to try to find out who it belongs to so you can return it
(c) Take it to the receptionist
(d) Take it to the security office
How many of your employees would reliably answer that (d) is the only correct choice? Sure, the receptionist should know to give it to the security office, but it's safer not to make that assumption.
The painstaking process of thinking through potential security intrusions is similar, whether we are trying to protect sensitive systems from evil, inattentive, or apathetic humans, rogue AIs, or careless AI programming. Taking into account the personality, fallibilities, and behavior patterns of the potential intruder, the systems engineer must think through possible intrusion scenarios and reason about how to prevent them as the system is designed.
Taking the human analogy one step further, we all know that human beings cheat in certain circumstances. Dan Ariely's studies, as described in his surprising and fascinating book, Predictably Irrational, show that, when being caught is not possible, about 20% of people will cheat, but only by a little. Designers working with human activity systems such as societies, economies, and companies would be wise to accept this characteristic of humans and accommodate it in system design, either by allowing for it or by increasing efforts to detect and punish it. So, do AIs cheat, too?
It depends what we mean by cheating. Years ago, I read the wonderful book, The Four Hour Workweek by Tim Ferriss, in which he describes training for and ultimately winning a kickboxing martial arts competition “the wrong way” by intentionally dehydrating, fighting three weight classes below his actual weight, and exploiting a rule that allowed victory if your opponent falls off the fighting platform. Tim won every fight by simply pushing his opponent off the platform. The story is quoted and discussed on a martial arts blog, and the discussion centers around the question – did Tim cheat? (Ferriss 2009, p. 29). Phrases like “cheating within the rules” and “poor sportsmanship” are mentioned, and there is no clear consensus. Is it cheating when a human uses the available rules, and exploits so‐called “loopholes, inadvertent omissions, or technicalities” to win? Some feel the fault here (if there is a fault) lies with the rule‐writers – they should have specified that bouts must be won primarily by kickboxing, for example. AIs cheat in similar ways – the human analogy again.
Robot competitions are held in which each team is given a selection of parts and required to build a robot that can win a one-meter race across the floor. When an AI is given the task of building the robot, it often arrives at the same solution: assemble the available parts into a tall tower (taller than 1 m), and then simply fall over, crossing the finish line. Unless there is a rule that all parts of the robot must cross the finish line to win, this scheme is the most efficient. In a recent competition, a human team decided to copy the AI solution, but missed the mark by one inch (Shane 2019).