A guide to common control principles and how they are used to characterize a variety of physiological mechanisms
The second edition of Physiological Control Systems offers an updated and comprehensive resource that reviews the fundamental concepts of classical control theory and shows how engineering methodology can be applied to obtain a quantitative understanding of physiological systems. The revised text also contains more advanced topics that feature applications of nonlinear dynamics, parameter estimation methods, and adaptive estimation and control to physiology. The author—a noted expert in the field—includes a wealth of worked examples that illustrate key concepts and methodology and offers in-depth analyses of selected physiological control models that highlight the topics presented.
The author discusses the most noteworthy developments in system identification, optimal control, and nonlinear dynamical analysis, and highlights recent bioengineering advances. Designed to be a practical resource, the text includes guided experiments with simulation models (using MATLAB/SIMULINK). Physiological Control Systems focuses on common control principles that can be used to characterize a broad variety of physiological mechanisms.
Written for biomedical engineering students and biomedical scientists, Physiological Control Systems offers an updated edition of this key resource for understanding classical control theory and its application to physiological systems. It also contains contemporary topics and methodologies that shape bioengineering research today.
Page count: 604
Year of publication: 2018
Cover
Series Page
Title Page
Copyright
Dedication
Preface
About the Companion Website
Chapter 1: Introduction
1.1 Preliminary Considerations
1.2 Historical Background
1.3 Systems Analysis: Fundamental Concepts
1.4 Physiological Control Systems Analysis: A Simple Example
1.5 Differences Between Engineering and Physiological Control Systems
1.6 The Science (and Art) of Modeling
1.7 “Systems Physiology” versus “Systems Biology”
Problems
Bibliography
Chapter 2: Mathematical Modeling
2.1 Generalized System Properties
2.2 Models with Combinations of System Elements
2.3 Linear Models of Physiological Systems: Two Examples
2.4 Conversions Between Electrical and Mechanical Analogs
2.5 Distributed-Parameter versus Lumped-Parameter Models
2.6 Linear Systems and the Superposition Principle
2.7 Zero-Input and Zero-State Solutions of ODEs
2.8 Laplace Transforms and Transfer Functions
2.9 The Impulse Response and Linear Convolution
2.10 State-Space Analysis
2.11 Computer Analysis and Simulation: MATLAB and SIMULINK
Problems
Bibliography
Chapter 3: Static Analysis of Physiological Systems
3.1 Introduction
3.2 Open-Loop versus Closed-Loop Systems
3.3 Determination of the Steady-State Operating Point
3.4 Steady-State Analysis Using SIMULINK
3.5 Regulation of Cardiac Output
3.6 Regulation of Glucose–Insulin
3.7 Chemical Regulation of Ventilation
Problems
Bibliography
Chapter 4: Time-Domain Analysis of Linear Control Systems
4.1 Linearized Respiratory Mechanics: Open-Loop versus Closed-Loop
4.2 Open-Loop versus Closed-Loop Transient Responses: First-Order Model
4.3 Open-Loop versus Closed-Loop Transient Responses: Second-Order Model
4.4 Descriptors of Impulse and Step Responses
4.5 Open-Loop versus Closed-Loop Dynamics: Other Considerations
4.6 Transient Response Analysis Using MATLAB
4.7 SIMULINK Application 1: Dynamics of Neuromuscular Reflex Motion
4.8 SIMULINK Application 2: Dynamics of Glucose–Insulin Regulation
Problems
Bibliography
Chapter 5: Frequency-Domain Analysis of Linear Control Systems
5.1 Steady-State Responses to Sinusoidal Inputs
5.2 Graphical Representations of Frequency Response
5.3 Frequency-Domain Analysis Using MATLAB and SIMULINK
5.4 Estimation of Frequency Response From Input–Output Data
5.5 Frequency Response of a Model of Circulatory Control
Problems
Bibliography
Chapter 6: Stability Analysis: Linear Approaches
6.1 Stability and Transient Response
6.2 Root Locus Plots
6.3 Routh–Hurwitz Stability Criterion
6.4 Nyquist Criterion for Stability
6.5 Relative Stability
6.6 Stability Analysis of the Pupillary Light Reflex
6.7 Model of Cheyne–Stokes Breathing
Problems
Bibliography
Chapter 7: Digital Simulation of Continuous-Time Systems
7.1 Preliminary Considerations: Sampling and The Z-Transform
7.2 Methods for Continuous-Time to Discrete-Time Conversion
7.3 Sampling
7.4 Digital Simulation: Stability and Performance Considerations
7.5 Physiological Application: The Integral Pulse Frequency Modulation Model
Problems
Bibliography
Chapter 8: Model Identification and Parameter Estimation
8.1 Basic Problems in Physiological System Analysis
8.2 Nonparametric and Parametric Identification Methods
8.3 Problems in Parameter Estimation: Identifiability and Input Design
8.4 Identification of Closed-Loop Systems: “Opening the Loop”
8.5 Identification Under Closed-Loop Conditions: Case Studies
8.6 Identification of Physiological Systems Using Basis Functions
Problems
Bibliography
Chapter 9: Estimation and Control of Time-Varying Systems
9.1 Modeling Time-Varying Systems: Key Concepts
9.2 Estimation of Models With Time-Varying Parameters
9.3 Estimation of Time-Varying Physiological Models
9.4 Adaptive Control of Physiological Systems
Problems
Bibliography
Chapter 10: Nonlinear Analysis of Physiological Control Systems
10.1 Nonlinear Versus Linear Closed-Loop Systems
10.2 Phase-Plane Analysis
10.3 Nonlinear Oscillators
10.4 The Describing Function Method
10.5 Models of Neuronal Dynamics
10.6 Nonparametric Identification of Nonlinear Systems
Problems
Bibliography
Chapter 11: Complex Dynamics in Physiological Control Systems
11.1 Spontaneous Variability
11.2 Nonlinear Control Systems with Delayed Feedback
11.3 Coupled Nonlinear Oscillators: Model of Circadian Rhythms
11.4 Time-Varying Physiological Closed-Loop Systems: Sleep Apnea Model
11.5 Propagation of System Noise in Feedback Loops
Problems
Bibliography
Appendix A Commonly Used Laplace Transform Pairs
Appendix B List of MATLAB and SIMULINK Programs
B.1 How to Download the MATLAB and SIMULINK Files
Index
End User License Agreement
IEEE Press, 445 Hoes Lane, Piscataway, NJ 08854
IEEE Press Editorial Board
Ekram Hossain, Editor in Chief
Giancarlo Fortino
Andreas Molisch
Linda Shafer
David Alan Grier
Saeid Nahavandi
Mohammad Shahidehpour
Donald Heirman
Ray Perez
Sarah Spurgeon
Xiaoou Li
Jeffrey Reed
Ahmet Murat Tekalp
Physiological Control Systems
Second Edition
Michael C.K. Khoo
Copyright © 2018 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com
Library of Congress Cataloging-in-Publication Data is available.
ISBN: 978-1-119-05533-4
To
Pam, Bryant, Mason, and Amber
and in memory of my parents
John and Betty Khoo
It has been 17 years since the publication of the original edition of this monograph. Over this period, I have taught, almost on a yearly basis, a course at the University of Southern California that is based largely on the contents of this book. This second edition incorporates much of the experience I have gained and student feedback I have received from teaching this class. I have also received input from the many instructors who have used this book for their classes. To all these readers, I am deeply appreciative of their helpful comments and questions. The primary goals of this book remain the same as in the first edition: to highlight the basic techniques employed in control theory, systems analysis, and model identification and to give the biomedical engineering student an appreciation of how these principles can be applied to better understand the processes involved in physiological regulation. As before, my assumption is that much of the contents of this second edition are suitable for use in a one-semester course on physiological control systems or physiological systems analysis taken by junior or senior undergraduates or as an introductory class on physiological systems for first-year graduate students. The more advanced parts of this book and its accompanying software may also prove to be a useful resource for biomedical engineers and interested life science or clinical researchers who have had little formal training in systems or control theory. Throughout this book, I have emphasized the physiological applications of control engineering, focusing in particular on the analysis of feedback regulation. In contrast, the basic concepts and methods of control theory are introduced with little attention paid to mathematical derivations or proofs. For this reason, I would recommend the inclusion of a more traditional, engineering-oriented control theory course as a supplement to the material covered in this volume.
One of the main issues I have had with the first edition was the “gap” between the main concepts in systems and control that were introduced assuming continuous-time systems and some of the more advanced applications that featured discrete-time models. Chapter 7 has been introduced to bridge this gap, and to show the reader how continuous-time systems can be converted into discrete-time systems, as well as the impact of different methods of conversion on stability characteristics of the system in question. This additional background should also be useful since many physiological processes (including cardiac, respiratory, and neural) are naturally oscillatory, and models that employ a cycle-by-cycle (and hence, discrete) time base may be more suitable for characterizing longer term dynamics. In Chapter 9, I have revamped what was previously Chapter 8 to cover the essential aspects of time-varying or nonstationary systems. The chapter on physiological system identification (now Chapter 8) has been expanded to include more techniques, such as nonparametric identification using multivariable autoregressive with exogenous input (ARX) models and basis function expansion. Finally, the chapter on nonlinear analysis (now Chapter 10) has been expanded to include the Volterra kernel approach to nonparametric estimation of nonlinear systems as well as an introductory discussion of other methods. I have also added material to update various other sections, as well as new problems to the end of each chapter. The MATLAB/SIMULINK files accompanying the book have also been expanded and existing programs have been updated to be compatible with release version R2016b. I see these programs as an essential complement to the learning experience, allowing the reader to explore “first-hand” the dynamics underlying the biological mechanisms being studied. I do make the implicit assumption that the reader has some basic familiarity with MATLAB/SIMULINK. For the reader who has not used MATLAB or SIMULINK, it is fortunate that there are currently many “primers” on the subject that can be easily found online or in any academic bookstore.
The completion of this second edition has taken much longer than I had anticipated when I took on the project (and I am quite embarrassed to disclose how long ‘long’ is!). I thank Wiley-IEEE Press for giving me the opportunity to produce this second edition, editor Mary Hatcher for her infinite patience, and my friend Metin Akay, the book series editor, for his constant encouragement. This second edition would not have been possible without the feedback and insights gained through my interactions with my past and present Ph.D. students over the years. In particular, I am most grateful to my former student and current research associate, P. “Sang” Chalacheva, who so generously gave her spare time and effort to help with the development of the new MATLAB files and the editing of all parts of this second edition. I would be remiss if I did not also mention the supportive environment provided by the NIH-NIBIB-funded Biomedical Simulations Resource (BMSR), which has funded my research on physiological control and modeling for the past three decades. The modeling activities of my colleagues in the BMSR, David D'Argenio, Vasilis Marmarelis, and Ted Berger, have been a great source of intellectual stimulation over the years. I cannot help but end these remarks by citing my favorite line from the writings of the late Professor Fred Grodins, who recruited me to USC many many moons ago:
“There is nothing magic about Models (or is there?)!”
Michael C.K. Khoo
This book is accompanied by a companion website:
http://www.wiley.com/go/khoo/controlsystems2e
The website includes:
MATLAB and Simulink Files
A control system may be defined as a collection of interconnected components that can be made to achieve a desired response in the face of external disturbances. The “desired response” could be the tracking of a specified dynamic trajectory, in which case the control system takes the form of a servomechanism. An example of this type of control system is a robot arm that is programmed to grasp some object and to move it to a specified location. There is a second class of control system termed the regulator, for which the “desired response” is to maintain a certain physical quantity within specified limits. A simple example of this kind of control system is the thermostat.
There are two basic ways in which a control system can be made to operate. In open-loop mode, the response of the system is determined only by the controlling input(s). As an example, let us suppose that we wish to control the temperature of a room in winter with the use of a fan-heater that heats up and circulates the air within the room. By setting the temperature control to “medium,” for instance, we should be able to get the room temperature to settle down to an agreeable level during the morning hours. However, as the day progresses and the external environment becomes warmer, the room temperature also will rise, because the rate at which heat is added by the fan-heater exceeds the rate at which heat is dissipated from the room. Conversely, when night sets in and the external temperature falls, the temperature in the room will decrease below the desired level unless the heater setting is raised. This is a fundamental limitation of open-loop control systems. They can perform satisfactorily as long as the external conditions do not affect the system much. The simple example we have described may be considered a physical analog of thermoregulatory control in poikilothermic or “cold-blooded” animals. The design of the thermoregulatory processes in these animals does not allow core body temperature to be maintained at a level independent of external conditions; as a consequence, the animal's metabolism also becomes a function of external temperature.
Coming back to the example of the heating system, one way to overcome its limitation might be to anticipate the external changes in temperature and to “preprogram” the temperature setting accordingly. But how would we know what amounts of adjustment are required under the different external temperature conditions? Furthermore, while the external temperature generally varies in a roughly predictable pattern, there will be occasions when this pattern is disrupted. For instance, the appearance of a heavy cloud cover during the day could limit the temperature increase that is generally expected. These problems can be eliminated by making the heater “aware” of changes in the room temperature, thereby allowing it to respond accordingly. One possible scheme might be to measure the room temperature, compare the measured temperature with the desired room temperature, and adjust the heater setting in proportion to the difference between these two temperatures. This arrangement is known as proportional feedback control. There are, of course, other control strategies that make use of the information derived from measurements of the room temperature. Nevertheless, there is a common feature in all these control schemes: They all employ feedback. The great mathematician-engineer, Norbert Wiener (1961), characterized feedback control as “a method of controlling a system by reinserting into it the results of its past performance.” In our example, the system output (the measured room temperature) is “fed back” and used to adjust the input (fan speed). As a consequence, what we now have is a control system that operates in closed-loop mode, which also allows the system to be self-regulatory. This strategy of control is ubiquitous throughout Nature: The physiological analog of the simple example we have been considering is the thermoregulatory control system of homeothermic or “warm-blooded” animals. However, as we will demonstrate throughout this book, the exact means through which closed-loop control is achieved in physiological systems invariably turns out to be considerably more complicated than one might expect.
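To make the contrast between these two modes of operation concrete, the brief MATLAB sketch below simulates a highly simplified first-order model of the room-heating example; all numerical values (the thermal time constant, heater gain, proportional gain, and external temperature profile) are arbitrary choices for illustration and are not taken from the software accompanying this book.
% Hypothetical room-heating example: open-loop versus proportional feedback.
% All numerical values below are arbitrary and chosen only for illustration.
dt   = 0.01;  t = 0:dt:24;              % time (hours)
tau  = 2;                               % thermal time constant of the room (h)
K    = 1;                               % heater gain (deg C per unit heater setting)
Tset = 21;                              % desired room temperature (deg C)
Tout = 10 + 8*sin(2*pi*(t - 9)/24);     % external temperature over the day
Kp   = 10;                              % proportional feedback gain
Tol = Tset*ones(size(t));               % open-loop room temperature
Tcl = Tset*ones(size(t));               % closed-loop room temperature
u_ol = (Tset - Tout(1))/K;              % heater setting fixed at its morning value
for k = 1:length(t)-1
    % Open loop: the heater setting never changes, so T drifts with Tout
    Tol(k+1) = Tol(k) + dt*(-(Tol(k) - Tout(k)) + K*u_ol)/tau;
    % Closed loop: heater setting proportional to the error (Tset - T)
    u_cl = Kp*(Tset - Tcl(k));
    Tcl(k+1) = Tcl(k) + dt*(-(Tcl(k) - Tout(k)) + K*u_cl)/tau;
end
plot(t, Tol, t, Tcl, t, Tout, '--');
xlabel('Time (h)'); ylabel('Temperature (deg C)');
legend('Open loop', 'Proportional feedback', 'External temperature');
Running the sketch shows the open-loop temperature drifting with the external temperature, whereas proportional feedback keeps the room near the set point (with a small residual error that is characteristic of purely proportional control).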
The concept of physiological regulation dates back to ancient Greece (∼500 BC), where the human body was considered a small replica of the universe. The four basic elements of the universe – air, water, fire, and earth – were represented in the body by blood, phlegm, yellow bile, and black bile, respectively. The interactions among pairs of these elements produced the four irreducible qualities of wetness, warmth, dryness, and cold. It was the harmonious balance among these elements and qualities that led to the proper functioning of the various organ systems. The Greek physician, Galen (about second century AD), consolidated these traditional theories and promoted a physiological theory that was largely held until the end of the sixteenth century. Similar concepts that developed alongside the Taoist school of thought may be traced back to the third century BC in ancient China. Here, the universe was composed of five agents (Wu Xing): wood, fire, earth, metal, and water. These elements interacted with one another in two ways – one was a productive relationship, in which one agent would enhance the effects of the other; the other was a limiting or destructive relationship whereby one agent would constrain the effects of the other. As in the Graeco-Roman view, health was maintained by the harmonious balancing of these agents with one another (Unschuld, 1985).
The notion of regulatory control clearly persisted in the centuries that followed, as the writings of various notable physiologists such as Boyle, Lavoisier, and Pflüger demonstrate. However, this concept remained somewhat vague until the end of the nineteenth century when French physiologist Claude Bernard thought about self-regulation in more precise terms. He noted that the cells of higher organisms were always bathed in a fluid medium, for example, blood or lymph, and that the conditions of this environment were maintained with great stability in the face of disturbances to the overall physiology of the organism. The maintenance of these relatively constant conditions was achieved by the organism itself. This observation so impressed him that he wrote: “It is the fixity of the ‘milieu interieur’ which is the condition of free and independent life.” He added further that “all the vital mechanisms, however varied they may be, have only one object, that of preserving constant the conditions of life in the internal environment.” In the first half of the twentieth century, Harvard physiologist Walter Cannon (1939) refined Bernard's ideas further and systematically demonstrated these concepts in the workings of various physiological processes, such as the regulation of adequate water and food supply through thirst and hunger sensors, the role of the kidneys in regulating excess water, and the maintenance of blood acid–base balance. He went on to coin the word homeostasis to describe the maintenance of relatively constant physiological conditions. However, he was careful to distinguish the second part of the term, that is, “stasis,” from the word “statics,” since he was well aware that although the end result was a relatively unchanging condition, the coordinated physiological processes that produce this state are highly dynamic.
Armed with the tools of mathematics, Wiener in the 1940s explored the notion of feedback to a greater level of detail than had been done previously. Mindful that most physiological systems were nonlinear, he laid the foundation for modeling nonlinear dynamics from a Volterra series perspective. He looked into the problem of instability in neurological control systems and examined the connections between instability and physiological oscillations. He coined the word “cybernetics” to describe the application of control theory to physiology, but with the passage of time, this term has come to take on a meaning more closely associated with robotics. The race to develop automatic airplane, radar, and other military control systems during the Second World War provided a tremendous boost to the development of control theory. In the post-war period, an added catalyst for even greater progress was the development of digital computers and the growing availability of facilities for the numerical solution of the complex control problems. Since then, research on physiological control systems has become a field of study on its own, with major contributions coming from a mix of physiologists, mathematicians, and engineers. These pioneers of “modern” physiological control systems analysis include Adolph (1961), Grodins (1963), Clynes and Milsum (1970), Milhorn (1966), Milsum (1966), Bayliss (1966), Stark (1968), Riggs (1970), Guyton et al. (1973), and Jones (1973).
Prior to analyzing or designing a control system, it is useful to define explicitly the major variables and structures involved in the problem. One common way of doing this is to construct a block diagram. The block diagram captures in schematic form the relationships among the variables and processes that comprise the control system in question. Figure 1.1 shows block diagrams that represent open- and closed-loop control systems in canonical form. Consider first the open-loop system (Figure 1.1a). Here, the controller component of the system translates the input (r) into a controller action (u), which affects the controlled system or “plant,” thereby influencing the system output (y). At the same time, however, external disturbances (x) also affect plant behavior; thus, any changes in y reflect contributions from both the controller and the external disturbances. If we consider this open-loop system in the context of our previous example of the heating system, the heater would be the controller and the room would represent the plant. Since the function of this control system is to regulate the temperature of the room, it is useful to define a set point that would correspond to the desired room temperature. In the ideal situation of no fluctuations in external temperature (i.e., x = 0), a particular input voltage setting would place the room temperature exactly at the set point. This input level may be referred to as the reference input value. In linear control systems analysis, it is useful (and often preferable from a computational viewpoint) to consider the system variables in terms of changes from these reference levels instead of their absolute values. Thus, in our example, the input (r) and controller action (u) would represent the deviation from the reference input value and the corresponding change in heat generated by the heater, respectively, while the output (y) would reflect the resulting change in room temperature. Due to the influence of changes in external temperature (x), r must be adjusted continually to offset the effect of these disturbances on y.
Figure 1.1 Block diagrams of an open-loop control system (a) and a closed-loop control system (b).
As mentioned earlier, we can circumvent this limitation by “closing the loop.” Figure 1.1b shows the closed-loop configuration. The change in room temperature (y) is now measured and transduced into the feedback signal (z) by means of a feedback sensor, that is, the thermostat. The feedback signal is subsequently subtracted from the reference input and the error signal (e) is used to change the controller output. If room temperature falls below the set point (i.e., y becomes negative), the feedback signal (z) would also be negative. This feedback signal is subtracted from the reference input setting (r = 0) at the mixing point or comparator (shown as the circular object in Figure 1.1), producing the error signal (e) that is used to adjust the heater setting. Since z is negative, e will be positive. Thus, the heater setting will be raised, increasing the flow of heat to the room and consequently raising the room temperature. Conversely, if room temperature becomes higher than its set point, the feedback signal now becomes positive, leading to a negative error signal, which in turn lowers the heater output. This kind of closed-loop system is said to have negative feedback, since any changes in system output are compensated for by changes in controller action in the opposite direction.
Negative feedback is the key attribute that allows closed-loop control systems to act as regulators. What would happen if, rather than being subtracted, the feedback signal were to be added to the input? Going back to our example, if the room temperature were to rise and the feedback signal were to be added at the comparator, the error signal would become positive. The heater setting would be raised and the heat flow into the room would be increased, thereby increasing the room temperature further. This, in turn, would increase the feedback signal and the error signal, and thus produce even further increases in room temperature. This kind of situation represents the runaway effect that can result from positive feedback. In lay language, one would refer to this as a vicious cycle of events. Dangerous as it may seem, positive feedback is actually employed in many physiological processes. However, in these processes, there are constraints built in that limit the extent to which the system variables can change. Nevertheless, there are also many positive feedback processes, for example, circulatory shock, that in extreme circumstances can lead to the shutdown of various system components, leading eventually to the demise of the organism.
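The distinction between negative and positive feedback can also be illustrated in terms of closed-loop poles. The short sketch below assumes MATLAB's Control System Toolbox and uses an arbitrary first-order plant with a simple proportional controller (not a model drawn from the text) to show that flipping the sign of the feedback turns a stable, self-regulating loop into an unstable one.
% Hypothetical comparison of negative versus positive feedback around a
% first-order plant; the plant, gain, and sensor below are arbitrary choices.
s = tf('s');
P = 1/(2*s + 1);                 % first-order plant (e.g., room dynamics)
C = 5;                           % proportional controller gain
H = 1;                           % unity-gain feedback sensor
Gneg = feedback(C*P, H, -1);     % negative feedback (the default sign)
Gpos = feedback(C*P, H, +1);     % positive feedback
disp(pole(Gneg));                % single pole at s = -3: stable, self-regulating
disp(pole(Gpos));                % single pole at s = +2: unstable "runaway"
step(Gneg, 5);                   % bounded, regulated step response
% step(Gpos, 5) would grow without bound, mirroring the vicious cycle above
With negative feedback the single closed-loop pole lies in the left half-plane and the step response settles; with positive feedback the pole moves into the right half-plane, reproducing the runaway behavior described above.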
One of the simplest and most fundamental of all physiological control systems is the muscle stretch reflex. The most notable example of this kind of reflex is the knee jerk, which is used in routine medical examinations as an assessment of the state of the nervous system. A sharp tap to the patellar tendon in the knee leads to an abrupt stretching of the extensor muscle in the thigh to which the tendon is attached. This activates the muscle spindles, which are stretch receptors. Neural impulses, which encode information about the magnitude of the stretch, are sent along afferent nerve fibers to the spinal cord. Since each afferent nerve synapses with one motorneuron in the spinal cord, the motorneurons get activated and, in turn, send efferent neural impulses back to the same thigh muscle. These produce a contraction of the muscle that acts to straighten the lower leg. Figure 1.2 shows the basic components of this reflex. A number of important features of this system should be highlighted. First, this and other stretch reflexes involve reflex arcs that are monosynaptic, that is, only two neurons and one synapse are employed in the reflex. Other reflexes have at least one interneuron connecting the afferent and efferent pathways. Second, this closed-loop regulation of muscle length is accomplished in a completely involuntary fashion, as the name “reflex” suggests.
Figure 1.2 Schematic illustration of the muscle stretch reflex. (Adapted from Vander et al. (1997).)
A third important feature of the muscle stretch reflex is that it provides a good example of negative feedback in physiological control systems. Consider the block diagram representation of this reflex, as shown in Figure 1.3. Comparing this configuration with the general closed-loop control system of Figure 1.1, one can see that the thigh muscle now corresponds to the plant or controlled system. The disturbance x is the amount of initial stretch produced by the tap to the knee. This produces a proportionate amount of stretch y in the muscle spindles, which act as the feedback sensor. The spindles translate this mechanical quantity into an increase in afferent neural traffic (z) sent back to the reflex center in the spinal cord, which corresponds to our controller. In turn, the controller action is an increase in efferent neural traffic (u) directed back to the thigh muscle, which subsequently contracts in order to offset the initial stretch. Although this closed-loop control system differs in some details from the canonical structure shown in Figure 1.1, it is indeed a negative feedback system, since the initial disturbance (tap-induced stretch) leads to a controller action that is aimed at reducing the effect of the disturbance.
Figure 1.3 Block diagram representation of the muscle stretch reflex.
While the methodology of systems analysis can be applied to both engineering and physiological control systems, it is important to recognize some key differences:
An engineering control system is designed to accomplish a defined task, and frequently the governing parameters would have been fine-tuned extensively so that the system will perform its task in an “optimal” manner (at least under the circumstances in which it is tested). In contrast, physiological control systems are built for versatility and may be capable of serving several different functions. For instance, although the primary purpose of the respiratory system is to provide gas exchange, a secondary but also important function is to facilitate the elimination of heat from the body. Indeed, some of the greatest advances in physiological research have been directed at discovering the functional significance of various biological processes.
Since the engineering control system is synthesized by the designer, the characteristics of its various components are generally known. On the other hand, the physiological control system usually consists of components that are unknown and difficult to analyze. Thus, we are confronted with the need to apply system identification techniques to determine how these various subsystems behave before we are able to proceed to analyze the overall control system.
There is an extensive degree of cross-coupling or interaction among different physiological control systems. The proper functioning of the cardiovascular system, for instance, is to a large extent dependent on interactions with the respiratory, renal, endocrine, and other organ systems. In the example of the muscle stretch reflex considered earlier, the block diagram shown in Figure 1.3 oversimplifies the actual underlying physiology. There are other factors involved that we had omitted, and these are included in the modified block diagram of Figure 1.4. First, some branches of the afferent nerves also synapse with the motorneurons that lead to other extensor muscles in the thigh that act synergistically with the primary muscle to straighten the lower leg. Second, other branches of the afferent nerves synapse with interneurons, which, in turn, synapse with motorneurons that lead to the flexor or antagonist muscles. However, here the interneurons introduce a polarity change in the signal so that an increase in afferent neural frequency produces a decrease in the efferent neural traffic that is sent to the flexor muscles. This has the effect of relaxing the flexor muscles so that they do not counteract the activity of the extensor muscles.
Physiological control systems, in general, are adaptive. This means that the system may be able to offset any change in output not only through feedback but also by allowing the controller or plant characteristics to change. As an example of this type of feature, consider again the operation of the muscle stretch reflex. While this reflex plays a protective role in regulating muscle stretch, it also can hinder the effects of voluntary control of the muscles involved. For instance, if one voluntarily flexes the knee, the stretch reflex, if kept unchanged, would come into play and this would produce effects that oppose the intended movement. Figure 1.5 illustrates the solution chosen by Nature to circumvent this problem. When the higher centers send signals down the alpha motorneurons to elicit the contraction of the flexor muscles and the relaxation of the extensor muscle, signals are sent simultaneously down the efferent gamma nerves that innervate the muscle spindles. These gamma signals produce in effect a resetting of the operating lengths of the muscle spindles so that the voluntarily induced stretch in the extensor muscles is no longer detected by the spindles. Thus, by employing this clever, adaptive arrangement, the muscle stretch reflex is basically neutralized.
At the end of Section 1.4, we alluded to another difference that may be found between physiological control systems and simpler forms of engineering control systems. In Figure 1.1, the feedback signal is explicitly subtracted from the reference input, demonstrating clearly the use of negative feedback. However, in the stretch reflex block diagram of Figure 1.3, the comparator is nowhere to be found. Furthermore, muscle stretch leads to an increase in both afferent and efferent neural traffic. So, how is negative feedback achieved? The answer is that negative feedback in this system is “built into” the plant characteristics: Increased efferent neural input produces a contraction of the extensor muscle, thereby acting to counteract the initial stretch. This kind of embedded feedback is highly common in physiological systems.
One final difference is that physiological systems are generally nonlinear, while engineering control systems can be linear or nonlinear. Frequently, the engineering designer prefers the use of linear system components since they have properties that are well-behaved and easy to predict. This issue will be revisited many times over in the chapters to follow.
Figure 1.4 Contributions of interrelated systems to the muscle stretch reflex.
Figure 1.5 Adaptive characteristics of the muscle stretch reflex.
As we have shown, the construction of block diagrams is useful in helping us clarify in our own minds what key variables best represent the system under study. It is also helpful in allowing us to formalize our belief (which is usually based partly on other people's or our own observations and partly on intuition) of how the various processes involved are causally related. The block diagram that emerges from these considerations, therefore, represents a conceptual model of the physiological control system under study. However, such a model is limited in its ability to enhance our understanding or make predictions, since it only allows qualitative inferences to be made.
To advance the analysis to the next level involves the upgrading of the conceptual model into a mathematical model. The mathematical model allows us to make hypotheses about the contents in each of the “boxes” of the block diagram. For instance, in Figure 1.3, the box labeled “controller” will contain an expression of our belief of how the change in afferent neural frequency may be related to the change in efferent neural frequency. Is the relationship between afferent frequency and efferent frequency linear? If the changes in afferent frequency follow a particular time-course, what would the time-course of the response in efferent frequency be like? One way of answering these questions would be to isolate this part of the physiological control system and perform experiments that would allow us to measure this relationship. In this case, the relationship between the controller input and controller output is derived purely on the basis of observations, and therefore it may take the form of a table or a curve best fitted to the data. Alternatively, these data may already have been measured, and one could simply turn to the literature to establish the required input–output relationship. This kind of model assumes no internal structure and has been given a number of labels in the physiological control literature, such as black-box, empirical, or nonparametric model. Frequently, on the basis of previous knowledge, we also have some idea of what the underlying physical or chemical processes are likely to be. In such situations, we might propose a hypothesis that reflects this belief. On the basis of the particular physical or chemical laws involved, we would then proceed to derive an algebraic, differential, or integral equation that relates the “input” to the “output” of the system component we are studying. This type of model is said to possess an internal structure, that is, it places some constraints on how the input may affect the output. As such, we might call this a structural or gray-box model. In spite of the constraints built into this kind of model, the range of input–output behavior that it is capable of characterizing can still be quite extensive, depending on the number of free parameters (or coefficients) it incorporates. For this reason, this type of model is frequently referred to as a parametric model.
Mathematical modeling may be seen as the use of a “language” to elaborate on the details of the conceptual model. However, unlike verbal languages, mathematics provides descriptions that are unambiguous, concise, and self-consistent. By being unambiguous, different researchers are able to use and test the same model without being confused about the hypotheses built into the model. Since the equations employed in the model are based, at least in large part, on existing knowledge of the physiological processes in question, they also serve the useful purpose of archiving past knowledge and compressing all that information into a compact format. The inherent self-consistency of the model derives from the operational rules of mathematics, which provide a logical accounting system for dealing with the multiple system variables and their interactions with one another. On the other hand, the hypotheses embedded in some components of the model are only hypotheses, reflecting our best belief regarding the underlying process. More often than not, these are incorrect or oversimplistic. As a consequence, the behavior of the model may not reflect the corresponding reality. Yet, the power of the modeling process lies in its replication of the scientific method: The discrepancy between model prediction and physiological observation can be used as “feedback” to alert us to the inadequacies of one or more component hypotheses. This allows us to return to the model development stage once again in order to modify our previous assumptions. Subsequently, we would retest the revised model against experimental observations. And so, the alternating process of induction and deduction continues until we are satisfied that the model can “explain” most of the observed behavior. Then, having arrived at a “good” model, we could venture to use this model to predict how the system might behave under experimental conditions that have not been employed previously. These predictions would serve as a guide for the planning and design of future experiments.
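As a concrete, though highly simplified, illustration of this cycle of prediction, comparison, and revision, the following MATLAB sketch fits a candidate first-order model to synthetic “observations” by adjusting its parameters until the discrepancy between prediction and observation is minimized. The model form, the “true” parameter values, and the noise level are all invented for this example.
% Hypothetical gray-box fitting example: the discrepancy between model
% prediction and (synthetic) observation drives revision of the parameters.
t      = (0:0.1:10)';                    % time vector
p_true = [2.0, 1.5];                     % "unknown" gain and time constant
y_obs  = p_true(1)*(1 - exp(-t/p_true(2))) + 0.05*randn(size(t));  % noisy step response
predict = @(p) p(1)*(1 - exp(-t/p(2)));  % candidate model: first-order step response
cost    = @(p) sum((y_obs - predict(p)).^2);   % sum-of-squares prediction error
p_hat = fminsearch(cost, [1, 1]);        % revise parameters to reduce the discrepancy
fprintf('Estimated gain = %.2f, time constant = %.2f\n', p_hat(1), p_hat(2));
plot(t, y_obs, '.', t, predict(p_hat), '-');
legend('Observations', 'Model prediction');
In practice, of course, the observations would come from experiments rather than from a known generating model, and a poor fit would send us back to revise the structure of the model itself rather than just its parameter values.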
We would be remiss if we did not mention the currently widespread application of mathematical modeling and control theory to biological systems over a much broader spectrum of spatial and temporal scales. “Systems biology” has come to be recognized as a mainstay of biological science, rather than an isolated discipline. To understand the compelling need for systems biology, one must look back into the 1950s when Watson and Crick (1953) published their two-page, landmark paper in Nature, entitled “Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid.” This paper provided a jump-start to the nascent field of molecular biology at the time. In subsequent lectures and papers, Crick introduced the “sequence hypothesis” that evolved into the “central dogma”: This laid out the two-step process, transcription and translation, through which genetic information flows from DNA to mRNA to protein. These and other concurrent developments in molecular biology heralded the golden age of modern biology. The rush was on to develop more reliable and higher throughput methods of DNA sequencing, which ushered in the field of genomics. In turn, technologies were also developed to detect gene mutations using SNP methods. Attention then turned to the development of other technologies (e.g., gene chips, microarrays) for transcriptomics – the cataloging of the complete set of RNA molecules produced by the genome – and, subsequently, proteomics and metabolomics.
As these developments progressed at exponentially increasing speeds, it became clear that the original “reductionist” approach of attempting to “explain” a biological system as the sum of its various components was woefully inadequate. Instead, it was necessary to consider the networks that bind these disparate components and to study the dynamic interactions among these components. Sequential reasoning and intuitive thinking, which worked well for the classical physiologists and biologists, fell by the wayside as high-throughput techniques and new tools from the “-omics” generated avalanche after avalanche of data. As such, it has become necessary to adopt a rigorous framework, together with the necessary computational tools, to select out features from the data that bear relevance to the questions being posed, arrive at a mathematical framework for capturing the dynamic relationships among the interacting variables, and subsequently use the model structure to predict what would likely be observed under a variety of experimental conditions. The basic workflow cycle of observation, feature extraction, model building, parameter estimation, and prediction using the model lies at the core of systems biology. The same fundamental principles (with perhaps different specific techniques and tools) apply to systems physiology as well. A key difference is that systems biology, as the term is used now, requires information at the molecular and cellular levels and as such requires the development of models that transcend multiple levels of spatial and temporal scales – what is commonly referred to as “multiscale modeling.” However, our focus in this book is on the application of model building and control theory to physiological systems at the organ systems level. Nevertheless, we believe that the principles and techniques presented here provide a useful foundation for the reader who is interested in pursuing a more comprehensive grasp of systems biology. There are a number of textbooks and review papers that focus on systems biology: for example, Kitano (2002), Ideker et al. (2006), Voit (2013), and Klipp et al. (2016). For less “textbookish” reading, one is referred to the elegantly written introduction to systems biology by Noble (2006).
Based on the verbal descriptions of the following physiological reflex systems, construct block diagrams to represent the major control mechanisms involved. Clearly identify the physiological correlates of the controller, the plant, and the feedback element, as well as the controlling, controlled, and feedback variables. Describe how negative (or positive) feedback is achieved in each case.
P1.1.
The Bainbridge reflex is a cardiac reflex that aids in the matching of cardiac output (the flow rate at which blood is pumped out of the heart) to venous return (the flow rate at which blood returns to the heart). Suppose there is a transient increase in the amount of venous blood returning to the right atrium. This increases blood pressure in the right atrium, stimulating the atrial stretch receptors. As a result, neural traffic in the vagal afferents to the medulla is increased. This, in turn, leads to an increase in efferent activity in the cardiac sympathetic nerves as well as a parallel decrease in efferent parasympathetic activity. Consequently, both heart rate and cardiac contractility are increased, raising cardiac output. In this way, the reflex acts like a servomechanism, adjusting cardiac output to track venous return.
P1.2.
The pupillary light reflex is another classic example of a negative feedback control system. In response to a decrease in light intensity, receptors in the retina transmit neural impulses at a higher rate to the pretectal nuclei in the midbrain, and subsequently to the Edinger–Westphal nuclei. From the Edinger–Westphal nuclei, a change in neural traffic down the efferent nerves back to the eyes leads to a relaxation of the sphincter muscles and contraction of the radial dilator muscles that together produce an increase in pupil area, which increases the total flux of light falling on the retina.
P1.3.
The regulation of water balance in the body is intimately connected with the control of sodium excretion. One major mechanism of sodium reabsorption involves the renin–angiotensin–aldosterone system. Loss of water and sodium from the body, for example, due to diarrhea, leads to a drop in plasma volume, which lowers mean systemic blood pressure. This stimulates the venous and arterial baroreflexes that cause an increase in activity of the renal sympathetic nerves, which in turn stimulates the release of renin by the kidneys into the circulation. The increase in plasma renin concentration leads to an increase in plasma angiotensin, which stimulates the release of aldosterone by the adrenal cortex. Subsequently, the increased plasma aldosterone stimulates the reabsorption of sodium by the distal tubules in the kidneys, thereby increasing plasma sodium levels.
P1.4.
The control system that regulates water balance is intimately coupled with the control of sodium excretion. When sodium is reabsorbed by the distal tubules of the kidneys, water will also be reabsorbed if the permeability of the tubular epithelium is raised. This is achieved in the following way. When there is a drop in plasma volume, mean systemic pressure decreases, leading to a change in stimulation of the left atrial pressure receptors. The latter send signals to a group of neurons in the hypothalamus, increasing its production of vasopressin or antidiuretic hormone (ADH). As a result, the ADH concentration in blood plasma increases, which leads to an increase in water permeability of the kidney distal tubules and collecting ducts.
P1.5.
Arterial blood pressure is regulated by means of the baroreceptor reflex. Suppose arterial blood pressure falls. This reduces the stimulation of the baroreceptors located in the aortic arch and the carotid sinus, which lowers the rate at which neural impulses are sent along the glossopharyngeal and vagal afferents to the autonomic centers in the medulla. Consequently, sympathetic neural outflow is increased, leading to an increase in heart rate and cardiac contractility, as well as vasoconstriction of the peripheral vascular system. At the same time, a decreased parasympathetic outflow aids in the heart rate increase. These factors together act to raise arterial pressure.
P1.6.
A prolonged reduction in blood pressure due to massive loss of blood can lead to “hemorrhagic shock” in which the decreased blood volume lowers mean systemic pressure, venous return, and thus cardiac output. Consequently, arterial blood pressure is also decreased, leading to decreased coronary blood flow, reduction in myocardial oxygenation, loss in the pumping ability of the heart, and therefore further reduction in cardiac output. The decreased cardiac output also leads to decreased oxygenation of the peripheral tissues, which can increase capillary permeability, thereby allowing fluid to be lost from the blood to the extravascular spaces. This produces further loss of blood volume and mean systemic pressure, and therefore, further reduction in cardiac output.
Adolph, E. Early concepts in physiological regulation. Physiol. Rev. 41: 737–770, 1961.
Bayliss, L.E. Living Control Systems, English University Press, London, 1966.
Cannon, W. The Wisdom of the Body, Norton, New York, 1939.
Clynes, M., and J.H. Milsum. Biomedical Engineering Systems, McGraw-Hill, New York, 1970.
Grodins, F.S. Control Theory and Biological Systems, Columbia University Press, New York, 1963.
Guyton, A.C., C.E. Jones, and T.G. Coleman. Circulatory Physiology: Cardiac Output and Its Regulation, Saunders, Philadelphia, 1973.
Ideker, T., R.L. Winslow, and D.A. Lauffenburger. Bioengineering and systems biology. Ann. Biomed. Eng. 34: 1226–1233, 2006.
Jones, R.W. Principles of Biological Regulation, Academic Press, New York, 1973.
Kitano, H. Systems biology: a brief overview. Science 295: 1662–1664, 2002.
Klipp, E., W. Liebermeister, C. Wierling, A. Kowald, H. Lehrach, and R. Herwig. Systems Biology: A Textbook, 2nd edition, Wiley-Blackwell, Hoboken, NJ, 2016.
Milhorn, H.T. The Application of Control Theory to Physiological Systems, Saunders, Philadelphia, 1966.
Miller, J. The Body in Question, Random House, New York, 1978.
Milsum, J.H. Biological Control Systems Analysis, McGraw-Hill, New York, 1966.
Noble, D. The Music of Life: Biology Beyond Genes, Oxford University Press, Oxford, 2006.
Riggs, D.S. Control Theory and Physiological Feedback Mechanisms, Williams & Wilkins, Baltimore, 1970.
Stark, L. Neurological Control Systems, Plenum Press, New York, 1968.
Unschuld, P.U. Medicine in China: A History of Ideas, University of California Press, Berkeley, 1985.
Vander, A.J., J.H. Sherman, and D.S. Luciano. Human Physiology: The Mechanisms of Body Function, 7th edition, McGraw-Hill, New York, 1997.
Voit, E.O. A First Course in Systems Biology, Garland Science, New York, 2013.
Watson, J.D., and F.H.C. Crick. Molecular structure of nucleic acids: a structure for deoxyribose nucleic acid. Nature 171: 737–738, 1953.
Wiener, N. Cybernetics: Control and Communication in the Animal and the Machine, John Wiley & Sons, Inc., New York, 1961.
In this chapter, we will review the basic concepts and methods employed in the development of “gray-box” models. Models of very different systems often contain properties that can be characterized using the same mathematical expression. The first of these is the resistive property. Everyone is familiar with the concept of electrical resistance (R), which is defined by Ohm's law as
V = IR (2.1)
where V is the voltage or driving potential across the resistor and I represents the current that flows through it. Note that V is an “across”-variable and may be viewed as a measure of “effort.” On the other hand, I is a “through”-variable and represents a measure of “flow.” Thus, if we define the generalized “effort” variable ψ and the generalized “flow” variable, ζ, Ohm's law becomes
ψ = Rζ (2.2)
where R now represents a generalized resistance. Figure 2.1 shows the application of this concept of generalized resistance to different kinds of systems. In the mechanical dashpot, when a force F is applied to the plunger (and, of course, an equal and opposite force is applied to the dashpot casing), it will move with a velocity v that is proportional to F. As illustrated in Figure 2.1a, this relationship takes on the same form as the generalized Ohm's law (Equation 2.2), when F and v are made to correspond to ψ and ζ, respectively. The constant of proportionality, Rm, which is related to the viscosity of the fluid inside the dashpot, provides a measure of “mechanical resistance.” In fact, Rm determines the performance of the dashpot as a shock absorber and is more commonly known as the “damping coefficient.” In fluid flow, the generalized Ohm's law assumes the form of Poiseuille's law, which states that the volumetric flow of fluid (Q) through a rigid tube is proportional to the pressure difference (ΔP) across the two ends of the tube. This is illustrated in Figure 2.1b. Poiseuille further showed that the fluid resistance Rf is directly related to the viscosity of the fluid and the length of the tube, and inversely proportional to the square of the tube cross-sectional area. In Fourier's law of thermal transfer, the flow of heat conducted through a given material is directly proportional to the temperature difference that exists across the material (Figure 2.1c). Thermal resistance Rt can be shown to be inversely related to the thermal conductivity of the material. Finally, in chemical systems, the flux Q of a given chemical species across a permeable membrane separating two fluids with different species concentrations is proportional to the concentration difference Δφ (Figure 2.1d). This is known as Fick's law of diffusion. The diffusion resistance Rc
