Identification of Physical Systems - Rajamani Doraiswami - E-Book


Rajamani Doraiswami

Description

Identification of a physical system deals with the problem of identifying its mathematical model using measured input and output data. As a physical system is generally complex and nonlinear, and its input–output data are corrupted by noise, there are fundamental theoretical and practical issues that need to be considered.

Identification of Physical Systems addresses this need, presenting a systematic, unified approach to the problem of physical system identification and its practical applications. Starting with a least-squares method, the authors develop various schemes to address the issues of accuracy, variation in the operating regimes, closed-loop operation, and interconnected subsystems. Also presented is a non-parametric, signal- or data-based scheme that provides a quick macroscopic picture of the system to complement the precise microscopic picture given by the parametric, model-based scheme. Finally, a sequential integration of fundamentally different schemes, such as the non-parametric, Kalman filter, and parametric model-based schemes, is developed to meet the speed and accuracy requirements of mission-critical systems.

Key features:

  • Provides a clear understanding of the theoretical and practical issues in identification and its applications, enabling the reader to grasp the theory and apply it to practical problems
  • Offers a self-contained guide by including the background necessary to understand this interdisciplinary subject
  • Includes case studies on the application of identification to physical laboratory-scale systems, as well as a number of illustrative examples throughout the book

Identification of Physical Systems is a comprehensive reference for researchers and practitioners working in this field and is also a useful source of information for graduate students in electrical, computer, biomedical, chemical, and mechanical engineering.

Page count: 814

Year of publication: 2014




IDENTIFICATION OF PHYSICAL SYSTEMS

APPLICATIONS TO CONDITION MONITORING, FAULT DIAGNOSIS, SOFT SENSOR AND CONTROLLER DESIGN

Rajamani Doraiswami, Chris Diduch and Maryhelen Stevenson

University of New Brunswick, Canada

This edition first published 2014 © 2014 John Wiley & Sons Ltd

Registered office John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom

For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.

The right of the author to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. It is sold on the understanding that the publisher is not engaged in rendering professional services and neither the publisher nor the author shall be liable for damages arising herefrom. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book’s use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® software.

Library of Congress Cataloging-in-Publication Data

Doraiswami, Rajamani.
Identification of physical systems : applications to condition monitoring, fault diagnosis, soft sensor, and controller design / Rajamani Doraiswami, Chris Diduch, Maryhelen Stevenson.
pages cm
Includes bibliographical references and index.
ISBN 978-1-119-99012-3 (cloth)
1. Systems engineering. 2. Systems engineering–Mathematics. I. Diduch, Chris. II. Stevenson, Maryhelen. III. Title.
TA168.D66 2014
620.001′1–dc23
2013049559

A catalogue record for this book is available from the British Library.

ISBN 9781119990123

Contents

Preface

Modeling of Signals and Systems

Temporal and Spectral Characterization of Signals: Correlation and Spectral Density (Coherence)

Estimation of a Deterministic Parameter

Estimation of Random Parameter

Least-Squares Estimation

Kalman Filter

System Identification

Closed-Loop Identification

Fault Diagnosis

Modeling and Identification of Physical Systems

Fault Diagnosis of Physical Systems

Fault Diagnosis of a Sensor Network

Soft Sensor

Nomenclature

1 Modeling of Signals and Systems

1.1 Introduction

1.2 Classification of Signals

1.3 Model of Systems and Signals

1.4 Equivalence of Input–Output and State-Space Models

1.5 Deterministic Signals

1.6 Introduction to Random Signals

1.7 Model of Random Signals

1.8 Model of a System with Disturbance and Measurement Noise

1.9 Summary

References

Further Readings

2 Characterization of Signals: Correlation and Spectral Density

2.1 Introduction

2.2 Definitions of Auto- and Cross-Correlation (and Covariance)

2.3 Spectral Density: Correlation in the Frequency Domain

2.4 Coherence Spectrum

2.5 Illustrative Examples in Correlation and Spectral Density

2.6 Input–Output Correlation and Spectral Density

2.7 Illustrative Examples: Modeling and Identification

2.8 Summary

2.9 Appendix

References

3 Estimation Theory

3.1 Overview

3.2 Map Relating Measurement and the Parameter

3.3 Properties of Estimators

3.4 Cramér–Rao Inequality

3.5 Maximum Likelihood Estimation

3.6 Summary

3.7 Appendix: Cauchy–Schwarz Inequality

3.8 Appendix: Cramér–Rao Lower Bound

3.9 Appendix: Fisher Information: Cauchy PDF

3.10 Appendix: Fisher Information for i.i.d. PDF

3.11 Appendix: Projection Operator

3.12 Appendix: Fisher Information: Part Gauss-Part Laplace

Problem

References

Further Readings

4 Estimation of Random Parameter

4.1 Overview

4.2 Minimum Mean-Squares Estimator (MMSE): Scalar Case

4.3 MMSE Estimator: Vector Case

4.4 Expression for Conditional Mean

4.5 Summary

4.6 Appendix: Non-Gaussian Measurement PDF

References

Further Readings

5 Linear Least-Squares Estimation

5.1 Overview

5.2 Linear Least-Squares Approach

5.3 Performance of the Least-Squares Estimator

5.4 Illustrative Examples

5.5 Cramér–Rao Lower Bound

5.6 Maximum Likelihood Estimation

5.7 Least-Squares Solution of Under-Determined System

5.8 Singular Value Decomposition

5.9 Summary

5.10 Appendix: Properties of the Pseudo-Inverse and the Projection Operator

5.11 Appendix: Positive Definite Matrices

5.12 Appendix: Singular Value Decomposition of a Matrix

5.13 Appendix: Least-Squares Solution for Under-Determined System

5.14 Appendix: Computation of Least-Squares Estimate Using the SVD

References

Further Readings

6 Kalman Filter

6.1 Overview

6.2 Mathematical Model of the System

6.3 Internal Model Principle

6.4 Duality Between Controller and an Estimator Design

6.5 Observer: Estimator for the States of a System

6.6 Kalman Filter: Estimator of the States of a Stochastic System

6.7 The Residual of the Kalman Filter with Model Mismatch and Non-Optimal Gain

6.8 Summary

6.9 Appendix: Estimation Error Covariance and the Kalman Gain

6.10 Appendix: The Role of the Ratio of Plant and the Measurement Noise Variances

6.11 Appendix: Orthogonal Properties of the Kalman Filter

6.12 Appendix: Kalman Filter Residual with Model Mismatch

References

7 System Identification

7.1 Overview

7.2 System Model

7.3 Kalman Filter-Based Identification Model Structure

7.4 Least-Squares Method

7.5 High-Order Least-Squares Method

7.6 The Prediction Error Method

7.7 Comparison of High-Order Least-Squares and the Prediction Error Methods

7.8 Subspace Identification Method

7.9 Summary

7.10 Appendix: Performance of the Least-Squares Approach

7.11 Appendix: Frequency-Weighted Model Order Reduction

References

8 Closed Loop Identification

8.1 Overview

8.2 Closed-Loop System

8.3 Model of the Single Input Multi-Output System

8.4 Kalman Filter-Based Identification Model

8.5 Closed-Loop Identification Schemes

8.6 Second Stage of the Two-Stage Identification

8.7 Evaluation on a Simulated Closed-Loop Sensor Net

8.8 Summary

References

9 Fault Diagnosis

9.1 Overview

9.2 Mathematical Model of the System

9.3 Model of the Kalman Filter

9.4 Modeling of Faults

9.5 Diagnostic Parameters and the Feature Vector

9.6 Illustrative Example

9.7 Residual of the Kalman Filter

9.8 Fault Diagnosis

9.9 Fault Detection: Bayes Decision Strategy

9.10 Evaluation of Detection Strategy on Simulated System

9.11 Formulation of Fault Isolation Problem

9.12 Estimation of the Influence Vectors and Additive Fault

9.13 Fault Isolation Scheme

9.14 Isolation of a Single Fault

9.15 Emulators for Offline Identification

9.16 Illustrative Example

9.17 Overview of Fault Diagnosis Scheme

9.18 Evaluation on a Simulated Example

9.19 Summary

9.20 Appendix: Bayesian Multiple Composite Hypotheses Testing Problem

9.21 Appendix: Discriminant Function for Fault Isolation

9.22 Appendix: Log-likelihood Ratio for a Sinusoid and a Constant

References

10 Modeling and Identification of Physical Systems

10.1 Overview

10.2 Magnetic Levitation System

10.3 Two-Tank Process Control System

10.4 Position Control System

10.5 Summary

References

11 Fault Diagnosis of Physical Systems

11.1 Overview

11.2 Two-Tank Physical Process Control System

11.3 Position Control System

11.4 Summary

References

12 Fault Diagnosis of a Sensor Network

12.1 Overview

12.2 Problem Formulation

12.3 Fault Diagnosis Using a Bank of Kalman Filters

12.4 Kalman Filter for Pairs of Measurements

12.5 Kalman Filter for the Reference Input-Measurement Pair

12.6 Kalman Filter Residual: A Model Mismatch Indicator

12.7 Bayes Decision Strategy

12.8 Truth Table of Binary Decisions

12.9 Illustrative Example

12.10 Evaluation on a Physical Process Control System

12.11 Fault Detection and Isolation

12.12 Summary

12.13 Appendix

References

13 Soft Sensor

13.1 Review

13.2 Mathematical Formulation

13.3 Identification of the System

13.4 Model of the Kalman Filter

13.5 Robust Controller Design

13.6 High Performance and Fault Tolerant Control System

13.7 Evaluation on a Simulated System: Soft Sensor

13.8 Evaluation on a Physical Velocity Control System

13.9 Conclusions

13.10 Summary

References

Index

End User License Agreement

List of Tables

Chapter 1

Table 1.1

Table 1.2

Chapter 2

Table 2.1

Table 2.2

Table 2.3

Chapter 3

Table 3.1

Chapter 4

Table 4.1

Chapter 6

Table 6.1

Table 6.2

Table 6.3

Chapter 7

Table 7.1

Table 7.2

Table 7.3

Chapter 8

Table 8.1

Chapter 9

Table 9.1

Chapter 11

Table 11.1

Table 11.2

Chapter 12

Table 12.1

Table 12.2

Table 12.3

Table 12.4

Chapter 13

Table 13.1


Preface

The topic of identification and its applications is interdisciplinary, covering the areas of estimation theory, signal processing, and control, with a rich background in probability, stochastic processes, and linear algebra. Applications include control system design and analysis, fault detection and isolation, health monitoring, condition-based maintenance, fault diagnosis of a sensor network, and soft sensing. A soft sensor estimates variables of interest from the process output measurements using maintenance-free software instead of a hardware device.

The Kalman filter forms the backbone of the work presented in this book, in view of its key property: the residual is a zero-mean white noise process if and only if there is no mismatch between the model of the system and that employed in the design of the Kalman filter. This property is termed the residual property of the Kalman filter. The structure of the identification model is selected to be that of the Kalman filter so as to exploit the residual property. If the error between the system output and its estimate obtained using the identified model is a zero-mean white noise process, then the identified model is the best fit to the system model. Fault detection and isolation are based on analyzing the Kalman filter residual. If the residual property does not hold, it is asserted that there is a fault, and the residual is then further analyzed to isolate the faulty subsystems. A bank of Kalman filters is employed for fault diagnosis in a sensor network because of the distributed nature of the latter. At the core of a soft sensor is a Kalman filter, which generates estimates of the unmeasured variables from the output measurements.

Chapters 1–5 provide the background for system identification and its applications, including the modeling of signals and systems, deterministic and random signals, characterization of signals, and estimation theory, as explained below.

Modeling of Signals and Systems

Chapter 1 describes state-space and linear regression models of systems subject to disturbance and measurement noise. The disturbances may include effects such as gravity load, electrical power demand, fluctuations of flow in a fluid system, wind gusts, bias, power-frequency signals, dc offset, crew or passenger load in vehicles such as spacecraft, ships, helicopters, and planes, and process faults. Measurement noise is a random signal inherent in all physical components. The most common source of noise is thermal noise, due to the motion of thermally agitated free electrons in a conductor.

A signal is modeled as an output of a linear time-invariant system driven by an impulse (delta) function if it is deterministic, and by a zero-mean white noise process if it is random. An integrated model is developed that includes the model of the disturbance and the measurement noise. The integrated model is driven by both the system input and the zero-mean white noise processes that generate the random disturbances and measurement noise. This model sets the stage for developing the model of the Kalman filter for the system.
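
The idea of modeling a random signal as the output of a linear time-invariant system driven by zero-mean white noise can be sketched in a few lines. The following Python (NumPy) example is not from the book; the first-order AR model and its coefficient are illustrative assumptions.

```python
import numpy as np

# Sketch: a random signal modeled as the output of a linear time-invariant
# system (here a hypothetical AR(1) filter) driven by zero-mean white noise.
rng = np.random.default_rng(0)
N = 200_000
a = 0.8                       # AR(1) pole, assumed for illustration
w = rng.standard_normal(N)    # zero-mean, unit-variance white noise input

x = np.zeros(N)
for k in range(1, N):
    x[k] = a * x[k - 1] + w[k]    # colored (random) output signal

# Stationary theory predicts var(x) = var(w) / (1 - a^2) for the AR(1) model
print(x.var(), 1.0 / (1 - a**2))
```

The sample variance of the generated signal should approach the theoretical stationary variance, illustrating how the white-noise-driven model characterizes the random signal.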

Temporal and Spectral Characterization of Signals: Correlation and Spectral Density (Coherence)

Characterization of signals in terms of the correlation and its frequency-domain counterpart, the power spectral density, is treated in Chapter 2. The correlation is a measure of the statistical similarity or dissimilarity between two waveforms, whereas the magnitude-squared coherence spectrum measures the spectral similarity or dissimilarity between two signals. This measure (the coherence spectrum) is the frequency-domain counterpart of the correlation coefficient, which preserves only the shape of the correlation function. The coherence spectrum is widely used in many areas, including medical diagnosis, performance monitoring, and fault diagnosis. A non-parametric identification of a system in terms of its frequency response may be obtained simply by dividing the cross-power spectral density of the system input and output by the power spectral density of the input at each frequency. The non-parametric identification serves to cross-check the result of the parametric identification of the system.
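
The spectral-density division described above can be sketched as follows. This Python (NumPy) example is not from the book: the first-order FIR "unknown" system, segment length, and averaging scheme are illustrative assumptions.

```python
import numpy as np

# Sketch of non-parametric identification: estimate the frequency response as
# the ratio of the input-output cross-spectral density to the input spectral
# density, using averaged periodograms (Welch-style, no overlap).
rng = np.random.default_rng(1)
N, L = 1 << 16, 256                  # data length, segment length (assumed)
u = rng.standard_normal(N)           # white input
h = np.array([1.0, 0.5])             # hypothetical system: y[k] = u[k] + 0.5 u[k-1]
y = np.convolve(u, h)[:N]

Suu = np.zeros(L, complex)
Suy = np.zeros(L, complex)
for s in range(0, N - L + 1, L):
    U = np.fft.fft(u[s:s + L])
    Y = np.fft.fft(y[s:s + L])
    Suu += U.conj() * U              # input power spectral density (unnormalized)
    Suy += U.conj() * Y              # input-output cross-spectral density

H_hat = Suy / Suu                    # non-parametric frequency-response estimate
H_true = np.fft.fft(h, L)            # exact frequency response for comparison
print(np.max(np.abs(H_hat - H_true)))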

Estimation theory: Estimation theory is a branch of statistics with wide application in science and engineering, and it forms the backbone of the system identification field. Because of this pivotal role, estimation theory is thoroughly addressed in three complete chapters (Chapters 3–5), which provide its foundational knowledge and results.

Estimation of a Deterministic Parameter

The problem of estimating a deterministic parameter from noisy measurements is developed in Chapter 3. The measurement model is assumed to be linear: the output measurement is a linear function of the unknown parameter with additive noise. The probability density function of the measurement is not restricted to be Gaussian. Commonly occurring heavy-tailed probability density functions (PDFs), such as the Laplacian, the exponential, and the Cauchy PDFs, are considered. If the PDF governing the measurement is unknown except for its mean and variance, a worst-case PDF, which is partly Gaussian and partly exponential, is employed. The worst-case PDF is derived using min-max theory. A thin-tailed PDF, such as the Gaussian, characterizes “good” measurement data, while thick-tailed PDFs characterize bad measurement data. A popular lower bound used in estimation theory to define the efficiency of an estimator, namely the Cramér–Rao lower bound, is derived for the error covariance of the estimator. An estimator is termed efficient if its estimation error covariance is equal to the Cramér–Rao lower bound, and unbiased if the true value of the parameter is equal to the expected value of its estimate. Approaches to the estimation of deterministic (non-random) parameters are developed, including the maximum likelihood and least-squares methods.
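
A standard concrete instance of unbiasedness and efficiency is the sample mean of Gaussian measurements, whose variance attains the Cramér–Rao lower bound. The Python (NumPy) sketch below is not from the book; the parameter, noise level, and sample sizes are illustrative assumptions.

```python
import numpy as np

# Sketch: for N Gaussian measurements x[k] = theta + v[k], v ~ N(0, sigma^2),
# the Cramér-Rao lower bound on the variance of an unbiased estimator is
# sigma^2 / N, and the sample mean attains it (an efficient estimator).
rng = np.random.default_rng(2)
theta, sigma, N, trials = 3.0, 2.0, 50, 20_000   # assumed values

estimates = np.array([
    (theta + sigma * rng.standard_normal(N)).mean() for _ in range(trials)
])
crlb = sigma**2 / N
# Empirical mean ≈ theta (unbiased); empirical variance ≈ CRLB (efficient)
print(estimates.mean(), estimates.var(), crlb)
```

Over many Monte Carlo trials the empirical mean of the estimates approaches the true parameter and their empirical variance approaches the bound, which is what efficiency means in practice.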

Estimation of Random Parameter

Chapter 4 deals with random parameter estimation. It is shown that the optimal estimate in the sense of minimum mean-squared error is the conditional mean of the parameter given the past and present measurements. Extension of random parameter estimation to the minimum mean-squared error estimation of a random process, including the states and output of a system, leads to the development of the Kalman filter.
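
In the jointly Gaussian case the conditional mean has a simple closed form, which makes the minimum mean-squared error (MMSE) property easy to verify numerically. This Python (NumPy) sketch is not from the book; the prior and noise variances are illustrative assumptions.

```python
import numpy as np

# Sketch: random-parameter estimation in the jointly Gaussian scalar case,
# theta ~ N(0, s_t2), x = theta + v with v ~ N(0, s_v2). The conditional mean
# is E[theta | x] = s_t2 / (s_t2 + s_v2) * x, with MMSE s_t2*s_v2/(s_t2+s_v2).
rng = np.random.default_rng(3)
s_t2, s_v2, M = 4.0, 1.0, 200_000        # assumed prior and noise variances

theta = np.sqrt(s_t2) * rng.standard_normal(M)
x = theta + np.sqrt(s_v2) * rng.standard_normal(M)

theta_hat = (s_t2 / (s_t2 + s_v2)) * x   # conditional-mean (MMSE) estimator
mse = np.mean((theta - theta_hat)**2)
print(mse, s_t2 * s_v2 / (s_t2 + s_v2))  # empirical vs theoretical MMSE
```

The empirical mean-squared error of the conditional-mean estimator matches the theoretical minimum, and no other estimator can do better in the mean-squared sense.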

Least-Squares Estimation

Chapter 5 deals with the widely used least-squares method for estimating an unknown deterministic parameter from a measurement signal corrupted by colored or white noise. Properties of the least-squares estimate are derived. Expressions for the bias error and the covariance of the estimation error are obtained for both finite and infinitely large numbers of data samples. The least-squares method and its generalized version, the weighted least-squares method, produce an estimator that is unbiased and is the best linear unbiased estimator (BLUE). Most importantly, the optimal estimate is the solution of a set of linear equations, which can be efficiently computed using the singular value decomposition (SVD). Moreover, least-squares estimation has a very useful geometric interpretation: the residual can be shown to be orthogonal to the hyper-plane generated by the columns of the data matrix. This is called the orthogonality principle.
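
Both points, the SVD-based computation and the orthogonality principle, can be demonstrated in a few lines. The Python (NumPy) sketch below is not from the book; the data matrix and true parameter vector are arbitrary illustrations.

```python
import numpy as np

# Sketch: least-squares via the SVD-based pseudo-inverse, and a numerical
# check of the orthogonality principle (residual ⟂ columns of the data matrix).
rng = np.random.default_rng(4)
H = rng.standard_normal((100, 3))            # data matrix (assumed)
theta_true = np.array([1.0, -2.0, 0.5])      # assumed true parameter
y = H @ theta_true + 0.1 * rng.standard_normal(100)

theta_hat = np.linalg.pinv(H) @ y            # SVD-based least-squares solution
residual = y - H @ theta_hat

# Orthogonality principle: H' * residual = 0 up to round-off
print(np.abs(H.T @ residual).max())
```

`np.linalg.pinv` computes the Moore-Penrose pseudo-inverse via the SVD, so the same code also covers rank-deficient and under-determined data matrices.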

Kalman Filter

The Kalman filter (KF) is developed in Chapter 6. It is widely used in a plethora of science and engineering applications, including tracking, navigation, fault diagnosis, condition-based maintenance, performance (or health or product quality) monitoring, soft sensing, estimation of a signal of a known class buried in noise, speech enhancement, and controller implementation. It also plays a crucial role in system identification, as the structure of the identification model is chosen to be the same as that of the Kalman filter. It is an optimal recursive estimator of the states of a dynamical system that is well suited to non-stationary systems, unlike the Wiener filter, which is limited to stationary processes. Further extensions of the Kalman filter have taken it into the realms of non-Gaussian and nonlinear systems and have spawned a variety of powerful filters, ranging from the well-known extended KF to the most general particle filter (PF). In a KF, the system is modeled in a state-space form driven by a zero-mean white Gaussian noise process. The Kalman filter consists of two sets of equations, a static (or algebraic) equation and a dynamic equation. The dynamic equation, or state equation, is driven by the input of the system and the residual. The algebraic (or static) equation, also known as the output equation, contains an additive white Gaussian noise which represents the measurement noise. The Kalman filter is designed for the integrated model of the system, formed of the models of the system, the disturbance, and the measurement noise. There are two approaches to deriving the Kalman filter: one relies on stochastic estimation theory and the other on deterministic theory. A deterministic approach is adopted herein. The structure of the Kalman filter is determined using the internal model principle, which establishes the necessary and sufficient condition for tracking the output of a dynamical system.
In accordance with this principle, the Kalman filter consists of (i) a copy of the system model driven by the residuals and (ii) a gain term, termed the Kalman gain, used to stabilize the filter. The internal model principle provides a mathematical justification for the robustness of the Kalman filter to noise and disturbances and for the high sensitivity (i.e., a lack of robustness) of the mean of the residuals to model mismatch. This property is judiciously exploited in designing a Kalman filter for applications such as performance monitoring and fault diagnosis. The Kalman filter computes the estimates by fusing the a posteriori information provided by the measurement, and the a priori information contained in the model which governs the evolution of the measurement. The covariance of the measurement noise, and that of the plant noise, quantify the degree of belief associated with the measurement and model information, respectively. The estimate of the state is obtained as the best compromise between the estimates generated by the model and those obtained from the measurement, depending upon the plant noise and the measurement noise covariances.
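
The residual (innovation) property for a matched model can be checked numerically on a scalar example. The Python (NumPy) sketch below is not from the book; the state gain and noise variances are illustrative assumptions.

```python
import numpy as np

# Sketch of the residual property: for a correctly matched model the Kalman
# filter residual is a zero-mean white noise process. Scalar system
# x[k+1] = a x[k] + w[k], y[k] = x[k] + v[k], with assumed parameters.
rng = np.random.default_rng(5)
a, q, r, N = 0.9, 0.5, 1.0, 100_000     # state gain, plant/measurement noise vars

x, y = 0.0, np.zeros(N)
for k in range(N):
    x = a * x + np.sqrt(q) * rng.standard_normal()
    y[k] = x + np.sqrt(r) * rng.standard_normal()

x_hat, P = 0.0, 1.0                     # predicted state and its covariance
resid = np.zeros(N)
for k in range(N):
    resid[k] = y[k] - x_hat             # residual (innovation)
    K = P / (P + r)                     # Kalman gain
    x_hat = a * (x_hat + K * resid[k])  # measurement update, then time update
    P = a * a * P * (1 - K) + q

rho1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(resid.mean(), rho1)               # both near zero for a matched model
```

With a model mismatch (e.g., running the same filter with a wrong value of `a`), the residual mean and lag correlations depart from zero, which is exactly the sensitivity exploited for fault diagnosis.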

System Identification

Chapter 7 considers various methods used for system identification, including the classical least-squares, high-order least-squares, prediction error, and subspace methods. Given that a very large class of dynamical systems can be modeled by rational transfer functions, the identification of these systems deals with the problem of estimating the unknown parameters which characterize them completely, namely the numerator and denominator coefficients of their transfer functions. The vector formed of these coefficients is termed the feature vector. The least-squares identification technique is an extension of the classical least-squares method to the case when the matrix relating the output of the system to the feature vector is not constant but is a function of the past inputs and outputs. The basic principle behind a general identification method is as follows: (i) choose a model, for example a transfer function model, based initially on the physical laws governing the system; (ii) estimate the feature vector by minimizing the residual, which is the error between the output of the system and its estimate from the assumed model; (iii) verify whether the error between the system output and its estimate is a zero-mean white noise process. If it is not a white noise process, refine the structure (the order of the numerator, the order of the denominator, and the delay) of the model and repeat the previous steps. It is interesting to note that by choosing a model (i.e., by fixing its structure), system identification is reduced to a parameter estimation problem. In order to prevent overfitting the data, a well-known criterion for model order selection, such as the Akaike Information Criterion, is used. In order to meet the above requirements and to comply with the internal model principle, the structure of the assumed model is chosen to be the same as that of the Kalman filter, in view of its residual property.
It is shown that the widely used high-order least-squares and prediction error methods are derived from the residual model that relates the Kalman filter residual to the system input and output in terms of the feature vector. The subspace method, however, is derived directly from the Kalman filter model relating the residual to the system input and output.
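
The reduction of identification to parameter estimation, with a regressor matrix built from past inputs and outputs, can be sketched on a first-order ARX model. The Python (NumPy) example below is not from the book; the true coefficients and noise level are illustrative assumptions.

```python
import numpy as np

# Sketch of least-squares identification: the feature vector (transfer-function
# coefficients) is estimated from a regressor matrix of past outputs and
# inputs. First-order ARX example, y[k] + a1 y[k-1] = b1 u[k-1] + e[k].
rng = np.random.default_rng(6)
N, a1, b1 = 50_000, -0.7, 2.0            # assumed true feature vector [a1, b1]
u = rng.standard_normal(N)               # excitation input
e = 0.1 * rng.standard_normal(N)         # white equation noise
y = np.zeros(N)
for k in range(1, N):
    y[k] = -a1 * y[k - 1] + b1 * u[k - 1] + e[k]

# Regressor matrix: each row holds (-y[k-1], u[k-1]); target is y[k]
Phi = np.column_stack([-y[:-1], u[:-1]])
theta_hat, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta_hat)                         # estimate of [a1, b1]
```

If the one-step prediction error `y[1:] - Phi @ theta_hat` were not white, step (iii) of the procedure above would call for refining the model orders and repeating the fit.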

Closed-Loop Identification

Chapter 8 extends identification to systems operating in closed loop. In practice, and for a variety of reasons (e.g., analysis, design, and control), it is often necessary to identify a system that must operate in a closed-loop fashion under some type of feedback control. These reasons include the design of a high-performance controller, safety issues, and the need to stabilize an unstable plant and/or improve its performance while avoiding the cost incurred through downtime if the plant were taken offline for testing. In these cases, it is therefore necessary to perform closed-loop identification. Applications include aerospace, magnetic levitation, levitated micro-robotics, magnetically levitated automotive engine valves, magnetic bearings, mechatronics, adaptive control of processes, satellite-launching vehicles or unstable aircraft operating in closed loop, and process control systems. There are three basic approaches to closed-loop identification, namely the direct, indirect, and two-stage approaches. Using the direct approach, there may be a bias in the identified subsystem models, due mainly to the correlation between the input and the noise; further, the open-loop plant may be unstable. The two-stage approach is emphasized here. In the first stage, the closed-loop transfer functions relating the system input to all the measured outputs are identified. In the second stage, using the estimated outputs from the first stage, the (open-loop) subsystem transfer functions are estimated.
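
The two stages can be sketched on a toy loop. The Python (NumPy) example below is not from the book: the FIR plant, proportional controller, colored noise model, and FIR order of the first stage are all illustrative assumptions.

```python
import numpy as np

# Sketch of two-stage closed-loop identification. Plant (FIR, assumed):
# ym[k] = b1 u[k-1] + b2 u[k-2] + v[k], colored noise v, feedback
# u[k] = Kc (r[k] - ym[k-1]). The feedback correlates u with v, which is
# what biases the direct approach; the two-stage approach avoids it.
rng = np.random.default_rng(7)
N, b1, b2, Kc = 100_000, 1.0, 0.5, 0.5

r = rng.standard_normal(N)                        # external reference input
n = 0.3 * rng.standard_normal(N)
u, ym, v = np.zeros(N), np.zeros(N), np.zeros(N)
for k in range(2, N):
    v[k] = 0.8 * v[k - 1] + n[k]                  # colored measurement noise
    u[k] = Kc * (r[k] - ym[k - 1])                # feedback control law
    ym[k] = b1 * u[k - 1] + b2 * u[k - 2] + v[k]  # measured plant output

# Stage 1: identify the closed-loop map r -> u with a high-order FIR model,
# then compute the noise-free input estimate u_hat (a function of r only).
m = 30
R = np.column_stack([np.roll(r, i) for i in range(m)])[m:]
g, *_ = np.linalg.lstsq(R, u[m:], rcond=None)
u_hat = np.convolve(r, g)[:N]

# Stage 2: identify the open-loop plant from (u_hat, ym); since u_hat is
# built from r alone, it is uncorrelated with the noise v.
Phi = np.column_stack([u_hat[1:-1], u_hat[:-2]])
theta, *_ = np.linalg.lstsq(Phi, ym[2:], rcond=None)
print(theta)                                      # estimate of [b1, b2]
```

The key design choice is that the stage-2 regressors are driven only by the external reference, breaking the input-noise correlation that the feedback loop introduces.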

Applications of Identification: Chapters 9–13 deal with the applications of identification.

Fault Diagnosis

Fault diagnosis is developed in Chapter 9. As the complexity of engineering systems increases, fault diagnosis, condition-based maintenance, and health monitoring become of vital practical importance in ensuring key advantages such as system reliability and performance sustainability, reduced downtime, low operational costs, and personnel safety. Fault diagnosis of physical systems is still a challenging problem and continues to be a subject of intense research both in industry and in academia, in view of the stringent and conflicting practical requirements of a high probability of correct detection and isolation, a low false alarm probability, and timely decisions on the fault status.

A physical system is an interconnection of subsystems, including actuators, sensors, and plants. Each subsystem is modeled as a transfer function that may represent a physical entity of the system, such as a sensor, actuator, controller, or other component, that is subject to faults. Parameters, termed herein diagnostic parameters, are selected so that they are capable of monitoring the health of the subsystems, and may be varied either directly or indirectly (using an emulator) during the off-line identification phase. An emulator is a transfer function block connected at the input of a subsystem with a view to inducing the kinds of faults that may arise from variations in the phase and magnitude of the subsystem's transfer function. An emulator may take the form of a gain or a filter to induce gain or phase variations. A fault occurs within a subsystem when one or more of its diagnostic parameters vary. A variation in a diagnostic parameter does not necessarily imply that the subsystem has failed, but it may lead to a potential failure resulting in poor product quality, shutdown, or damage to subsystem components. Hence a proactive action, such as condition-based preventive maintenance, must be taken prior to the occurrence of a fault.

A unified approach to both the detection and the isolation of a fault is presented, based on the Kalman filter residual property. The fault detection capability of the Kalman filter residual is extended to the task of fault isolation. It is shown that the residual is a linear function of each diagnostic parameter when the other parameters are kept constant; that is, it is a multi-linear function of the diagnostic parameters. A vector, termed the influence vector, plays a crucial role in the fault isolation process. The influence vector is made up of elements that are the partial derivatives of the feature vector with respect to each diagnostic parameter. The nominal fault-free model of the system and the influence vectors are estimated off-line by performing a number of experiments that cover all likely operating scenarios, rather than merely identifying the system at a given operating point. Emulators, which are transfer function blocks, are included at the accessible points in the system, such as the inputs, the outputs, or both, in order to mimic the likely operating scenarios. This is similar in spirit to the artificial neural network approach, where a training set comprising data obtained from a number of representative operating scenarios is presented so as to capture completely the behaviour of the system.

The decision to select between the hypothesis that the system has a fault, and the alternative hypothesis that it does not, is extremely difficult in practice as the statistics of both the noise corrupting the data and the model that generated these data are not known precisely. To effectively discriminate between these two decisions (or hypotheses), the Bayes decision strategy is employed here as it allows for the inclusion of information about the cost associated with the decision taken and the a priori probability of the occurrence of a fault. The fault isolation problem is similarly posed as a multiple hypothesis testing problem. The hypotheses include a single fault in a subsystem, simultaneous faults in two subsystems, and so on up to simultaneous faults in all subsystems. For a single fault, a closed-form solution is presented. A fault in a subsystem is asserted if the correlation between the measured residual and one of a number of hypothesized residual estimates is maximum.
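The maximum-correlation isolation rule can be sketched as follows. The residual "signatures" here are illustrative waveforms, not those derived in the text, and the noise level and seed are arbitrary; the point is only the mechanics of picking the hypothesis whose signature best correlates with the measured residual.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.arange(n)

# Hypothesized residual signatures, one per single-fault hypothesis
# (illustrative waveforms; in the text these follow from the
# influence vectors).
signatures = {
    "actuator fault": np.sin(0.05 * t),
    "sensor fault":   np.cos(0.12 * t),
    "leakage fault":  np.exp(-0.01 * t),
}

# Measured residual: a scaled "sensor fault" signature plus noise.
measured = 0.8 * signatures["sensor fault"] + 0.1 * rng.standard_normal(n)

def isolate(residual, sigs):
    """Assert the hypothesis whose signature has the largest
    normalized correlation with the measured residual."""
    def corr(a, b):
        return abs(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(sigs, key=lambda name: corr(residual, sigs[name]))

print(isolate(measured, signatures))
```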

Modeling and Identification of Physical Systems

Chapter 10 presents the modeling and identification of physical systems. The theoretical underpinnings of identification and its applications are thoroughly verified by extensive simulations and corroborated by practical implementation on laboratory-scale systems, including: (i) a two-tank process control system, (ii) a magnetically levitated system, and (iii) a mechatronic control system. In this chapter, a mathematical model of the physical system derived from physical laws is given. The input–output data obtained from experiments performed on these physical systems are used to evaluate the performance of the identification, fault diagnosis, and soft sensor schemes. The closed-loop identification scheme developed in the earlier chapters is employed.

Fault Diagnosis of Physical Systems

Chapter 11 treats model-based fault diagnosis of physical systems. The background on the Kalman filter, closed-loop identification, and fault diagnosis is given in earlier chapters. Case studies in fault diagnosis of a laboratory-scale two-tank process control system and a position control system are presented.

Fault Diagnosis of a Sensor Network

In Chapter 12, a model-based fault diagnosis scheme is developed for a sensor network formed of a cascade, parallel, and feedback combination of subsystems. The objective is to detect and isolate a fault in any of the subsystems and measurement sensors, which are subject to disturbances and/or measurement noise. The approach hinges on the use of a bank of Kalman filters (KF) to detect and isolate faults. Each KF is driven either by (i) a pair of consecutive sensor measurements or (ii) a pair of a reference input and a measurement. It is shown that the KF residual is a reliable indicator of a fault in the subsystems and sensors located in the path between the pair of the KF's inputs. A simple and efficient procedure is developed that analyzes each of the associated paths and leads to both the detection and isolation of any fault that occurred in the paths analyzed. The scheme is successfully evaluated on several simulated examples and on a physical fluid system exemplified by a benchmarked laboratory-scale two-tank system to detect and isolate faults, including sensor, actuator, and leakage faults. Further, its performance is compared with those of Artificial Neural Network and fuzzy logic-based model-free schemes.

Soft Sensor

A model-based soft sensor for estimating unmeasured variables in high-performance, fault-tolerant, and reliable control systems is proposed in Chapter 13. A soft sensor can be broadly defined as a software-based sensor. Soft sensors are invaluable in numerous scientific and industrial applications where hardware sensors are either too costly to maintain and/or too dangerous or impossible to physically access. Soft sensors act as the virtual eyes and ears of operators and engineers looking to draw conclusions from processes that are difficult, or impossible, to measure with a physical sensor. With no moving parts, a soft sensor offers a maintenance-free method for a variety of data acquisition tasks that serve numerous applications, such as fault diagnosis, process control, instrumentation, signal processing, and the like. In fact, they are found to be ideal for use in the aerospace, pharmaceutical, process control, mining, oil and gas, and healthcare industries. It is anticipated that a wave of soft sensing will sweep through the measurement world through its increasing use in smart phones. Soft sensing already provides the core component of the new and emerging area of smart sensing. The design and use of a soft sensor is illustrated in this chapter in the specific and important area of robust and fault-tolerant control. A soft sensor uses a software algorithm that derives its sensing power from an Artificial Neural Network, a neuro-fuzzy system, kernel methods (support vector machines), multivariate statistical analysis, a Kalman filter, or other model-based or model-free approaches (Angelov and Kordon, 2010). A model-based approach using a Kalman filter for the design of a soft sensor is proposed here. The nominal model of the system is identified by performing a number of experiments to cover various operating regimes using emulators.
The proposed scheme is evaluated on a simulated, as well as a laboratory-scale, velocity control system.

Nomenclature

Vector Norm

Matrix Norm

Let A be an n×m matrix with elements {aij}.

||A||2 is the 2-norm of the matrix A. It is also called the spectral norm of the matrix A. It is the largest singular value of A, or the square root of the largest eigenvalue of the positive semi-definite matrix AᵀA.

||A||1 is the 1-norm of the matrix A. It is the largest absolute column sum of the matrix.

||A||∞ is the ∞-norm of the matrix A. It is the largest absolute row sum of the matrix.

||A||F is the Frobenius norm of the matrix A, the square root of the sum of the squares of its elements. The Frobenius norm is often used in matrix analysis as it is easy to compute, for example, to determine how close two matrices A and B are. It is not an induced norm.

A useful inequality between the 1, 2, and ∞ norms is given by ||A||2 ≤ √(||A||1 ||A||∞).
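These definitions, and the inequality above, can be checked numerically; a small sketch with NumPy, where `numpy.linalg.norm` implements the induced 1-, 2-, and ∞-norms and the Frobenius norm directly (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, -2.0,  3.0],
              [0.0,  4.0, -1.0]])

norm2   = np.linalg.norm(A, 2)        # spectral norm: largest singular value
norm1   = np.linalg.norm(A, 1)        # largest absolute column sum
norminf = np.linalg.norm(A, np.inf)   # largest absolute row sum
normF   = np.linalg.norm(A, 'fro')    # Frobenius norm

# The 2-norm equals the largest singular value of A, i.e. the square
# root of the largest eigenvalue of the positive semi-definite A^T A.
assert np.isclose(norm2, np.linalg.svd(A, compute_uv=False)[0])
assert np.isclose(norm2, np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A))))

# Useful inequality: ||A||_2 <= sqrt(||A||_1 * ||A||_inf)
assert norm2 <= np.sqrt(norm1 * norminf)
```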

I

identity matrix

SVD

Singular Value Decomposition, A = UΣVᵀ, where U is n×n, V is m×m, and Σ is an n×m matrix; U and V are unitary matrices.

diag(a1, a2, …, an)

n×n diagonal matrix with ai as its ith diagonal element

det(A)

determinant of A

trace(A)

trace of A

A†

pseudo-inverse of A

λ(A)

eigenvalue of A

σ(A)

singular values of A

σi(A)

ith singular value of A

σmin(A)

smallest singular value of A

σmax(A)

largest singular value of A

||A||

2-norm (spectral norm) of A

[aij]

matrix with aij as its ith row and jth column element

A(i, j)

ijth element of A

A(:, j)

jth column vector of A

A(i, :)

ith row vector of A

Re{α}

real part of a complex α

Im{α}

imaginary part of a complex α

A ≥ B

A − B is positive semi-definite

PA

projection operator of A: PA = A(AᵀA)⁻¹Aᵀ

PA⊥

orthogonal complement projector: PA⊥ = I − PA

W

weighting matrix

1

vector of all ones

matrix of all ones

span{·}

the linear space generated by (·)

ℝ

field of real numbers

ℝn

Euclidean space of n×1 real vectors

ℝn×m

Euclidean space of n×m real matrices

Probability and Random Process

Transforms

ℑ(.)

Fourier transform of (·)

ℑ−1(·)

inverse Fourier transform of (·)

Z(·)

z-transform of (·)

z

z-transform variable

Z−1(·)

the inverse z-transform of (·)

ω

frequency in radians per second

f

frequency in hertz (Hz)

FFT

Fast Fourier Transform

Operations on Signals

rxy(m)

correlation of x(k) and y(k)

x(k) ∗ y(k)

convolution of x(k) and y(k)

Exy(z)

energy spectral density of x(k) and y(k)

Pxy(z)

power spectral density of x(k) and y(k)

Ex

energy of the signal x(k)

Px

power of the signal x(k)

arg max f(x)

the value of x that maximizes f(x)

arg min f(x)

the value of x that minimizes f(x)

Signals and Systems

y

output

x

state of a system

ŷ

estimate of the output

e

residual: e = y − ŷ

r

(reference) input

w

disturbance affecting the system

v

measurement noise at the output

SNR

Signal to Noise Ratio

δ(k)

Kronecker delta function

G(z)

transfer function of a system

(A, B, C, D)

state-space model

AR

Auto Regressive

MA

Moving Average

ARMA

Auto Regressive and Moving Average

ARMAX

Auto Regressive and Moving Average with external input

FIR

Finite Impulse Response

PRBS

Pseudo Random Binary Signal

SISO

Single Input, Single Output

SIMO

Single Input, Multiple Output

MIMO

Multiple Input, Multiple Output

1 Modeling of Signals and Systems

1.1 Introduction

A system output is generated as a result of the input, the disturbances, and the measurement noise driving the plant. The input is termed the “signal,” while the disturbance and the measurement noise are termed “noise.” A signal is a desired waveform, noise is an unwanted waveform, and the output is the result of convolution (or filtering) of the signal and the noise by the system. Examples of signals include speech, music, biological signals, and so on; examples of noise include the 60 Hz power frequency waveform, echo, reflection, thermal noise, shot noise, impulse noise, and so on. A signal or noise may be deterministic or stochastic. Signals such as speech, music, and biological signals are stochastic: they are not exactly the same from one realization to another. There are two approaches to characterize the input–output behavior of a system:

Non-parametric (classical or FFT-based) approach.

Parametric (modern or model-based) approach.

In the parametric approach, the plant, the signal, and the noise are described by a discrete-time model. In the non-parametric approach, the plant is characterized by its frequency response, and the signal and the noise are characterized by correlation functions (or equivalently by power spectral densities). Generally, FFT forms the basic algorithm used to obtain the non-parametric model. Both approaches complement each other. The parametric approach provides a detailed microscopic description of the plant, the signal, and the noise. The non-parametric approach is computationally fast but provides only a macroscopic picture.

In general, the signal or the noise may be classified as deterministic or random processes. A class of deterministic processes – including the widely prevalent constants, exponentials, sinusoids, exponentially damped sinusoids, and periodic waveforms – is modeled as the output of a Linear Time Invariant (LTI) system driven by a delta function. Essentially, the model of a deterministic signal (or noise) is the z-transform of the signal (or noise).
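As a sketch of this idea, the damped sinusoid x(k) = aᵏ sin(ω₀k) can be generated as the impulse response of a second-order LTI system whose transfer function is the z-transform of the signal, X(z) = a sin(ω₀) z⁻¹ / (1 − 2a cos(ω₀) z⁻¹ + a² z⁻²); the parameter values below are arbitrary:

```python
import numpy as np

a, w0, N = 0.9, 0.3, 50
b1 = a * np.sin(w0)                       # numerator coefficient
a1, a2 = 2 * a * np.cos(w0), -a ** 2      # denominator coefficients

# Difference equation driven by the Kronecker delta:
# x(k) = a1*x(k-1) + a2*x(k-2) + b1*delta(k-1)
x = np.zeros(N)
delta = np.zeros(N)
delta[0] = 1.0
for k in range(N):
    x[k] = (a1 * x[k - 1] if k >= 1 else 0.0) \
         + (a2 * x[k - 2] if k >= 2 else 0.0) \
         + (b1 * delta[k - 1] if k >= 1 else 0.0)

# The impulse response reproduces the closed-form signal a^k sin(w0*k).
ref = a ** np.arange(N) * np.sin(w0 * np.arange(N))
assert np.allclose(x, ref)
```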

We frequently encounter non-deterministic or random signals, which appear everywhere and are not analytically describable. In many engineering problems one has to analyze or design systems subject to uncertainties resulting from incomplete knowledge of the system, inaccurate models, measurement errors, and uncertain environments. Uncertainty in the behavior of systems is commonly handled using the following approaches:

Deterministic approach:

The uncertainty is factored in the analysis and particularly the design by considering the worst case scenario.

Fuzzy-logic approach:

The fuzziness of the variables (e.g., small, medium, large values) is handled using the mathematics of fuzzy logic.

Probabilistic approach:

The uncertainty is handled by treating the variables as random signals.

In this chapter, we will restrict ourselves to the probabilistic approach. These uncertainties are usually modeled as random signal inputs (noise and disturbances) to the system. The measurements and the disturbances affecting the system are treated as random signals. Commonly, a fictitious random input is introduced to mimic uncertainties in the model of a system. Random signals are characterized by statistical measures that represent average behavior when a large number of experiments is performed. The set of all the outcomes of the experiments is called an ensemble of time functions or, equivalently, a random process (or stochastic process). A random signal is modeled as the output of an LTI system driven by zero-mean white noise, unlike a deterministic signal, which is modeled as an output with a delta function input. A class of low-pass, high-pass, and band-pass random signals is modeled by selecting an appropriate LTI system (filter).
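As a minimal illustration, a low-pass random signal can be produced by driving the first-order filter H(z) = 1/(1 − αz⁻¹) with zero-mean white noise; the sample statistics of the resulting AR(1) process then match its theoretical variance and lag-one correlation (α and the sample size below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, N = 0.95, 200_000

# Zero-mean white noise driving the low-pass filter H(z) = 1/(1 - alpha*z^-1)
e = rng.standard_normal(N)
x = np.zeros(N)
for k in range(1, N):
    x[k] = alpha * x[k - 1] + e[k]

# Theory for the AR(1) process: var(x) = var(e)/(1 - alpha^2),
# lag-one correlation = alpha.
var_theory = 1.0 / (1 - alpha ** 2)
var_sample = x.var()
rho1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(var_sample / var_theory, rho1)
```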

An output of a system is mostly affected by disturbances and measurement noise. The disturbance and the measurement noise may be deterministic waveforms or random processes. Deterministic waveforms include constant, sinusoidal, or periodic signals, while a random waveform may be a low-pass, a band-pass, or a high-pass process. An integrated model is obtained by augmenting the model of the plant with those of the disturbance and the measurement noise. The resulting integrated model is expressed in the form of a high-order difference equation model, such as an Auto Regressive (AR), a Moving Average (MA), or an Auto Regressive and Moving Average (ARMA) model. The input comprises the plant input and the inputs driving the disturbance and measurement noise models, namely delta functions and/or white noise processes. Similarly, an augmented state-space model of the plant is derived by combining the state-space models of the plant, the disturbances, and the measurement noise.
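A small sketch of such an augmentation, assuming a first-order plant and an MA(1) measurement-noise model (the coefficients are arbitrary): combining the two yields a single ARMAX difference equation for the measured output, which can be verified against simulating the two models separately.

```python
import numpy as np

rng = np.random.default_rng(7)
a, b, c, N = 0.8, 1.0, 0.5, 500
u = rng.standard_normal(N)   # plant input
e = rng.standard_normal(N)   # white noise driving the noise model

# Separate models: plant y(k) = a*y(k-1) + b*u(k-1), measurement
# noise v(k) = e(k) + c*e(k-1) (MA(1)); measured output z = y + v.
y = np.zeros(N)
v = np.zeros(N)
v[0] = e[0]
for k in range(1, N):
    y[k] = a * y[k - 1] + b * u[k - 1]
    v[k] = e[k] + c * e[k - 1]
z_sep = y + v

# Integrated (augmented) ARMAX model of the measured output:
# z(k) = a*z(k-1) + b*u(k-1) + e(k) + (c-a)*e(k-1) - a*c*e(k-2)
z = np.zeros(N)
z[0] = e[0]
for k in range(1, N):
    z[k] = a * z[k - 1] + b * u[k - 1] + e[k] + (c - a) * e[k - 1] \
         + (-a * c * e[k - 2] if k >= 2 else 0.0)

assert np.allclose(z, z_sep)
```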

This integrated model is employed subsequently in the system identification, condition monitoring, and fault detection and isolation. The difference equation model is used for system identification while the state-space model is used for obtaining the Kalman filter or an observer.

A model of a class of signals (rather than the form of a specific member of that class) is developed from the deterministic or the stochastic signal model by setting the driving input to zero. A model of a class of signals such as the reference, the disturbance, and the measurement noise is employed in many applications. Since the response of a system depends upon the reference input to the system, and on the disturbances and the measurement noise affecting its output, the desired performance may degrade if the influence of these signals is not factored into the design of the system. For example, in the controller or in the Kalman filter, steady-state tracking will be ensured if and only if a model of the class of these signals is included and this model is driven by the tracking error. The model of the class of signals that is integrated in the controller, the observer, or the Kalman filter is termed the internal model of the signal. The internal model ensures that the output tracks the reference input in spite of the disturbances and the measurement noise corrupting the plant output.
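The internal model idea can be illustrated with a toy discrete-time simulation: a first-order plant with a constant input disturbance, controlled through an integrator (the internal model of a step signal) driven by the tracking error. The plant and gain values are arbitrary, chosen only to give a stable closed loop; the point is that the error converges to zero despite the disturbance.

```python
# Plant: y(k+1) = 0.9*y(k) + 0.1*(u(k) + d), with constant (step-class)
# disturbance d. The controller contains an integrator driven by the
# tracking error -- the internal model of the step reference/disturbance.
N, r, d = 400, 1.0, -0.5
Ki = 0.5
y = 0.0
xi = 0.0  # integrator state (internal model of a step)
for k in range(N):
    err = r - y
    xi += err          # internal model driven by the tracking error
    u = Ki * xi
    y = 0.9 * y + 0.1 * (u + d)

print(abs(r - y))  # steady-state tracking error
```

Without the integrator (e.g., a pure proportional controller), the constant disturbance would leave a nonzero steady-state error.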

Tables, formulae, and background information required are given in the Nomenclature section.

1.2 Classification of Signals

Signals are classified broadly as deterministic or random; bounded or unbounded; energy or power; causal, anti-causal, or non-causal.

1.2.1 Deterministic and Random Signals

Deterministic signals can be modeled exactly by a mathematical expression, rule, or table. Because of this, future values of any deterministic signal can be calculated from past values. For this reason, these signals are relatively easy to analyze as they do not change, and we can make accurate assumptions about their past and future behavior.

Deterministic signals are not always adequate to model real-world situations. Random (stochastic) signals, on the other hand, cannot be characterized by a simple, well-defined mathematical equation, and their future values cannot be predicted. Instead, probability and statistics must be employed to analyze their behavior. Moreover, because of their randomness, average values computed from a collection of signals obtained from a number of experiments are usually studied, rather than one individual outcome from one experiment.

1.2.2 Bounded and Unbounded Signal

A signal x(k) is said to be bounded if

(1.1) |x(k)| ≤ M < ∞ for all k