A unified and systematic theoretical framework for solving problems related to finite impulse response (FIR) estimation. Optimal and Robust State Estimation: Finite Impulse Response (FIR) and Kalman Approaches is a comprehensive investigation into batch state estimators and recursive forms. The work begins by introducing the reader to the state estimation approach and provides a brief historical overview. Next, the work discusses the specific properties of FIR state estimators. Further chapters give the basics of probability and stochastic processes, discuss the available linear and nonlinear state estimators, deal with optimal FIR filtering, and consider limited memory batch and recursive algorithms. Other topics covered include solving the q-lag FIR smoothing problem, introducing the receding horizon (RH) FIR state estimation approach, and developing the theory of FIR state estimation under disturbances. The book closes by discussing the theory of FIR state estimation for uncertain systems and providing several applications where FIR state estimators are used effectively. Key concepts covered in the work include:
* A holistic overview of the state estimation approach, which arose from the need to know the internal state of a real system, given that the input and output are both known
* Optimal, optimal unbiased, maximum likelihood, and unbiased and robust finite impulse response (FIR) structures
* The FIR state estimation approach along with the infinite impulse response (IIR) and Kalman approaches
* Cost functions and the most critical properties of FIR and IIR state estimates
Optimal and Robust State Estimation: Finite Impulse Response (FIR) and Kalman Approaches was written for professionals in the fields of microwave engineering, systems engineering, and robotics who wish to move toward solving FIR estimation issues in both theoretical and practical applications. Graduate and senior undergraduate students whose coursework deals with state estimation will also be able to use the book to gain a valuable foundation of knowledge and become more adept in their chosen fields of study.
Page count: 674
Year of publication: 2022
Cover
Title Page
Copyright
Dedication
Preface
Foreword
Acronyms
1 Introduction
1.1 What Is System State?
1.2 Properties of State Estimators
1.3 More About FIR State Estimators
1.4 Historical Overview and Most Noticeable Works
1.5 Summary
1.6 Problems
Notes
2 Probability and Stochastic Processes
2.1 Random Variables
2.2 Stochastic Processes
2.3 Stochastic Differential Equation
2.4 Summary
2.5 Problems
3 State Estimation
3.1 Linear Stochastic Process in State Space
3.2 Methods of Linear State Estimation
3.3 Linear Recursive Smoothing
3.4 Nonlinear Models and Estimators
3.5 Robust State Estimation
3.6 Summary
3.7 Problems
Notes
4 Optimal FIR and Limited Memory Filtering
4.1 Extended State‐Space Model
4.2 The a posteriori Optimal FIR Filter
4.3 The a posteriori Optimal Unbiased FIR Filter
4.4 Maximum Likelihood FIR Estimator
4.5 The a priori FIR Filters
4.6 Limited Memory Filtering
4.7 Continuous‐Time Optimal FIR Filter
4.8 Extended a posteriori OFIR Filtering
4.9 Properties of FIR State Estimators
4.10 Summary
4.11 Problems
5 Optimal FIR Smoothing
5.1 Introduction
5.2 Smoothing Problem
5.3 Forward Filter/Forward Model q‐lag OFIR Smoothing
5.4 Backward OFIR Filtering
5.5 Backward Filter/Backward Model q‐lag OFIR Smoother
5.6 Forward Filter/Backward Model q‐Lag OFIR Smoother
5.7 Backward Filter/Forward Model q‐Lag OFIR Smoother
5.8 Two‐Filter q‐lag OFIR Smoother
5.9 q‐Lag ML FIR Smoothing
5.10 Summary
5.11 Problems
6 Unbiased FIR State Estimation
6.1 Introduction
6.2 The a posteriori UFIR Filter
6.3 Backward a posteriori UFIR Filter
6.4 The q‐lag UFIR Smoother
6.5 State Estimation Using Polynomial Models
6.6 UFIR State Estimation Under Colored Noise
6.7 Extended UFIR Filtering
6.8 Robustness of the UFIR Filter
6.9 Implementation of Polynomial UFIR Filters
6.10 Summary
6.11 Problems
7 FIR Prediction and Receding Horizon Filtering
7.1 Introduction
7.2 Prediction Strategies
7.3 Extended Predictive State‐Space Model
7.4 UFIR Predictor
7.5 Optimal FIR Predictor
7.6 Receding Horizon FIR Filtering
7.7 Maximum Likelihood FIR Predictor
7.8 Extended OFIR Prediction
7.9 Summary
7.10 Problems
8 Robust FIR State Estimation Under Disturbances
8.1 Extended Models Under Disturbances
8.2 The a posteriori H2 FIR Filtering
8.3 H2 FIR Prediction
8.4 H∞ FIR State Estimation
8.5 H2/H∞ FIR Filter and Predictor
8.6 Generalized H2 FIR State Estimation
8.7 ℒ1 FIR State Estimation
8.8 Game Theory FIR State Estimation
8.9 Recursive Computation of Robust FIR Estimates
8.10 FIR Smoothing Under Disturbances
8.11 Summary
8.12 Problems
Note
9 Robust FIR State Estimation for Uncertain Systems
9.1 Extended Models for Uncertain Systems
9.2 The a posteriori H2 FIR Filtering
9.3 H2 FIR Prediction
9.4 Suboptimal FIR Structures Using LMI
9.5 FIR State Estimation for Uncertain Systems
9.6 Hybrid FIR Structures
9.7 Generalized FIR Structures for Uncertain Systems
9.8 Robust FIR Structures for Uncertain Systems
9.9 Summary
9.10 Problems
10 Advanced Topics in FIR State Estimation
10.1 Distributed Filtering over Networks
10.2 Optimal Fusion Filtering Under Correlated Noise
10.3 Hybrid Kalman/UFIR Filter Structures
10.4 Estimation Under Delayed and Missing Data
10.5 Summary
10.6 Problems
11 Applications of FIR State Estimators
11.1 UFIR Filtering and Prediction of Clock States
11.2 Suboptimal Clock Synchronization
11.3 Localization Over WSNs Using Particle/UFIR Filter
11.4 Self‐Localization Over RFID Tag Grids
11.5 INS/UWB‐Based Quadrotor Localization
11.6 Processing of Biosignals
11.7 Summary
11.8 Problems
Appendix A: Matrix Forms and Relationships
A.1 Derivatives
A.2 Matrix Identities
A.3 Special Matrices
A.4 Equations and Inequalities
A.5 Linear Matrix Inequalities
Appendix B: Norms
B.1 Vector Norms
B.2 Matrix Norms
B.3 Signal Norms
B.4 System Norms
Appendix C: Matlab Codes
C.1 Batch UFIR Filter
C.2 Iterative UFIR Filtering Algorithm
C.3 Batch OFIR Filter
C.4 Iterative OFIR Filtering Algorithm
C.5 Batch OUFIR Filter
C.6 Iterative OUFIR Filtering Algorithm
C.7 Batch q‐Lag UFIR Smoother
C.8 Batch q‐Shift FFFM OFIR Smoother
C.9 Batch q‐Lag FFBM OFIR Smoother
C.10 Batch q‐Lag BFFM OFIR Smoother
C.11 Batch q‐Lag BFBM OFIR Smoother
References
Index
End User License Agreement
Chapter 6
Table 6.1 Coefficients of Low‐Degree Functions.
Table 6.2 Main Properties of ...
Table 6.3 Coefficients of Low‐Degree UFIR Filters.
Chapter 11
Table 11.1 Tags detected in six intervals (in m) along a passway shown in F...
Chapter 1
Figure 1.1 Generalized structures of nonlinear state estimators: (a) FIR, (b...
Figure 1.2 Generalized structures of linear state estimators: (a) FIR, (b) l...
Figure 1.3 Worst‐case effect of tuning errors on the estimator accuracy.
Figure 1.4 Block diagram of a stochastic LTV system observed in continuous t...
Chapter 2
Figure 2.1 Effects of skewness on unimodal distributions: (a) negatively ske...
Figure 2.2 Common forms of kurtosis: mesokurtic (normal), platykurtic (highe...
Figure 2.3 Relationships and connections between cdf, cf, raw mom...
Figure 2.4 Autocorrelation function and PSD of a Gauss‐Markov process.
Chapter 3
Figure 3.1 Typical errors in the KF caused by incorrectly specified initial ...
Figure 3.2 Effect of errors in noise covariances on the RMSEs prod...
Figure 3.3 Examples of CMN in electronic channels: (a) signal strength CMN i...
Chapter 4
Figure 4.1 Effect of the disturbance, which appears at three different tim...
Figure 4.2 Generalized structure of a linear a posteriori OFIR filter.
Figure 4.3 Generalized structure of a linear a posteriori OUFIR filter.
Figure 4.4 Generalized structure of the LMF.
Figure 4.5 Batch linear state estimators (filters) and recursive forms: ...
Figure 4.6 Typical responses of state estimators to a velocity jump of a man...
Figure 4.7 Typical errors produced by the OFIR filter and KF under ideal con...
Figure 4.8 Typical estimation errors in the first state produced by the OFIR...
Figure 4.9 Discrete‐time control system with an RH FIR filter.
Chapter 5
Figure 5.1 NPG of the 1‐degree polynomial UFIR smoother, filter, and pre...
Figure 5.2 Forward filter/forward model q‐lag OFIR smoothing strategy.
Figure 5.3 RMSE produced by the FFFM OFIR, UFIR, and RTS smoothers: (a) whit...
Figure 5.4 Backward filter/backward model q‐lag OFIR smoothing strategy to p...
Figure 5.5 RMSEs produced by the BFBM OFIR, UFIR, and RTS smoothers: (a) whi...
Figure 5.6 Forward filter/backward model q‐lag OFIR smoothing strategy.
Figure 5.7 RMSE produced by the FFBM OFIR, UFIR, and RTS smoothers: (a) whit...
Figure 5.8 Backward filter/forward model q‐lag OFIR smoothing strategy.
Figure 5.9 RMSE produced by the FFBM OFIR, UFIR, and RTS smoothers: (a) whit...
Figure 5.10 Two‐filter q‐lag FB OFIR smoothing strategy.
Chapter 6
Figure 6.1 The RMSE produced by the UFIR filter as a function of ... An optim...
Figure 6.2 Determining ... for a UFIR filter applied to two‐state polynomial m...
Figure 6.3 Typical filtering errors produced by the UFIR filter and KF for a...
Figure 6.4 Estimates of the coordinate of a moving vehicle obtained using ...
Figure 6.5 Low‐degree polynomial FIR functions.
Figure 6.6 The ...‐varying NPG of a UFIR smoothing filter for several low‐degr...
Figure 6.7 Typical smoothing, filtering, and prediction errors produced by O...
Figure 6.8 Typical RMSEs produced by KF and UFIR filter for a two‐state mode...
Figure 6.9 Typical RMSEs produced by the two‐state UFIR filter, KF, and modi...
Figure 6.10 Generalized block diagram of the ...th degree UFIR filter.
Figure 6.11 Block diagram of the first‐degree (ramp) polynomial UFIR filter....
Figure 6.12 Magnitude response functions of low‐degree polynomial UFIR fil...
Figure 6.13 Phase response functions of the low‐degree UFIR filters for ...: (...
Figure 6.14 DFT of the low‐degree polynomial UFIR filters: (a) magnitude res...
Figure 6.14 DFT of the low‐degree polynomial UFIR filters: (a) magnitude res...
Chapter 7
Figure 7.1 Two basic strategies to obtain the predicted estimate at ... over...
Figure 7.2 Moving vehicle tracking in y‐coordinate (m) using UFIR filter and...
Figure 7.3 Tracking a moving vehicle along the y‐coordinate (m) using an OFI...
Chapter 8
Figure 8.1 Errors in the ... state estimator in the ...‐domain.
Figure 8.2 RMSEs produced in the east direction by Kalman, ...‐OFIR, ...‐OUFIR, ...
Figure 8.3 Typical RMSEs generated by the ...‐OFIR, Kalman, and UFIR filters a...
Figure 8.4 Squared norms of the disturbance‐to‐error transfer functions of t...
Chapter 9
Figure 9.1 Errors caused by optimal tuning of an estimator to ... and ...: tuning t...
Figure 9.2 Errors in the ...‐OFIR state estimator caused by uncertainties, dis...
Figure 9.3 Errors in the ...‐OUFIR state estimator caused by uncertainties, di...
Chapter 10
Figure 10.1 An example of a WSN with 50 nodes randomly placed in coordinates...
Figure 10.2 Basic scenarios with one‐step‐lag delayed and missing data: 1) r...
Chapter 11
Figure 11.1 Typical estimates of the clock TIE produced by the UFIR filter a...
Figure 11.2 Estimates of the NIST MC current state via the UTC–UTC(NIST MC) ...
Figure 11.3 Loop model of local clock synchronization based on GPS 1PPS timi...
Figure 11.4 Typical errors in GPS timing receivers: (a) GPS time uncertainty...
Figure 11.5 A typical function of a nonstationary TIE of an unlocked cryst...
Figure 11.6 Allan deviation of the GPS‐locked OCXO‐based clock for different...
Figure 11.7 PTP deviation of a GPS‐locked crystal clock for different ... of t...
Figure 11.8 2‐D schematic geometry of the mobile robot localization.
Figure 11.9 A typical scenario with the sample impoverishment.
Figure 11.10 A flowchart of the hybrid PF/EFIR algorithm.
Figure 11.11 Errors of a mobile robot localization with a small number of pa...
Figure 11.12 2D schematic geometry of a vehicle traveling on an indoor floor...
Figure 11.13 Schematic diagram of a vehicle platform traveling on an indoor ...
Figure 11.14 Localization errors caused by imprecisely known noise covarianc...
Figure 11.15 INS/UWB‐integrated quadrotor localization scheme [217].
Figure 11.16 CMN in UWB‐derived data: (a) east direction, (b) north directio...
Figure 11.17 RMSEs produced by the KF, cKF, UFIR filter, and cUFIR filter. F...
Figure 11.18 Single ECG pulse measured in the presence of noise (Data). Nois...
Figure 11.19 EMG signal: (a) measured EMG signal composed by MUAPs, (b) Hi...
Figure 11.20 EMG signal composed with low‐density MUAP and envelope extracte...
IEEE Press
445 Hoes Lane
Piscataway, NJ 08854
IEEE Press Editorial Board
Sarah Spurgeon, Editor in Chief
Jón Atli Benediktsson
Andreas Molisch
Diomidis Spinellis
Anjan Bose
Saeid Nahavandi
Ahmet Murat Tekalp
Adam Drobot
Jeffrey Reed
Peter (Yong) Lian
Thomas Robertazzi
Yuriy S. Shmaliy, Universidad de Guanajuato, Mexico
Shunyi Zhao, Jiangnan University, China
Copyright © 2022 The Institute of Electrical and Electronics Engineers, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per‐copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750‐8400, fax (978) 750‐4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748‐6011, fax (201) 748‐6008, or online at http://www.wiley.com/go/permission.
Limit of Liability/Disclaimer of Warranty: MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This work's use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® software. While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762‐2974, outside the United States at (317) 572‐3993 or fax (317) 572‐4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging‐in‐Publication Data
Names: Shmaliy, Yuriy, author. | Zhao, Shunyi, author.
Title: Optimal and robust state estimation : finite impulse response (FIR) and Kalman approaches / Yuriy S. Shmaliy, Shunyi Zhao.
Description: Hoboken, NJ : Wiley-IEEE Press, 2022. | Includes bibliographical references and index.
Identifiers: LCCN 2022016217 (print) | LCCN 2022016218 (ebook) | ISBN 9781119863076 (cloth) | ISBN 9781119863083 (adobe pdf) | ISBN 9781119863090 (epub)
Subjects: LCSH: Observers (Control theory) | Systems engineering.
Classification: LCC QA402.3 .S53 2022 (print) | LCC QA402.3 (ebook) | DDC 629.8/312-dc23/eng20220628
LC record available at https://lccn.loc.gov/2022016217
LC ebook record available at https://lccn.loc.gov/2022016218
Cover Design: Wiley
Cover Image: © Science Photo Library/Getty Images
To our families
The state estimation approach arose from the need to know the internal state of a real system, given that the input and output measurements are known. The corresponding structure is called a state estimator, and in control theory it is also called a state observer. In signal processing, the problem is related to the process state and its transition from one point to another. In contrast to parameter estimation theory, which deals with the estimation of the parameters of the fitting function, the state estimation approach is more suitable for engineering applications and the development of end‐to‐end algorithms.
Knowing the system state helps to solve many engineering problems. In systems, the state usually cannot be observed directly, but its indirect observation can be provided by way of the system outputs. In control, it is used to stabilize a system via state feedback. In signal processing, the direct, inverse, and identification problems are solved by applying state estimators (filters, smoothers, and predictors) to linear and nonlinear processes. In biomedical applications, state estimators facilitate extracting required process features.
The most general state estimator is a batch estimator, which requires data and input over a time horizon and has either an infinite impulse response (IIR) or a finite impulse response (FIR). Starting with the seminal works of Kalman, recursive state estimators have found an enormous number of applications. However, since recursions are mostly available for white noise, they are less accurate when the noise is not white. The advantage is that recursions are computationally easy. But, unlike in Kalman's day, computational complexity is no longer an issue for modern computers and microprocessors, and interest in batch optimal and robust estimators is growing.
To immediately acquaint the reader with the FIR approach, suppose that discrete measurements taken on a finite horizon of N points are collected in a vector and that the gain matrix, which contains the impulse response values, is defined in some sense (optimal or robust). Then the discrete convolution‐based batch FIR estimate, which can be easily computed recursively, will have three advantages over Kalman recursions (a minimal sketch follows the list below):
Bounded input bounded output stability: since there is no feedback, no additional constraints are needed to ensure stability and avoid divergence.
Better accuracy in colored noise due to the ability to work with full block error matrices; recursive forms require such matrices to be diagonal.
Higher robustness as uncertainties beyond the averaging horizon are not projected onto the current estimate.
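As a minimal sketch of such a batch estimate, assuming an LTI model with an invertible transition matrix and illustrative names (this is not the book's notation; Appendix C lists the full batch and iterative algorithms):

```matlab
% Minimal sketch of a batch FIR (UFIR-style) estimate; assumptions: LTI model
% x_k = F*x_{k-1} + w_k, y_k = H*x_k + v_k, invertible F, observable pair.
% Names and the pseudoinverse gain are illustrative, not the book's notation.
function xhat = batchFIR(F, H, Y)
% F : n-by-n state transition matrix
% H : m-by-n observation matrix
% Y : m*N-by-1 stacked measurements [y_{k-N+1}; ...; y_k], oldest first
[m, n] = size(H);
N = size(Y, 1) / m;                      % horizon length in points
C = zeros(m*N, n);
for i = 1:N
    % Noise-free back-projection of x_k onto each past measurement:
    % y_{k-(N-i)} = H * F^{-(N-i)} * x_k
    C((i-1)*m+1:i*m, :) = H / F^(N-i);
end
% The unbiasedness condition alone fixes the gain as the LS pseudoinverse,
% so the estimate is one matrix-vector product (a discrete convolution).
xhat = (C' * C) \ (C' * Y);
end
```

For a two‐state polynomial model, for example, one may take F = [1 tau; 0 1] and H = [1 0]; the estimate is then a single matrix-vector product, that is, a discrete convolution over the horizon.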
This book is the first systematic investigation and analysis of batch state estimators and recursive forms. To elucidate the theory of optimal and robust FIR state estimators in continuous and discrete time, the book is organized as follows. Chapter 1 introduces the reader to the state estimation approach, discusses the properties of FIR state estimators, provides a brief historical overview, and surveys the most noticeable works on the topic. Chapter 2 gives the basics of probability and stochastic processes. Chapter 3 discusses the available linear and nonlinear state estimators. Chapter 4 deals with optimal FIR filtering and considers a posteriori and a priori optimal, optimal unbiased, ML, and limited memory batch and recursive algorithms. Chapter 5 solves the q‐lag FIR smoothing problem. Chapter 6 presents an unbiased FIR state estimator. Chapter 7 introduces the receding horizon (RH) FIR state estimation approach. Chapter 8 develops the theory of FIR state estimation under disturbances, and Chapter 9 extends it to uncertain systems. Chapter 10 lists several additional topics in FIR state estimation. Chapter 11 provides several applications where the FIR state estimators are used effectively. The remainder of the book is built with Appendix A, which presents matrix forms and relationships; Appendix B, which introduces the norms; and Appendix C, which contains Matlab‐based codes of FIR state estimators.
The authors appreciate the collaboration with Prof. Choon Ki Ahn of Korea University, South Korea, with whom several results were co‐authored. Yuriy Shmaliy appreciates the collaboration with Prof. Dan Simon of Cleveland State University, Prof. Wojciech Pieczynski of Institut Polytechnique de Paris (Telecom SudParis), and Dr. Yuan Xu of the University of Jinan, China, as well as the support of Prof. Oscar Ibarra‐Manzano and Prof. José Andrade‐Lucio of Universidad de Guanajuato, and contributions of his former and present Ph.D. and M.D. students Dr. Jorge Muñoz‐Minjares, Dr. Miguel Vázquez‐Olguín, Dr. Carlos Lastre‐Dominguez, Sandra Márquez‐Figueroa, Karen Uribe‐Murcia, Jorge Ortega‐Contreras, Eli Pale‐Ramon, and Juan José López Solórzano to the development of FIR state estimators for various signal processing and control areas. Shunyi Zhao appreciates the collaboration and support of Prof. Fei Liu of Jiangnan University, China, and Prof. Biao Huang of the University of Alberta, Canada.
Yuriy S. Shmaliy
Shunyi Zhao
I had the privilege and pleasure of meeting Yuriy Shmaliy several years ago when he visited me at Cleveland State University. We spent the day together talking about state estimation, and he gave a well‐attended and engaging seminar about finite impulse response (FIR) filtering to enthusiastic CSU graduate students and faculty. I was fascinated by his approach to state estimation. At the time, I was thoroughly immersed in the Kalman filtering paradigm, and his FIR methods were new to me. I could immediately see how they could address the problems that the typical Kalman filter has with stability and robustness. I already knew all about the approaches for addressing these Kalman filter problems—in fact, I had studied and published several such approaches myself. But I was always left with the nagging thought that no matter how much the Kalman filter is modified for enhanced stability and robustness, it is still the Kalman filter, which was not designed with stability and robustness in mind, so any attempts to enhance its stability and robustness will always be ad hoc. The FIR filter, in contrast, is designed from the outset for stability and robustness. Does this mean the FIR filter is better than the Kalman filter? That question is too simplistic to even be coherent. As we know, all optimization is multi‐objective, so claiming that one filter is better than the other is ill‐advised. But we can definitely say that some filters are “better” than other filters from certain perspectives, and the FIR filter has clearly established itself as an approach that is better than other filters (including the Kalman filter) from certain perspectives.
This textbook deals with state estimation using FIR filters. Anyone who's seriously interested in state estimation theory, research, or application should study this book and make its algorithms a part of his or her toolbox. There are many ways to estimate the state of a system, with the Kalman filter being the most tried‐and‐true method. The Kalman filter became the standard in state estimation after its invention around 1960. Its advantages, including its theoretical rigor and relative ease of implementation, have overcome its well‐known disadvantages, which include a notorious lack of robustness and frequent problems with stability.
FIR filters have arisen as a viable alternative to Kalman filters in a targeted attempt to address the disadvantages of Kalman filtering. One of the advantages of Kalman filtering is its recursive nature, which makes it an infinite impulse response (IIR) filter, but this feature creates an inherent disadvantage, which is a tendency toward instability. The FIR filter is specifically formulated without feedback, which provides it with inherent stability and improved robustness.
Based on their combined 30 years of research in this field, the authors have compiled a thorough and systematic investigation and analysis of FIR state estimators. Chapter 1 introduces the concept and the basic approaches of state estimation, including a review of properties such as optimality, unbiasedness, noise distributions, performance measures, stability, robustness, and computational complexity. Chapter 1 also presents a brief but interesting historical review that traces FIR filtering all the way back to Johannes Kepler in 1601. Chapter 2 reviews the basics of probability and stochastic processes, and culminates with an overview of stochastic differential equations. Chapter 3 reviews state space modeling theory and summarizes some of the popular approaches to state estimation, including Bayesian estimation, maximum likelihood estimation, least squares estimation, Kalman filtering and smoothing, extended Kalman filtering, unscented Kalman filtering, particle filtering, and H‐infinity filtering. The overview of Kalman filtering in this chapter is quite good and delves into many theoretical considerations, such as optimality, unbiasedness, the effects of initial condition errors and noise covariance errors, noise correlations, and colored noise.
As good as the first three chapters are, the meat of the book really begins in Chapter 4, which derives the FIR filter by combining the forward discrete‐time system model with the backward model into a single matrix equation that can be handled with a single batch of measurements. The FIR filter is derived in both a priori and a posteriori forms. Although the FIR filter is not recursive, the batch arrangement of the filter can be rewritten in a recursive form for computational savings. The authors show how unbiasedness and maximum likelihood can be incorporated into the FIR filter. The end of the chapter extends FIR filter theory to continuous time systems.
Chapter 5 derives several different FIR smoother formulations. Chapter 6 discusses a specific FIR filter, which is the unbiased FIR (UFIR) filter. The UFIR filter uses an optimal horizon length to minimize mean square estimation error. The authors also extend the UFIR filter to smoothing and to nonlinear systems. Chapter 7 discusses prediction using the FIR approach and the special case of one‐step prediction, which is called receding‐horizon FIR prediction. Chapter 8 derives the FIR filter that is maximally robust to noise statistics and system modeling errors while constraining the bias. This chapter discusses robustness from the H2, H∞, hybrid H2/H∞, generalized H2, and ℒ1 perspectives. Chapter 9 rederives many of the previous results while considering uncertainty in the system model. Chapter 10 is titled "Advanced Topics" and considers problems such as distributed FIR filtering, correlated noise, hybrid Kalman/FIR filtering, and delayed and missing measurements. Chapter 11 presents several case studies of FIR filtering to illustrate the design decisions, trade‐offs, and implementation issues that need to be considered during application. The examples include the estimation of clock states (with a special consideration for GPS receiver clocks), clock synchronization, localization over wireless sensor networks, localization over RFID tag grids, quadrotor localization, ECG signal noise reduction, and EMG waveform estimation.
The book includes 33 examples scattered throughout, including careful comparisons between Kalman and FIR results. These examples are in addition to the more comprehensive case studies in the final chapter. The book also includes 26 pseudocode listings to assist the student and researcher in their implementation of FIR algorithms and about 200 end‐of‐chapter problems for self‐study or coursework. This is a book that I would have loved to have read as a student or early‐career researcher. I think that any researcher who studies it will be well‐rewarded for their effort.
Cleveland State University
Daniel J. Simon
AWGN: Additive white Gaussian noise
BE: Backward Euler
BFBM: Backward filter backward model
BFFM: Backward filter forward model
BIBO: Bounded input bounded output
CMN: Colored measurement noise
CPN: Colored process noise
DARE: Discrete algebraic Riccati equation
DARI: Discrete algebraic Riccati inequality
DDRE: Discrete dynamic (difference) Riccati equation
DFT: Discrete Fourier transform
DSM: Discrete Shmaliy moments
EKF: Extended Kalman filter
EOFIR: Extended optimal finite impulse response
FE: Forward Euler method
FF: Fusion filter
FFBM: Forward filter backward model
FFFM: Forward filter forward model
FH: Finite horizon
FIR: Finite impulse response
FPK: Fokker‐Planck‐Kolmogorov
GKF: General Kalman filter
GNPG: Generalized noise power gain
GPS: Global Positioning System
IDFT: Inverse discrete Fourier transform
IIR: Infinite impulse response
KBF: Kalman‐Bucy filter
KF: Kalman filter
KP: Kalman predictor
LMF: Limited memory filter
LMKF: Limited memory Kalman filter
LMI: Linear matrix inequality
LMP: Limited memory predictor
LUMV: Linear unbiased minimum variance
LS: Least squares
LTI: Linear time invariant
LTV: Linear time varying
MBF: Modified Bryson‐Frazier
MC: Monte Carlo
MIMO: Multiple input multiple output
ML: Maximum likelihood
MPC: Model predictive control
MSE: Mean square error
MVF: Minimum variance FIR
MVU: Minimum variance unbiased
NARE: Nonsymmetric algebraic Riccati equation
NPG: Noise power gain
ODE: Ordinary differential equation
OFIR: Optimal finite impulse response
OUFIR: Optimal unbiased finite impulse response
PF: Particle filter
PMF: Point mass filter
PSD: Power spectral density
RDE: Riccati differential equation
RH: Receding horizon
RKF: Robust Kalman filter
RMSE: Root mean square error
ROC: Region of convergence
RTS: Rauch‐Tung‐Striebel
SDE: Stochastic differential equation
SIS: Sequential importance sampling
SPDE: Stochastic partial differential equation
UFIR: Unbiased finite impulse response
UKF: Unscented Kalman filter
UT: Unscented transformation
WGN: White Gaussian noise
WLS: Weighted least squares
WSN: Wireless sensor network
cdf: cumulative distribution function
cf: characteristic function
cKF: centralized Kalman filter
cUFIR: centralized unbiased finite impulse response
dKF: distributed Kalman filter
dUFIR: distributed unbiased finite impulse response
μKF: micro Kalman filter
μUFIR: micro unbiased finite impulse response
pdf: probability density function
The limited memory filter appears to be the only device for preventing divergence in the presence of unbounded perturbation.
Andrew H. Jazwinski [79], p. 255
The term state estimation implies that we want to estimate the state of some process, system, or object using its measurements. Since measurements are usually carried out in the presence of noise, we want an accurate and precise estimator, preferably optimal and unbiased. If the environment or data is uncertain (or both) and the system is being attacked by disturbances, we also want the estimator to be robust. Since the estimator usually extracts state from a noisy observation, it is also called a filter, smoother, or predictor. Thus, a state estimator can be represented by a certain block (hardware or software), the operator of which allows transforming (in some sense) input data into an output estimate. Accordingly, the linear state estimator can be designed to have either infinite impulse response (IIR) or finite impulse response (FIR). Since IIR is a feedback effect and FIR is inherent to transversal structures, the properties of such estimators are very different, although both can be represented in batch forms and by iterative algorithms using recursions. Note that effective recursions are available only for delta‐correlated (white) noise and errors.
In this chapter, we introduce the reader to FIR and IIR state estimates, discuss cost functions and the most critical properties, and provide a brief historical overview of the most notable works in the area. Since IIR‐related recursive Kalman filtering, described in a huge number of outstanding works, serves the special case of Gaussian noise and diagonal block covariance matrices, our main emphasis will be on the more general FIR approach.
When we deal with some stochastic dynamic system or process and want to predict its further behavior, we need to know the system characteristics at the present moment. Thus, we can use the fundamental concept of state variables, a set of which mathematically describes the state of a system. The practical need for this was formulated by Jazwinski in [79] as “…the engineer must know what the system is “doing” at any instant of time” and “…the engineer must know the state of his system.”
Obviously, the set of state variables should be sufficient to predict the future system behavior, which means that the number of state variables should not be less than practically required. But the number of state variables should also not exceed a reasonable set, because redundancy, ironically, reduces the estimation accuracy due to random and numerical errors. Consequently, the number of useful state variables is usually small, as will be seen next.
When tracking and localizing mechanical systems, the coordinates of location and velocities in each of the Cartesian coordinates are typical state variables. In precise satellite navigation systems, the coordinates, velocities, and accelerations in each of the Cartesian coordinates are a set of nine state variables. In electrical and electronic systems, the number of state variables is determined by the order of the differential equation or the number of storage elements, which are inductors and capacitors.
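As a simple illustration (a standard constant‐velocity example, not tied to a particular system in this book), tracking one coordinate with two state variables, the position $p_k$ and the velocity $\dot{p}_k$, over a sampling period $\tau$ gives

$$x_k = \begin{bmatrix} p_k \\ \dot{p}_k \end{bmatrix}, \qquad x_k = \begin{bmatrix} 1 & \tau \\ 0 & 1 \end{bmatrix} x_{k-1} + w_k, \qquad y_k = \begin{bmatrix} 1 & 0 \end{bmatrix} x_k + v_k,$$

where $w_k$ and $v_k$ are the system and measurement noise and only the position is measured.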
In periodic systems, the amplitude, frequency, and phase of the spectral components are necessary state variables. But in clocks that are driven by oscillators (periodic systems), the standard state variables are the time error, fractional frequency offset, and linear frequency drift rate.
In thermodynamics, a set of state variables consists of independent variables of a state function such as internal energy, enthalpy, and entropy. In ecosystem models, typical state variables are the population sizes of plants, animals, and resources. In complex computer systems, various states can be assigned to represent processes.
In industrial control systems, the number of required state variables depends on the plant program and the installation complexity. Here, a state observer provides an estimate of the set of internal plant states based on measurements of its input and output, and a set of state variables is assigned depending on practical applications.
The need to know the system state is dictated by many practical problems. An example of signal processing is system identification over noisy input and output. Control systems are stabilized using state feedback. When such problems arise, we need some kind of model and an estimator.
Any stochastic dynamic system can be represented by the first‐order linear or nonlinear vector differential equation (in continuous time) or difference equation (in discrete time) with respect to a set of its states. Such equations are called state equations, where state variables are usually affected by internal noise and external disturbances, and the model can be uncertain.
Estimating the state of a system with random components represented by the state equation means evaluating the state approximately using measurements over a finite time interval or all available data. In many cases, the complete set of system states cannot be determined by direct measurements in view of the practical inability of doing so. But even if it is possible, measurements are commonly accompanied by various kinds of noise and errors. Typically, the full set of state variables is observed indirectly by way of the system output, and the observed state is represented with an observation equation, where the measurements are usually affected by internal noise and external disturbances. The important thing is that if the system is observable, then it is possible to completely reconstruct the state of the system from its output measurements using a state observer. Otherwise, when the inner state cannot be observed, many practical problems cannot be solved.
Systems and processes can be either nonlinear or linear. Accordingly, we recognize nonlinear and linear state‐space models. Linear models are represented by linear equations and Gaussian noise. A model is said to be nonlinear if it is represented by nonlinear equations or by linear equations with non‐Gaussian random components.
A physical nonlinear system with random components can be represented in continuous time by the following time‐varying state‐space model,

$$\dot{x}(t) = f(x(t), u(t), t) + w(t)\,, \qquad (1.1)$$
$$y(t) = h(x(t), t) + v(t)\,, \qquad (1.2)$$

where the nonlinear differential equation (1.1) is called the state equation and the algebraic equation (1.2) the observation equation. Here, $x(t)$ is the system state vector, $u(t)$ is the input (control) vector, and $y(t)$ is the state observation vector; $w(t)$ is some system error, noise, or disturbance; $v(t)$ is an observation error or measurement noise; $f$ is a nonlinear system function; and $h$ is a nonlinear observation function. Vectors $w(t)$ and $v(t)$ can be Gaussian or non‐Gaussian, correlated or noncorrelated, additive or multiplicative. For time‐invariant systems, both nonlinear functions become constant.
In discrete time $t_k$, a nonlinear system can be represented in state space with a time step $\tau = t_{k+1} - t_k$ using either the forward Euler (FE) method or the backward Euler (BE) method. By the FE method, the discrete‐time state equation turns out to be predictive, and we have

$$x_{k+1} = f_k(x_k, u_k) + w_k\,, \qquad (1.3)$$
$$y_k = h_k(x_k) + v_k\,, \qquad (1.4)$$

where $x_k$ is the state, $u_k$ is the input, $y_k$ is the observation, $w_k$ is the system error or disturbance, and $v_k$ is the observation error. The model in (1.3) and (1.4) is basic for digital control systems, because it matches the predicted estimate required for feedback and model predictive control.
By the BE method, the discrete‐time nonlinear state‐space model becomes

$$x_k = f_k(x_{k-1}, u_k) + w_k\,, \qquad (1.5)$$
$$y_k = h_k(x_k) + v_k\,, \qquad (1.6)$$

to suit the many signal processing problems in which prediction is not required. Since the model in (1.5) and (1.6) is not predictive, it usually approximates a nonlinear process more accurately. An illustrative comparison of the two discretizations follows.
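To feel the difference between the two discretizations, consider a hypothetical scalar system $\dot{x} = -ax^3 + u$ (an illustrative script, not from the book; MATLAB's fzero solves the implicit BE equation):

```matlab
% Illustrative comparison of FE and BE discretization (hypothetical scalar
% system dx/dt = -a*x^3 + u; numbers and names are examples, not the book's).
a = 1; tau = 0.1; K = 50;          % model constant, time step, number of steps
u = ones(1, K);                    % constant unit input
xFE = zeros(1, K); xBE = zeros(1, K);
xFE(1) = 0.5; xBE(1) = 0.5;        % common initial state
for k = 1:K-1
    % FE: explicit and predictive, x_{k+1} follows directly from x_k
    xFE(k+1) = xFE(k) + tau*(-a*xFE(k)^3 + u(k));
    % BE: implicit, x_{k+1} solves an algebraic equation in itself
    g = @(x) x - xBE(k) - tau*(-a*x^3 + u(k+1));
    xBE(k+1) = fzero(g, xBE(k));   % root search started from the last state
end
```

The FE recursion is explicit and hence predictive, while the BE recursion is implicit and typically follows the continuous trajectory more closely for the same step.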
A linear time‐varying (LTV) physical system with random components can be represented in continuous time using the following state‐space model

$$\dot{x}(t) = A(t) x(t) + B(t) u(t) + E(t) w(t)\,, \qquad (1.7)$$
$$y(t) = C(t) x(t) + D(t) v(t)\,, \qquad (1.8)$$

where the noise vectors $w(t)$ and $v(t)$ can be either Gaussian or not, correlated or not. If $w(t)$ and $v(t)$ are both zero mean, uncorrelated, and white Gaussian with the covariances $\mathcal{Q}\,\delta(t-\theta)$ and $\mathcal{R}\,\delta(t-\theta)$, where $\mathcal{Q}$ and $\mathcal{R}$ are the relevant power spectral densities, then the model in (1.7) and (1.8) is said to be linear. Otherwise, it is nonlinear. Note that all matrices in (1.7) and (1.8) become constant, $A$, $B$, $E$, $C$, and $D$, when a system is linear time‐invariant (LTI). If the order of the disturbance is less than the order of the system, the model in (1.7) and (1.8) becomes standard for problems that consider the vectors $w(t)$ and $v(t)$ as the system and measurement noise, respectively.
By the FE method, the linear discrete‐time state equation also turns out to be predictive, and the state‐space model becomes

$$x_{k+1} = F_k x_k + E_k u_k + B_k w_k\,, \qquad (1.9)$$
$$y_k = H_k x_k + D_k v_k\,, \qquad (1.10)$$

where $F_k$, $E_k$, $B_k$, $H_k$, and $D_k$ are time‐varying matrices. If the discrete noise vectors $w_k$ and $v_k$ are zero mean and white Gaussian with the covariances $Q_k$ and $R_k$, then this model is called linear.
Using the BE method, the corresponding state‐space model takes the form

$$x_k = F_k x_{k-1} + E_k u_k + B_k w_k\,, \qquad (1.11)$$
$$y_k = H_k x_k + D_k v_k\,, \qquad (1.12)$$

and we notice again that for LTI systems all matrices in (1.9)–(1.12) become constant.
Both the FE‐ and BE‐based discrete‐time state‐space models are employed to design state estimators with the following specifics. The disturbance‐related matrix term is neglected if the order of the disturbance is less than the order of the system, which is required for stability. If the noise in (1.9)–(1.12) is Gaussian and the model is thus linear, then optimal state estimation is provided using batch optimal FIR filtering and recursive optimal Kalman filtering. When $w_k$ and/or $v_k$ are non‐Gaussian, the model becomes nonlinear, and other estimators can be more accurate. In some cases, the nonlinear model can be converted to a linear one, as in the case of colored Gauss‐Markov noise; see the augmentation sketch below. If $w_k$ and $v_k$ are unknown and bounded only in some norm, then the model in (1.9)–(1.12) can be used to derive different kinds of estimators called robust.
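For instance, if the measurement noise is colored Gauss‐Markov, $v_k = \Theta v_{k-1} + \xi_k$ with white $\xi_k$, one standard way back to a linear problem is state augmentation (a sketch in the notation of (1.11) and (1.12), assuming $D_k = I$):

$$\begin{bmatrix} x_k \\ v_k \end{bmatrix} = \begin{bmatrix} F_k & 0 \\ 0 & \Theta \end{bmatrix} \begin{bmatrix} x_{k-1} \\ v_{k-1} \end{bmatrix} + \begin{bmatrix} B_k w_k \\ \xi_k \end{bmatrix}, \qquad y_k = \begin{bmatrix} H_k & I \end{bmatrix} \begin{bmatrix} x_k \\ v_k \end{bmatrix},$$

after which the augmented model is linear and driven by white noise only; the price is a formally noise‐free observation, which requires care in recursive implementations.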
Before discussing the properties of state estimators fitting various cost functions, it is necessary to introduce baseline estimates and errors, assuming that the observation is available from the past (not necessarily zero) to the time index $k$ inclusive. The following filtering estimates are commonly used:

$\hat{x}_{k|k}$ is the a posteriori estimate.
$\hat{x}_{k|k-1}$ is the a priori or predicted estimate.
$P_{k|k} = E\{\epsilon_{k|k}\,\epsilon_{k|k}^T\}$ is the a posteriori error covariance.
$P_{k|k-1} = E\{\epsilon_{k|k-1}\,\epsilon_{k|k-1}^T\}$ is the a priori or predicted error covariance,

where $\hat{x}_{k|r}$ means an estimate at $k$ over data available from the past to and including the time index $r$, $\epsilon_{k|k} = x_k - \hat{x}_{k|k}$ is the a posteriori estimation error, and $\epsilon_{k|k-1} = x_k - \hat{x}_{k|k-1}$ is the a priori estimation error. Here and in the following, $E\{\cdot\}$ is an operator of averaging.
Since the state estimates can be derived in various senses using different performance criteria and cost functions, different state estimators can be designed using FE and BE methods to have many useful properties. In considering the properties of state estimators, we will present two other important estimation problems: smoothing and prediction.
If the model is linear, then the optimal estimate is obtained by the batch optimal FIR (OFIR) filter and the recursive Kalman filter (KF) algorithm. The KF algorithm is elegant, fast, and optimal for the white Gaussian approximation. Approximation! Does this mean it has nothing to do with the real world, because white noise does not exist in nature? No! Engineering is the science of approximation, and KF perfectly matches engineering tasks. Therefore, it found a huge number of applications, far more than any other state estimator available. But is it true that KF should always be used when we need an approximate estimate? Practice shows no! When the environment is strictly non‐Gaussian and the process is disturbed, then batch estimators operating with full block covariance and error matrices perform better and with higher accuracy and robustness. This is why, based on practical experience, F. Daum summarized in [40] that “Gauss's batch least squares …often gives accuracy that is superior to the best available extended KF.”
The state estimator performance depends on a number of factors, including cost function, accurate modeling, process suitability, environmental influences, noise distribution and covariance, etc. The linear optimal filtering theory [9] assumes that the best estimate is achieved if the model adequately represents a system, an estimator is of the same order as the model, and both noise and initial values are known. Since such assumptions may not always be met in practice, especially under severe operation conditions, an estimator must be stable and sufficiently robust. In what follows, we will look at the most critical properties of batch state estimators that meet various performance criteria. We will view the real‐time state estimator as a filter that has an observation and control signal in the input and produces an estimate in the output. We will also consider smoothing and predictive state estimation structures. Although we will refer to all the linear and nonlinear state‐space models discussed earlier, the focus will be on discrete‐time systems and estimates.
In the time domain, the general operator of a linear system is convolution, and a convolution‐based linear state estimator (filter) can be designed to have either IIR or FIR. In continuous time, linear and nonlinear state estimators are electronic systems that implement differential equations and produce output electrical signals proportional to the system state. In this book, we will pay less attention to such estimators.
In discrete time, a discrete convolution‐based state estimator can be designed to perform the following operations:
Filtering, to produce an estimate $\hat{x}_{k|k}$ at $k$
Smoothing, to produce an estimate $\hat{x}_{k-q|k}$ at $k-q$ with a delay lag $q > 0$
Prediction, to produce an estimate $\hat{x}_{k+p|k}$ at $k+p$ with a step $p > 0$
Smoothing filtering, to produce an estimate $\hat{x}_{k|k+q}$ at $k$ taking values from $q$ future points
Predictive filtering, to produce an estimate $\hat{x}_{k|k-p}$ at $k$ over data delayed by $p$ points
These operations are performed on a horizon of $N$ data points, and there are three procedures most often implemented in digital systems:

Filtering $\hat{x}_{k|k}$ at $k$ over a data horizon $[m, k]$, where $m = k - N + 1$, to determine the current system state
One‐step prediction $\hat{x}_{k+1|k}$ at $k+1$ over $[m, k]$ to predict the future system state
Predictive filtering $\hat{x}_{k|k-1}$ at $k$ over $[m-1, k-1]$ to organize the receding horizon (RH) state feedback control or model predictive control (MPC)
It is worth noting that if the discrete convolution is long, then a computational problem may arise and batch estimation becomes impractical for real‐time applications.
To design a batch estimator, observations and control signals collected on a horizon $[m, k]$ of $N$ points, from $m = k - N + 1$ to $k$, can be united in extended vectors $Y_{m,k}$ and $U_{m,k}$. Then the nonlinear state estimator can be represented by a time‐varying operator and, as shown in Fig. 1.1, three basic $p$‐shift state estimators are recognized to produce the filtering estimate if $p = 0$, the $q$‐lag smoothing estimate if $p = -q < 0$, and the $p$‐step prediction if $p > 0$:
FIR state estimator (Fig. 1.1a), in which the initial state estimate and error matrix are variables of the operator
IIR limited memory state estimator (Fig. 1.1b), in which the initial state is taken beyond the horizon and becomes an input
RH FIR state estimator (Fig. 1.1c), which processes one‐step delayed inputs and in which the initial state estimate and error matrix are variables of the operator
Figure 1.1 Generalized structures of nonlinear state estimators: (a) FIR, (b) IIR limited memory, and (c) RH FIR; filter by $p = 0$, $q$‐lag smoother by $p = -q < 0$, and $p$‐step predictor by $p > 0$.
Due to different cost functions, the nonlinear operator may or may not require information about the noise statistics, and the initial values may or may not be its variables. For time‐invariant models, the operator is also time‐invariant. Regardless of the properties of the operator, the $p$‐dependent structures (Fig. 1.1) can give either a filtering estimate, a $q$‐lag smoothing estimate, or a $p$‐step prediction.
In the FIR state estimator (Fig. 1.1a), the initial state estimate and error matrix represent the supposedly known state at the initial point of the horizon. Therefore, they are variables of the operator. This estimator has no feedback, and all its transients are limited by the horizon length of $N$ points.
In the limited memory state estimator (Fig. 1.1b), the initial state is taken beyond the horizon. Therefore, it goes to the input and is provided through estimator state feedback, thanks to which this estimator has an IIR and long‐lasting transients.
The RH FIR state estimator (Fig. 1.1c) works similarly to the FIR estimator (Fig. 1.1a) but processes one‐step delayed inputs. Since the predicted estimate appears at the output of this estimator before the next data arrive, it is used in state feedback control. This property of RH FIR filters is highly regarded in the MPC theory [106].
Due to the properties of homogeneity and additivity [167], the data and control signal in linear state estimators can be processed separately by introducing the homogeneous gain and the forced gain, which are time varying for LTV systems and constant for LTI systems. The generalized structures of linear state estimators that serve LTV systems are shown in Fig. 1.2 and can be easily modified for LTI systems using the constant gains.
Figure 1.2 Generalized structures of linear state estimators: (a) FIR, (b) limited memory IIR, and (c) RH FIR; filter by $p = 0$, $q$‐lag smoother by $p = -q < 0$, and $p$‐step predictor by $p > 0$. Based on [174].
The $p$‐shift linear FIR filtering estimate corresponding to the structure shown in Fig. 1.2a can be written as [173]

$$\hat{x}_{k+p|k} = \mathcal{H}_{m,k}(p)\, Y_{m,k} + \mathcal{S}_{m,k}(p)\, U_{m,k}\,, \qquad (1.13)$$

where the $p$‐dependent homogeneous gain $\mathcal{H}_{m,k}(p)$ is defined for zero input, $U_{m,k} = 0$, and for zero initial conditions, and $\mathcal{S}_{m,k}(p)$ is the forced gain. For Gaussian models, the OFIR estimator requires all available information about the system and noise, and thus the noise covariances, initial state, and estimation error become variables of its gains $\mathcal{H}_{m,k}$ and $\mathcal{S}_{m,k}$. It has been shown in [229] that iterative computation of the batch OFIR filtering estimate with $p = 0$ is provided by Kalman recursions. If such an estimate is subjected to the unbiasedness constraint, then the initial values are removed from the variables. In another extreme, when an estimator is derived to satisfy only the unbiasedness condition, the gains $\mathcal{H}_{m,k}$ and $\mathcal{S}_{m,k}$ depend neither on the zero‐mean noise statistics nor on the initial values. It is also worth noting that if the control signal is tracked exactly, then the forced gain can be expressed via the homogeneous gain, and the latter becomes the fundamental gain of the FIR state estimator.
The batch linear limited memory IIR state estimator appears from Fig. 1.2b by combining the subestimates as

$$\hat{x}_{k+p|k} = \mathcal{H}_{m,k}(p)\, Y_{m,k} + \mathcal{S}_{m,k}(p)\, U_{m,k} + \mathcal{D}_{m,k}(p)\, \hat{x}_{m-1}\,, \qquad (1.14)$$

where the initial state $\hat{x}_{m-1}$ taken beyond the horizon is processed with the gain $\mathcal{D}_{m,k}(p)$. As will become clear in the sequel, the limited memory filter (LMF) specified by (1.14) with $p = 0$ is the batch KF.
The RH FIR state estimator (Fig. 1.2c) is the FIR estimator (Fig. 1.2a) that produces a $p$‐shift state estimate over one‐step delayed data and control signal as

$$\hat{x}_{k+p|k-1} = \mathcal{H}_{m-1,k-1}(p)\, Y_{m-1,k-1} + \mathcal{S}_{m-1,k-1}(p)\, U_{m-1,k-1}\,. \qquad (1.15)$$

By $p = 0$, this estimator becomes the RH FIR filter used in state feedback control and MPC. The theory of this filter has been developed in great detail by W. H. Kwon and his followers [91].
It has to be remarked now that a great many nonlinear problems can be solved using linear estimators if we approximate the nonlinear functions between two neighboring discrete points using the Taylor series expansion, as sketched below. State estimators designed in such a way are called extended. Note that other approaches employing the Volterra series and describing functions [167] have received much less attention in state space.
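In the simplest first‐order case (a standard sketch, with symbols as in (1.9) and (1.10)), the nonlinear functions in (1.3) and (1.4) are expanded about the latest estimate $\hat{x}$ as

$$f_k(x) \approx f_k(\hat{x}) + F_k (x - \hat{x})\,, \qquad h_k(x) \approx h_k(\hat{x}) + H_k (x - \hat{x})\,, \qquad F_k = \left. \frac{\partial f_k}{\partial x} \right|_{x = \hat{x}}, \quad H_k = \left. \frac{\partial h_k}{\partial x} \right|_{x = \hat{x}},$$

which reduces the problem between neighboring points to the linear models considered earlier; this is the route taken by the extended KF and the extended FIR filters.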
The term optimal is commonly applied to estimators of linear stochastic processes, in which case the trace of the error covariance, which is the mean square error (MSE), is convex and the optimal gain is required to keep it to a minimum. It is also used when the problem is not convex and the estimation error is minimized in some other sense.
The estimator optimality is highly dependent on noise distribution and covariance. That is, an estimator must match not only the system model but also the noise structure. Otherwise, it can be improved and thus each type of noise requires its own optimal filter.
If a nonlinear system is represented with a nonlinear stochastic differential equation (SDE) (1.1), where $w(t)$ is white Gaussian, then the optimal filtering problem can be solved using the approach originally proposed by Stratonovich [193] and further developed by many other authors. For linear systems represented by SDE (1.7), an optimal filter was derived by Kalman and Bucy in [85], and this is a special case of Stratonovich's solution.
If a discrete‐time system is represented by a stochastic difference equation, then an optimal filter (Fig. 1.1) can be obtained by minimizing the MSE, which is the trace of the error covariance $P$. The optimal filter gain can thus be determined by solving the minimization problem

$$\mathcal{H}^{\mathrm{opt}}_{m,k}(p) = \arg\min_{\mathcal{H}_{m,k}(p)} \operatorname{tr}\, E\{\epsilon_{k+p|k}\, \epsilon_{k+p|k}^T\} \qquad (1.16)$$

to guarantee, at the given $p$, an optimal balance between random errors and bias errors, and as a matter of notation we notice that the optimal estimate is biased. A solution to (1.16) results in the batch $p$‐shift OFIR filter [176]. Given $p = 0$, the OFIR filtering estimate can be computed iteratively using Kalman recursions [229]. Because the state estimator derived in this way matches the model and noise, it follows that there is no other estimator for Gaussian processes that performs better than the OFIR filter and the KF algorithm.
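For intuition, if the estimate is sought in the generic linear batch form $\hat{x}_k = \mathcal{H} Y_{m,k}$ (a standard least‐mean‐square sketch, under the assumption of an invertible data covariance), the orthogonality principle yields

$$\mathcal{H}^{\mathrm{opt}} = \arg\min_{\mathcal{H}} \operatorname{tr}\, E\{(x_k - \mathcal{H} Y_{m,k})(x_k - \mathcal{H} Y_{m,k})^T\} = E\{x_k Y_{m,k}^T\} \left( E\{Y_{m,k} Y_{m,k}^T\} \right)^{-1},$$

so the optimal gain trades the cross‐correlation of the state with the data against the data covariance; the specific OFIR forms of this solution are derived in Chapter 4.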
In the transform domain, FIR filter optimality can be achieved for LTI systems using the $H_2$ approach, by minimizing the squared Frobenius norm of the noise‐to‐error weighted transfer function averaged over all frequencies [141]. Accordingly, the gain of the OFIR state estimator can be determined by solving the minimization problem

$$\mathcal{H}^{\mathrm{opt}} = \arg\min_{\mathcal{H}} \frac{1}{2\pi} \int_{-\pi}^{\pi} \left\| T_{\epsilon}(e^{j\omega}) \right\|_F^2 \, d\omega\,, \qquad (1.17)$$

where $T_{\epsilon}(e^{j\omega})$ is the weighted noise‐to‐error transfer function.