Multimodal Perception and Secure State Estimation for Robotic Mobility Platforms

Xinghua Liu

Description

Enables readers to understand important new trends in multimodal perception for mobile robotics.

This book provides a novel perspective on secure state estimation and multimodal perception for robotic mobility platforms such as autonomous vehicles. It thoroughly evaluates filter-based secure dynamic pose estimation approaches for autonomous vehicles under multiple attack signals and shows that they outperform conventional Kalman-filtered results. As a modern learning resource, it contains extensive simulation and experimental results that have been successfully implemented on various models and real platforms. To aid reader comprehension, detailed and illustrative examples of algorithm implementation and performance evaluation are also presented.

Written by four qualified authors in the field, sample topics covered in the book include:

* Secure state estimation that focuses on system robustness under cyber-attacks
* Multi-sensor fusion that helps improve system performance based on the complementary characteristics of different sensors
* A geometric pose estimation framework that incorporates measurements and constraints into a unified fusion scheme, validated using public and self-collected data
* How to achieve real-time road-constrained and heading-assisted pose estimation

This book will appeal to graduate-level students and professionals in the fields of ground vehicle pose estimation and perception who are looking for modern and updated insight into key concepts related to robotic mobility platforms.

Page count: 287

Publication year: 2022



Table of Contents

Cover

Title Page

Copyright

Dedication

About the Authors

Preface

1 Introduction

1.1 Background and Motivation

1.2 Multimodal Pose Estimation for Vehicle Navigation

1.3 Secure Estimation

1.4 Contributions and Organization

Note

Part I: Multimodal Perception in Vehicle Pose Estimation

2 Heading Reference‐Assisted Pose Estimation

2.1 Preliminaries

2.2 Abstraction Model of Measurement with a Heading Reference

2.3 Heading Reference‐Assisted Pose Estimation (HRPE)

2.4 Simulation Studies

2.5 Experimental Results

2.6 Conclusion

Note

3 Road‐Constrained Localization Using Cloud Models

3.1 Preliminaries

3.2 Map‐Assisted Ground Vehicle Localization

3.3 Experimental Validation on UGD

3.4 Experimental Validation on GGD

3.5 Conclusion

Note

4 GPS/Odometry/Map Fusion for Vehicle Positioning Using Potential Functions

4.1 Potential Wells and Potential Trenches

4.2 Potential‐Function‐Based Fusion for Vehicle Positioning

4.3 Experimental Results

4.4 Conclusion

5 Multi‐Sensor Geometric Pose Estimation

5.1 Preliminaries

5.2 Geometric Pose Estimation Using Dynamic Potential Fields

5.3 VO‐Heading‐Map Pose Estimation for Ground Vehicles

5.4 Experiments on KITTI Sequences

5.5 Experiments on the NTU Dataset

5.6 Conclusion

Notes

Part II: Secure State Estimation for Mobile Robots

6 Filter‐Based Secure Dynamic Pose Estimation

6.1 Introduction

6.2 Related Work

6.3 Problem Formulation

6.4 Estimator Design

6.5 Discussion of Parameter Selection

6.6 Experimental Validation

6.7 Conclusion

7 UKF‐Based Vehicle Pose Estimation under Randomly Occurring Deception Attacks

7.1 Introduction

7.2 Related Work

7.3 Pose Estimation Problem for Ground Vehicles under Attack

7.4 Design of the Unscented Kalman Filter

7.5 Numeric Simulation

7.6 Experiments

7.7 Conclusion

8 Secure Dynamic State Estimation with a Decomposing Kalman Filter

8.1 Introduction

8.2 Problem Formulation

8.3 Decomposition of the Kalman Filter By Using a Local Estimate

8.4 A Secure Information Fusion Scheme

8.5 Numerical Example

8.6 Conclusion

8.7 Appendix: Proof of Theorem 8.2

8.8 Proof of Theorem 8.4

Notes

9 Secure Dynamic State Estimation for AHRS

9.1 Introduction

9.2 Related Work

9.3 Attitude Estimation Using Heading References

9.4 Secure Estimator Design with a Decomposing Kalman Filter

9.5 Simulation Validation

9.6 Conclusion

Note

10 Conclusions

References

Index

End User License Agreement

List of Tables

Chapter 2

Table 2.1 Simulation results of SVO, loosely coupled HRPE (LC‐HRPE), and tig...

Table 2.2 Experimental results of SVO, loosely coupled HRPE (LC‐HRPE), and t...

Chapter 3

Table 3.1 Quantitative comparison between the proposed approach and SVO loca...

Table 3.2 Quantitative comparison between the proposed approach and MVO loca...

Table 3.3 Quantitative comparison in localization errors between the propose...

Table 3.4 Quantitative results of errors between the proposed approach, pure...

Chapter 4

Table 4.1 Test sequences.

Table 4.2 Positioning results with different methods.

Chapter 5

Table 5.1 Notations from Lie group and differential geometry.

Table 5.2 Pose estimation results of the proposed approach and LC‐HRPE on KI...

Table 5.3 Pose estimation results of the proposed approach and LC‐HRPE on NT...

Chapter 6

Table 6.1 Parameters in the secure filter.

List of Illustrations

Chapter 2

Figure 2.1 Stereo 3D‐2D projection from Euclidean space to the focal plane....

Figure 2.2 Abstraction of measurement and graph formulation.

Figure 2.3 Illustration of matrices for the loosely coupled and tightly co...

Figure 2.4 HRPE scheme. Solid arrows represent evolution over time for a sin...

Figure 2.5 Yaw estimation error of KITTI sequences 00, 02, 05, and 08 with V...

Figure 2.6 Estimation accuracy with heading measurement error using the loos...

Figure 2.7 Estimation accuracy with respect to sliding window size using the...

Figure 2.8 Average computing time per pose for simulated sequences.

Figure 2.9 Outliers at turning (upper), passing humps (middle), and intensit...

Figure 2.10 Our experimental platform.

Figure 2.11 Experimental results based on self‐collected sequences NTU 01–04...

Figure 2.12 Translation estimation error based on NTU 01–04.

Figure 2.13 The sizes of graphs to be optimized based on the tightly coupled...

Chapter 3

Figure 3.1 Framework of the proposed map‐assisted localization approach.

Figure 3.2 Sequences 00 and 08 from the VO benchmark of the KITTI dataset. U...

Figure 3.3 Sequences 00 and 08 from the VO benchmark of the KITTI dataset. U...

Figure 3.4 Scale estimation results for sequences 00 and 02, respectively.

Figure 3.5 Consistency range comparison with (light gray) and without (dark ...

Figure 3.6 Sequences 00 and 08 from the visual odometry benchmark of the KITTI d...

Figure 3.7 Sequences 00, 08, and 09 from the visual odometry benchmark of th...

Figure 3.8 The absolute scales of sequences 00, 08, and 09. The light gray s...

Figure 3.9 Our self‐collected dataset. (a) Trajectories estimated from stere...

Figure 3.10 The influence of the given parameters. (a) The influence of give...

Chapter 4

Figure 4.1 Examples of a potential well and a potential trench in 2D data sp...

Figure 4.2 Several situations with minimums in the resultant potential field...

Figure 4.3 Partial positioning results of sequence 2.

Figure 4.4 Positioning error of sequence 3.

Figure 4.5 Positioning results of sequence 4.

Figure 4.6 Positioning error of sequence 4.

Figure 4.7 Positioning results of sequence 5.

Figure 4.8 Positioning error of sequence 5.

Figure 4.9 Positioning results of sequence 6.

Figure 4.10 Positioning error of sequence 6.

Chapter 5

Figure 5.1 Geometric representation of the state evolution from one state to the next.

Figure 5.2 Illustration of a dynamic potential field, which can be obtained fr...

Figure 5.3 Approximating mappings between DPFs with mappings between samples...

Figure 5.4 Generating road constraints for KITTI sequence 00 from OSM. (a) P...

Figure 5.5 Trajectories (upper), translational error (middle), and rotationa...

Figure 5.6 Trajectories (upper), translational error (middle), and rotationa...

Figure 5.7 Estimation accuracy with respect to AHRS measurement error.

Figure 5.8 Translational estimation accuracy with respect to road map resolu...

Figure 5.9 Estimation accuracy with respect to the number of particles.

Figure 5.10 Trajectories (left) and translational error (right) of SVO, Geo‐...

Chapter 6

Figure 6.1 Reference frames and the steering model.

Figure 6.2 (a) The testing route with process noise and measurement noise on...

Figure 6.3 Estimator performance under attack on a single state. Upper, midd...

Figure 6.4 Diagonal elements (a), (b), and (c) with regard to differen...

Figure 6.5 Estimator performance under attacks on multiple states. The top...

Figure 6.6 Estimator performance under attacks on multiple states. The upp...

Figure 6.7 The L2 norm of the estimator error covariance matrix with regard ...

Chapter 7

Figure 7.1 Simulation performance of the proposed filter (“UKF”), the filter...

Figure 7.2 The estimation squared error bar graph. (a) The...

Figure 7.3 Pose estimation of selected states (first column), (second co...

Figure 7.4 The diagonal elements in the estimation covariance matrices. Th...

Chapter 8

Figure 8.1 The information flow of the proposed filter.

Figure 8.2 The normalized mean squared error of the secure estimator versus ...

Chapter 9

Figure 9.1 Heading reference‐based attitude estimation with two heading refe...

Figure 9.2 Secure attitude estimation framework. This chapter focuses on des...

Figure 9.3 Estimator performance without and with attacks.

Figure 9.4 Estimator performance without and with attacks.

Figure 9.5 Mean squared error (MSE) ratio between the proposed secure filter...



IEEE Press
445 Hoes Lane, Piscataway, NJ 08854

IEEE Press Editorial Board
Sarah Spurgeon, Editor in Chief

Jón Atli Benediktsson
Andreas Molisch
Diomidis Spinellis
Anjan Bose
Saeid Nahavandi
Ahmet Murat Tekalp
Adam Drobot
Jeffrey Reed
Peter (Yong) Lian
Thomas Robertazzi

Multimodal Perception and Secure State Estimation for Robotic Mobility Platforms

Xinghua Liu
Xi'an University of Technology
China

Rui Jiang
National University of Singapore
Singapore

Badong Chen
Xi'an Jiaotong University
China

Shuzhi Sam Ge
National University of Singapore
Singapore

Copyright © 2023 by The Institute of Electrical and Electronics Engineers, Inc.

All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey.

Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per‐copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750‐8400, fax (978) 750‐4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748‐6011, fax (201) 748‐6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762‐2974, outside the United States at (317) 572‐3993 or fax (317) 572‐4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging‐in‐Publication Data applied for

 

Hardback: 9781119876014

 

Cover Design: Wiley

Cover Image: © Darq/Shutterstock.com

To

Yu Xia, Siyu

and

our loved ones

About the Authors

Xinghua Liu received the B.Sc. degree from Jilin University, Changchun, China, in 2009, and the Ph.D. degree in Automation from the University of Science and Technology of China, Hefei, in 2014. From 2014 to 2015, he was a Visiting Fellow at RMIT University, Melbourne, Australia. From 2015 to 2018, he was a Research Fellow at the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. Dr. Liu joined Xi'an University of Technology as a Professor in September 2018. His research interests include state estimation and control, intelligent systems, autonomous vehicles, cyber‐physical systems, and robotic systems.

Rui Jiang received the B.Eng. degree in Measurement, Control Technique and Instruments from Harbin Institute of Technology, Harbin, China, in 2014, and the Ph.D. degree in Control, Intelligent Systems, and Robotics from the National University of Singapore, Singapore, in 2019. Dr. Jiang is an Adjunct Lecturer with the Department of Electrical and Computer Engineering, National University of Singapore. His research interests include intelligent sensing and perception for robotic systems.

Badong Chen received the B.S. and M.S. degrees in control theory and engineering from Chongqing University, Chongqing, China, in 1997 and 2003, respectively, and the Ph.D. degree in Computer Science and Technology from Tsinghua University, Beijing, China, in 2008. He was a Postdoctoral Researcher with Tsinghua University from 2008 to 2010, and a Postdoctoral Associate with the University of Florida Computational NeuroEngineering Laboratory from 2010 to 2012. He is currently a Professor with the Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China. He has authored or coauthored two books, four chapters, and more than 200 papers in various journals and conference proceedings. His research interests include signal processing, machine learning, and their applications to neural engineering and robotics. Dr. Chen is a Member of the Machine Learning for Signal Processing Technical Committee of the IEEE Signal Processing Society and the Cognitive and Developmental Systems Technical Committee of the IEEE Computational Intelligence Society. He is an Associate Editor for the IEEE Transactions on Cognitive and Developmental Systems, IEEE Transactions on Neural Networks and Learning Systems, and the Journal of the Franklin Institute, and has been on the Editorial Board of Entropy.

Shuzhi Sam Ge received the Ph.D. degree and the Diploma of Imperial College (DIC) from Imperial College London, London, U.K., in 1993, and the B.Sc. degree from Beijing University of Aeronautics and Astronautics (BUAA), China, in 1986. He is a Professor with the Department of Electrical and Computer Engineering and a PI member of the Institute for Functional Intelligent Materials, National University of Singapore, Singapore, and the Founding Honorary Director of the Institute for Future (IFF), Qingdao University, Qingdao, China. He serves as the Founding Editor‐in‐Chief of the International Journal of Social Robotics (Springer Nature) and as book editor for the Automation and Control Engineering series of Taylor & Francis/CRC Press. He has served or is serving as an Associate Editor for a number of flagship journals, including IEEE TAC, IEEE TCST, IEEE TNN, IEEE Transactions on SMC‐Systems, Automatica, and CAAI Transactions on Intelligence Technology. At the Asian Control Association, he serves as President‐Elect, 2022–2024. At the IEEE Control Systems Society, he served as Vice President for Technical Activities, 2009–2010, Vice President for Membership Activities, 2011–2012, and Member of the Board of Governors, 2007–2009. He was a Clarivate Analytics Highly Cited Researcher in 2016–2021. He is a Fellow of IEEE, IFAC, IET, and SAEng. His current research interests include robotics, intelligent systems, artificial intelligence, and smart materials.

Preface

Cyber‐physical systems (CPSs), which refer to the embedding of widespread sensing, networking, computation, and control into physical spaces, play a crucial role in many areas today. As an important application area of CPSs, autonomous vehicles have emerged over the past few years as a subdiscipline of control theory in which the flow of information in a system takes place across a communication network. Unlike traditional control systems, where computation and communications are usually ignored, the approaches that have been developed for autonomous vehicle systems explicitly take into account various aspects of the communication channels that interconnect different parts of the overall system and the nature of the distributed computation that follows from this structure. This leads to a new set of tools and techniques for analyzing and designing autonomous vehicles that builds on the rich frameworks of communication theory, computer science, and control and estimation theory.

This book is intended for researchers and graduate‐level students interested in sensor fusion and secure state estimation for mobile robots and autonomous vehicles. The book aims to be self‐contained and to give interested readers insight into the modeling, analysis, and applications of techniques for CPS‐based autonomous vehicles. We also provide pointers to the literature for further reading on each explored topic. Moreover, numerous illustrative figures and step‐by‐step examples help readers understand the main ideas and implementation details.

This book is organized in two parts: multimodal perception for vehicle pose estimation and secure state estimation for mobile robots. The first part discusses different sensor configurations and introduces new sensor fusion algorithms and frameworks to minimize pose estimation errors. Those concepts and methods could be used in current state‐of‐the‐art autonomous vehicles, and extensive experimental results are provided to verify algorithm performance on real robotic platforms. In the second part, we deal with the problem of secure pose estimation for mobile robots under many different types of (possible sensor) attacks. A filter‐based secure dynamic pose estimation approach is presented such that the vehicle pose estimate remains resilient under randomly occurring deception attacks. Based on the established heading measurement model, we can decompose the optimal Kalman estimate into a linear combination of local state estimates, and a convex optimization‐based approach is introduced to combine the local estimates into a more secure estimate under sparse attacks.

In particular, Chapter 1 introduces multimodal pose estimation and secure estimation for vehicle navigation, as well as the organization of the entire book. Chapter 2 presents an optimization‐based sensor fusion framework where absolute heading measurements are used in vehicle pose estimation. Two road‐constrained pose estimation methods are introduced in Chapter 3 and Chapter 4 to reduce error and drift for long travel distances. Chapter 5 presents a unified framework to combine heading reference and map assistance, and the framework can be applied to other types of measurements and constraints in vehicle pose estimation. The core material on secure dynamic pose estimation for autonomous vehicles is presented in Chapter 6, where an upper bound for the estimation error covariance is guaranteed to establish stable estimates of the pose states. Chapter 7 presents a pose estimation approach for ground vehicles under randomly occurring deception attacks, and an unscented Kalman filter‐based secure pose estimator is then proposed to generate a stable estimate of the vehicle pose states. Chapters 8 and 9 then go on to consider secure dynamic state estimation under sparse attacks. We prove that we can decompose the optimal Kalman estimate as a weighted sum of local state estimates, and a convex optimization‐based approach is introduced to combine the local estimate into a more secure state estimate. It is shown that the proposed secure estimator coincides with the Kalman estimator with a certain probability when there is no attack and can be stable when elements of the model state are compromised. In each of these chapters on the core material, we have attempted to present a unified view of many of the most recent and relevant results in secure state estimation to establish a foundation on which more specialized results of interest to specific groups can be covered.

We would like to thank Prof. Han Wang, Prof. Yilin Mo, Prof. Emanuele Garone, Prof. Tong Heng Lee, Dr. Shuai Yang, Dr. Hui Zhou, Dr. Handuo Zhang, and Dr. Xiaomei Liu for their collaboration in this research area in recent years. We thank Prof. Xinde Li, Dr. Mien Van, and Dr. Yuanzhe Wang for carefully reviewing the book and for the continuous support given to us in the entire publication process. In addition, we would like to thank the National Key Research and Development Program of China (No. 2020YFB1313600), the National Natural Science Foundation of China, the Natural Science Foundation of Shaanxi Province, the Key Laboratory Project of Shaanxi Provincial Department of Education, and Shaanxi Youth Science and Technology New Star Project for their financial support of our research work.

                                Xinghua Liu

                                Xi'an University of Technology, China

                                Rui Jiang

                                National University of Singapore, Singapore

                                Badong Chen

                                Xi'an Jiaotong University, China

                                Shuzhi Sam Ge

                                National University of Singapore, Singapore

1 Introduction

1.1 Background and Motivation

With the rapid development of sensor technologies, and due to the increased density of integrated circuits predicted by Moore's law, autonomous driving has become a fruitful research area blending robotics, automation, computer vision, and intelligent transportation technologies. It has been reported that both traditional automobile companies and startups plan to have their autonomous driving systems ready in the 2020s [Ross, 2017].

The US Department of Transportation's National Highway Traffic Safety Administration (NHTSA) has adopted six levels of driving automation, from no automation (level 0), through driver assistance (level 1), to full automation (level 5) (https://www.sae.org/standards/content/j3016_202104). As an inspiring example, the Audi A8, launched in 2017, was claimed by Audi AG to be "the world's first production automobile conditionally automated at level 3." Nevertheless, some pessimistic voices have emerged, claiming that fully autonomous cars will not be developed as quickly as expected or are even unlikely. One of the pacesetters in fully autonomous driving technology, Waymo LLC, has received resident complaints due to conflicts in driving behavior between humans and autonomous vehicles.

Although level 5 autonomy is still a long way off, there is high demand for the development of autonomous vehicles so that tasks related to logistics, environmental cleanup, public security, and much more can be automated. Among all the functional blocks in an autonomous vehicle, the navigation system plays an irreplaceable role, since the vehicle needs to be literally "in motion" for any particular task. Multimodal perception and state estimation are two coadjutant modules for vehicle navigation. There have been extensive research outcomes on these two topics in autonomous vehicle navigation, but a few challenges still exist, which motivate the in‐depth studies carried out in this book:

A modern pose estimation system contains multiple sensors to achieve accuracy and robustness. Appropriate sensor configurations, which combine the advantages of each sensor to benefit the whole estimation system, are distinct depending on the specific applications and requirements. Based on a particular sensor configuration, new theories and ideas are required for multi‐sensor pose estimation, where states, measurements, and constraints are represented in a unified fusion framework.

Due to the stealthiness of attacks, system operators usually cannot discover attacks in time, which may lead to severe economic damage and even the loss of human lives. Such incidents indicate that enhancing the security of the system is an urgent issue. Researchers have studied how we can securely estimate the state of a dynamical system from the controller's point of view based on a set of noisy and maliciously corrupted sensor measurements. In particular, researchers have focused on linear dynamical systems and have tried to understand how the system dynamics can be leveraged for security guarantees.

This book discusses the pose estimation problem for robotic mobility platforms using information from multiple sensors. The first part discusses different sensor configurations and introduces new sensor fusion algorithms and frameworks to minimize pose estimation errors. Those concepts and methods are extensively used in current state‐of‐the‐art autonomous vehicles, and extensive experimental results have been provided to verify the algorithm performance on real robotic platforms. The second part focuses on the secure estimation problem in multi‐sensor fusion, where attacks are considered and explicitly modeled in algorithm design. As this is a new topic that is at the primary stage of research, theoretical analysis and simulation results are shown in the related chapters.

1.2 Multimodal Pose Estimation for Vehicle Navigation

1.2.1 Multi‐Sensor Pose Estimation

Multi‐sensor fusion is a typical solution where system dynamics, measurements, and constraints are fused consistently to increase estimation performance in terms of accuracy and robustness [Borges and Aldon, 2002, Ye et al., 2015, Teixeira et al., 2018]. Essentially, pose estimation can be considered as state estimation within a state space with a problem‐dependent topological structure. Let us assume the following discrete state equation and output equation:

(1.1)  x_{k+1} = f(x_k, u_k) + w_k
(1.2)  y_k = h(x_k) + v_k

where x_k, u_k, and y_k denote the state, control input, and measurement, respectively; f and h are the state and output functions; and w_k and v_k represent the process and measurement noise.
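To make the state and output equations concrete, the sketch below simulates a simple planar unicycle with Gaussian process and measurement noise. The specific model, time step, and noise levels are illustrative assumptions, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, u):
    # State equation: planar unicycle pose x = (px, py, yaw),
    # control u = (speed, yaw rate), time step 0.1 s (assumed values).
    px, py, yaw = x
    v, w = u
    dt = 0.1
    return np.array([px + v * np.cos(yaw) * dt,
                     py + v * np.sin(yaw) * dt,
                     yaw + w * dt])

def h(x):
    # Output equation: GPS-like position measurement (position only).
    return x[:2]

x = np.zeros(3)
traj, meas = [], []
for k in range(50):
    u = np.array([1.0, 0.1])                 # constant speed and turn rate
    x = f(x, u) + rng.normal(0.0, 0.01, 3)   # process noise w_k
    y = h(x) + rng.normal(0.0, 0.05, 2)      # measurement noise v_k
    traj.append(x.copy())
    meas.append(y)

traj = np.array(traj)
meas = np.array(meas)
```

The pose estimation problem is then to recover the trajectory `traj` from the noisy measurements `meas`, given the model f and h.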

Filtering and optimization are two frequently used data fusion frameworks for pose estimation. Filtering approaches propagate state vectors with their joint probability distributions over time. The Kalman filter models the state and noise as Gaussian, which is not suitable for non‐Gaussian or multimodal distributions. The particle filter and its variants [Van Der Merwe et al., 2001, Nummiaro et al., 2003] have been proposed to deal with non‐linear and non‐Gaussian systems, but the computational load of updating particle states grows rapidly with the number of samples. Optimization‐based approaches retain historical measurements and estimates as a graph so that they can be used for bundle adjustment or simultaneous localization and mapping (SLAM) [Grisetti et al., 2010]. The two commonly used frameworks are elaborated here.

Filtering‐Based Approaches As shown in related work [Janabi‐Sharifi and Marey, 2010, Koval et al., 2015, Bloesch et al., 2017], filters provide a probabilistic solution to pose estimation, which can be divided into two steps. First, the "prediction" step predicts the state without the current measurement, according to the state equation:

(1.3)  p(x_k | y_{1:k-1}) = ∫ p(x_k | x_{k-1}) p(x_{k-1} | y_{1:k-1}) dx_{k-1}

where p(· | ·) denotes the conditional distribution, and specifically p(x_k | x_{k-1}) is obtained from (1.1). Then, the updated probability distribution is obtained in the "correction" step, based on the output equation:

(1.4)  p(x_k | y_{1:k}) = p(y_k | x_k) p(x_k | y_{1:k-1}) / p(y_k | y_{1:k-1})

where p(y_k | x_k) is obtained from (1.2), and the constant denominator is

(1.5)  p(y_k | y_{1:k-1}) = ∫ p(y_k | x_k) p(x_k | y_{1:k-1}) dx_k
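For linear‐Gaussian models, the prediction and correction steps above have closed forms, yielding the Kalman filter. The scalar sketch below is a minimal illustration; the model constants a, Q, and R are assumed values, not from the book:

```python
import numpy as np

# Scalar linear-Gaussian instance of (1.1)-(1.2):
#   x_{k+1} = a * x_k + w_k,   y_k = x_k + v_k
a, Q, R = 1.0, 0.01, 0.25          # illustrative model constants
rng = np.random.default_rng(1)

x_true, xhat, P = 0.0, 0.0, 1.0
errs = []
for k in range(200):
    x_true = a * x_true + rng.normal(0.0, np.sqrt(Q))
    y = x_true + rng.normal(0.0, np.sqrt(R))
    # Prediction: propagate mean and variance through the state equation
    xhat, P = a * xhat, a * P * a + Q
    # Correction: weight the new measurement by the Kalman gain
    K = P / (P + R)
    xhat, P = xhat + K * (y - xhat), (1.0 - K) * P
    errs.append(xhat - x_true)

rmse = float(np.sqrt(np.mean(np.square(errs))))
```

With these constants, the posterior variance P settles near 0.045, well below the raw measurement variance R = 0.25, which is exactly the benefit the correction step provides.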

Optimization‐Based Approaches Instead of using filtering, some other research [Leutenegger et al., 2015, Huang et al., 2017, Parisotto et al., 2018, Wang et al., 2018a] aims to minimize a user‐defined cost function such that

(1.6)  x* = argmin_x Σ_{i∈C} e_i(x)^T Ω_i e_i(x)

where C denotes the set of cost items to be considered; the information matrix Ω_i indicates the degree of confidence in the corresponding measurement; and the error function e_i(x) measures the difference between the ideal and actual measurement.
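As a concrete linear instance of the cost in (1.6), fusing two measurements of the same 2D point with different information matrices reduces to an information‐weighted average; the numbers below are illustrative:

```python
import numpy as np

# Two measurements z1, z2 of the same 2D position x, with information
# matrices Omega1, Omega2 (inverse covariances).  The cost
#   J(x) = (x - z1)^T Omega1 (x - z1) + (x - z2)^T Omega2 (x - z2)
# is quadratic, so its minimizer is the information-weighted average.
z1 = np.array([1.0, 2.0]); Omega1 = np.diag([4.0, 4.0])   # more confident
z2 = np.array([2.0, 1.0]); Omega2 = np.diag([1.0, 1.0])   # less confident

x_star = np.linalg.solve(Omega1 + Omega2, Omega1 @ z1 + Omega2 @ z2)
```

The minimizer lands closer to the measurement with the larger information matrix, illustrating how Ω_i encodes confidence in each cost item.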

1.2.2 Pose Estimation with Constraints

Constraints1 in pose estimation are helpful in increasing algorithm robustness and accuracy. For example, we may consider motion constraints (cf. (1.1)), which limit the vehicle's pose change over time, and road constraints, which require the vehicle to stay on the road. Constraints in practical problems are mostly treated as soft, to allow for modeling errors and noise. We discuss constrained pose estimation from two perspectives.

Incorporating Constraints into Filtering Given the constraints g(x_k) = d, where d is a constant vector, an augmented output equation can be obtained to incorporate the constraints into the measurements [Mourikis and Roumeliotis, 2007, Simon, 2010, Boada et al., 2017, Ramezani et al., 2017, Yang et al., 2017a, Shen et al., 2017]:

(1.7)  [y_k; d] = [h(x_k); g(x_k)] + [v_k; v̄_k]

where the covariance matrix of the pseudo‐noise v̄_k indicates the confidence in the soft constraints. With the prediction step unchanged, the correction step is carried out with the augmented output equation.

In addition, we may first obtain the estimate x̂_k without constraints and then project the unconstrained estimate onto the constraint set to get the final estimate:

(1.8)  x̂_k^c = argmin_{x: g(x) = d} (x ⊖ x̂_k)^T W (x ⊖ x̂_k)

where ⊖ is an operator indicating the difference between states, and W is a positive‐definite weighting matrix. For linear systems under linear constraints, i.e. g(x) = Dx, ordinary vector subtraction is selected as ⊖, leading to analytical solutions. Numerical methods are required to generalize the projection method to non‐linear systems or non‐linear constraints. For particle filters, particle weights can be adjusted to reduce the influence of estimates that do not satisfy the constraints.
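For a linear constraint Dx = d with ordinary vector subtraction, one standard closed form of the projection in (1.8) is x_c = x̂ − W⁻¹Dᵀ(DW⁻¹Dᵀ)⁻¹(Dx̂ − d). A small numeric check with illustrative values:

```python
import numpy as np

# Unconstrained 2D position estimate, to be projected onto the
# road constraint D x = d (here: the line px + py = 3).
x_hat = np.array([1.0, 1.0])
D = np.array([[1.0, 1.0]])
d = np.array([3.0])
W = np.eye(2)                    # weighting matrix (identity: Euclidean projection)

Winv = np.linalg.inv(W)
lam = np.linalg.solve(D @ Winv @ D.T, D @ x_hat - d)
x_c = x_hat - Winv @ D.T @ lam   # constrained estimate

# x_c now satisfies the constraint D @ x_c == d exactly.
```

Choosing W as the inverse estimation covariance instead of the identity projects along the directions in which the unconstrained estimate is least trusted.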

Incorporating Constraints into Optimization For hard constraints, the method of Lagrange multipliers can be used to construct the corresponding unconstrained optimization problem. For soft constraints, one naive but effective way is to add penalty functions to the cost function in (1.6), such that

(1.9)  x* = argmin_x Σ_{i∈C} e_i(x)^T Ω_i e_i(x) + Σ_j λ_j c_j(x)

where c_j(x) denotes the j‐th constraint penalty to be considered, and the weight λ_j indicates the degree of confidence in the j‐th constraint. Examples of related work can be found in [Estrada et al., 2005, Levinson et al., 2007, Lu et al., 2017, Hoang et al., 2017].
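A simple way to see the effect of a soft penalty as in (1.9): adding a quadratic constraint penalty to a quadratic measurement cost pulls the minimizer toward the constraint as the penalty weight grows. The scalar example below is illustrative:

```python
# Measurement cost pulls x toward the measurement z; the soft
# constraint penalizes distance from the constraint value c:
#   F(x) = (x - z)^2 + lam * (x - c)^2
# whose minimizer is the weighted average (z + lam * c) / (1 + lam).
z, c = 0.0, 1.0

def minimizer(lam):
    return (z + lam * c) / (1.0 + lam)

x_soft = minimizer(1.0)    # balanced weight: halfway between z and c
x_hard = minimizer(1e6)    # very large weight approximates a hard constraint
```

Sweeping the weight from 0 to infinity moves the solution continuously from the unconstrained minimizer z to the constraint value c, which is why soft penalties tolerate modeling errors while hard constraints do not.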

Besides the constraints discussed previously (so‐called state constraints in the literature), measurement constraints can be seen in practice. One example would be the constant norm constraint on measurement vectors for translationally static but rotating magnetometers. Unfortunately, the current literature pays less attention to measurement constraints than state constraints. In Chapters 4 and 5, by presenting a unified representation containing state space and measurement space, both state constraints and measurement constraints are considered in the proposed geometric pose estimation framework.

1.2.3 Research Focus in Multimodal Pose Estimation

In the first part of this book, we focus primarily on two topics in designing new frameworks of multimodal pose estimation.

Toward Drift Reduction in Visual Odometry As low‐cost sensors with abundant visual information, cameras are frequently seen in ground vehicles, where visual odometry (VO) has been widely used for autonomous vehicle pose estimation thanks to its constantly improving performance. However, several challenges still need to be resolved. Error accumulation or the so‐called drift issue is a challenge preventing VO from being used in long‐range navigation. The existing solutions for enhancing VO performance involve (i) improving VO components including feature detection, matching, outlier removal, and pose optimization; and (ii) seeking assistance from other approaches or databases [Shen et al., 2014] such as LIDAR [Zhang and Singh, 2015], global positioning systems (GPSs) [Agrawal and Konolige, 2006], digital maps [Jiang et al., 2017, Alonso et al., 2012], and inertial navigation systems (INS) [Bloesch et al., 2015, Mourikis and Roumeliotis, 2007, Lobo and Dias, 2003, Wang et al., 2014, Falquez et al., 2016, Leutenegger et al., 2015, Lupton and Sukkarieh, 2012, Forster et al., 2017, Piniés et al., 2007, Li and Mourikis, 2013, Santoso et al., 2017]. Benefiting from the self‐contained property, many visual‐inertial odometry (VIO) schemes have been proposed to reduce drift in VO. Loosely coupled methods [Mourikis and Roumeliotis, 2007, Falquez et al., 2016] fuse data at a higher level, where data from the inertial measurement unit (IMU) and VO are fused after being obtained; tightly coupled methods, which consider not only poses but features as state variables in estimation, generally achieve greater precision but also suffer from higher computational costs. There are two main streams in tightly coupled VIO: on the one hand, a filter‐based method is proposed to estimate egomotion, camera extrinsic parameters, and the additive IMU biases in Bloesch et al. (2015). 
On the other hand, with optimization‐based methods, pose estimation can be formulated as a non‐linear least‐square optimization problem that aims to minimize a cost function containing inertial error terms and reprojection error simultaneously. Leutenegger et al. (2015