Pedestrian Inertial Navigation with Self-Contained Aiding

Andrei M. Shkel

Description

Explore an insightful summary of the major self-contained aiding technologies for pedestrian navigation from established and emerging leaders in the field. Pedestrian Inertial Navigation with Self-Contained Aiding delivers a comprehensive and broad treatment of self-contained aiding techniques in pedestrian inertial navigation. The book combines an introduction to the general concept of navigation and major navigation and aiding techniques with more specific discussions of topics central to the field, as well as an exploration of the future of the field: the Ultimate Navigation Chip (uNavChip) technology. The most commonly used implementation of pedestrian inertial navigation, strapdown inertial navigation, is discussed at length, as are the mechanization, implementation, error analysis, and adaptivity of zero-velocity update aided inertial navigation algorithms. The book demonstrates the implementation of ultrasonic sensors, ultra-wideband (UWB) sensors, and magnetic sensors. Ranging techniques are considered as well, including both foot-to-foot ranging and inter-agent ranging, and learning algorithms, navigation with signals of opportunity, and cooperative localization are discussed.
Readers will also benefit from the inclusion of:

* A thorough introduction to the general concept of navigation as well as major navigation and aiding techniques
* An exploration of inertial navigation implementation, Inertial Measurement Units, and strapdown inertial navigation
* A discussion of error analysis in strapdown inertial navigation, as well as the motivation of aiding techniques for pedestrian inertial navigation
* A treatment of the zero-velocity update (ZUPT) aided inertial navigation algorithm, including its mechanization, implementation, error analysis, and adaptivity

Perfect for students and researchers in the field who seek a broad understanding of the subject, Pedestrian Inertial Navigation with Self-Contained Aiding will also earn a place in the libraries of industrial researchers and industrial marketing analysts who need a self-contained summary of the foundational elements of the field.

Page count: 285

Year of publication: 2021



Table of Contents

Cover

Title Page

Copyright

Author Biographies

List of Figures

List of Tables

1 Introduction

1.1 Navigation

1.2 Inertial Navigation

1.3 Pedestrian Inertial Navigation

1.4 Aiding Techniques for Inertial Navigation

1.5 Outline of the Book

References

2 Inertial Sensors and Inertial Measurement Units

2.1 Accelerometers

2.2 Gyroscopes

2.3 Inertial Measurement Units

2.4 Conclusions

References

3 Strapdown Inertial Navigation Mechanism

3.1 Reference Frame

3.2 Navigation Mechanism in the Inertial Frame

3.3 Navigation Mechanism in the Navigation Frame

3.4 Initialization

3.5 Conclusions

References

4 Navigation Error Analysis in Strapdown Inertial Navigation

4.1 Error Source Analysis

4.2 IMU Error Reduction

4.3 Error Accumulation Analysis

4.4 Conclusions

References

5 Zero‐Velocity Update Aided Pedestrian Inertial Navigation

5.1 Zero‐Velocity Update Overview

5.2 Zero‐Velocity Update Algorithm

5.3 Parameter Selection

5.4 Conclusions

References

6 Navigation Error Analysis in the ZUPT‐Aided Pedestrian Inertial Navigation

6.1 Human Gait Biomechanical Model

6.2 Navigation Error Analysis

6.3 Verification of Analysis

6.4 Limitations of the ZUPT Aiding Technique

6.5 Conclusions

References

7 Navigation Error Reduction in the ZUPT‐Aided Pedestrian Inertial Navigation

7.1 IMU‐Mounting Position Selection

7.2 Residual Velocity Calibration

7.3 Gyroscope G‐Sensitivity Calibration

7.4 Navigation Error Compensation Results

7.5 Conclusions

References

8 Adaptive ZUPT‐Aided Pedestrian Inertial Navigation

8.1 Floor Type Detection

8.2 Adaptive Stance Phase Detection

8.3 Conclusions

References

9 Sensor Fusion Approaches

9.1 Magnetometry

9.2 Altimetry

9.3 Computer Vision

9.4 Multiple‐IMU Approach

9.5 Ranging Techniques

9.6 Conclusions

References

10 Perspective on Pedestrian Inertial Navigation Systems

10.1 Hardware Development

10.2 Software Development

10.3 Conclusions

References

Index

End User License Agreement




IEEE Press

445 Hoes Lane

Piscataway, NJ 08854

IEEE Press Editorial Board

Ekram Hossain, Editor in Chief

Jón Atli Benediktsson
Xiaoou Li
Jeffrey Reed
Anjan Bose
Lian Yong
Diomidis Spinellis
David Alan Grier
Andreas Molisch
Sarah Spurgeon
Elya B. Joffe
Saeid Nahavandi
Ahmet Murat Tekalp

Pedestrian Inertial Navigation with Self‐Contained Aiding

Yusheng Wang and Andrei M. Shkel

University of California, Irvine


IEEE Press Series on Sensors
Vladimir Lumelsky, Series Editor

Copyright © 2021 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey.

Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per‐copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750‐8400, fax (978) 750‐4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748‐6011, fax (201) 748‐6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762‐2974, outside the United States at (317) 572‐3993 or fax (317) 572‐4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging‐in‐Publication Data applied for:

ISBN: 9781119699552

Cover Design: Wiley

Cover Image: © Production Perig/Shutterstock

Author Biographies

Yusheng Wang, PhD, received the B.Eng. degree (Hons.) in engineering mechanics from Tsinghua University, Beijing, China, in 2014 and the Ph.D. degree in mechanical and aerospace engineering from the University of California, Irvine, CA, in 2020. His research interests include the development of silicon‐based and fused quartz‐based MEMS resonators and gyroscopes, and pedestrian inertial navigation with sensor fusion. He is currently working at SiTime Corporation as a MEMS Development Engineer.

Andrei M. Shkel, PhD, has been on the faculty of the University of California, Irvine since 2000, and served as a Program Manager in the Microsystems Technology Office of DARPA. His research interests are reflected in over 300 publications, 42 patents, and 3 books. Dr. Shkel has served on a number of editorial boards, including as Editor of the IEEE/ASME Journal of Microelectromechanical Systems (JMEMS) and of the Journal of Gyroscopy and Navigation, and as the founding chair of the IEEE International Symposium on Inertial Sensors and Systems. He was awarded the Office of the Secretary of Defense Medal for Exceptional Public Service in 2013, and the 2009 IEEE Sensors Council Technical Achievement Award. He is the President of the IEEE Sensors Council and an IEEE Fellow.

List of Figures

Figure 1.1 A schematic of gimbal system.

Figure 1.2 Comparison of (a) gimbal inertial navigation algorithm and (b) strapdown inertial navigation algorithm.

Figure 1.3 A comparison of (a) an IMU developed for the Apollo missions in the 1960s.

Figure 2.1 The basic structure of an accelerometer.

Figure 2.2 Schematics of accelerometers based on SAW devices [11], vibrating beams [8], and BAW devices [9].

Figure 2.3 Typical performances and applications of different gyroscopes.

Figure 2.4 Schematics of a gyroscope and its different configurations [24]–[27].

Figure 2.5 Ideal response of a gyroscope operated in (a) open‐loop mode, (b) force‐to‐rebalance mode, and (c) whole angle mode, respectively.

Figure 2.6 Schematics of two typical IMU assembly architectures: (a) cubic structure and (b) stacking structure.

Figure 2.7 Different mechanical structures of three‐axis gyroscopes.

Figure 2.8 Examples of miniaturized IMU assembly architectures by MEMS fabrication: (a) folded structure and (b) stacking structure.

Figure 3.1 Block diagram of strapdown inertial navigation mechanism in the i‐frame.

Figure 3.2 Block diagram of strapdown inertial navigation mechanism in the n‐frame.

Figure 3.3 Relation between the gyroscope bias and yaw angle estimation error.

Figure 4.1 Common error types in inertial sensor readouts. (a) Noise, (b) bias, (c) scale factor error, (d) nonlinearity, (e) dead zone, (f) quantization.

Figure 4.2 A schematic of log–log plot of Allan deviation.

Figure 4.3 A schematic of the IMU assembly error.

Figure 4.4 Illustration of the two components of the IMU assembly error: non‐orthogonality and misalignment.

Figure 4.5 Two‐dimensional strapdown inertial navigation system in a fixed frame. Two accelerometers and one gyroscope are needed.

Figure 4.6 Propagation of navigation error with different grades of IMUs.

Figure 5.1 Relation between the volumes and the navigation error in five minutes of IMUs of different grades. The dashed box in the lower left corner indicates the desired performance for the pedestrian inertial navigation, showing the need for aiding techniques.

Figure 5.2 Comparison of the estimated velocity along the North direction and the estimated trajectory for navigation with and without ZUPT aiding.

Figure 5.3 Diagram of the ZUPT‐aided pedestrian inertial navigation algorithm.

Figure 5.4 Velocity propagation along three orthogonal directions during the 600 stance phases.

Figure 5.5 Distribution of the final velocity along three orthogonal directions during 600 stance phases. Standard deviation is extracted as the average velocity uncertainty during the stance phase.

Figure 6.1 (a) Interpolation of joint movement data and (b) simplified human leg model.

Figure 6.2 Human ambulatory gait analysis. The light gray dots are the stationary points in different phases of one gait cycle.

Figure 6.3 Velocity of the parameterized trajectory. A close match is demonstrated and discontinuities were eliminated.

Figure 6.4 Displacement of the parameterized trajectory. A close match is demonstrated for the displacement along the horizontal direction. The difference between the displacements along the vertical direction guarantees displacement continuity between the gait cycles.

Figure 6.5 A typical propagation of errors in attitude estimations in ZUPT‐aided pedestrian inertial navigation. The solid lines are the actual estimation errors, and the dashed lines are the uncertainty of estimation. Azimuth angle (heading) is the only important EKF state that is not observable from zero‐velocity measurements.

Figure 6.6 Effects of ARW of the gyroscopes on the velocity and angle estimation errors in the ZUPT‐aided inertial navigation algorithm.

Figure 6.7 Effects of VRW of the accelerometers on the velocity and angle estimation errors in the ZUPT‐aided inertial navigation algorithm.

Figure 6.8 Effects of RRW of the gyroscopes on the velocity and angle estimation errors in the ZUPT‐aided inertial navigation algorithm.

Figure 6.9 Relation between RRW of gyroscopes and the position estimation uncertainties.

Figure 6.10 Allan deviation plot of the IMU used in this study. The result is compared to the datasheet specs [13].

Figure 6.11 The navigation error results of 40 trajectories. The averaged time duration is about 110 seconds, including the initial calibration. Note that scales for the two axes are different to highlight the effect of error accumulation.

Figure 6.12 Ending points of 40 trajectories. All data points are in a rectangular area with the length of 2.2 m and width of 0.8 m.

Figure 6.13 Autocorrelations of the three orthogonal components of the innovation sequence during ZUPT‐aided pedestrian inertial navigation.

Figure 7.1 Possible IMU‐mounting positions.

Figure 7.2 Noise characteristics of the IMUs used in the study.

Figure 7.3 Comparison of averaged IMU data and ZUPT states from IMUs mounted on the forefoot and behind the heel. Stance phase is identified when ZUPT state is equal to 1.

Figure 7.4 Navigation error of 34 tests of the same circular trajectory.

Figure 7.5 Comparison of estimated trajectories and innovations from IMU mounted at the forefoot (a) and the heel (b).

Figure 7.6 Experimental setup to record the motion of the foot during the stance phase.

Figure 7.7 Velocity of the foot along three directions during a gait cycle. The thick solid lines are the averaged velocities along three directions.

Figure 7.8 Zoomed‐in view of the velocity of the foot during the stance phase. The light gray dashed lines correspond to zero‐velocity state, and the dark gray dashed lines are the range of the velocity distribution.

Figure 7.9 Panel (a) shows the test statistics of the same 70 steps recorded previously. The thick solid line is an averaged value. Panel (b) shows the residual velocity of the foot along the trajectory during the stance phase. The inner, middle, and outer dashed lines correspond to three increasing threshold levels.

Figure 7.10 Relation between the underestimate of trajectory length and the ZUPT detection threshold. The thick solid line is the result of the previous analysis, and the thinner lines are experimental results from 10 different runs.

Figure 7.11 The solid line is an estimated trajectory, and the dashed line is an analytically generated trajectory with the heading angle increasing at a constant rate. Note that the scales for the two axes are different. The inset shows the rate at which the estimated heading angle increases.

Figure 7.12 (a) Experimental setup to statically calibrate IMU; (b) experimental setup to measure the relation between gyroscope g‐sensitivity and acceleration frequency [13].

Figure 7.13 Relation between the gyroscope g‐sensitivity and the vibration frequency obtained from three independent measurements. The dashed line is the gyroscope g‐sensitivity measured in static calibration. The inset is the FFT of the accelerometer readout during a typical two‐minute walk.

Figure 7.14 Comparison of trajectories with and without systematic error compensation. Note that the scales for the two axes are different.

Figure 7.15 Comparison of the end points with and without systematic error compensation. The dashed lines are the boundaries of the results.

Figure 8.1 Schematics of the algorithm discussed in this chapter. The numbers (1)–(4) indicate the four main steps in the algorithm.

Figure 8.2 An example of IMU data partition. Each partition (indicated by different brightness) starts at toe‐off of the foot.

Figure 8.3 Distribution of eigenvalues of the centered data matrix after conducting the singular value decomposition.

Figure 8.4 Relation between the misclassification rate, PCA output dimension, and number of neurons in the hidden layer.

Figure 8.5 Confusion matrices of the floor type identification results with the PCA output dimension of 3 and 10, respectively. Classes are (1) walking on hard floor, (2) walking on grass, (3) walking on sand, (4) walking upstairs, and (5) walking downstairs.

Figure 8.6 Distribution of the first two principal components of the available data.

Figure 8.7 Schematics of the algorithm used in this study. The part in gray shows its difference from the standard multiple‐model Kalman filter.

Figure 8.8 Navigation results with and without the floor type identification. The dashed line is the ground truth.

Figure 8.9 The solid line is a typical test statistic for different walking and running paces. The dark gray dashed lines show the test statistic levels during the stance phase with different gait paces, and the light gray dashed line shows the test statistic level when standing still.

Figure 8.10 The relation between the shock level and the minimum test statistic in the same gait cycle. The dots correspond to data from different gait cycles, the solid line is a fitted curve, and the dashed lines are confidence intervals.

Figure 8.11 The dashed lines in dark and light gray are adaptive thresholds with and without an artificial holding, respectively. The dots indicate the stance phases detected by the threshold without holding, while the stance phases detected by the threshold with holding are shown by the gray boxes.

Figure 8.12 Subfigures (a) through (d) show the position propagation, the specific force of the IMU, the generalized likelihood ratio test, and the navigation results of the experiment, respectively. Note that the scalings of the two axes in (d) are different.

Figure 8.13 Relation between the navigation RMSE and fixed threshold level is shown by the solid line. The navigation RMSE achieved by adaptive threshold is shown by the dashed line.

Figure 9.1 Lab‐on‐Shoe platform. Schematic of the vision‐based foot‐to‐foot relative position measurement.

Figure 9.2 Schematic of the comparison of (a) one‐way ranging and (b) two‐way ranging.

Figure 9.3 Scattering of the sound wave deteriorates the accuracy of the measurement.

Figure 9.4 “T” stands for transmitter and “R” stands for receiver. Only in case (c) will the receiver receive the signal.

Figure 9.5 (a) Experimental setup of the illustrative experiment; (b) Ranging data are collected with transmitter and receiver aligned; (c) and (d) Ranging data are not collected with transmitter and receiver not aligned. Dashed lines in (b)–(d) are directions of transmission of the ultrasonic wave.

Figure 9.6 A comparison of results of different aiding techniques for indoor environment.

Figure 9.7 A comparison of different aiding techniques for self‐contained navigation. The dashed line is the ground truth. The estimated ending points are denoted by the dots. The total navigation length was around 420 m.

Figure 10.1 Our perspective of pedestrian inertial navigation system: uNavChip [2,3].

List of Tables

Table 1.1 Summary of non‐self‐contained aiding techniques.

Table 4.1 Classification of IMU performances in terms of bias instability.

Table 4.2 List of some commercial IMUs and their characteristics.

Table 4.3 Propagation of position errors in 2D strapdown inertial navigation due to deterministic errors.

Table 4.4 Propagation of position errors in 2D strapdown inertial navigation due to stochastic errors.

Table 7.1 Possible error sources in the ZUPT‐aided pedestrian inertial navigation.

Table 7.2 Stance phase analysis summary with different floor types.

Table 7.3 Stance phase analysis summary with different trajectories.

Table 7.4 Stance phase analysis summary with different subjects.

1 Introduction

1.1 Navigation

Navigation is the process of planning, recording, and controlling the movement of a craft or vehicle from one place to another [1]. It is an ancient subject but also a complex science, and a variety of methods have been developed for different circumstances, such as land navigation, marine navigation, aeronautic navigation, and space navigation.

One of the most straightforward methods is to use landmarks. Generally speaking, a landmark can be anything with known coordinates in a reference frame. For example, any position on the surface of the Earth can be described by its latitude and longitude, defined by the Earth's equator and the Greenwich meridian. The landmarks can be hills and rivers in the wilderness, streets and buildings in urban areas, or lighthouses and even celestial bodies when navigating at sea. Other modern options, such as radar stations, satellites, and cellular towers, can all be utilized as landmarks. The position of the navigator can be extracted by measuring the distance to, and/or the orientation with respect to, the landmarks. For example, celestial navigation is a well‐established technique for navigation at sea. In this technique, a “sight,” or angular distance, is measured between a celestial body, such as the Sun, the Moon, or Polaris, and the horizon. This measurement, combined with knowledge of the motion of the Earth and the time of measurement, defines both the latitude and longitude of the navigator [2]. In the case of satellite navigation, a satellite constellation composed of many satellites with synchronized clocks and known positions, continuously transmitting radio signals, is needed. The receiver can measure the distance between itself and each satellite by comparing the time at which the signal was transmitted by the satellite with the time at which it was received. A minimum of four satellites must be in view of the receiver for it to compute the time and its location [3]. Navigation methods of this type, which utilize observations of landmarks with known positions to directly determine a position, are called position fixing. In position fixing, navigation accuracy depends only on the accuracy of the measurements and of the “map” (knowledge of the landmarks). Therefore, navigation accuracy remains at a constant level as navigation time increases, as long as observations of the landmarks are available.
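The four-satellite requirement mentioned above can be made concrete with a small numerical sketch: given pseudoranges to four satellites with known positions, a Gauss-Newton iteration recovers the three receiver coordinates together with the receiver clock bias. All satellite and receiver values below are hypothetical, chosen only for illustration, and the code is a simplified sketch rather than a real GNSS solver.

```python
import numpy as np

# Hypothetical satellite positions (meters, Earth-centered frame) and an
# assumed receiver position and clock bias (bias expressed in meters).
sats = np.array([
    [15600e3,  7540e3, 20140e3],
    [18760e3,  2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
    [19170e3,   610e3, 18390e3],
])
true_pos = np.array([-40e3, 12e3, 6370e3])
true_bias = 85.0
# Pseudorange = geometric range + receiver clock bias.
pseudoranges = np.linalg.norm(sats - true_pos, axis=1) + true_bias

# Gauss-Newton iteration on the four pseudorange equations: solve
# simultaneously for the three coordinates and the clock bias.
x = np.zeros(4)  # [x, y, z, clock_bias], initialized at the Earth's center
for _ in range(10):
    ranges = np.linalg.norm(sats - x[:3], axis=1)
    residual = pseudoranges - (ranges + x[3])
    # Jacobian of the predicted pseudoranges: unit line-of-sight vectors
    # for the position part, and 1 for the clock-bias part.
    J = np.hstack([(x[:3] - sats) / ranges[:, None], np.ones((4, 1))])
    x += np.linalg.solve(J.T @ J, J.T @ residual)
```

With exactly four satellites the system is exactly determined; when more satellites are in view, the same normal-equation step performs a least-squares fit, which is why additional satellites improve accuracy.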

The idea of position fixing is straightforward, but the disadvantage is also obvious. Observation of landmarks may not always be available and is susceptible to interference and jamming. For example, no celestial measurement is available in foggy or cloudy weather; radio signals suffer from diffraction, refraction, and Non‐Line‐Of‐Sight (NLOS) transmission; satellite signals may be jammed or spoofed. In addition, a known “map” is required, which makes this type of navigation infeasible in a completely unknown environment.

An alternative navigation type is called dead reckoning. The phrase “dead reckoning” probably dates from the seventeenth century, when sailors calculated their location at sea from their speed and heading. Nowadays, dead reckoning refers to the process in which the current state (position, velocity, and orientation) of the system is calculated based on knowledge of its initial state and measurements of speed and heading [4]. Velocity is decomposed into three orthogonal directions based on heading and then multiplied by the elapsed time to obtain the position change. The current position is then calculated by summing the position change and the initial position. A major advantage of dead reckoning over position fixing is that it does not require observations of landmarks, so the system is less susceptible to environmental interruptions. On the other hand, dead reckoning is subject to cumulative errors. For example, in automotive navigation, the odometer calculates the traveled distance by counting the number of rotations of a wheel. However, slipping of the wheel or a flat tire will result in a difference between the assumed and actual traveled distance, and this error accumulates and cannot be measured or compensated if no additional information is provided. As a result, navigation error accumulates as navigation time increases.
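As a minimal illustration of the velocity decomposition described above, the following sketch propagates a 2D position through a sequence of speed-and-heading legs; the function name and the leg values are hypothetical.

```python
import math

def dead_reckon(x0, y0, legs):
    """Propagate a 2D position through a list of (speed, heading, duration)
    legs. Heading is in degrees clockwise from North, as read off a compass;
    x is the East coordinate and y is the North coordinate."""
    x, y = x0, y0
    for speed, heading_deg, dt in legs:
        h = math.radians(heading_deg)
        x += speed * math.sin(h) * dt  # East component of velocity * time
        y += speed * math.cos(h) * dt  # North component of velocity * time
    return x, y

# Hypothetical legs: 2 m/s due North for 10 s, then 2 m/s due East for 5 s.
position = dead_reckon(0.0, 0.0, [(2.0, 0.0, 10.0), (2.0, 90.0, 5.0)])
```

Note that any bias in the assumed speed or heading is integrated into the position on every leg, which is exactly the cumulative-error behavior noted above.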

Inertial navigation is a widely used dead reckoning method, in which inertial sensors (accelerometers and gyroscopes) are employed to navigate in the inertial frame. The major advantage of inertial navigation is that it is based on Newton's laws of motion and imposes no extra assumptions on the system. As a result, inertial navigation is impervious to interference and jamming, and it is applicable in almost all navigation scenarios [5].

1.2 Inertial Navigation

The operation of inertial navigation relies on measurements of accelerations and angular rates, which are provided by accelerometers and gyroscopes, respectively. In a typical Inertial Measurement Unit (IMU), three accelerometers and three gyroscopes are mounted orthogonally to each other to measure the acceleration and angular rate components along three perpendicular directions. To keep track of the orientation of the system with respect to the inertial frame, three gyroscopes are needed. The gyroscopes measure the angular rates along three orthogonal directions; these rates are then integrated to derive the orientation of the system. The readout of the accelerometers is called the specific force, which is composed of two parts: the gravity vector and the acceleration vector. According to the Equivalence Principle of the General Theory of Relativity, the inertial force and the gravitational force are equivalent and cannot be separated by the accelerometers. Therefore, the orientation information obtained from the gyroscopes is needed to estimate the gravity vector. With the orientation information, we can subtract the gravity vector from the specific force to obtain the acceleration vector, and rotate the acceleration vector from the system frame to the inertial frame before performing integration. Given the accelerations of the system, the change of position is calculated by two consecutive integrations of the acceleration with respect to time.
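The sequence of operations just described (rotate the specific force into the navigation frame, remove gravity, then integrate twice) can be sketched as a single update step. This is a simplified illustration under assumed conventions (a navigation frame with the z-axis pointing up, gravity of 9.81 m/s², and simple Euler integration), not the book's mechanization; the rotation matrix maintained by integrating the gyroscope outputs is taken as given.

```python
import numpy as np

G_NAV = np.array([0.0, 0.0, -9.81])  # gravity in the navigation frame, z-axis up (assumed)

def strapdown_step(pos, vel, R_nb, f_b, dt):
    """One simplified strapdown update.

    R_nb: body-to-navigation rotation matrix, obtained by integrating
          the gyroscope angular rates (not shown here).
    f_b:  specific force measured by the accelerometers, body frame.
    """
    a_n = R_nb @ f_b + G_NAV   # remove gravity to recover true acceleration
    vel = vel + a_n * dt       # first integration: velocity
    pos = pos + vel * dt       # second integration: position
    return pos, vel

# A stationary, level IMU reads the reaction to gravity, +9.81 m/s^2 along
# the body z-axis, so position and velocity should remain at zero.
pos, vel = np.zeros(3), np.zeros(3)
for _ in range(100):
    pos, vel = strapdown_step(pos, vel, np.eye(3), np.array([0.0, 0.0, 9.81]), 0.01)
```

Because every acceleration sample passes through two integrations, a small constant sensor bias produces a position error that grows quadratically with time, which motivates the error discussion that follows.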

The earliest concept of an inertial sensor was proposed by Bohnenberger in the early nineteenth century [6]. Then, in 1856, the famous Foucault pendulum experiment was demonstrated as the first rate‐integrating gyroscope [7], whose output is proportional to the change of angle, instead of the angular rate as in the case of most commercial gyroscopes. However, the first implementation of an inertial navigation system did not occur until the 1930s on V2 rockets, and the wide application of inertial navigation started in the late 1960s [8]. In early implementations of inertial navigation, the inertial sensors were fixed on a stabilized platform supported by a gimbal set with rotary joints allowing rotation in three dimensions (Figure 1.1). The gyroscope readouts were fed back to torque motors that rotated the gimbals so that any external rotational motion was canceled out and the orientation of the platform did not change. This implementation is still in common use where very accurate navigation data are required and the weight and volume of the system are not of great concern, such as in submarines. However, gimbal systems are large and expensive due to their complex mechanical and electrical infrastructure. In the late 1970s, strapdown systems became possible, in which the inertial sensors are rigidly fixed, or “strapped down,” to the system. In this architecture, the mechanical complexity of the platform is greatly reduced at the cost of a substantial increase in the computational complexity of the navigation algorithm and a higher dynamic range required of the gyroscopes. Recent developments in microprocessor capabilities and suitable sensors have allowed such designs to become reality. The smaller size, lighter weight, and better reliability of these systems further broaden the applications of inertial navigation. A comparison of the algorithmic implementations of the gimbal and strapdown systems is shown in Figure 1.2.

Figure 1.1 A schematic of gimbal system.

Source: Woodman [5].

Figure 1.2 Comparison of (a) gimbal inertial navigation algorithm and (b) strapdown inertial navigation algorithm.

Inertial navigation, as a dead reckoning approach, also suffers from error accumulation. In the inertial navigation algorithm, not only the accelerations and angular rates but also all the measurement noise is integrated and accumulated. As a result, unlike the position fixing type of navigation, the navigation accuracy deteriorates as navigation time increases. Noise sources include fabrication imperfections of the individual inertial sensors, assembly errors of the entire IMU, electronic noise, environment‐related errors (temperature, shock, vibration, etc.), and numerical errors. Thus, inertial navigation imposes challenging demands on the system, in terms of the level of errors, to achieve long‐term