Statistical Signal Processing in Engineering - Umberto Spagnolini - E-Book


Umberto Spagnolini

Description

A problem-solving approach to statistical signal processing for practicing engineers, technicians, and graduate students. This book takes a pragmatic approach to solving a set of common problems that engineers and technicians encounter when processing signals. In writing it, the author drew on his vast theoretical and practical experience in the field to provide a quick-solution manual for technicians and engineers, offering field-tested solutions to most problems they may encounter. At the same time, the book delineates the basic concepts and applied mathematics underlying each solution, so that readers can go deeper into the theory, gain a better idea of a solution's limitations and potential pitfalls, and thus tailor the best solution to the specific engineering application. Uniquely, Statistical Signal Processing in Engineering can also function as a textbook for engineering graduates and post-graduates. Dr. Spagnolini, who has a quarter of a century of experience teaching graduate-level courses in digital and statistical signal processing methods, provides a detailed axiomatic presentation of the conceptual and mathematical foundations of statistical signal processing that will challenge students' analytical skills and motivate them to develop new applications on their own, or better understand the motivation underlying the existing solutions. Throughout the book, real-world examples demonstrate how powerful a tool statistical signal processing is in practice across a wide range of applications.
* Takes an interdisciplinary approach, integrating basic concepts and tools for statistical signal processing
* Informed by its author's vast experience as both a practitioner and teacher
* Offers a hands-on approach to solving problems in statistical signal processing
* Covers a broad range of applications, including communication systems, machine learning, wavefield and array processing, remote sensing, image filtering, and distributed computations
* Features numerous real-world examples from a wide range of applications showing the mathematical concepts involved in practice
* Includes MATLAB code for many of the experiments in the book

Statistical Signal Processing in Engineering is an indispensable working resource for electrical engineers, especially those working in the information and communication technology (ICT) industry. It is also an ideal text for engineering students at large, applied mathematics post-graduates, and advanced undergraduates in electrical engineering, applied statistics, and pure mathematics studying statistical signal processing.


Page count: 724

Publication year: 2017




Table of Contents

Cover

Title Page

List of Figures

List of Tables

Preface

List of Abbreviations

How to Use the Book

About the Companion Website

Prerequisites

Why are there so many matrixes in this book?

1 Manipulations on Matrixes

1.1 Matrix Properties

1.2 Eigen‐Decompositions

1.3 Eigenvectors in Everyday Life

1.4 Derivative Rules

1.5 Quadratic Forms

1.6 Diagonalization of a Quadratic Form

1.7 Rayleigh Quotient

1.8 Basics of Optimization

Appendix A: Arithmetic vs. Geometric Mean

2 Linear Algebraic Systems

2.1 Problem Definition and Vector Spaces

2.2 Rotations

2.3 Projection Matrixes and Data‐Filtering

2.4 Singular Value Decomposition (SVD) and Subspaces

2.5 QR and Cholesky Factorization

2.6 Power Method for Leading Eigenvectors

2.7 Least Squares Solution of Overdetermined Linear Equations

2.8 Efficient Implementation of the LS Solution

2.9 Iterative Methods

3 Random Variables in Brief

3.1 Probability Density Function (pdf), Moments, and Other Useful Properties

3.2 Convexity and Jensen Inequality

3.3 Uncorrelatedness and Statistical Independence

3.4 Real‐Valued Gaussian Random Variables

3.5 Conditional pdf for Real‐Valued Gaussian Random Variables

3.6 Conditional pdf in Additive Noise Model

3.7 Complex Gaussian Random Variables

3.8 Sum of Square of Gaussians: Chi‐Square

3.9 Order Statistics for N rvs

4 Random Processes and Linear Systems

4.1 Moment Characterizations and Stationarity

4.2 Random Processes and Linear Systems

4.3 Complex‐Valued Random Processes

4.4 Pole‐Zero and Rational Spectra (Discrete‐Time)

4.5 Gaussian Random Process (Discrete‐Time)

4.6 Measuring Moments in Stochastic Processes

Appendix A: Transforms for Continuous‐Time Signals

Appendix B: Transforms for Discrete‐Time Signals

5 Models and Applications

5.1 Linear Regression Model

5.2 Linear Filtering Model

5.3 MIMO Systems and Interference Models

5.4 Sinusoidal Signal

5.5 Irregular Sampling and Interpolation

5.6 Wavefield Sensing System

6 Estimation Theory

6.1 Historical Notes

6.2 Non‐Bayesian vs. Bayesian

6.3 Performance Metrics and Bounds

6.4 Statistics and Sufficient Statistics

6.5 MVU and BLU Estimators

6.6 BLUE for Linear Models

6.7 Example: BLUE of the Mean Value of Gaussian rvs

7 Parameter Estimation

7.1 Maximum Likelihood Estimation (MLE)

7.2 MLE for Gaussian Model

7.3 Other Noise Models

7.4 MLE and Nuisance Parameters

7.5 MLE for Continuous‐Time Signals

7.6 MLE for Circular Complex Gaussian

7.7 Estimation in Phase/Frequency Modulations

7.8 Least Squares (LS) Estimation

7.9 Robust Estimation

8 Cramér–Rao Bound

8.1 Cramér–Rao Bound and Fisher Information Matrix

8.2 Interpretation of CRB and Remarks

8.3 CRB and Variable Transformations

8.4 FIM for Gaussian Parametric Model

Appendix A: Proof of CRB

Appendix B: FIM for Gaussian Model

Appendix C: Some Derivatives for MLE and CRB Computations

9 MLE and CRB for Some Selected Cases

9.1 Linear Regressions

9.2 Frequency Estimation

9.3 Estimation of Complex Sinusoid

9.4 Time of Delay Estimation

9.5 Estimation of Max for Uniform pdf

9.6 Estimation of Occurrence Probability for Binary pdf

9.7 How to Optimize Histograms?

9.8 Logistic Regression

10 Numerical Analysis and Montecarlo Simulations

10.1 System Identification and Channel Estimation

10.2 Frequency Estimation

10.3 Time of Delay Estimation

10.4 Doppler‐Radar System by Frequency Estimation

11 Bayesian Estimation

11.1 Additive Linear Model with Gaussian Noise

11.2 Bayesian Estimation in Gaussian Settings

11.3 LMMSE Estimation and Orthogonality

11.4 Bayesian CRB

11.5 Mixing Bayesian and Non‐Bayesian

11.6 Expectation‐Maximization (EM)

Appendix Gaussian Mixture pdf

12 Optimal Filtering

12.1 Wiener Filter

12.2 MMSE Deconvolution (or Equalization)

12.3 Linear Prediction

12.4 LS Linear Prediction

12.5 Linear Prediction and AR Processes

12.6 Levinson Recursion and Lattice Predictors

13 Bayesian Tracking and Kalman Filter

13.1 Bayesian Tracking of State in Dynamic Systems

13.2 Kalman Filter (KF)

13.3 Identification of Time‐Varying Filters in Wireless Communication

13.4 Extended Kalman Filter (EKF) for Non‐Linear Dynamic Systems

13.5 Position Tracking by Multi‐Lateration

13.6 Non‐Gaussian Pdf and Particle Filters

14 Spectral Analysis

14.1 Periodogram

14.2 Parametric Spectral Analysis

14.3 AR Spectral Analysis

14.4 MA Spectral Analysis

14.5 ARMA Spectral Analysis

Appendix A: Which Sample Estimate of the Autocorrelation to Use?

Appendix B: Eigenvectors and Eigenvalues of Correlation Matrix

Appendix C: Property of Monic Polynomial

Appendix D: Variance of Pole in AR(1)

15 Adaptive Filtering

15.1 Adaptive Interference Cancellation

15.2 Adaptive Equalization in Communication Systems

15.3 Steepest Descent MSE Minimization

15.4 From Iterative to Adaptive Filters

15.5 LMS Algorithm and Stochastic Gradient

15.6 Convergence Analysis of LMS Algorithm

15.7 Learning Curve of LMS

15.8 NLMS Updating and Non‐Stationarity

15.9 Numerical Example: Adaptive Identification

15.10 RLS Algorithm

15.11 Exponentially‐Weighted RLS

15.12 LMS vs. RLS

Appendix A: Convergence in Mean Square

16 Line Spectrum Analysis

Why Line Spectrum Analysis?

16.1 Model Definition

16.2 Maximum Likelihood and Cramér–Rao Bounds

16.3 High‐Resolution Methods

17 Equalization in Communication Engineering

17.1 Linear Equalization

17.2 Non‐Linear Equalization

17.3 MIMO Linear Equalization

17.4 MIMO–DFE Equalization

18 2D Signals and Physical Filters

18.1 2D Sinusoids

18.2 2D Filtering

18.3 Diffusion Filtering

18.4 Laplace Equation and Exponential Filtering

18.5 Wavefield Propagation

Appendix A: Properties of 2D Signals

Appendix B: Properties of 2D Fourier Transform

Appendix C: Finite Difference Method for PDE‐Diffusion

19 Array Processing

19.1 Narrowband Model

19.2 Beamforming and Signal Estimation

19.3 DoA Estimation

20 Multichannel Time of Delay Estimation

20.1 Model Definition for ToD

20.2 High Resolution Method for ToD (L = 1)

20.3 Difference of ToD (DToD) Estimation

20.4 Numerical Performance Analysis of DToD

20.5 Wavefront Estimation: Non‐Parametric Method (L = 1)

20.6 Parametric ToD Estimation and Wideband Beamforming

Appendix A: Properties of the Sample Correlations

Appendix B: How to Delay a Discrete‐Time Signal?

Appendix C: Wavefront Estimation for 2D Arrays

21 Tomography

21.1 X‐ray Tomography

21.2 Algebraic Reconstruction Tomography (ART)

21.3 Reconstruction From Projections: Fourier Method

21.4 Traveltime Tomography

21.5 Internet (Network) Tomography

22 Cooperative Estimation

22.1 Consensus and Cooperation

22.2 Distributed Estimation for Arbitrary Linear Models (p>1)

22.3 Distributed Synchronization

Appendix Basics of Undirected Graphs

23 Classification and Clustering

23.1 Historical Notes

23.2 Classification

23.3 Classification of Signals in Additive Gaussian Noise

23.4 Bayesian Classification

23.5 Pattern Recognition and Machine Learning

23.6 Clustering

References

Index

End User License Agreement

List of Tables

Chapter 04

Table 4.1 Moments of a random process.

Table 4.2 Classification of a random process.

Table 4.3 Fourier transform properties.

Table 4.4 Properties of z‐transform.

Chapter 09

Table 9.1 Waveforms and effective bandwidths.

Chapter 14

Table 14.1 Time windows and variance reduction.

Table 14.2 Frequency smoothing and variance reduction.

Chapter 15

Table 15.1 LMS algorithms.

Table 15.2 Comparison between LMS and RLS.

Chapter 16

Table 16.1 Deterministic vs. stochastic ML in line spectrum analysis.

Chapter 18

Table 18.1 2D Fourier transform properties.

Chapter 20

Table 20.1 Taxonomy of ToD methods.

Chapter 23

Table 23.1 Taxonomy of principles and methods for classification and clustering.

Table 23.2 Classification metrics.

List of Illustrations

Chapter 01

Figure 1.1 Graph representing tables mutually interfering with inter‐table gain.

Figure 1.2 Graph for two disjoint and non‐interfering sets.

Figure 1.3 Mutual interference in a cellular communication system.

Figure 1.4 Quadratic form and its diagonalization.

Chapter 02

Figure 2.1 Geometric view of (A) and (A^T).

Figure 2.2 Radiometer model with constant thickness.

Figure 2.3 Rotation in a plane.

Figure 2.4 Projection onto the span of A.

Figure 2.5 Gram‐Schmidt procedure for QR decomposition.

Figure 2.6 Least squares solution of linear system.

Chapter 03

Figure 3.1 Conditional and marginal pdf from p(x, y).

Figure 3.2 Joint Gaussian pdf with correlation, and sample data (bottom).

Chapter 04

Figure 4.1 Factorization of autocorrelation sequences.

Figure 4.2 Periodic and sampled continuous‐time signals.

Chapter 05

Figure 5.1 Linear regression model.

Figure 5.2 Polynomial regression and sample prediction.

Figure 5.3 Identification problem.

Figure 5.4 Deconvolution problem.

Figure 5.5 MIMO system.

Figure 5.6 DSL system and MIMO system.

Figure 5.7 Wireless MIMO system from N mobile transmitters (e.g., smartphones, tablets, etc.) to N antennas.

Figure 5.8 Multiple cells MIMO systems.

Figure 5.9 Multiple cells and multiple antennas MIMO system.

Figure 5.10 Irregular sampling of weather measuring stations (yellow dots) [image from Google Maps].

Figure 5.11 Interpolation from irregular sampling.

Figure 5.12 Interpolation in 2D.

Figure 5.13 Radar system with backscattered waveforms from remote targets.

Figure 5.14 Doppler effect.

Chapter 06

Figure 6.1 Histogram of pdf and the approximating Gaussian pdf from moments (dashed line).

Figure 6.2 Variance of different unbiased estimators, and the Cramér–Rao bound CRB(θ).

Chapter 07

Figure 7.1 Illustration of MLE.

Figure 7.2 Example of frequency modulated sinusoid, and stationarity interval T.

Figure 7.3 Principle of phase locked loop as iterative phase minimization between x(t) and local x_o(t).

Figure 7.4 MSE versus parameterization p.

Figure 7.5 Data with outliers and the non‐quadratic penalty ϕ(ε).

Chapter 08

Figure 8.1 Average likelihood; the shaded insert is the log‐likelihood in a neighborhood.

Figure 8.2 CRB vs. θ_o from the example in Figure 8.1.

Figure 8.3 Compactness of CRB.

Figure 8.4 CRB and FIM.

Figure 8.5 Transformation of variance and CRB.

Chapter 09

Figure 9.1 Linear regression and impact of deviations on the variance.

Figure 9.2 Non‐uniform binning.

Chapter 10

Figure 10.1 Typical Montecarlo simulations.

Figure 10.2 MSE (upper and lower) for K=800 runs (dots) of Montecarlo simulations. Asymptotic CRB (10.1) is indicated by the dashed line.

Figure 10.3 MSE vs. SNR for frequency estimation: regions and the pdf.

Figure 10.4 Coarse/fine search of the peak of S(ω).

Figure 10.5 MSE vs. SNR from Matlab code, and CRB (solid line).

Figure 10.6 Parabolic regression over three points for ToD estimation.

Figure 10.7 MSE vs. SNR of ToD estimation for Tg=[10,20,40] samples and CRB (dashed lines).

Figure 10.8 Doppler‐radar system for speed estimation.

Figure 10.9 Periodogram peaks for Doppler shift for 8 km/h (black line) and 100 km/h (gray line) compared to the true values (dashed lines).

Figure 10.10 Speed‐error of K=100 Montecarlo runs (gray lines) and their mean value (black line with markers) vs. EM iterations for two choices of speed as in Figure 10.9: 8 km/h and 100 km/h.

Chapter 11

Figure 11.1 Bayesian estimation.

Figure 11.2 Binary communication with noise.

Figure 11.3 Pdf p_x(x) for binary communication.

Figure 11.4 MMSE estimator for binary valued signals.

Figure 11.5 Example of data affected by impulse noise (upper figure), and after removal of impulse noise (lower figure) by the MMSE estimator of the Gaussian samples (black line) compared to the true samples (gray line).

Figure 11.6 MMSE estimator for impulse noise modeled as Gaussian mixture.

Figure 11.7 Geometric view of LMMSE orthogonality.

Figure 11.8 Mapping between complete and incomplete (or data) set in the EM method.

Figure 11.9 ToD estimation of multiple waveforms by the EM method.

Figure 11.10 Mixture model of non‐Gaussian pdfs.

Chapter 12

Figure 12.1 Wiener deconvolution.

Figure 12.2 Linear MMSE prediction for WSS process.

Figure 12.3 Mean square prediction error vs. predictor length p for AR(N_a) random process x[n].

Figure 12.4 Whitening of linear prediction: PSD of ε_p[n] for increasing predictor length with AR(3).

Figure 12.5 Lattice structure of linear predictors.

Chapter 13

Figure 13.1 Bayesian tracking of the position of a moving point in a plane.

Figure 13.2 Dynamic systems model for the evolution of the state θ[n] (shaded area is not accessible).

Figure 13.3 Evolution of a‐posteriori pdf from a‐priori pdf in Bayesian tracking.

Figure 13.4 Linear dynamic model.

Figure 13.5 Multi‐lateration positioning.

Figure 13.6 Multi‐lateration from ranges with errors (solid lines).

Figure 13.7 Example of uncertainty regions from range errors.

Figure 13.8 Positioning in the presence of multipaths.

Chapter 14

Figure 14.1 Bias of periodogram for sinusoids.

Figure 14.2 Filter‐bank model of periodogram.

Figure 14.3 Bias and spectral leakage.

Figure 14.4 Rectangular, Bartlett (or triangular), and Hanning windows w[m] with M = 31, and their Fourier transforms on frequency axis ω/2π.

Figure 14.5 Segmentation of data into M disjoint blocks of N samples each.

Figure 14.6 WOSA spectral analysis.

Figure 14.7 Periodogram (WOSA method) for rectangular, Bartlett, and Hamming window.

Figure 14.8 Periodogram (WOSA method) for varying parameters and Bartlett window.

Figure 14.9 Model for AR spectral analysis.

Figure 14.10 Radial position of pole (upper figure) and dispersion (lower figure) for AR(1) (solid line) and AR(2) (dash dot), compared to the analytic model (14.10–14.11) (dashed gray line).

Figure 14.11 Scatter‐plot of poles of Montecarlo simulations for AR(1), AR(2), AR(3), AR(4) spectral analysis.

Figure 14.12 Model for MA spectral analysis.

Figure 14.13 Model for ARMA spectral analysis.

Chapter 15

Figure 15.1 Adaptive noise cancelling.

Figure 15.2 Multipath communication channel.

Figure 15.3 Adaptive identification and equalization in packet communication systems.

Figure 15.4 Adaptive filter identification.

Figure 15.5 Iterations along the principal axes.

Figure 15.6 Power spectral density S_x(ω) and iterations over the MSE surface.

Figure 15.7 Whitening in adaptive filters.

Figure 15.8 Convergence in mean and in mean square.

Figure 15.9 Excess MSE from fluctuations of δa[n].

Figure 15.10 Learning curve.

Figure 15.11 Optimization of the step‐size μ.

Figure 15.12 Filter response a_o (dots), and estimated samples over the interval (solid lines).

Figure 15.13 Estimated filter response vs. iterations for varying normalized step‐size μ.

Figure 15.14 Learning curve (Top) in linear scale (to visualize the convergence iterations) and (Bottom) in logarithmic scale (to visualize the excess MSE), shaded area is the anti‐causal part.

Chapter 16

Figure 16.1 Line spectrum analysis.

Figure 16.2 Segmentation of data into N samples.

Figure 16.3 Resolution of MUSIC (dashed line) vs. periodogram (solid line).

Figure 16.4 MUSIC for four lines from three sinusoids (setting of Figure 17.3).

Figure 16.5 Eigenvalues for setting in Figure 17.3.

Chapter 17

Figure 17.1 Communication system model: channel H(z) and equalization G(z).

Figure 17.2 Decision feedback equalization.

Figure 17.3 Decision levels.

Figure 17.4 Finite length MMSE–DFE.

Figure 17.5 Linear MIMO equalization.

Figure 17.6 MIMO–DFE equalization.

Chapter 18

Figure 18.1 2D signal s(x, y) and its 2D Fourier transform S(ω_x, ω_y).

Figure 18.2 2D sinusoid and its 2D Fourier transform (grey dots).

Figure 18.3 Product of 2D sinusoids.

Figure 18.4 Moiré pattern from two picket fences with distance δL.

Figure 18.5 Image (top‐left) and its 2D filtering.

Figure 18.6 Causal 2D filter.

Figure 18.7 Image acquisition system.

Figure 18.8 Blurring (left) of Figure 18.5 and Wiener‐filtering for different values of the noise level in deblurring.

Figure 18.9 Original data (top‐left), diffuse filtering (top‐right), noisy diffuse filtering (bottom‐right), and MMSE deconvolved data (bottom‐left).

Figure 18.10 2D filtering in space and time: diffusion of the temperature.

Figure 18.11 Impulse response of propagation and backpropagation.

Figure 18.12 Superposition of three sources in 2D, and the measuring line of the wavefield.

Figure 18.13 Excitation (top‐left), propagation (top‐right), noisy propagation (bottom‐right), and backpropagation (bottom‐left).

Figure 18.14 Exploding reflector model.

Figure 18.15 Propagating waves region.

Figure 18.16 Example of data measured along a line from multiple scattering points: experimental setup with Ricker waveform (left) and the measured data (right).

Figure 18.17 2D blade δ(x).

Figure 18.18 Amplitude varying 2D blade.

Figure 18.19 Rotation.

Figure 18.20 Cylindrical signal along y.

Figure 18.21 2D sampling and the corresponding Fourier transform.

Chapter 19

Figure 19.1 Array geometry.

Figure 19.2 Far‐field source s(t) and wavefront impinging onto the uniform linear array.

Figure 19.3 Narrowband plane wavefront of wavelength λ from DoA θ.

Figure 19.4 Rank‐deficient configurations.

Figure 19.5 Spatial resolution of the array and spatial frequency resolution.

Figure 19.6 Beamforming configuration.

Figure 19.7 Conventional beamforming and the array‐gain vs. angle pattern.

Figure 19.8 Array‐gain for MVDR beamforming for varying parameters.

Figure 19.9 Approximation of the array‐gain.

Chapter 20

Figure 20.1 Multichannel measurements for subsurface imaging.

Figure 20.2 Multichannel ToD model.

Figure 20.3 Resolution probability.

Figure 20.4 Generalized correlation method.

Figure 20.5 Truncation effects in DToD.

Figure 20.6 RMSE vs. SNR for DToD (solid lines) and CRB (dashed line).

Figure 20.7 Wavefront estimation from multiple DToD.

Figure 20.8 Curved and distorted wavefront (upper part), and measured data (lower part).

Figure 20.9 Phase‐function and the phase wrapped modulo 2π.

Figure 20.10 Example of 2D phase unwrapping.

Figure 20.11 Example of delayed waveforms (dashed lines) from three sources impinging on a uniform linear array.

Figure 20.12 Delay and sum beamforming for wideband signals.

Chapter 21

Figure 21.1 X‐ray transmitter (Tx) and sensor (Rx), the X‐ray is attenuated along the line according to the Beer–Lambert law.

Figure 21.2 X‐ray tomographic experiment.

Figure 21.3 Parallel plane acquisition system and projection.

Figure 21.4 Emission tomography.

Figure 21.5 Reconstruction from Fourier transform of projections.

Figure 21.6 Filtered backprojection method.

Figure 21.7 Angular sampling in tomography.

Figure 21.8 Traveltime tomography.

Figure 21.9 Reflection tomography.

Figure 21.10 Internet tomography.

Chapter 22

Figure 22.1 Cooperative estimation among interconnected agents.

Figure 22.2 Multiple cameras cooperate with neighbors to image the complete object.

Figure 22.3 Example of random nodes mutually connected with others (radius of each node is proportional to the degree of each node).

Figure 22.4 Consensus iterations for the graph in Figure 22.3.

Figure 22.5 Cooperative estimation in distributed wireless system.

Figure 22.6 MSE vs. blocks in cooperative estimation (100 Montecarlo runs) and CRB (thick gray line). The insert details the behavior between sensing new measurements (i.e., collecting samples at a time) and exchanging the local estimates.

Figure 22.7 Synchronization in communication engineering.

Figure 22.8 Synchrony‐states.

Figure 22.9 Temporal drift of time‐references.

Figure 22.10 Exchange of time‐stamps between two nodes with propagation delay d_ij.

Figure 22.11 Network Time Protocol to estimate the propagation delay.

Figure 22.12 Example of undirected graph.

Chapter 23

Figure 23.1 Binary hypothesis testing.

Figure 23.2 Receiver operating characteristic (ROC) curves for varying SNR.

Figure 23.3 Classification for Gaussian distributions: linear decision boundary for

(upper figure) and quadratic decision boundary for

(lower figure).

Figure 23.4 Decision regions.

Figure 23.5 Correlation‐based classifier (or decoder).

Figure 23.6 Linear discriminant.

Figure 23.7 Support vectors and margins from training data.

Figure 23.8 Clustering methods: K‐means (left) and EM (right) vs. iterations.


Statistical Signal Processing in Engineering

Umberto Spagnolini

Politecnico di Milano, Italy

This edition first published 2018
© 2018 John Wiley & Sons Ltd

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Umberto Spagnolini to be identified as the author of this work has been asserted in accordance with law.

Registered Offices
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

Editorial Office
The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.

Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This work's use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® software.

Library of Congress Cataloging‐in‐Publication Data

Names: Spagnolini, Umberto, author.
Title: Statistical signal processing in engineering / Umberto Spagnolini.
Description: Hoboken, NJ: John Wiley & Sons, 2018. | Includes bibliographical references and index.
Identifiers: LCCN 2017021824 (print) | LCCN 2017038258 (ebook) | ISBN 9781119293958 (pdf) | ISBN 9781119293996 (ebook) | ISBN 9781119293972 (cloth)
Subjects: LCSH: Signal processing–Statistical methods.
Classification: LCC TK5102.9 (ebook) | LCC TK5102.9 .S6854 2017 (print) | DDC 621.382/23–dc23
LC record available at https://lccn.loc.gov/2017021824

Cover Design: Wiley
Cover Image: ©Vladystock/Shutterstock

 

 

 

To my shining star Laura

List of Tables

4.1 Moments of a random process.

4.2 Classification of a random process.

4.3 Fourier transform properties.

4.4 Properties of z‐transform.

9.1 Waveforms and effective bandwidths.

14.1 Time windows and variance reduction.

14.2 Frequency smoothing and variance reduction.

15.1 LMS algorithms.

15.2 Comparison between LMS and RLS.

16.1 Deterministic vs. stochastic ML in line spectrum analysis.

18.1 2D Fourier transform properties.

20.1 Taxonomy of ToD methods.

23.1 Taxonomy of principles and methods for classification and clustering.

23.2 Classification metrics.

Preface

This book is written with the intention of providing a pragmatic reference on statistical signal processing (SSP) for graduate/PhD students and engineers whose primary interest lies in a mix of theory and applications. It covers both traditional and more advanced SSP topics, including a brief review of algebra, signal theory, and random processes. The aim is to provide a high‐level, yet easily accessible, treatment of SSP fundamental theory with some selected applications.

The book is a non‐axiomatic introduction to the statistical processing of signals, while retaining all the rigor of SSP books. The non‐axiomatic approach is purposely chosen to capture the interest of a broad audience that would otherwise be afraid to approach an axiomatic textbook due to the perceived inadequacy of their background. The intention is to stimulate readers' interest by starting from applications drawn from daily life and from my personal and professional experience, demonstrating that the theory (still rigorous) is an essential tool for solving many problems. The treatment offers a unique approach to SSP: applications (somewhat simplified, but still realistic) and examples are interdisciplinary, with the aim of fostering interest in the theory. The writing style is layered in order to capture the interest of different readers, offering quick solutions for field engineers, detailed treatments to challenge the analytical skills of students, and insights for colleagues. Re‐reading the same pages, one can discover more, and experience a sense of growth by seeing something not seen before.

Why a book for engineers? Engineers are pragmatic, are requested to solve problems, and use signals to “infer the world” in a way that can then be compared with the actual ground‐truth. They need to quickly and reliably solve problems, and are accountable for the responsibility they take. Engineers have the attitude of looking for/finding quick‐and‐dirty solutions to problems, but they also need to have the skills to go deeper if necessary. Engineering students are mostly trained in this way, at graduate level up to PhD. To attract graduate/PhD engineering students, and ultimately engineers, to read another new technical book, it should contain some recipes based on solid theory, and it should convince them that the ideas therein help them to do better what they are already doing. This is a strong motivation to deal with a new theory. After delineating the solution, engineering readers can go deeper into the theory up to a level necessary to spot exceptions, limits, malfunctioning, etc. of the current solution and find that doing much better is possible, but perhaps expensive. They can then consciously make cost‐benefit tradeoffs, as in the nature of engineering jobs.

Even if this book is for engineers and engineering students, all scientists can benefit from the flavor of practical applications where SSP offers powerful problem‐solving tools. The pedagogical structure for schools/teachers aims to give a practical vision without losing the rigorous approach. The book is primarily for ICT engineers, these being the most conventional SSP readers, but also for mechanical, remote sensing, civil, environmental, and energy engineers. The focus is to be just deep enough in theory, and to provide the background that enables the reader to pursue books with an axiomatic approach to go deeper into theory and its exceptions, if necessary, or to read more on applications that are surely fascinating for their exceptions, methods, and even phenomenology.

Typical readers will be graduate and PhD students in engineering schools at large, or in applied science (physics, geophysics, astronomy), preferably with a basic background in algebra, random processes, and signal analysis. SSP practitioners are heavily involved in software development as this is the tool to achieve solutions to many of the problems. The book contains some exercises in the form of application examples with Matlab kernel‐code that can be easily adapted to solve broader problems.

I have no presumption to get all SSP knowledge into one book; rather, my focus is to give the flavor that SSP theory offers powerful tools to solve problems over broad applications, to stimulate the curiosity of readers at large, and to give guidelines on moving in depth into the SSP discipline when necessary. The book aims to stimulate the interest of readers who already have some basics to move into SSP practice. Every chapter condenses a specific professional expertise into a few pages; it scratches the surface of the problem and triggers the curiosity of the reader to go deeper through the essential bibliographical references provided therein. Of course, in 2017 (the time I am writing these notes), there is such easy accessibility to a broad literature, software, lecture notes, and web resources that my indexing of the bibliographical references would be partial and insufficient anyway. The book aims to give the reader enough critical tools to choose what is best for her/his interest among what is available.

In my professional life I have always been in the middle between applications and theory, and I have had to follow the steps illustrated in the enclosed block diagram. When facing a problem, it is important to interact with the engineers/scientists who have the deepest knowledge of the application problem itself, its approximations and bounds (stage‐A). They are necessary to help to set these limits into a mathematical/statistical framework. At the start, it is preferable to adopt the jargon of the application in order to find a good match with application people, not only for establishing (useful) personal relations, but also in order to understand the application‐related literature. Once the boundary conditions of the problem have been framed (stage‐A), one has to re‐frame the problem within the SSP discipline. In this second stage (B), one can access the most advanced methods in algebra, statistics, and optimization. The boundary between problem definition and its solution (stage‐C) is much less clearly defined than one might imagine. Except for some simple and common situations (but this happens very rarely, unfortunately!), the process is iterative, with refinements, introduction of new theory‐tools, or adaptations of tools developed elsewhere. No question, this stage needs experience in moving between application and theory, but it is the most stimulating one, where one is continuously learning from the application experts (stage‐A). Once the algorithm has been developed, it can be transferred back to the application (stage‐D), and this is the concluding interaction with the application‐related people. Tuning and refinement are part of the deal, and adaptation to some of the application jargon is of great help at this stage.
Sometimes, in the end, the SSP‐practitioner is seen as part of the application team with solid theory competences and, after many different applications, one has the impression that the SSP‐practitioner knows a little of everything (but this is part of the professional experience). I hope many readers will be lured into this fascinating and diverse problem‐solving loop, spanning multiple and various applications, as I have been myself. The book touches all these fields, and it contains some advice, practical rules, and warnings that stem from my personal experience. My greatest hope is to be of help to readers’ professional lives.

Umberto Spagnolini, August 2017

P.S. My teaching experience led to the style of the book, and I made an effort to highlight the intuition in each page and avoid too complex a notation; the price is sometimes an awkward notation. For instance, the use of asymptotic notation that is common in many parts is replaced by “” meaning any convenient limit indicated in the text. Errors and typos are part of the unavoidable noise in the text that all SSPers have to live with! I did my best to keep this noise as small as possible, but surely I failed somewhere.

“… all models are wrong, but some are useful”

 

(George E.P. Box, 1919–2013)

List of Abbreviations

implies or follows; used to simplify the equations

variable re‐assignment or asymptotic limit

convolution or complex conjugate (when superscript)

convolution of period N

is approximately, or is approximately equal to

AIC

Akaike Information Criterion

AR

Autoregressive

ARMA

Autoregressive Moving Average

ART

Algebraic Reconstruction Technique

AWGN

Additive White Gaussian Noise

BLUE

Best Linear Unbiased Estimator

CML

Conditional Maximum Likelihood

CRB

Cramér‐Rao Bound

CT

Computed Tomography

CW

Continuous Wave

DFE

Decision Feedback Equalizer

DFT

Discrete Fourier Transform

DoA

Direction of Arrival

DSL

Digital Subscriber Line

DToD

Differential Time of Delay

expectation operator

Eig

Eigenvalue decomposition

EKF

Extended Kalman Filter

EM

Expectation Maximization

ESPRIT

Estimation of Signal Parameters via Rotational Invariance Techniques

{.} or FT

Fourier transform

FIM

Fisher Information Matrix

FM

Frequency Modulation

FN

False Negative

FP

False Positive

GLRT

Generalized Likelihood Ratio Test

IID

Independent and Identically Distributed

IQML

Iterative Quadratic Maximum Likelihood

KF

Kalman Filter

LDA

Linear Discriminant Analysis

LMMSE

Linear MMSE

LRT

Likelihood Ratio Test

LS

Least Squares

LTI

Linear Time Invariant

MA

Moving Average

MAP

Maximum a‐posteriori

MDL

Minimum Description Length Criterion

MIMO

Multiple Input Multiple Output

MLE

Maximum Likelihood Estimate

MMSE

Minimum Mean Square Error

MODE

Method of Direction Estimation

MoM

Method of moments

MRI

Magnetic Resonance Imaging

MSE

Mean Square Error

MUSIC

Multiple Signal Classification

MVDR

Minimum Variance Distortionless Response

MVU

Minimum Variance Unbiased

(N)LMS

(Normalized) Least Mean Square

NTP

Network Time Protocol

PDE

Partial Differential Equations

pdf

Probability Density Function

PET

Positron Emission Tomography

PLL

Phase Locked Loop

pmf

Probability Mass Function

PSD

Power Spectral Density

RLS

Recursive Least Squares

ROC

Receiver Operating Characteristic

RT

Radiotherapy

RV

Random Variable

SINR

Signal to Interference plus Noise Ratio

SPECT

Single‐Photon Emission Computed Tomography

SSP

Statistical Signal Processing

SSS

Strict‐Sense Stationary

st

Subject to

SVD

Singular Value Decomposition

SVM

Support Vector Machine

TN

True Negative

ToD

Time of Delay

TP

True Positive

UML

Unconditional Maximum Likelihood

WLS

Weighted Least Squares

WOSA

Window Overlap Spectral Analysis

wrt

With Respect To

WSS

Wide‐Sense Stationary

YW

Yule Walker

z‐transform

ZF

Zero Forcing

How to Use the Book

The book is written for a heterogeneous audience. Graduate‐level students can follow the presentation order; if skilled in the preliminary parts, they can start reading from Chapter 5 or 6, depending on whether they need to be motivated by some simple examples (in Chapter 5) to be used as guidelines. Chapters 6–9 are on non‐Bayesian estimation, and Chapter 10 complements this with Monte Carlo methods for numerical analysis. Chapters 11–13 are on Bayesian methods, both general and specialized to stationary processes (Chapter 12) or Bayesian tracking (Chapter 13). The remaining chapters can be regarded as applications of the estimation theory, starting from classical ones on spectral analysis (Chapter 14), adaptive filtering for non‐stationary contexts (Chapter 15), and line‐spectrum analysis (Chapter 16). The most specialized applications are estimation in communication engineering (Chapter 17), 2D signal analysis and filtering (Chapter 18), array processing (Chapter 19), advanced methods for time of delay estimation (Chapter 20), tomography (Chapter 21), applications of distributed inference (Chapter 22), and classification methods (Chapter 23).

Expert readers can start from the applications (Chapters 14–23), and follow the links to specific chapters or sections to go deeper and/or find the analytical justifications. The reader skilled in one application area can read the corresponding chapter, bounce back to the specific early sections (in Chapters 1–13), and follow a personal learning path.

The curious and perhaps unskilled reader can look at the broad applications where SSP is an essential problem‐solving tool, and be motivated to start from the beginning. There is no specific reading order from Chapter 14 to Chapter 23: the proposed order seems quite logical to the author, but is certainly not the only one.

Even though SSP uses standard statistical tools, the expert statistician is encouraged to start from the applications that could be of interest, preferably after a preliminary reading of Chapters 4–5 on stochastic processes, as these are SSP‐specific.

Most of the application‐type chapters correspond to a whole scientific community working on that specific area, with many research activities, advances, latest methods, and results. In the introductions of these chapters, or dispersed within the text, there are some essential references. The reader can start from the chapter, get an overview, and move to the specific application area if going deeper into the subject is necessary.

The companion web‐page of the book contains some numerical exercises—computer‐based examples that mimic real‐life applications (somewhat oversimplified, but realistic):

www.wiley.com/go/spagnolini/signalprocessing

About the Companion Website

Don’t forget to visit the companion website for this book:

www.wiley.com/go/spagnolini/signalprocessing

There you will find valuable material designed to enhance your learning, including:

Repository of theory‐based and Matlab exercises

Videos by the author (lecture‐style) detailing some aspects covered by the book

Scan this QR code to visit the companion website

 

Prerequisites

The reader who is somewhat familiar (at undergraduate level) with algebra, matrix analysis, optimization problems, and signals and systems (time‐continuous and time‐discrete) can skip Chapters 1–4, where all these concepts are revised for self‐consistency and to maintain a congruent notation. The only recommended prerequisite is a good knowledge of random variables and stochastic processes, and related topics. The fundamental book by A. Papoulis and S. U. Pillai, Probability, Random Variables, and Stochastic Processes [11], is an excellent starting point for a quick comprehension of all relevant topics.

Why are there so many matrices in this book?

Any textbook or journal in advanced signal processing investigates methods to solve large‐scale problems where there are multiple signals, variables, and measurements to be manipulated. In these situations, matrix algebra offers tools that are heavily adopted to compactly manage a large set of variables, and this is a necessary background.1 An example application can justify this statement.

The epicenter of an earthquake is obtained by measuring the delays at multiple positions, and by finding the position that best explains the collected measurements (in jargon, data). Figure 1 illustrates an example in 2D with epicenter at coordinates $(x_e, y_e)$. At the time the earthquake occurs, it generates a spherical elastic wave that propagates with a decaying amplitude from the epicenter, and hits a set of N geophysical sensing stations after propagation through a medium with velocity v (typical values are 2000–5000 m/s for shear waves, and above 4000 m/s for compressional, or primary, waves). The signals at the sensing stations are (approximately) replicas of the same waveform as in Figure 1, with delays x1, x2, …, xN that depend on the propagating distance from the epicenter to each sensing station. The correspondence of each delay with the distance from the epicenter depends on the physics of propagation of elastic waves in a solid, and it is called a forward model (from model parameters to observations). Estimation of the epicenter is a typical inverse problem (from observations to model parameters) that needs at first a clear definition of the forward model. This forward model can be stated as follows: Given a set of N sensing points where the kth sensor is located at coordinates $(a_k, b_k)$, the time of delay (ToD) of the earthquake waveform is

$$x_k = x_0 + h_k(x_e, y_e) = x_0 + \frac{1}{v}\sqrt{(x_e - a_k)^2 + (y_e - b_k)^2}$$

This depends on the distance from the epicenter as detailed by the relationship $h_k(\cdot)$. The absolute time $x_0$ is irrelevant for the epicenter estimation, as ToDs are estimated as differences between ToDs (i.e., $x_k - x_j$) so as to avoid the need for $x_0$; from the reasoning here it can be assumed that $x_0 = 0$ (ToD estimation methods are in Chapter 20). All the ToDs are grouped into a vector

$$\mathbf{x} = [x_1, x_2, \ldots, x_N]^T$$

Figure 1
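The forward/inverse distinction above can be made concrete with a minimal numerical sketch. The following is in Python (rather than the book's MATLAB) with a hypothetical station geometry, velocity, and grid‐search inversion; it is an illustration of the idea, not the book's method: the forward model maps a candidate epicenter to ToDs, and the inverse problem searches for the epicenter whose predicted differential ToDs best fit the data in the least‐squares sense.

```python
import numpy as np

# Hypothetical 2D geometry (assumed for illustration): true epicenter and
# N = 8 sensing stations scattered over a 10 km x 10 km area.
rng = np.random.default_rng(0)
v = 3000.0                                        # propagation velocity [m/s]
epicenter = np.array([1200.0, -800.0])            # unknown, to be estimated
stations = rng.uniform(-5000, 5000, size=(8, 2))  # sensor coordinates (a_k, b_k)

def tod(p, stations, v):
    """Forward model h_k(.): ToD at each station for candidate epicenter p,
    with the absolute time x0 set to 0."""
    return np.linalg.norm(stations - p, axis=1) / v

x = tod(epicenter, stations, v)   # noise-free ToDs x_1 .. x_N
dtod = x - x[0]                   # differential ToDs remove the unknown x0

# Inverse problem: brute-force grid search (100 m spacing) for the epicenter
# that best explains the differential ToDs in the least-squares sense.
grid = np.linspace(-5000, 5000, 101)
best, best_err = None, np.inf
for xe in grid:
    for ye in grid:
        d = tod(np.array([xe, ye]), stations, v)
        err = np.sum((d - d[0] - dtod) ** 2)      # LS misfit of DToDs
        if err < best_err:
            best, best_err = np.array([xe, ye]), err

print(best)   # close to the true epicenter, within the grid spacing
```

In practice the grid search would be replaced by a gradient or Gauss–Newton iteration on the same least‐squares cost, and the noise on the measured ToDs would make the statistical framing of Chapters 6–9 necessary.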