2D and 3D Image Analysis by Moments

Jan Flusser, Tomáš Suk, Barbara Zitová

Description

Presents recent significant and rapid development in the field of 2D and 3D image analysis

2D and 3D Image Analysis by Moments is a unique compendium of moment-based image analysis that covers traditional methods and also reflects the latest developments in the field.

The book presents a survey of 2D and 3D moment invariants with respect to similarity and affine spatial transformations and to image blurring and smoothing by various filters. It comprehensively describes the mathematical background and theorems about the invariants, but a large part is also devoted to the practical use of moments. Applications from various fields of computer vision, remote sensing, medical imaging, image retrieval, watermarking, and forensic analysis are demonstrated. Attention is also paid to efficient algorithms for moment computation.
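
To make the idea of moment invariants concrete, here is a minimal Python/NumPy sketch (an illustration only, not taken from the book's accompanying MATLAB codes) that computes normalized central moments of a graylevel image and evaluates the two lowest Hu invariants, which are insensitive to translation, rotation, and scaling:

    import numpy as np

    def central_moment(img, p, q):
        """mu_pq: geometric moment taken about the image centroid.
        Assumes a non-empty image (nonzero total intensity)."""
        ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
        m00 = img.sum()
        xc = (xs * img).sum() / m00
        yc = (ys * img).sum() / m00
        return (((xs - xc) ** p) * ((ys - yc) ** q) * img).sum()

    def hu_invariants(img):
        """The two lowest Hu invariants, built from the normalized
        central moments eta_pq = mu_pq / mu_00**((p + q)/2 + 1)."""
        m00 = img.sum()
        eta = lambda p, q: central_moment(img, p, q) / m00 ** ((p + q) / 2 + 1)
        phi1 = eta(2, 0) + eta(0, 2)
        phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
        return phi1, phi2

Rotating or rescaling the input should leave phi1 and phi2 nearly unchanged, up to resampling and noise effects; this is exactly the behavior tested numerically in Figures 3.2-3.4 listed below.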

Key features:

  • Presents a systematic overview of moment-based features used in 2D and 3D image analysis.
  • Demonstrates invariant properties of moments with respect to various spatial and intensity transformations.
  • Reviews and compares several orthogonal polynomials and respective moments.
  • Describes efficient numerical algorithms for moment computation (a brief illustration follows this list).
  • Serves as a "classroom-ready" textbook with a self-contained introduction to classifier design.
  • The accompanying website contains around 300 lecture slides, MATLAB codes, complete lists of the invariants, test images, and other supplementary material.
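
As a small taste of the algorithmic part (Chapter 8), the sketch below illustrates the block-decomposition idea: geometric moments are additive over disjoint regions and separable over axis-aligned rectangles, so once a binary object is decomposed into K rectangular blocks, its moments cost O(K) closed-form evaluations instead of a sum over all object pixels. The function names are illustrative only, not taken from the book or its accompanying codes.

    def power_sum(a, b, p):
        """Sum of x**p over the integer range a..b (inclusive)."""
        return sum(x ** p for x in range(a, b + 1))

    def block_moment(x0, x1, y0, y1, p, q):
        """Geometric moment m_pq of an axis-aligned block of ones:
        the double sum over the block factorizes into two power sums."""
        return power_sum(x0, x1, p) * power_sum(y0, y1, q)

    def object_moment(blocks, p, q):
        """m_pq of a binary object given as disjoint blocks (x0, x1, y0, y1);
        moments are additive, so the block moments simply sum up."""
        return sum(block_moment(*b, p, q) for b in blocks)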

2D and 3D Image Analysis by Moments is ideal for mathematicians, computer scientists, engineers, software developers, and PhD students involved in image analysis and recognition. Thanks to the two introductory chapters on classifier design, the book may also serve as a self-contained textbook for graduate university courses on object recognition.


Page count: 973

Publication year: 2016




Table of Contents

Cover

Title Page

Copyright

Dedication

Preface

Authors' biographies

Acknowledgements

About the companion website

Chapter 1: Motivation

1.1 Image analysis by computers

1.2 Humans, computers, and object recognition

1.3 Outline of the book

References

Chapter 2: Introduction to Object Recognition

2.1 Feature space

2.2 Categories of the invariants

2.3 Classifiers

2.4 Performance of the classifiers

2.5 Conclusion

References

Chapter 3: 2D Moment Invariants to Translation, Rotation, and Scaling

3.1 Introduction

3.2 TRS invariants from geometric moments

3.3 Rotation invariants using circular moments

3.4 Rotation invariants from complex moments

3.5 Pseudoinvariants

3.6 Combined invariants to TRS and contrast stretching

3.7 Rotation invariants for recognition of symmetric objects

3.8 Rotation invariants via image normalization

3.9 Moment invariants of vector fields

3.10 Conclusion

References

Chapter 4: 3D Moment Invariants to Translation, Rotation, and Scaling

4.1 Introduction

4.2 Mathematical description of the 3D rotation

4.3 Translation and scaling invariance of 3D geometric moments

4.4 3D rotation invariants by means of tensors

4.5 Rotation invariants from 3D complex moments

4.6 3D translation, rotation, and scale invariants via normalization

4.7 Invariants of symmetric objects

4.8 Invariants of 3D vector fields

4.9 Numerical experiments

4.10 Conclusion

Appendix 4.A

Appendix 4.B

Appendix 4.C

References

Chapter 5: Affine Moment Invariants in 2D and 3D

5.1 Introduction

5.2 AMIs derived from the Fundamental theorem

5.3 AMIs generated by graphs

5.4 AMIs via image normalization

5.5 The method of the transvectants

5.6 Derivation of the AMIs from the Cayley-Aronhold equation

5.7 Numerical experiments

5.8 Affine invariants of color images

5.9 Affine invariants of 2D vector fields

5.10 3D affine moment invariants

5.11 Beyond invariants

5.12 Conclusion

Appendix 5.A

Appendix 5.B

References

Chapter 6: Invariants to Image Blurring

6.1 Introduction

6.2 An intuitive approach to blur invariants

6.3 Projection operators and blur invariants in Fourier domain

6.4 Blur invariants from image moments

6.5 Invariants to centrosymmetric blur

6.6 Invariants to circular blur

6.7 Invariants to N-FRS blur

6.8 Invariants to dihedral blur

6.9 Invariants to directional blur

6.10 Invariants to Gaussian blur

6.11 Invariants to other blurs

6.12 Combined invariants to blur and spatial transformations

6.13 Computational issues

6.14 Experiments with blur invariants

6.15 Conclusion

Appendix 6.A

Appendix 6.B

Appendix 6.C

Appendix 6.D

Appendix 6.E

Appendix 6.F

Appendix 6.G

References

Chapter 7: 2D and 3D Orthogonal Moments

7.1 Introduction

7.2 2D moments orthogonal on a square

7.3 2D moments orthogonal on a disk

7.4 Object recognition by Zernike moments

7.5 Image reconstruction from moments

7.6 3D orthogonal moments

7.7 Conclusion

References

Chapter 8: Algorithms for Moment Computation

8.1 Introduction

8.2 Digital image and its moments

8.3 Moments of binary images

8.4 Boundary-based methods for binary images

8.5 Decomposition methods for binary images

8.6 Geometric moments of graylevel images

8.7 Orthogonal moments of graylevel images

8.8 Conclusion

Appendix 8.A

References

Chapter 9: Applications

9.1 Introduction

9.2 Image understanding

9.3 Image registration

9.4 Robot and autonomous vehicle navigation and visual servoing

9.5 Focus and image quality measure

9.6 Image retrieval

9.7 Watermarking

9.8 Medical imaging

9.9 Forensic applications

9.10 Miscellaneous applications

9.11 Conclusion

References

Chapter 10: Conclusion

10.1 Summary of the book

10.2 Pros and cons of moment invariants

10.3 Outlook to the future

Index

End User License Agreement

List of Illustrations

Preface

Figure 1 The number of moment-related publications as found in SCOPUS.

Chapter 1: Motivation

Figure 1.1 General image analysis flowchart

Figure 1.2 An example of the car licence plate recognition

Figure 1.3 Image acquisition process with degradations

Chapter 2: Introduction to Object Recognition

Figure 2.1 Two-dimensional feature space with two classes, almost an ideal example. Each class forms a compact cluster (the features are invariant to translation, rotation, scaling, and skewing of the characters) and the clusters are well separated from one another (the features are discriminative although the characters are visually similar to each other)

Figure 2.2 The object and its convex hull

Figure 2.3 The object and its minimum bounding rectangle

Figure 2.4 Radial function of the object

Figure 2.5 Star-shaped object and its radial shape vector

Figure 2.6 The object and its shape matrix

Figure 2.7 Examples of textures. The texture is often a more discriminative property than the shape and the color

Figure 2.8 The original Barbara image (a) and its wavelet decomposition to depth two (b)

Figure 2.9 Semi-differential invariants. The object is divided by inflection points. Both convex and concave cuts can be used for a description by global invariants

Figure 2.10 Partition of the feature space defines a classifier

Figure 2.11 Three different classifiers as the results of three training algorithms on the same training set. (a) Over-simplified classifier, (b) over-trained classifier, and (c) close-to-optimal classifier

Figure 2.12 Decision boundary of the NN classifier depends on the used distance: (a) the nearest distance and (b) the mean distance

Figure 2.13 Robustness of the k-NN classifier. The unknown sample “+” is classified as a circle by the NN classifier and as a cross by the 2-NN classifier. The latter choice better corresponds to our intuition

Figure 2.14 SVM classifiers. The hard margin (a) and the soft margin (b) constraints

Figure 2.15 Multispectral satellite image. The objects are single pixels, the features are their intensities in the individual spectral bands. This kind of data is ideal for the Bayesian classifier

Figure 2.16 Principal component transformation: (a) unstructured original data in feature space, correlated (b) transformation into new feature space , decorrelated. The first principal component is

Figure 2.17 PCT of data consisting of two classes: (a) the original feature space, (b) new feature space after the PCT. The first principal component is thanks to higher variance but the between-class separability is provided solely by

Chapter 3: 2D Moment Invariants to Translation, Rotation, and Scaling

Figure 3.1 The desired behavior of TRS moment invariants–all instances of a rotated and scaled image have almost the same values of the invariants (depicted for two invariants)

Figure 3.2 Numerical test of the normalized moment . Computer-generated scaling of the test image ranged from to . To show robustness, each image was corrupted by additive Gaussian white noise. Signal-to-noise ratio (SNR) ranged from 50 (low noise) to 10 (heavy noise). Horizontal axes: scaling factor and SNR, respectively. Vertical axis–relative deviation (in %) between of the original and that of the scaled and noisy image. The test proves the invariance of and illustrates its high robustness to noise

Figure 3.3 Numerical test of the aspect-ratio invariant . Computer-generated scaling of the test image ranged from 0.5 to 2 in both directions independently. Horizontal axes: scaling factors and , respectively. Vertical axis–relative deviation (in %) between of the original and that of the scaled image. The test illustrates the invariance of . The higher relative errors for low scaling factors and the typical jagged surface of the graph are consequences of the image resampling

Figure 3.4 Numerical test of the basic invariant . Computer-generated rotation of the test image ranged from 0 to 360 degrees. To show robustness, each image was corrupted by additive Gaussian white noise. Signal-to-noise ratio (SNR) ranged from 40 (low noise) to 10 (heavy noise). Horizontal axes: rotation angle and SNR, respectively. Vertical axis–relative deviation (in %) between of the original and that of the rotated and noisy image. The test proves the invariance of and illustrates its high robustness to noise

Figure 3.5 The smiles: (a) original and (b) another figure created from the original according to Eq. (3.34). For the values of the respective invariants see Table 3.1

Figure 3.6 (a) Original image of a pan and (b) a virtual “two-handle” pan. These objects are distinguishable by the basic invariants but not by the Hu invariants

Figure 3.7 The test image and its mirrored version. Basic invariants of the mirrored image are complex conjugates of those of the original

Figure 3.8 Numerical test of the contrast and TRS invariant for and . Computer-generated scaling of the test image ranged from to , and the contrast stretching factor ranged from to . Horizontal axes: scaling factor and contrast stretching factor , respectively. Vertical axis–relative deviation (in %) between of the original and that of the scaled and stretched image. The test proves the invariance of with respect to both factors. However, for down-scaling with and , the resampling effect leads to higher relative errors

Figure 3.9 Sample objects with an N-fold rotation symmetry. From (a) to (e): , 3, 5, 4, and 2, respectively. All depicted cases also have an axial symmetry; however, this is not a rule

Figure 3.10 The matrix of the complex moments of an N-fold symmetric object. The gray elements are always zero. The distance between neighboring non-zero diagonals is

Figure 3.11 The test logos (from left to right): Mercedes-Benz, Mitsubishi, Recycling, Fischer, and Woolen product

Figure 3.12 The logo positions in the space of two invariants and showing good discrimination power. The symbols: –Mercedes-Benz, –Mitsubishi, –Recycling, *–Fischer, and –Woolen product. Each logo was randomly rotated ten times

Figure 3.13 The logo positions in the space of two invariants and introduced in Theorem 3.2. These invariants have no discrimination power with respect to this logo set. The symbols: –Mercedes-Benz, –Mitsubishi, –Recycling, *–Fischer, and –Woolen product

Figure 3.14 The test patterns: capital L, rectangle, equilateral triangle, circle, capital F, diamond, tripod, cross, and ring

Figure 3.15 The space of two invariants and introduced in Theorem 3.2. The symbols: ×–rectangle, –diamond, –equilateral triangle, –tripod, +–cross, •–circle, and –ring. The discriminability is very poor

Figure 3.16 The space of two invariants and introduced in Theorem 3.2, . The symbols: ×–rectangle, –diamond, –equilateral triangle, –tripod, +–cross, •–circle, –ring, *–capital F, and –capital L. Some clusters are well separated

Figure 3.17 The space of two invariants and introduced in Theorem 3.2, (logarithmic scale). The symbols: ×–rectangle, –diamond, –equilateral triangle, –tripod, +–cross, •–circle, –ring, *–capital F, and –capital L. All clusters except the circle and the ring are separated

Figure 3.18 The space of two invariants and . The symbols: ×–rectangle, –diamond, –equilateral triangle, –tripod, +–cross, •–circle, –ring, *–capital F, and –capital L. All clusters except the circle and the ring are separated. Compared to Figure 3.17, note the lower correlation of the invariants and the lower dynamic range

Figure 3.19 The toy set used in the experiment

Figure 3.21 Ambiguity of the principal axis normalization. These four positions of the object satisfy . Additional constraints and make the normalization unique

Figure 3.20 Principal axis normalization to rotation–an object in the normalized position along with its reference ellipse superimposed

Figure 3.22 An example of the ambiguity of the normalization by complex moments. In all these six positions, is real and positive as required

Figure 3.23 Turbulence in a fluid (the graylevels show the local velocity of the flow, the direction is not displayed)

Figure 3.24 Image gradient as a vector field. For visualization purposes, the field is depicted by arrows on a sparse grid and laid over the original Lena image

Figure 3.25 The wind velocity forecast for the Czech Republic (courtesy of the Czech Hydrometeorological Institute, the numerical model Aladin). The longer dash is the constant part of the wind; the shorter dash expresses perpendicular squalls. The figure actually represents two vector fields

Figure 3.26 Various rotations of a vector field: (a) the original vector field, (b) the inner rotation, (c) the outer rotation, (d) the total rotation. The graylevels correspond to the vector sizes

Figure 3.27 Optical flow as a vector field: (a) the original field, (b) the optical flow computed from the video sequence rotated by , (c) the original optical flow field after total rotation by , (d) the optical flow computed from the video sequence rotated by . All rotations are counterclockwise. The arrows show the direction and velocity of the movement between two consecutive frames of the video sequence

Chapter 4: 3D Moment Invariants to Translation, Rotation, and Scaling

Figure 4.1 3D images of various nature: (a) real volumetric data – CT of a human head, (b) real binary volumetric data measured by Kinect (the bag), (c) binary object with a triangulated surface (the pig) from the PSB, and (d) artificial binary object with a triangulated surface (decagonal prism)

Figure 4.2 Yaw, pitch, and roll angles in the Tait-Bryan convention

Figure 4.3 The generating graphs of (a) both and , (b) both and and (c) both and

Figure 4.4 The spherical harmonics. (a) , , (b) , , (c) , , (d) , . Imaginary parts are displayed for and real parts for

Figure 4.5 The microscope: (a) the original position, (b) the first random rotation, (c) the second random rotation, (d) the standard position by the covariance matrix, (e) the standard position by the matrix of inertia and (f) the standard position by the 3D complex moments

Figure 4.6 The symmetric bodies with the symmetry groups (a) (the repeated tetrahedrons), (b) (the repeated tetrahedrons), (c) (the repeated tetrahedrons), (d) (the pentagonal pyramid), (e) (the repeated pyramids), (f) (the repeated tetrahedrons), (g) (the repeated pyramids), (h) (the pentagonal prism), (i) (the repeated tetrahedrons), (j) (the regular tetrahedron), (k) (the cube) and (l) (the octahedron)

Figure 4.7 The symmetric bodies with the symmetry groups: (a) (the dodecahedron), (b) (the icosahedron), (c) (the simplest canonical polyhedron), (d) (the pyritohedral cube), (e) (the propello-tetrahedron), (f) (the snub dodecahedron), (g) (the snub cube), (h) (the chiral cube), (i) (the conic), (j) (the cylinder) and (k) (the sphere)

Figure 4.8 Ancient Greek amphoras: (a) photo of A1, (b) photo of A2, (c) wire model of the triangulation of A2

Figure 4.9 The archeological findings used for the experiment

Figure 4.10 The absolute values of the invariants in the experiment with the archeological findings from Figure 4.9. Legend: (a) , (b) ×, (c) +, (d) and (e)

Figure 4.11 Examples of the class representatives from the Princeton Shape Benchmark: (a) sword, (b) two-story home, (c) dining chair, (d) handgun, (e) ship, and (f) car

Figure 4.12 Six submarine classes used in the experiment

Figure 4.13 Example of the rotated and noisy submarine from Figure 4.12c with SNR (a) 26 dB and (b) 3.4 dB

Figure 4.14 The success rate of the invariants from the volume geometric, complex, and normalized moments compared with the spherical harmonic representation and with the surface geometric moment invariants

Figure 4.15 3D models of two teddy bears created by Kinect

Figure 4.16 The values of the invariants of two teddy bears

Figure 4.17 The bodies used in the experiment: (a) two connected tetrahedrons with the symmetry group , (b) the body with the reflection symmetry , (c) the rectangular pyramid with the symmetry group , (d) the triangular pyramid with the symmetry group and (e) the triangular prism with the symmetry group

Figure 4.18 Randomly rotated sampled bodies: (a) the rectangular pyramid, (b) the cube and (c) the sphere

Figure 4.19 The images of the symmetric objects downloaded from the PSB: (a) hot air balloon 1338, (b) hot air balloon 1337, (c) ice cream 760, (d) hot air balloon 1343, (e) hot air balloon 1344, (f) gear 741, (g) vase 527, (h) hot air balloon 1342

Figure 4.20 The objects without texture and color, used as the database templates: (a) hot air balloon 1338, (b) hot air balloon 1337, (c) ice cream 760, (d) hot air balloon 1343, (e) hot air balloon 1344, (f) gear 741, (g) vase 527, (h) hot air balloon 1342

Figure 4.21 Examples of the noisy and rotated query objects: (a) hot air balloon 1338, (b) hot air balloon 1337, (c) ice cream 760, (d) hot air balloon 1343, (e) hot air balloon 1344, (f) gear 741, (g) vase 527, (h) hot air balloon 1342

Chapter 5: Affine Moment Invariants in 2D and 3D

Figure 5.1 The projective deformation of a scene due to a non-perpendicular view. Square tiles appear as quadrilaterals; the transformation preserves straight lines but does not preserve their parallelism

Figure 5.2 The projective transformation maps a square onto a quadrilateral (computer-generated example)

Figure 5.3 The affine transformation maps a square to a parallelogram

Figure 5.4 The affine transformation approximates the perspective projection if the objects are small

Figure 5.5 The graphs corresponding to the invariants (a) (5.14) and (b) (5.15)

Figure 5.6 The graph leading to a vanishing invariant (5.18)

Figure 5.7 The graph corresponding to the invariant from (5.23)

Figure 5.8 The standard positions of the cross with varying thickness : (a) Thin cross, , (b) slightly less than , (c) slightly greater than , and (d) . Note the difference between the two middle positions

Figure 5.9 Numerical test of invariance of . Horizontal axes: horizontal skewing and rotation angle, respectively. Vertical axis – relative deviation (in %) between of the original and that of the transformed image. The test proves the invariance of

Figure 5.10 The comb. The viewing angle increases from (top) to (bottom)

Figure 5.11 The original digits used in the recognition experiment

Figure 5.12 The examples of the deformed digits

Figure 5.13 The digits in the feature space of affine moment invariants and (ten various transformations of each digit, a noise-free case)

Figure 5.14 The test patterns. (a) originals, (b) examples of distorted patterns, and (c) the normalized positions

Figure 5.15 The space of two AMIs and

Figure 5.16 The space of two normalized moments and

Figure 5.17 The part of the mosaic used in the experiment

Figure 5.18 The image of a tile with a slight projective distortion

Figure 5.19 The geometric patterns. The feature space of and : – equilateral triangle, – isosceles triangle, – square, – rhombus, – big rectangle, – small rectangle, – rhomboid, – trapezoid, – pentagon, – regular hexagon, – irregular hexagon, – circle, – ellipse

Figure 5.20 The animal silhouettes. The feature space of and : – bear, – squirrel, – pig, – cat, – bird, – dog, – cock, – hedgehog, – rabbit, – duck, – dolphin, – cow

Figure 5.21 The geometric patterns. The feature space of and : – equilateral triangle, – isosceles triangle, – square, – rhombus, – big rectangle, – small rectangle, – rhomboid, – trapezoid, – pentagon, – regular hexagon, – irregular hexagon, – circle, – ellipse

Figure 5.22 The animal silhouettes. The feature space of and : – bear, – squirrel, – pig, – cat, – bird, – dog, – cock, – hedgehog, – rabbit, – duck, – dolphin, – cow

Figure 5.23 The image of a tile with a heavy projective distortion

Figure 5.24 Scrabble tiles – the templates

Figure 5.25 Scrabble tiles to be recognized – a sample scene

Figure 5.26 The space of invariants and . Even in two dimensions the clustering tendency is evident. Legend: A, B, C, D, E, H, I, J, K, L, M, N, O, P, R, S, T, U, V, Y, Z

Figure 5.27 The mastercards. Top row from the left: Girl, Carnival, Snowtubing, Room-bell, and Fireplace. Bottom row: Winter cottage, Spring cottage, Summer cottage, Bell, and Star

Figure 5.28 The card “Summer cottage” including all its rotations. The rotations are real, produced by rotating the hand-held camera. This acquisition scheme also introduces mild perspective deformations. The third and fourth rows contain the other card from the pair

Figure 5.29 The mastercards. Legend: – Girl, – Carnival, – Snowtubing, – Room-bell and – Fireplace, – Winter cottage, – Spring cottage, – Summer cottage, – Bell and – Star. A card from each pair is expressed by the black symbol, while the other card is expressed by the gray symbol. Note that some clusters have been split into two sub-clusters. This is because the two cards of the pair may be slightly different. This minor effect does not influence the recognition rate

Figure 5.30 The hypergraphs corresponding to invariants (a) and (b)

Figure 5.31 Nonlinear deformation of the text captured by a fish-eye-lens camera

Figure 5.32 Character deformations due to the print on a flexible surface

Figure 5.33 The six bottle images used in the experiment

Chapter 6: Invariants to Image Blurring

Figure 6.1 Two examples of image blurring. Space-variant out-of-focus blur caused by a narrow depth of field of the camera (a) and camera-shake blur at a long exposure (b)

Figure 6.2 Visualization of the space variant PSF. A photograph of a point grid by a hand-held camera blurred by camera shake. The curves are images of bright points equivalent to the PSF at the particular places

Figure 6.3 Blurred image (top) to be matched against a database (bottom). A typical situation where the convolution invariants may be employed

Figure 6.4 The flowchart of image deconvolution (left) and of the recognition by blur invariants (right)

Figure 6.5 The PSF of non-linear motion blur, which exhibits an approximate central symmetry but no axial symmetry. The PSF was visualized by photographing a single bright point

Figure 6.6 Two examples of an out-of-focus blur on a circular aperture: (a) the books, (b) the harbor in the night

Figure 6.7 Real examples of a circularly symmetric out-of-focus PSF: (a) a disk-like PSF on a circular aperture, (b) a ring-shaped PSF of a catadioptric objective

Figure 6.8 A bokeh example. The photo was taken with a mirror-lens camera. The picture (a) is a close-up of a larger scene and shows the out-of-focus background. The estimated PSF of an annular shape, which is characteristic of this kind of lens, is shown in (b)

Figure 6.9 Airy function, a diffraction PSF on a circular aperture

Figure 6.10 (a) The original image (the Rainer cottage in the High Tatra mountains, Slovakia) and (b) its projection

Figure 6.11 Partially open diaphragm with 9 blades forms a polygonal aperture

Figure 6.12 Out-of-focus PSFs on polygonal apertures of three different cameras obtained as photographs of a bright point. A nine-blade diaphragm was used in (a) and (b); a seven-blade diaphragm in (c)

Figure 6.13 Various projections of the original image from Figure 6.10: (a) , (b) , (c) and (d)

Figure 6.14 The structure of the matrix of N-FRS blur invariants. The gray elements on the diagonals are identically zero regardless of . The white elements stand for non-trivial invariants

Figure 6.15 The shape of the out-of-focus PSF can be observed in the background. This bokeh effect may serve for estimation of . In this case,

Figure 6.16 Two examples of real out-of-focus PSF on apertures with dihedral symmetry, obtained by photographing a bright point. (a) , (b)

Figure 6.17 Dihedral projections of the original image from Figure 6.10. (a) , (b) , (c) and (d)

Figure 6.18 The structure of the dihedral invariant matrix. The dark gray elements on the main diagonal vanish for any . The light gray elements on the minor diagonals are non-trivial, but their real and imaginary parts are constrained. The white elements stand for non-trivial complex-valued invariants

Figure 6.19 The image blurred by a camera shake blur, which is approximately a directional blur

Figure 6.20 Directional projections of the original image from Figure 6.10. (a) , (b)

Figure 6.21 The image of a bright point (a close-up of Figure 6.1b) can serve for estimation of the blur direction

Figure 6.22 Amplitude spectrum of a motion-blurred image. The zero lines are perpendicular to the motion direction

Figure 6.23 Estimated blur direction from the image spectrum in Figure 6.22

Figure 6.24 Gaussian blur caused by the mist coupled with a contrast decrease (Vltava river, Prague, Czech Republic)

Figure 6.25 Gaussian blur as a result of denoising. The original image, corrupted by heavy noise, was smoothed by a Gaussian kernel to suppress the high-frequency noise components

Figure 6.26 2D Gaussian projection of the original image from Figure 6.10

Figure 6.27 The original image and its blurred and affinely deformed version (Liberec, Czech Republic). The values of the combined blur-affine invariants are the same for both images

Figure 6.28 The test image of the size (the statue of Pawlu Boffa in Valletta, Malta): (a) original, (b) the image blurred by a circularly symmetric blur with a standard deviation of 100 pixels, (c) the same blurred image without margins

Figure 6.29 The values of the invariants, the exact convolution model

Figure 6.30 The values of the invariants violated by the boundary effect, when the realistic convolution model was applied

Figure 6.31 The original high-resolution satellite photograph, the City of Plzeň, Czech Republic, with three selected templates

Figure 6.32 The templates: The confluence of the rivers Mže and Radbuza (a), the apartment block (b), and the road crossing (c)

Figure 6.33 The blurred and noisy image. An example of a frame in which all three templates were localized successfully

Figure 6.35 The graph summarizing the results of the experiment. The area below the curve denotes the domain in a “noise-blur space” in which the algorithm works mostly successfully

Figure 6.34 The boundary effect. Under a discrete convolution on a bounded support, the pixels near the template boundary (white square) are affected by the pixels lying outside the template. The impact of this effect depends on the size of the blurring mask (black square)

Figure 6.36 The influence of the boundary effect and noise on the numerical properties of the centrosymmetric convolution invariant . Horizontal axes: the blurring mask size and the signal-to-noise ratio, respectively. Vertical axis: the relative error in %. The invariant corrupted by a boundary effect (a) and the same invariant calculated from a zero-padded template where no boundary effect appeared (b)

Figure 6.37 The house image: (a) the deblurred version, (b) the original blurred version

Figure 6.38 Four images of the sunspot blurred by a real atmospheric turbulence blur of various extent. The images are ordered from the least to the most blurred one. A template is depicted in the first image to illustrate its size

Figure 6.39 Sample “clear” images of the CASIA HFB database. The database consists of very similar faces

Figure 6.40 Sample test images degraded by heavy blur and noise ( and SNR = 0 dB)

Figure 6.41 The traffic signs used in the experiment. First row: No entry, No entry into a one-way road, Main road, End of the main road. Second row: No stopping, No parking, Give way, Be careful in winter. Third row: Roundabout ahead, Roundabout, Railway crossing, End of all prohibitions. Fourth row: Two-way traffic, Intersection, First aid, Hospital

Figure 6.42 The traffic signs successively blurred by masks with radii 0, 33, 66, and 100 pixels

Figure 6.43 The feature space of two non-invariant moments and . Legend: – No entry, – No entry into a one-way road, – Main road, – End of the main road, – No stopping, – No parking, – Give way, – Be careful in winter, – Roundabout ahead, – Roundabout, – Railway crossing, – End of all prohibitions, – Two-way traffic, – Intersection, – First aid, – Hospital

Figure 6.44 The feature space of real and imaginary parts of the invariant . Legend: – No entry, – No entry into a one-way road, – Main road, – End of the main road, – No stopping, – No parking, – Give way, – Be careful in winter, – Roundabout ahead, – Roundabout, – Railway crossing, – End of all prohibitions, – Two-way traffic, – Intersection, – First aid, – Hospital

Figure 6.45 The feature space of real parts of the invariant and . Legend: – No entry, – No entry into a one way road, – Main road, – End of the main road, – No stopping, – No parking, – Give way, – Be careful in winter, – Roundabout ahead, – Two-way traffic, – Intersection, – First aid, – Hospital

Figure 6.46 The detail of the feature space of real parts of the invariant and around zero. Legend: – No entry, – Main road, – No stopping, – Intersection, – First aid

Figure 6.47 The feature space of real parts of the invariant and – the minimum zoom. Legend: – No entry, – No entry into a one way road, – Main road, – End of the main road, – No stopping, – No parking, – Give way, – Be careful in winter, – Roundabout ahead, – Roundabout, – Railway crossing, – End of all prohibitions, – Two-way traffic, – Intersection, – First aid, – Hospital

Figure 6.49 The feature space of real parts of the invariant and – the maximum zoom. Legend: – No entry, – No stopping, – No parking

Figure 6.48 The feature space of real parts of the invariant and – the medium zoom. Legend: – No entry, – No entry into a one way road, – No stopping, – No parking, – Give way, – Be careful in winter, – Roundabout ahead, – End of all prohibitions

Chapter 7: 2D and 3D Orthogonal Moments

Figure 7.1 The graphs of the Legendre polynomials up to the sixth degree

Figure 7.2 The graphs of the standard powers up to the sixth degree

Figure 7.3 The graphs of 2D kernel functions of the Legendre moments (2D Legendre polynomials) up to the fourth degree. Black color corresponds to , white color to 1

Figure 7.4 The graphs of the Chebyshev polynomials of the first kind up to the sixth degree

Figure 7.5 The graphs of the Chebyshev polynomials of the second kind up to the sixth degree

Figure 7.6 The graphs of 2D kernel functions of the Chebyshev moments of the first kind up to the fourth order. Black color corresponds to , white color to 1

Figure 7.7 The graphs of the Hermite polynomials up to the sixth degree

Figure 7.8 The graphs of the Gaussian-Hermite polynomials with

Figure 7.9 The graphs of the Gegenbauer polynomials for up to the sixth degree

Figure 7.10 The graphs of the weighted Laguerre polynomials up to the sixth degree

Figure 7.11 The graphs of the Krawtchouk polynomials up to the sixth degree

Figure 7.12 The graphs of the weighted Krawtchouk polynomials for up to the second degree

Figure 7.13 The graphs of the weighted Krawtchouk polynomials for up to the second degree

Figure 7.14 The values of the selected invariants computed from the rotated pexeso card

Figure 7.15 The graphs of the Zernike radial functions up to the sixth degree. The graphs of the polynomials of the same degree but different repetition are drawn by the same type of the line

Figure 7.16 The graphs of the Zernike polynomials up to the fourth degree. Black , white . Real parts: 1st row: , , 3rd row: , , 5th row: , , 7th row: , , 9th row: , . Imaginary parts: 2nd, 4th, 6th, 8th and 10th row, respectively. The indices are the same as above

Figure 7.17 The graphs of the radial functions of the orthogonal Fourier-Mellin moments up to the sixth degree

Figure 7.18 The graphs of 2D kernel functions of the orthogonal Fourier-Mellin moments up to the fourth order. Black , white . Real parts: 1st row: , , 3rd row: , , 5th row: , , 7th row: , , 9th row: , . Imaginary parts: 2nd, 4th, 6th, 8th and 10th rows. The indices are the same as above

Figure 7.19 The playing cards: (a) Mole cricket, (b) Cricket, (c) Bumblebee, (d) Heteropter, (e) Poke the Bug, (f) Ferdy the Ant 1, (g) Ferdy the Ant 2, (h) Snail, (i) Ant-lion 1, (j) Ant-lion 2, (k) Butterfly, and (l) Ladybird

Figure 7.20 The feature space of two Zernike normalized moments and . – Ferdy the Ant 1, – Ferdy the Ant 2, – Ladybird, ◊– Poke the Bug, – Ant-lion 1, – Ant-lion 2, ×– Mole cricket, +– Snail, – Butterfly, – Cricket, – Bumblebee, – Heteropter

Figure 7.21 The error rate in dependence on the parameter of the Gaussian-Hermite moments. The moments used were up to: – 3rd order, – 6th order

Figure 7.22 The feature space of two Zernike normalized moments of color images and . – Ferdy the Ant 1, – Ferdy the Ant 2, – Ladybird, ◊– Poke the Bug, – Ant-lion 1, – Ant-lion 2, ×– Mole cricket, +– Snail, – Butterfly, – Cricket, – Bumblebee, – Heteropter

Figure 7.23 The collapse of the image reconstruction from geometric moments by direct calculation. Top row: original images and ; bottom row: the images reconstructed from the moments

Figure 7.24 Image reconstruction from geometric moments in the Fourier domain. The original image and the reconstructed images with maximum moment orders 21, 32, 43, 54, 65, 76, and 87, respectively

Figure 7.25 An example of the polar raster

Figure 7.26 Image reconstruction from the orthogonal moments: (a) Legendre moments, (b) Chebyshev moments of the first kind, (c) Gaussian-Hermite moments, and (d) Zernike moments

Figure 7.27 Image reconstruction from the incomplete set of discrete Chebyshev moments. The maximum moment order is , respectively. The last image (bottom-right) is a precise reconstruction of the original image

Figure 7.28 Image reconstruction from the orthogonal moments: (a) Legendre moments, (b) Chebyshev moments of the first kind, (c) Gaussian-Hermite moments, and (d) Zernike moments

Figure 7.30 Detail of the reconstruction from the Gaussian-Hermite moments: (a) reconstruction and (b) original

Figure 7.29 Image reconstruction from the discrete Chebyshev moments. There are no errors; it is identical with the original Lena image

Figure 7.31 The test image for the reconstruction experiment with discrete Chebyshev moments (Astronomical Clock, Prague, Czech Republic)

Figure 7.32 The reconstruction experiment: (a) the reconstructed cropped image and (b) the error map. The range of errors from to 9 is visualized in the range black – white. The relative mean square error is

Figure 7.33 The close-up of the error map with typical oscillations

Figure 7.34 The reconstruction experiment: (a) the reconstructed cropped image and (b) the error map. The range of errors from to 198 is visualized in the range black – white. The relative mean square error is

Figure 7.35 The close-up of the error map

Figure 7.36 The values of five selected invariants of the teddy bear

Figure 7.37 Archeological findings used for the experiment

Figure 7.38 Volumetric forms of the archeological findings

Figure 7.39 The rotated and noisy version of the object from Figure 7.38c. The noise of SNR = 12 dB was added to the circumscribed sphere

Figure 7.40 The success rates of the recognition of the archeological artifacts by Gaussian-Hermite moments and Zernike moments

Figure 7.41 Two different airplanes from the PSB. (a) Object No. 1245, (b) Object No. 1249

Figure 7.42 The noisy (10 dB) versions of the airplanes. (a) Object No. 1245, (b) Object No. 1249

Figure 7.43 The success rate of the noisy airplane recognition

Figure 7.44 The helicopter: (a) the original converted to volumetric data, (b–d) the reconstruction from the weighted Hermite moments up to the order: (b) 8, (c) 46 and (d) 84

Figure 7.45 The reconstruction of the helicopter from the seventy-first-order geometric moments

Chapter 8: Algorithms for Moment Computation

Figure 8.1 Sampling function with the steps and

Figure 8.2 The concept of a digital image (a) as a sum of Dirac δ-functions, (b) as a nearest neighbor interpolation of the samples, and (c) as a bilinear interpolation of the samples

Figure 8.3 The spider (PSB No. 19): (a) the original triangulated version, (b) a conversion to the volumetric representation without any preprocessing, and (c) the same after the holes have been detected and filled

Figure 8.4 The delta method. In the basic version the object is decomposed into rows (a). The generalized version unifies the adjacent rows of the same length into a rectangle (b)

Figure 8.5 Generalized delta method: (a) the headless figure (black = 1), (b) the generalized delta method applied row-wise (395 blocks), (c) the generalized delta method applied column-wise (330 blocks). The basic delta method generated 1507 blocks

Figure 8.6 Quadtree decomposition of the image

Figure 8.7 Quadtree decomposition. (a) the headless figure, 3843 square blocks and (b) the moth, 5328 square blocks

Figure 8.8 Partial object decomposition after two outer loops of the morphological method

Figure 8.9 Distance transformation decomposition: (a) the headless figure, 421 blocks and (b) the moth, 1682 blocks

Figure 8.10 The first level of the GBD method. (a) The input object. (b) All possible chords connecting the cogrid concave vertices. The crosses indicate the chord intersections. (c) The corresponding bipartite graph with a maximum independent set of three vertices. Other choices are also possible, such as or . (d) The first-level object decomposition

Figure 8.11 The second level of the GBD method. (a) The first-level decomposition (solid line) and a possible second-level decomposition (dashed line). From each concave vertex a single chord of arbitrary direction is constructed. (b) If on the first level the chords , , and were chosen, then both GBD and GDM would yield the same decomposition

Figure 8.12 Graph-based decomposition: (a) the headless figure, 302 blocks and (b) the moth, 1092 blocks. In the case of the moth, the result is very similar to the GDM

Figure 8.13 The time complexity of the moment computation of the headless figure image. Legend: - definition, - generalized delta method, × - quadtree, - graph-based decomposition, - distance transformation

Figure 8.14 The time complexity of the moment computation of the moth image. Legend: - definition, - generalized delta method, × - quadtree, - graph-based decomposition, - distance transformation

Figure 8.15 The time complexity of the moment computation of the chessboard image. Legend: - definition, - generalized delta method, × - quadtree, - graph-based decomposition, - distance transformation

Figure 8.16 Examples of 3D binary objects in a volumetric representation: (a) Teddy bear – 85068 voxels, (b) Glasses – 3753 voxels

Figure 8.17 Various methods of 3D decomposition: (a) glasses – octree – 6571 blocks, (b) airplane – octree – 4813 blocks, (c) glasses – 3GDM – 776 blocks, (d) airplane – 3GDM – 586 blocks, (e) glasses – suboptimal algorithm – 732 blocks, (f) airplane – suboptimal algorithm – 573 blocks

Figure 8.18 The random cube: (a) original (518 voxels), (b) octree decomposition into 504 blocks, (c) 3GDM decomposition into 227 blocks, (d) suboptimal decomposition into 208 blocks

Figure 8.19 An image containing ten graylevels only

Figure 8.20 Nine intensity slices of the image from Figure 8.19 (the zeroth slice has been omitted)

Figure 8.21 Bit slicing of the Klínovec image: from (a) to (h) the bit planes from 7 to 0

Figure 8.22 (a) The original graylevel photograph of the size (lookout and telecommunication tower at Klínovec, Ore Mountains, Czech Republic), (b) the image after the bit planes 0 and 1 have been removed. To human eyes, the images look the same

Figure 8.23 The computational flow of the Prata method. The initialization is and

Figure 8.24 The computational flow of the Kintner method. The first two values in each sequence, i.e., , and , must be computed directly from equation (8.39)

Figure 8.25 The computational flow of the Chong method. The first two values in each sequence, i.e., , must be computed directly from equation (8.42)

Figure 8.26 The hole filled up with artificial triangles. (a) The asymmetric method, (b) using the artificial centroid, (c) the iterative “cutting off” method. The rest of the object is not displayed

Chapter 9: Applications

Figure 9.1 Face recognition: Detection step - the person and their face have to be localized in the scene (a). The output is typically a face segmented from the background (b) and bounded by a box, a circle, or an ellipse

Figure 9.2 Face recognition: Farokhi et al. [16] proposed using Hermite kernel filters as the local features for face recognition in the near-infrared domain. (a) Original image, (b)–(d) the output of three directional Hermite kernels

Figure 9.3 Image registration: Satellite example. (a) Landsat image – synthesis of its three principal components, (b) SPOT image – synthesis of its three spectral bands

Figure 9.4 Image registration: Satellite example. (a) Landsat, (b) SPOT image – segmented regions. The regions with a counterpart in the other image are numbered; the matched ones have their numbers circled

Figure 9.5 Image registration: Satellite example. The superimposed Landsat and the registered SPOT images

Figure 9.7 Image registration: Image fusion example. Examples of several low-quality images of the same scene – they are blurred, noisy, and of low resolution

Figure 9.6 Image registration: Image fusion example. Image fusion flowchart

Figure 9.8 Image registration: Image fusion example. Low-quality images of the same scene with detected distinctive points

Figure 9.9 Image registration: Image fusion example. Low-quality images after the registration

Figure 9.10 Image registration: Image fusion example. (a) The output of the image fusion. The resulting image has higher resolution, and the blur and noise have been removed or diminished. (b) The scaled version of an input image for comparison. The image does not show comparable quality in terms of edge sharpness, noise level, and detail visibility

Figure 9.11 Robot navigation: The examples of navigation marks with detected boundaries. The complex geometric deformations introduced by the fish-eye lens camera are apparent

Figure 9.12 Focus measure: Focus measurement by moments. Four sample frames from the Saturn observation sequence, ordered automatically from the sharpest to the most blurred one. The result matches the visual assessment

Figure 9.13 Image retrieval: (a) the mountain (Malá Fatra, Slovakia), (b) the city (Rotterdam, Netherlands). The images are similar in terms of simple characteristics such as color distribution, but they have very different content

Figure 9.14 Image retrieval: Suk and Novotný [233] proposed a CBIR system for the recognition of woody species in Central Europe, based on Chebyshev moments and Fourier descriptors

Figure 9.15 Watermarking: The watermarked image. The watermark “COPYRIGHT” is apparent. This is an example of a visible watermark

Figure 9.16 Watermarking: An approach based on the geometric moments. (a) the original host image to be watermarked by the method [251], (b) the corresponding watermarked image. No visible pattern is inserted into the image; only intensity variations are apparent. These changes are a known disadvantage of the method [251]

Figure 9.17 Medical imaging: Landmark recognition in the scoliosis study. An example of the human body with attached landmarks and moiré contour graphs. The aim was to detect the landmarks and find their centers

Figure 9.18 Forensic application: Detection of near-duplicated image regions. An example of the tampered image with near-duplicated regions – (a) the forged image, (b) the original image

Figure 9.19 Forensic application: Detection of near-duplicated image regions. A two-dimensional feature space. Black dots represent overlapping blocks, which have to be analyzed (left image). The method finds all blocks similar to each block and analyzes their neighborhood (right image). In other words, all blocks inside a circle centered on the analyzed block are found; the radius is determined by the similarity threshold

Figure 9.20 Forensic application: Detection of near-duplicated image regions. The estimated map of duplicated regions

Figure 9.21 Optical flow: Noise resistant estimation. The result of (a) Zernike moments; (b) the standard method [351]. Stabilization of optical flow computation in the case of noisy magnetic resonance images of the heart

Figure 9.22 Solar flare: An example of a solar flare (the arrow-marked bright object). Dark areas correspond to sunspots

Figure 9.23 Solar flare: The time curve of a solar flare – (a) skewness, (b) first principal component. Data from the Ondřejov observatory: the scanning began on 18 December 2003 at 12:25 (time 0 in the graphs)

List of Tables

Chapter 3: 2D Moment Invariants to Translation, Rotation, and Scaling

Table 3.1 The values of the Hu invariants and the basic invariants (3.32) of “The smiles” in Figure 3.5. The only invariant discriminating them is . (The values shown here were calculated after bringing Figure 3.5 into normalized central position and nonlinearly scaled to the range from −10 to 10 for display.)

Chapter 4: 3D Moment Invariants to Translation, Rotation, and Scaling

Table 4.1 The numbers of the 3D irreducible and independent rotation invariants of weight

Table 4.2 The numbers of the 3D irreducible and independent complex moment rotation invariants

Table 4.3 The actual values of the invariants that should vanish due to the circular symmetry of the vases

Table 4.4 The actual values of the invariants that should vanish due to the twofold rotation symmetry of the vases

Table 4.5 The success rates of recognition of the generic classes

Table 4.6 The mean values of the invariants in the experiment with the artificial objects. In the first row, there is the corresponding symmetry group; “c” denotes the cube and “o” the octahedron

Table 4.7 The standard deviations of the invariants in the experiment with the artificial objects. In the first row, there is the corresponding symmetry group; “c” means cube and “o” means octahedron. The second row gives the values for the ideal triangulated bodies, and the third row the values for the sampled volumetric data

Chapter 5: Affine Moment Invariants in 2D and 3D

Table 5.1 The numbers of the irreducible and independent affine invariants

Table 5.2 The independent graph-generated AMIs up to the sixth order, selected thanks to their correspondence with Hickman's set

Table 5.3 All possible terms of the invariant

Table 5.4 The matrix of the system of linear equations for the coefficients. The empty elements are zero. The solution is in the last two rows

Table 5.5 The values of the affine moment invariants of the comb; is the approximate viewing angle

Table 5.6 The recognition rate (in %) of the AMIs

Table 5.7 The recognition rate (in %) of the limited number of the AMIs

Table 5.8 The numbers of errors in recognition of 200 tile snaps

Table 5.9 The contingency table of the AMIs up to the eighth order

Table 5.10 Classification confidence coefficients of four letters (each having six different instances of distortion) using the invariant distance (top) and the spline-based elastic matching (bottom). MC means a misclassified letter

Chapter 6: Invariants to Image Blurring

Table 6.1 Template matching in astronomical images

Chapter 8: Algorithms for Moment Computation

Table 8.1 Comparison of the numbers of blocks and of the decomposition time

Table 8.2 The numbers of blocks in the individual bit planes, computation time, and the relative mean square error of the moments due to omitting less significant bit planes

2D and 3D Image Analysis by Moments

 

Jan Flusser, Tomáš Suk, Barbara Zitová

Institute of Information Theory and Automation, Czech Academy of Sciences, Prague, Czech Republic

 

 

 

 

This edition first published 2017
© 2017 John Wiley & Sons, Ltd

Registered office

John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom

For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.

The right of the author to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. It is sold on the understanding that the publisher is not engaged in rendering professional services and neither the publisher nor the author shall be liable for damages arising herefrom. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book's use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® software.

Library of Congress Cataloging-in-Publication data