Price: 223,99 €
The second edition of this established reference work has been updated to reflect the rapid developments in the field and now covers both 2D and 3D imaging.
Written by expert practitioners from leading companies operating in machine vision, this one-stop handbook guides readers through all aspects of image acquisition and image processing, including optics, electronics and software. The authors approach the subject in terms of industrial applications, elucidating such topics as illumination and camera calibration. Initial chapters concentrate on the latest hardware aspects, ranging from lenses and camera systems to camera-computer interfaces, while the necessary software is discussed in equal depth in later sections. These include digital image basics as well as image analysis and image processing. The book concludes with extended coverage of industrial applications in optics and electronics, backed by case studies and design strategies for the conception of complete machine vision systems. As a result, readers are not only able to understand the latest systems, but also to plan and evaluate this technology.
More than 500 images and tables illustrate the relevant principles and steps.
Page count: 1511
Year of publication: 2017
Cover
Title Page
Copyright
Preface Second Edition
Preface First Edition
Why a Further Book on Machine Vision?
List of Contributors
Chapter 1: Processing of Information in the Human Visual System
1.1 Preface
1.2 Design and Structure of the Eye
1.3 Optical Aberrations and Consequences for Visual Performance
1.4 Chromatic Aberration
1.5 Neural Adaptation to Monochromatic Aberrations
1.6 Optimizing Retinal Processing with Limited Cell Numbers, Space, and Energy
1.7 Adaptation to Different Light Levels
1.8 Rod and Cone Responses
1.9 Spiking and Coding
1.10 Temporal and Spatial Performance
1.11 ON/OFF Structure, Division of the Whole Illuminance Amplitude
1.12 Consequences of the Rod and Cone Diversity on Retinal Wiring
1.13 Motion Sensitivity in the Retina
1.14 Visual Information Processing in Higher Centers
1.15 Effects of Attention
1.16 Color Vision, Color Constancy, and Color Contrast
1.17 Depth Perception
1.18 Adaptation in the Visual System to Color, Spatial, and Temporal Contrast
1.19 Conclusions
Acknowledgements
References
Chapter 2: Introduction to Building a Machine Vision Inspection
2.1 Preface
2.2 Specifying a Machine Vision System
2.3 Designing a Machine Vision System
2.4 Costs
2.5 Words on Project Realization
2.6 Examples
Chapter 3: Lighting in Machine Vision
3.1 Introduction
3.2 Demands on Machine Vision Lighting
3.3 Light used in Machine Vision
3.4 Interaction of Test Object and Light
3.5 Basic Rules and Laws of Light Distribution
3.6 Light Filters
3.7 Lighting Techniques and Their Use
3.8 Lighting Control
3.9 Lighting Perspectives for the Future
References
Chapter 4: Optical Systems in Machine Vision
4.1 A Look at the Foundations of Geometrical Optics
4.2 Gaussian Optics
4.4 Information Theoretical Treatment of Image Transfer and Storage
4.5 Criteria for Image Quality
4.6 Practical Aspects: How to Specify Optics According to the Application Requirements?
References
Chapter 5: Camera Calibration
5.1 Introduction
5.2 Terminology
5.3 Physical Effects
5.4 Mathematical Calibration Model
5.5 Calibration and Orientation Techniques
5.6 Verification of Calibration Results
5.7 Applications
References
Chapter 6: Camera Systems in Machine Vision
6.1 Camera Technology
6.2 Sensor Technologies
6.3 Block Diagrams and Their Description
6.4 mvBlueCOUGAR-X Line of Cameras
6.5 Configuration of a GigE Vision Camera
6.6 Qualifying Cameras and Noise Measurement (Dr. Gert Ferrano MV)
6.7 Camera Noise (by Henning Haider AVT, Updated by Author)
6.8 Useful Links and Literature
6.9 Digital Interfaces
Chapter 7: Smart Camera and Vision Systems Design
7.1 Introduction to Vision System Design
7.2 Definitions
7.3 Smart Cameras
7.4 Vision Sensors
7.5 Embedded Vision Systems
7.6 Conclusion
References
Further Reading
Chapter 8: Camera Computer Interfaces
8.1 Overview
8.2 Camera Buses
8.3 Choosing a Camera Bus
8.4 Computer Buses
8.5 Choosing a Computer Bus
8.6 Driver Software
8.7 Features of a Machine Vision System
8.8 Summary
References
Chapter 9: Machine Vision Algorithms
9.1 Fundamental Data Structures
9.2 Image Enhancement
9.3 Geometric Transformations
9.4 Image Segmentation
9.5 Feature Extraction
9.6 Morphology
9.7 Edge Extraction
9.8 Segmentation and Fitting of Geometric Primitives
9.9 Camera Calibration
9.10 Stereo Reconstruction
9.11 Template Matching
9.12 Optical Character Recognition
References
Chapter 10: Machine Vision in Manufacturing
10.1 Introduction
10.2 Application Categories
10.3 System Categories
10.4 Integration and Interfaces
10.5 Mechanical Interfaces
10.6 Electrical Interfaces
10.7 Information Interfaces
10.8 Temporal Interfaces
10.9 Human–Machine Interfaces
10.10 3D Systems
10.11 Industrial Case Studies
10.12 Constraints and Conditions
References
Appendix
Index
End User License Agreement
List of Illustrations
Preface First Edition
Figure 1 Information processing chain.
Chapter 1
Figure 1.1 Dimensions and schematic optics of the left human eye, seen from above. The anterior corneal surface is traditionally set to coordinate zero. All positions are given in millimeters, relative to the anterior corneal surface (drawing not to scale). The refracting surfaces are approximated by spheres so that their radii of curvature can be defined. The cardinal points of the optical system, shown on the top, are valid only for rays close to the optical axis (Gaussian approximation). The focal length of the eye in the vitreous (the posterior focal length) is 24.0 mm minus the position of the posterior principal plane H′, that is, 22.65 mm. The nodal points K and K′ permit us to calculate the retinal image magnification. In a first approximation, the posterior nodal distance (PND, the distance from K′ to the focal point at the retina) determines the linear distance on the retina for a given visual angle. In the human eye, this distance is about 24.0 mm − 7.3 mm = 16.7 mm. One degree in the visual field maps on the retina to 16.7 mm × tan(1°) ≈ 290 µm. Given that the foveal photoreceptors are about 2 µm thick, roughly 140 receptors sample one degree of the visual field, which leads to a maximum resolution of about 70 cycles per degree. The schematic eye by Gullstrand represents an average eye. The variability in natural eyes is so large that it does not make sense to provide average dimensions with several digits. Refractive indices, however, are surprisingly similar among different eyes. The index of the lens (here a homogeneous model, n = 1.41) is not real but calculated to produce a lens power that makes the eye emmetropic. In a real eye, the lens has a gradient index (see text).
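The sampling estimate in this caption (about 290 µm per degree and roughly 70 cycles per degree) can be rechecked with a few lines of arithmetic; the following is a minimal sketch in plain Python using only the values quoted above, not code from the book.

```python
import math

pnd_mm = 24.0 - 7.3                      # posterior nodal distance K' -> retina (mm)
scale_um_per_deg = pnd_mm * math.tan(math.radians(1.0)) * 1000.0

cone_diameter_um = 2.0                   # foveal photoreceptor thickness quoted in the caption
receptors_per_deg = scale_um_per_deg / cone_diameter_um
nyquist_cpd = receptors_per_deg / 2.0    # two samples are needed per cycle

print(f"{scale_um_per_deg:.0f} um/deg, {receptors_per_deg:.0f} receptors/deg, "
      f"{nyquist_cpd:.0f} cycles/deg")   # roughly 290 um/deg, ~145 receptors/deg, ~70 cycles/deg
```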
Figure 1.2 Binocular geometry of human eyes, seen from above. Since the fovea is temporally displaced with regard to the optical axis by the angle α, the optical axes of the eyes do not reflect the direction of fixation. α is highly variable among eyes, ranging from 0° to 11°. In the illustrated case, the fixation target is at a distance for which the optical axes happen to be parallel and straight. The distance of the fixation target for which this is true can be easily calculated: for an angle α of 4° and a pupil distance of 64 mm, this condition is met if the fixation target is at 32 mm / tan(4°), or 457.6 mm. The optic nerve head (also called the optic disk, or blind spot, the position at which the axons of the retinal ganglion cells leave the eye) is nasally displaced relative to the optical axis. The respective angle is in a similar range as α. Under natural viewing conditions, the fixation angles must be extremely precise, since double vision will be experienced if the fixation lines do not exactly cross on the fixation target; the tolerance is only a few minutes of arc.
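The 457.6 mm figure in this caption follows directly from the stated angle and pupil distance; a minimal check in plain Python (values taken from the caption):

```python
import math

half_pupil_distance_mm = 64.0 / 2.0        # half of the 64 mm interpupillary distance
alpha_deg = 4.0                            # angle between optical axis and fixation line

fixation_distance_mm = half_pupil_distance_mm / math.tan(math.radians(alpha_deg))
print(f"{fixation_distance_mm:.1f} mm")    # -> 457.6 mm
```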
Figure 1.3 Spatial information in an image can be reconstructed as a linear superposition of sine wave components (spatial frequencies) with different amplitudes and phases (Fourier components). Low spatial frequencies (SFs) are generally available with high contrast in the natural visual environment, whereas the contrast declines for higher SFs, generally proportional to 1/SF (input). Because of optical imperfections and diffraction, the image on the retina does not retain the input contrast at high SFs. The decline of modulation transfer, the ratio of output to input contrast, is described by the modulation transfer function (MTF, thick white line). At around 60 cycles per degree, the optical modulation transfer of the human eye reaches zero, with small pupil sizes due to diffraction and with larger pupils due to optical imperfections. These factors limit our contrast sensitivity at high spatial frequencies, even though the retina extracts surprisingly much information from the low-contrast images at high SFs by building small receptive fields for foveal ganglion cells with antagonistic ON/OFF center/surround organization.
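Modulation transfer, as used in this caption, is simply the ratio of image-side to object-side (Michelson) contrast at a given spatial frequency; the sketch below only illustrates the definition, with example intensity values chosen by me rather than taken from the book.

```python
def michelson_contrast(i_max, i_min):
    """Michelson contrast of a sinusoidal grating from its extreme intensities."""
    return (i_max - i_min) / (i_max + i_min)

contrast_in = michelson_contrast(1.0, 0.0)    # object-side contrast: 1.0
contrast_out = michelson_contrast(0.8, 0.2)   # image-side contrast:  0.6
mtf_value = contrast_out / contrast_in        # modulation transfer at this frequency
print(mtf_value)                              # -> 0.6
```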
Figure 1.4 Spurious resolution. The modulation transfer function (Figure 1.3) shows oscillations beyond the cutoff spatial frequency, which show up in defocused gratings as contrast reversals. On the left, a circular grating shows the contrast reversals at the higher spatial frequencies in the center (top: in focus, below: defocused). On the right, the grating shown in Figure 1.3 was defocused. Note the lack of contrast at the first transition to zero contrast, and the repeated subsequent contrast reversals. Note also that defocusing has little effect on low spatial frequencies.
Figure 1.5 Regional specializations of the retina. The fovea is free from rods, and L and M cones are packed as tightly as possible, reaching a density of about 200,000 per mm² (histology on top replotted after [11]). In the fovea, the retinal layers are pushed to the side to reduce scattering of the light on its way to the photoreceptors, resulting in the foveal pit. Rods reach a peak density of about 130,000 per mm² some distance away from the fovea. Accordingly, a faint star can be seen only if it is not fixated. As a result of the drop in cone densities and the increasing convergence of cone signals, visual acuity drops even faster: at 10° eccentricity, visual acuity is only about 20% of the foveal peak. Angular positions relative to the fovea vary between individuals and are therefore approximate. (Adapted from Curcio et al. 1990 [11].)
Figure 1.6 Aliasing (undersampling) and Moiré patterns. If a grating is imaged on the photoreceptor array, and the sampling interval of the receptors is larger than half the spatial wavelength of the grating, Moiré patterns appear. The photoreceptor lattice (left) is from a histological section of a monkey's retina. If laser interferometry is used to image fine gratings with spatial frequencies beyond the resolution limit on the fovea of human subjects, the subjects see Moiré patterns, which are drawn on the right. (Adapted from Williams 1985 [12]. Reproduced with permission of Elsevier.)
Figure 1.7 Chromatic aberration and some of its effects on vision. Because of the increase in the refractive indices of the ocular media with decreasing wavelength, the eyes become more myopic in the blue. (a) The chromatic aberration function shows that the chromatic defocus between L and M cones is quite small (about a quarter of a diopter) but close to 1 D for the S cone. (b) Because of transverse chromatic aberration, rays of different wavelengths that enter the pupil normally do not reach the retina at the same position. If a red line and a green line are imaged on the retina through selected parts of the pupil, and the subject can align them via a joystick, the achromatic axis can be determined. Light of different wavelengths entering the eye along the achromatic axis is imaged at the same retinal position (although with a wavelength-dependent focus). (c) Because of longitudinal chromatic aberration, light of different wavelengths is focused in different planes. Accordingly, myopic subjects (with too long eyes) see best in the red and hyperopic subjects (with too short eyes) best in the blue.
Figure 1.8 Principle of the phototransduction cascade and the role of calcium in light/dark adaptation in a rod photoreceptor. The pigment molecule embedded in the outer segment disk membrane of the photoreceptor, consisting of a protein (opsin) and retinal (an aldehyde), absorbs a photon and converts it into an activated state which can stimulate a G protein (transducin in rods). Transducin, in turn, activates an enzyme, cGMP phosphodiesterase, which catalyzes the breakdown of cGMP to 5′-GMP. cGMP has a key role in phototransduction. To open the cGMP-gated cation channels, three cGMP molecules have to bind to the channel protein. Therefore, if cGMP is removed by cGMP phosphodiesterase, the channels cannot be kept open. The cation influx, which had depolarized the cell against its normal resting potential, stops, and the membrane potential moves toward its resting potential; this means hyperpolarization. When the channels are closed during illumination, the intracellular calcium levels decline. This removes the inhibitory effects of calcium on (i) the cGMP binding to the channel, (ii) the resynthesis pathway of cGMP, and (iii) the resynthesis of rhodopsin. All three steps reduce the gain of the phototransduction cascade (light adaptation). It should be noted that complete dark or light adaptation is slow: it takes up to 1 h.
Figure 1.9 Photon responses of rods and cones. From complete darkness to a moon-lit night, rods respond to single photons: their signals are binary (either yes or no). Because not every rod can catch a photon (here illustrated as small white ellipses), and because photons arrive randomly as predicted by a Poisson distribution, the image appears noisy and has low spatial resolution and little contrast. Even if it is 1000 times brighter (a bright moon-lit night), rods do not catch several photons during their integration time of 100–200 ms, and they cannot summate responses. Up to about 100 photons per integration time they show linear summation, but their response curve is still corrupted by single photon events. Beyond 100 photons per integration time, rods show light adaptation (see Figure 1.8). At 1000 photons per integration time they are saturated and silent. Cones take over, and they work best above 1000 photons per integration time. Because cones gather their signal from so many photons, photon noise is not important, and their response to brightness changes is smooth and gradual. If the number of photons rises further, the sensitivity of the cone phototransduction cascade is reduced by light adaptation, and their response curve is shifted to higher light levels. Similar to rods, they can respond over a range of about 4 log units of ambient illuminance change.
Figure 1.10 Rod and cone pathways and ON/OFF channels. To make the system more sensitive to differences rather than to absolute brightness, the image on the retina is analyzed by an ON/OFF system, that is, by cells that respond preferentially to changes in brightness in either direction. The division into these two major channels occurs already at the first synapse, the photoreceptor endfoot. Because the photoreceptors can only hyperpolarize in response to illumination, the subsequent cells must be either depolarized (excited) or hyperpolarized (inhibited). This means that the signal must either be inverted (ON channel) or conserved (OFF channel). It is shown how the signals change their signs along the processing pathway. Since rods and cones respond to different illuminance ranges, it would be a waste of space and energy to give both of them separate lines. In fact, the rods have only the first cell, the rod ON bipolar cell, which then jumps onto the cone pathways via the AII amacrine cell; these pathways are not used by the cones at low light. Rods do not have their own OFF bipolar cell. Cones, with the need to code small differences in brightness with high resolution and with large information content, have two separate lines (ON and OFF) to increase information capacity.
Figure 1.11 Feed-forward projections from the eyes to the brain and topographic mapping. In each eye, the visual field on the left and right of the fovea (the cut goes right through the fovea!) projects onto different cortical hemispheres: the ipsilateral retina projects onto the ipsilateral visual cortex, and the contralateral retina crosses to the contralateral cortex (hemi-field crossing in the optic chiasma). The first synapse of the retinal ganglion cells is in the lateral geniculate nucleus (LGN), but information from the left (L) and right (R) eye remains strictly separated. The LGN consists of six layers; layers 1 and 2 are primarily occupied by the magnocellular pathway, and layers 3–6 by the parvocellular pathway. Information from both eyes first comes together in the visual cortex, area 17, layers 2 and 3, and a strict topographic projection is preserved (follow the color-coded maps of the visual field areas in (b)). The wiring in A17 (c) has been extensively studied, in particular by the Nobel Prize winners Hubel and Wiesel (1981). The input from the LGN ends in layer 4C alpha (magnocellular), 4C beta (parvocellular), and layers 1–3 (koniocellular). These cells project further into the blobs, cytochrome-oxidase-rich peg-shaped regions (pink spots in (c)). The innervation has a remarkable repetitive pattern; parallel to the cortical surface, the preferred orientation for bars presented in the receptive field of the cells shifts continuously in angle (illustrated by color-coded orientation angles on top of the tissue segment shown in (c)). Furthermore, the regions where the contra- or ipsilateral eye provides input to layer 4 interchange in a striking pattern. A17 is the cortical input layer with mostly simple cells, that is, cells that respond to bars and edges with defined directions of movement. At higher centers (a), two streams can be identified on the basis of single-cell recordings and functional imaging with new imaging techniques (functional magnetic resonance imaging, fMRI): a dorsal stream concerned with motion and depth (the where? stream) and a ventral stream concerned with object features such as shape, color, and structure (the what? stream). Feedback projections are not shown, and only the major projections are shown.
Chapter 2: Introduction to Building a Machine Vision Inspection
Figure 2.1 Requirements on the processing time.
Figure 2.2 Directions for a line scan camera.
Figure 2.3 Field of view.
Figure 2.4 Model of a thin lens.
Figure 2.5 Areas illuminated by the lens and camera; the left side displays an appropriate choice.
Figure 2.6 Bearing with rivet and disk.
Figure 2.7 Bearing, rivet, and disk in lateral view.
Figure 2.8 Setup of part, illumination, and camera.
Figure 2.9 Rivet and disk as imaged by the system.
Figure 2.10 Feature localization by thresholding and blob analysis.
Figure 2.11 Circular edge detection and subsequent circle fitting.
Figure 2.12 User interface for the rivet inspection.
Figure 2.13 Required field of view when using six cameras.
Figure 2.14 Positioning of the camera.
Figure 2.15 Lateral view of one set of camera and light.
Figure 2.16 Frames, as captured by one camera.
Figure 2.17 Generating trigger signals using a rotary encoder.
Figure 2.18 Tube, as imaged by the system.
Figure 2.19 (a) Defect as imaged. (b) Defect as thresholded by the system.
Figure 2.20 Merging defects, which are partly visible in two frames.
Chapter 3: Lighting in Machine Vision
Figure 3.1 Different lighting techniques applied on a glass plate with a chamfer: (a) diffuse incident bright field lighting, (b) telecentric incident bright field lighting, (c) directed incident dark field lighting, (d) diffuse transmitted bright field lighting, (e) telecentric transmitted bright field lighting, (f) directed transmitted dark field lighting.
Figure 3.2 Glass plate with a chamfer in the familiar view of the human eye.
Figure 3.3 Different parts under different lighting conditions. (a) Metal bolt with diffuse backlight, (b) metal bolt with telecentric backlight, (c) blue potentiometer under blue light, (d) blue potentiometer under yellow light, (e) cap with diffuse lighting, (f) cap with directed lighting.
Figure 3.4 Basic structure of a Machine Vision solution and main parts of a vision system.
Figure 3.5 Some interactions of the components of a Machine Vision system (selection, not complete).
Figure 3.6 Examples of the variety of Machine Vision lighting. (a) Illumination for the calibration process of medical thermometers. Demands: robustness, brightness, homogeneity, and protection against vibrations and splashes of water. (b) Illumination of mechanical parts running on a dirty and dark conveyor belt. Demands: robustness, tough mounting points, brightness, protection against dust, protection against voltage peaks. (c) Precise illumination for measurement of milled parts. Demands: obvious mounting areas for adjustment, brightness, homogeneity. (d) Lighting stack of a freely adaptable combination of red and IR light in an automat for circuit board inspection. Demands: brightness control of different parts of the illumination using standard interfaces, brightness, shock and vibration protection. (e) Lighting plates for inspection in the food industry. Demands: homogeneity, wide-range voltage input, defined temperature management. (f) Telecentric lighting and optics components for highly precise measurements of optically unfavorable (shiny) parts. Demands: stable assembly with option to adjust, homogeneity, stabilization, possibility to flash.
Figure 3.7 Elements to adjust lighting characteristics (brightness and flash duration) on a telecentric backlight directly on the lighting component. All electronics are included.
Figure 3.8 Robust mounting threads for secure fixing and adjustment.
Figure 3.9 Refraction: refractive indices, angles, and velocities.
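The quantities named in this caption are connected by Snell's law and by the reduced phase velocity inside the medium; a minimal sketch in plain Python, with the refractive indices and the angle of incidence chosen as examples (they are not taken from the figure):

```python
import math

c = 299_792_458.0                 # speed of light in vacuum (m/s)
n1, n2 = 1.0, 1.5                 # example: air -> glass
theta1_deg = 30.0                 # example angle of incidence

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2)
theta2_deg = math.degrees(math.asin(n1 / n2 * math.sin(math.radians(theta1_deg))))
v2 = c / n2                       # phase velocity inside the glass

print(f"refraction angle ~{theta2_deg:.1f} deg, velocity in glass ~{v2:.2e} m/s")
```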
Figure 3.10 Spectral response of the human eye, typical monochrome CCD image sensor (Sony ICX204AL), typical monochrome CMOS image sensor (camera AVT Marlin F131B). For demonstration, the spectral emission of the sun is also presented.
Figure 3.11 Spectral response of the three color channels (caused by the mosaic filter) of a typical one-chip-color CCD image sensor (camera AVT Marlin F033C).
Figure 3.12 Normalized light emission of different colored LEDs.
Figure 3.13 Wavelength composition of the emitted light as a function of the temperature of the radiator (radiance over wavelength) [3].
Figure 3.14 Spectral emission of a metal vapor lamp.
Figure 3.15 Typical spectral emission of a xenon flash lamp. The wide spectral distribution brings good color balance.
Figure 3.16 Typical spectral emission of an HF driven fluorescent ring light.
Figure 3.17 Distribution and span of hues of white LEDs, used for color classification and sorting of white LEDs [7]. The axes are chromaticity (color) coordinates.
Figure 3.18 Example for a current–luminous intensity relationship of a white LED.
Figure 3.19 Illumination of a structured surface with a laser line: (a) with focused imaging objective, (b) with unfocused imaging objective. Clearly perceptible in both cases is the salt-and-pepper pattern of the light line caused by speckles.
Figure 3.20 (a)–(f) To what extent does a lighting meet the requirements for use in Machine Vision? Assessments range from 1 = bad to 5 = very good.
Figure 3.21 Aging of different LEDs [9].
Figure 3.22 Brightness behavior of LEDs as a function of ambient temperature.
Figure 3.23 Time dependence of brightness of a typical fluorescent lamp PL-S11W/840/4P [Philips].
Figure 3.24 Quantities of the incoming light at the test object and their distribution.
Figure 3.25 Different qualities of surfaces and their influence on the distribution of the reflected light. (a) Directed reflection, (b) regular diffuse reflection, (c) irregular diffuse (mixed) reflection.
Figure 3.26 Reflectance as a function of the angle of incidence for a polished aluminum mirror surface.
Figure 3.27 Light reflection from a stamped metal part: (a) stamped with a new cutting tool and (b) stamped with a worn-out cutting tool. The light distribution changes completely due to the different shape.
Figure 3.28 Total reflection can be found only in the gray area. At smaller angles, the transparent material only refracts the light.
Figure 3.29 Different qualities of transparent materials and their influence on the distribution of the transmitted light, (a) directed transmission, (b) diffuse transmission, (c) irregular diffuse (mixed) transmission.
Figure 3.30 Image of directed and diffuse transmission: round glass block with chamfer. In the middle, the light is directly transmitted. An annulus around it shows diffuse and strongly reduced transmission caused by the rough surface of the chamfer. Lighting component: telecentric lighting.
Figure 3.31 (a) Principle of diffraction with interference of waves touching a body. The incoming wave interferes with the newly created wave from the physical edge of the test object. This results in typical diffraction patterns. (b) Gray value distribution of a real edge with notable diffraction. Image scale of 5 : 1 in combination with a telecentric backlighting of 450 nm wavelength. Pixel resolution is 1.5 µm per pixel.
Figure 3.32 Diffraction at dust spots and geometrical structures (chrome on glass) on a test chart for photolithography. The pixel resolution of 0.688 µm per pixel is at the resolution limit for visible light. Diffraction limits the detectability.
Figure 3.33 Change of light propagation in transparent test objects in telecentric backlight: (a) glass rod, (b) curved transparent plastic body. Curved transparent objects act like optical elements and refract the light. A parallel glass plate would not influence the brightness; it would appear homogeneously bright.
Figure 3.34 Colors and complementary colors. The complementary color can be read off on the opposite side of the circle.
Figure 3.35 Yellow-greenish connection block with orange terminals, (a) illuminated with red light, (b) illuminated with green light.
Figure 3.36 Different color perceptions caused by different illumination colors. The color bars are red, green, blue (from left to right), (a) red illumination, (b) green illumination, (c) blue illumination, (d) white illumination.
Figure 3.37 Circuit board with dark green solder resist, illuminated in red (a) and IR, 880 nm (b). The IR light passes the solder resist almost without loss and is reflected on the surface of the supporting material of the circuit board.
Figure 3.38 To see the invisible detail: panel opening in a vacuum cleaner bag, (a) illuminated with white light, (b) illuminated with IR light. Note how the IR light worsens the contrast of the green printing.
Figure 3.39 Unpolarized, linear, circular, and elliptical polarized light.
Figure 3.40 Principle of polarizer and analyzer: (a) top light application and (b) backlight application.
Figure 3.41 Example of polarization with transmission: the glass handle of a cup. The transparent glass part between two crossed linear polarizing filters changes the polarization direction as a result of stretched and compressed regions inside the glass. Mechanical tension becomes visible through the principle of stress optics.
Figure 3.42 Polarized incident light in combination with a polarizing filter in front of the camera. (a) The transparent plastic label inside the stainless steel housing of a pacemaker is visible because it changes the polarization differently from the steel surface. (b) For comparison, the same part without polarized light.
Figure 3.43 Larger polarization effects tend to occur especially on electrically nonconducting materials: barcode reading of lacquered batteries, (a) with polarization filter, (b) without polarization filter.
Figure 3.44 Schematic connection between the SI unit of luminous intensity and the other photometric quantities of lighting engineering. More about lighting units and basics can be found in [11, 12].
Figure 3.45 Definition of the solid angle with the unit steradian (sr). A light source that emits light into all directions (full sphere) covers 4π sr, a half-sphere 2π sr. One steradian of solid angle corresponds to a cone with a plane (apex) angle of about 65.5°.
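The 65.5° figure quoted here can be rechecked from the solid-angle formula for a cone, Ω = 2π(1 − cos θ) with θ the half-angle; a short check in plain Python:

```python
import math

omega = 1.0                                          # one steradian
theta_half = math.acos(1.0 - omega / (2.0 * math.pi))
apex_angle_deg = math.degrees(2.0 * theta_half)      # full plane (apex) angle of the cone

print(f"full sphere: {4.0 * math.pi:.2f} sr, 1 sr cone apex angle: {apex_angle_deg:.1f} deg")
# -> 12.57 sr and about 65.5 deg
```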
Figure 3.46 The illuminance at the object depends on the distance of the lighting, following the photometric inverse square law.
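The photometric inverse square law mentioned in this caption states that the illuminance produced by a small source drops with the square of the distance, E = I / d²; a minimal sketch in plain Python with an arbitrary example value for the luminous intensity:

```python
def illuminance_lux(luminous_intensity_cd, distance_m):
    """Point-source approximation E = I / d^2 (I in candela, d in metres, E in lux)."""
    return luminous_intensity_cd / distance_m ** 2

intensity_cd = 100.0                       # example luminous intensity
for d in (0.25, 0.5, 1.0):
    print(f"d = {d:4} m -> E = {illuminance_lux(intensity_cd, d):6.0f} lx")
# doubling the distance reduces the illuminance to one quarter
```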
Figure 3.47 Conversion of illuminance into luminance [14].
Figure 3.48 Identical luminance (gray value) on the part surface, (a) Imaged with a short working distance of the objective, (b) imaged with a long working distance.
Figure 3.49 Distribution of the illuminance at 0.5 m distance from a commercial fluorescent light source with two compact radiators of 11 W each.
Figure 3.50 Exemplary distributions of illuminance: (a) ring light with Fresnel lens, (b) dark field ring light, (c) shadow-free lighting.
Figure 3.51 Brightness profile of a lighting to compensate natural vignetting of an imaging objective.
Figure 3.52 Gray bars with constant gray value differences of 50, that is, a constant contrast step of 50 from bar to bar. Note the seemingly decreasing contrast from left to right when assessed with the human eye.
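A test pattern like the one described here (bars with a constant gray value step of 50) is easy to generate for one's own perception experiments; a minimal NumPy sketch, where the bar width and image height are my own choices:

```python
import numpy as np

values = np.arange(50, 300, 50, dtype=np.uint8)   # gray values 50, 100, 150, 200, 250
row = np.repeat(values, 100)                      # each bar 100 pixels wide
image = np.tile(row, (200, 1))                    # 200 rows -> a 200 x 500 test image

print(image.shape, np.unique(image))
```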
Figure 3.53 Images of one part: (a) imaged with poor contrast and (b) imaged with strong contrast.
Figure 3.54 Connector housing with contact springs inside: (a) without saturation there is nothing to recognize in the holes, (b) local overexposure of the housing makes the contact springs visible.
Figure 3.55 Major characteristics of light filters.
Figure 3.56 Parallel offset of the optical axis caused by a tilted light filter.
Figure 3.57 Typical course of the reflectance of an uncoated, single-coated (CaF coating), and multi-coated glass plate (average optical glass, transitions air–glass–air) as a function of the wavelength [17].
Figure 3.58 Typical transmission curve of a UV blocking filter [17].
Figure 3.59 Typical transmission curves of different filter glass types used for daylight suppression filters [19].
Figure 3.60 Typical transmission curves of an IR suppression filter glass used for Machine Vision [17].
Figure 3.61 Typical transmission curves of neutral filters with different densities [17].
Figure 3.62 Plausibility check of orange (top position) and green (bottom position) LEDs: (a) with green bandpass color filter; (b) with orange color glass filter.
Figure 3.63 Transmission characteristics of an IR suppression filter and a red color filter and the resulting characteristics of their combination.
Figure 3.64 Classification of lighting techniques: spatial arrangement of the lighting.
Figure 3.65 Luminance indicatrix of a Lambert radiator. Most diffuse area lighting reacts like a Lambert radiator.
Figure 3.66 Brightness distribution of LEDs (luminance indicatrix) with different directive properties, (a) cardioid characteristics, (b) beam shaping characteristics, (c) brightness distribution on the surface of an object. Left: from a single LED. Right: from an LED cluster.
Figure 3.67 Functional principle of a telecentric lighting.
Figure 3.68 Bright field reflections from a glass surface. The flawless surface appears bright; engraved scratches are dark.
Figure 3.69 Partial brightfield illumination on a brushed part.
Figure 3.70 Darkfield illumination with the test object, a mirror bar. Only the dust grains and the ground edges of the mirror bar scatter light.
Figure 3.71 (a) Coaxial diffuse light, (b) tilted camera and tilted diffuse light.
Figure 3.72 Diffuse incident lighting levels out brightness differences caused by tooling marks on the test object surface: chip glued onto an aluminum sheet.
Figure 3.73 (a) Surface check of an interference-fit pin connection with coaxial incident bright field lighting, (b) a data matrix code is recognizable on rough casting parts with directed incident light.
Figure 3.74 Influence of a tilted object under coaxial telecentric bright field incident light.
Figure 3.75 Combination of a telecentric lighting (right bottom) with a beam splitter unit (middle) and a telecentric objective (cylinder left).
Figure 3.76 Highly specular surface with engraved characters. Only at the perfectly reflecting surface parts is the image bright. Disturbing structures destroy the telecentric characteristics of the light, so the characters appear dark.
Figure 3.77 Principle of triangulation.
Figure 3.78 (a) Principle of the light-slit method, (b) application of the light-slit method for the height measurement of stacked blocks.
Figure 3.79 Principle of multiple parallel lines.
Figure 3.80 Relationships for distances, angles, and focal lengths of partial bright field components.
Figure 3.81 Light emission from ring lights, (a) direct emission, (b) diffused emission, (c) focused emission, (d) different models of LED ring lights.
Figure 3.82 (a) Principle of a lighting with through-camera view. (b) Lighting component with through-camera view.
Figure 3.83 (a) Principle of a shadow-free lighting, (b) dome light components.
Figure 3.84 Detection of cracks in a forged shiny ball joint for steering systems. The use of a shadow-free lighting is the only way to check these safety-relevant parts with a step-by-step rotation of only three steps of 120°.
Figure 3.85 Reading characters on knitted surfaces: (a) conventional diffuse lighting with low and strongly changing contrast, (b) shadow-free lighting ensures a high-contrast and homogeneous image.
Figure 3.86 The effect of dark field illumination as a function of the distance to a test object with needled data matrix codes. (a) Lighting lying on the part surface. (b) 15 mm distance. (c) 30 mm distance.
Figure 3.87 Principle and path of rays of a dark field incident ring light: (a) part with sharp edges, (b) part with well-rounded edges.
Figure 3.88 Streaking light from the left. The directed area lighting component at a large angle emphasizes the left vertical edges using the dark field effect.
Figure 3.89 (a) Engraved numbers in a specular metal plate, (b) embossed characters on a plastic part.
Figure 3.90 (a) Course of parallel incident light at a typically shaped edge (microstructure: broken edge); not all of the light can return to the objective at the top. (b) Apparent differences for one and the same part (glass plate with chamfer) illuminated with incident light (top) and transmitted light (bottom). With incident light, the complete part is not visible.
Figure 3.91 Principle of diffuse bright field transmitted lighting.
Figure 3.92 (a) Lead frame silhouette, (b) inspection of a filament inside the glass bulb.
Figure 3.93 Principle of directed bright field transmitted lighting.
Figure 3.94 Green, diffusely transparent molded packaging part illuminated from the back with a green directed bright field transmitted lighting to check the completeness of the molding. This lighting technique gives a much higher-contrast image than incident light.
Figure 3.95 Course of light on the surface of a shiny cylinder: (a) with diffuse transmitted light, (b) with telecentric transmitted light, and (c) course of gray values at the image sensor.
Figure 3.96 Shiny cylindrical metal part imaged with a telecentric objective. (a) With diffuse transmitted lighting: the large lighting aperture causes undefinable brightness transitions depending on surface quality, lighting size, and lighting distance. (b) With telecentric lighting: the small lighting aperture guarantees sharp and well-shaped edges for precise and reliable edge detection.
Figure 3.97 Tilting and projection leads to changed results in the projection. (a) Flat part (2D). (b) Deep part (3D).
Figure 3.98 Telecentric lighting components of different sizes.
Figure 3.99 (a) Quality check for a shiny milled part. (b) Diameter measurement of glass rods. (c) Check of completeness of sinter parts.
Figure 3.100 Principle of diffuse/directed transmitted dark field lighting.
Figure 3.101 Possible LED connections: (a) series connection, (b) parallel connection, (c) parallel connection with series resistors.
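For variant (c) in this caption, the series resistor of each LED string is chosen from the supply voltage, the summed forward voltages, and the desired forward current; a minimal sketch in plain Python, where the supply voltage, forward voltage, and current are arbitrary example values, not data from the book:

```python
def series_resistor_ohms(v_supply, v_forward, leds_in_series, i_forward):
    """R = (V_supply - n * V_f) / I_f for one LED string with a series resistor."""
    v_drop = v_supply - leds_in_series * v_forward
    if v_drop <= 0:
        raise ValueError("supply voltage too low for this number of LEDs in series")
    return v_drop / i_forward

# Example: 24 V supply, red LEDs with V_f ~ 2.0 V, 8 LEDs per string, 20 mA
print(f"{series_resistor_ohms(24.0, 2.0, 8, 0.020):.0f} ohm")   # -> 400 ohm
```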
Figure 3.102 Current–voltage characteristics of different LED substrates (colors).
Figure 3.103 Classes of forward voltages for red LEDs [6].
Figure 3.104 Edge triggering with very low delay times allows flash durations from 0.5 to 100 µs, which can image objects moving at speeds up to 30 m/s without motion blur. The image shows ink drops that are injected under high pressure.
Figure 3.105 Time diagram of the connected processes during flash light synchronization.
Figure 3.108 Different programming examples of a diffuse adaptive area lighting: (a) correction of natural vignetting of an objective, (b) and (c) compensation of reflections from shiny parts.
Figure 3.106 Block diagram of adaptive lighting.
Figure 3.107 Windows user interface for programming an adaptive lighting. The brightness and flash duration of each single LED can be selected and adjusted by a mouse click. The demonstrated light pattern can be used for vignetting compensation.
Chapter 4: Optical Systems in Machine Vision
Figure 4.1 Spherical wavefronts and light rays.
Figure 4.2 Plane wavefronts and a parallel light-ray bundle.
Figure 4.3 Pinhole camera.
Figure 4.4 Central projection.
Figure 4.5 Linear camera model.
Figure 4.6 Laws of reflection and refraction.
Figure 4.7 Dispersion of white light by a prism.
Figure 4.8 Ray pencils in the image space.
Figure 4.9 Imaging with the linearized law of refraction.
Figure 4.10 Definitions for image orientations.
Figure 4.11 Definition of the magnification ratio.
Figure 4.12 Virtual image by reflection.
Figure 4.13 Image by a mirror followed by a camera.
Figure 4.14 Virtual object.
Figure 4.15 Tilt rule.
Figure 4.16 Image position with two reflections.
Figure 4.17 Definition of the image-side focal point and the principal plane.
Figure 4.18 Definition of the object-side focal point and the object-side principal plane.
Figure 4.19 Graphical construction of the image position.
Figure 4.20 Geometry of the thick lens.
Figure 4.21 Position of the cardinal elements.
Figure 4.22 Image construction with a thin lens.
Figure 4.23 Beam-diverging lens.
Figure 4.24 Position of the image-side focal point for a beam-diverging lens.
Figure 4.25 Image construction with a thin beam-diverging lens.
Figure 4.26 Real object, real image.
Figure 4.28 Virtual object, real image.
Figure 4.29 Real object, virtual image.
Figure 4.30 Virtual object, real image.
Figure 4.31 Derivation of the reciprocity equation.
Figure 4.32 Derivation of Newton's equations.
Figure 4.33 General imaging equation.
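For orientation, the relations behind Figures 4.31–4.33 are commonly written as below, in one widely used sign convention (distances measured from the principal planes H, H′ and from the focal points F, F′, signed in the light direction); the chapter's own derivation and sign rules are authoritative.

```latex
% Reciprocity (conjugate) equation, a and a' measured from the principal planes H and H':
\frac{1}{a'} - \frac{1}{a} = \frac{1}{f'}

% Newton's equations, z and z' measured from the focal points F and F':
z\,z' = f\,f' = -f'^2 \qquad \text{(system in air, where } f = -f'\text{)}
```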
Figure 4.34 Complete overlap of the object and image space.
Figure 4.35 Object-side field angle and focal length.
Figure 4.36 Graphical construction of the overall cardinal elements.
Figure 4.37 Construction of the refracted ray direction.
Figure 4.38 Conventions for describing the Gaussian data of an optical system: signed coordinates of the cardinal points, measured in the light direction and related to their respective origins.
Figure 4.39 Limitation of ray pencils.
Figure 4.40 Limitation of ray pencils in a simplified camera.
Figure 4.41 Example for ray pencil limitations.
Figure 4.42 Concept of pupils and windows.
Figure 4.43 Ray pencils for a real optical system.
Figure 4.44 Chief rays as centers of the circle of confusion.
Figure 4.45 Field angles for infinite object distance.
Figure 4.46 Projection model and Gaussian optics.
Figure 4.47 Depth of a field.
Figure 4.48 Relations between the Newton and pupil coordinates.
Figure 4.49 Wide angle and tele perspective.
Figure 4.50 Imaging with a pinhole camera.
Figure 4.51 Imaging with different object distances.
Figure 4.52 Positional arrangement of the objects in space.
Figure 4.53 Constructing the images of the object scene.
Figure 4.54 Image size ratio and object distance.
Figure 4.55 Object-side telecentric perspective.
Figure 4.56 Hypercentric perspective.
Figure 4.57 Entocentric and telecentric perspective.
Figure 4.58 Viewing direction from the top of the object.
Figure 4.59 Principle of object-side telecentric imaging.
Figure 4.60 Afocal systems.
Figure 4.61 Kepler telescope.
Figure 4.62 Real final images with afocal systems.
Figure 4.63 Limitation of ray bundles for a bilateral telecentric system.
Figure 4.65 Diffraction by a plane screen.
Figure 4.67 Imaging situation for a diffraction-limited optical system.
Figure 4.69 Isophotes of intensity distribution near the focus.
Figure 4.70 Intensity distribution of a diffraction-limited system along the optical axis.
Figure 4.71 Contour line plots of fractions of total energy.
Figure 4.72 Approximation of the point spread extension and geometrical shadow limit.
Figure 4.73 Extension of the point spread function at the limits of the geometrical depth of focus.
Figure 4.74 Physical system as a mathematical operator.
Figure 4.75 Dirac impulse.
Figure 4.76 Shifting property of the δ-function.
Figure 4.77 Electrical networks as invariant systems.
Figure 4.78 Space invariance with optical systems.
Figure 4.79 Isoplanatic regions for rotationally symmetric optical systems.
Figure 4.80 Isoplanatic regions with a decentered optical system.
Figure 4.81 Transition to different representation domains.
Figure 4.82 Harmonic wave as a component of the Fourier integral, Equation 4.214.
Figure 4.83 Relationship between the two representations for optical systems.
Figure 4.84 Optical transfer function and its effect on a plane wave component at a given spatial frequency.
Figure 4.85 Fourier transform of the Dirac pulse.
Figure 4.86 Plane wave with its spatial frequency components.
Figure 4.87 Relationships between the representations in space and spatial frequency domain.
Figure 4.88 Radial and tangential spatial frequencies and the corresponding cross-sections of the transfer function.
Figure 4.89 Pixel sensitivity function.
Figure 4.90 Output function of the linear system.
Figure 4.91 Convolution with the pixel sensitivity function.
Figure 4.92 Transmission chain.
Figure 4.93 Convolution of the impulse response with the rectangular sensitivity function of the pixel.
Figure 4.94 Discretizing with the pixel distance.
Figure 4.95 Multiplication of the transfer functions.
Figure 4.96 Convolution with Dirac comb in the spatial frequency domain.
Figure 4.97 Limitation of the impulse response by the length of the array: in spatial domain.
Figure 4.98 Limitation of the impulse response by the length of the array: in the spatial frequency domain.
Figure 4.99 Periodic repetition in the spatial domain with the basic period.
Figure 4.100 Discretization in the spatial frequency domain (discrete Fourier transform).
Figure 4.101 PSF, MTF, and aliasing, Example 1.
Figure 4.102 PSF, MTF, and aliasing for a hypothetical sensor optic combination (Example 2).
Figure 4.103 Space-variant nature of aliasing.
Figure 4.104 Aliasing measures for the constellation of Figure 4.101.
Figure 4.105 Line spread function (LSF).
Figure 4.106 Integration directions for LSFs.
Figure 4.107 Relation between LSF and ESF.
Figure 4.108 Different edge spread functions (ESFs).
Figure 4.109 Distortion over the relative image height.
Figure 4.110 Modulation as a function of spatial frequency.
Figure 4.111 MTF as a function of image height.
Figure 4.112 (a) Natural vignetting and (b) relative illumination for the lens 1.4/17 mm.
Figure 4.113 Spectral transmittance.
Figure 4.114 Deviation from telecentricity.
Figure 4.115 Requirements for the optical system depending on the measurement task.
Chapter 5: Camera Calibration
Figure 5.1 Principle of central perspective [7].
Figure 5.2 Sensor coordinate system.
Figure 5.3 Typical distortion curve of a lens.
Figure 5.4 Radial symmetrical and tangential distortion.
Figure 5.5 Effects of affinity.
Figure 5.6 Correction grid for a camera with large sensor.
Figure 5.7 Reference plate for camera calibration.
Figure 5.8 Residual after bundle adjustment.
Figure 5.9 Imaging setup for calibration [1].
Figure 5.10 Positions of reference plate during calibration.
Figure 5.11 Scale setup for calibrating one camera.
Figure 5.12 Scale setup for calibrating two cameras.
Figure 5.13 Principle of plumb-line method.
Figure 5.14 Spatial test object for VDI 2634 test.
Figure 5.15 Length measurement error diagram.
Figure 5.16 Photogrammetric deformation measurement of a crashed car.
Figure 5.17 Camera positions for crash car measurement.
Figure 5.18 Tube inspection system with fixed cameras and reference points.
Figure 5.19 Online measurement system with three cameras.
Figure 5.20 Principle of fringe projection.
Figure 5.21 White light scanner with digital color projection unit.
Chapter 6: Camera Systems in Machine Vision
Figure 6.1 Nipkow disk. (BR-online.)
Figure 6.2 Video camera in the mid-1930s. (BR-online.)
Figure 6.3 Switching between area scan and line scan mode.
Figure 6.4 Progressive scan interline transfer.
Figure 6.5 Interlaced scanning scheme.
Figure 6.6 Interlaced sensor with field integration.
Figure 6.7 Interlaced sensor with frame integration.
Figure 6.8 Quad tap CCD sensor (SONY).
Figure 6.9 Tap arrangement (shown with taps misbalanced on purpose).
Figure 6.10 HAD principle of SONY CCD sensors.
Figure 6.11 Super HAD (II) principle of SONY CCD sensors.
Figure 6.12 Blooming due to excessive sunlight hitting the CCD sensor.
Figure 6.13 Smear of CCD image sensor due to bright spot.
Figure 6.14 Dual tap sensor smear appearance.
Figure 6.15 Integration time for different light pixels with two knee points for an OnSemi MT9V034 sensor.
Figure 6.16 Nonlinear response curve with two knee points for an OnSemi MT9V034 sensor.
Figure 6.17 Practical response curve with two knee points for an OnSemi MT9V034 sensor.
Figure 6.18 High dynamic range mode with one knee point.
Figure 6.19 Conventional CMOS readout architecture (according to SONY).
Figure 6.20 Column-based CMOS readout architecture (according to SONY).
Figure 6.21 Rolling shutter visualization (by OnSemi).
Figure 6.22 Rolling shutter (b) versus global shutter (a).
Figure 6.23 Global reset release schematics (OnSemi).
Figure 6.24 Active pixel structure (by Cypress/FillFactory).
Figure 6.25 Serialization of integration and readout (OnSemi).
Figure 6.26 CMV2000/4000 pixel architecture (by CMOSIS).
Figure 6.27 CCD and CMOS camera building blocks (by OnSemi).
Figure 6.28 Sensitivity enhancements using standard front side illumination structures (by OnSemi).
Figure 6.29 FSI and BSI structure comparison (by SONY).
Figure 6.30 Sensor sizes.
Figure 6.31 Block Diagram of SONY Progressive Scan Analog Camera.
Figure 6.32 SONY ICX415AL sensor structural overview.
Figure 6.33 Vertical binning.
Figure 6.34 Relative spectral sensitivity of SONY XC-HR57/58, taken from a technical manual of SONY.
Figure 6.35 CDS processing (Analog Devices).
Figure 6.36 Photo of extremely small SONY XC-ES50 camera.
Figure 6.37 Block diagram of a SONY color camera.
Figure 6.38 Architecture of SONY CCD complementary color sensor.
Figure 6.39 SONY ICX254AK spectral sensitivity (according to datasheet of SONY semiconductor corporation) for CyYeMgG complementary color filter array.
Figure 6.40 Block diagram b/w camera.
Figure 6.41 Spectral sensitivity of mvBlueCOUGAR-X120DG without cut filter and optics.
Figure 6.42 Block diagram color camera.
Figure 6.43 Block diagram of AFE.
Figure 6.44 CCD signal entering CDS stage.
Figure 6.45 Architecture of SONY CCD primary color sensor.
Figure 6.46 Spectral sensitivity of ICX415Q without cut filter and optics.
Figure 6.47 Histogram of a GretagMacbeth color chart with full AOI.
Figure 6.48 Histogram of a GretagMacbeth color chart with reduced AOI.
Figure 6.49 Programmable LUT: Gamma.
Figure 6.50 LUT Wizard.
Figure 6.51 Shading correction: source image with nonuniform illumination – horizontal/vertical line profile.
Figure 6.52 Example of shaded image.
Figure 6.53 Running image average.
Figure 6.54 Bayer demosaicing using bilinear interpolation.
Figure 6.55 Color alias due to Bayer demosaicing: (a) Bilinear and (b) adaptive edge sensing.
Figure 6.56 Spectral sensitivity of image sensor and human cone sensitivity overlaid.
Figure 6.57 Example matrix for a specific SONY color CCD.
Figure 6.58 Color gamut of different display technologies.
Figure 6.59 (a) Selection of display and (b) resulting matrix.
Figure 6.60 Image with CCM: deltaE visualized.
Figure 6.61 Hardware trigger mode of FrameStart and Exposure Mode TriggerWidth with Trigger Delay.
Figure 6.62 Trigger source selection overview.
Figure 6.63 Setting up a frame burst of 5 images from one external trigger.
Figure 6.64 Frame burst trigger mode.
Figure 6.65 Sequence with three different exposure times.
Figure 6.66 Action command sent as broadcast to all devices in the subnet.
Figure 6.67 Action command using secondary application to send broadcast to all devices in the subnet.
Figure 6.68 Principle of scheduled action commands.
Figure 6.69 GVSP.
Figure 6.70 Layer protocols and interaction.
Figure 6.71 Pixel format BayerBG12Packed and data example.
Figure 6.72 Upper left: Industrial M12 option with x-coded GigE and IO; lower left: mini USB2; lower middle: USB2 Type B; upper middle: RJ45 and HiRose; upper right: micro USB3; lower right: dual RJ45 plus dual HiRose (motorized lens control) and video lens control.
Figure 6.73 HiRose connector pin assignment.
Figure 6.74 mv Resulting Frame Rate.
Figure 6.75 Device temperature selector: GenICam definition; XML-description; detailed feature info in impact acquire and how it appears in PropView.
Figure 6.76 Icons of machine vision software libraries which are supported by Matrix-Vision cameras.
Figure 6.77 (a) Physical model of a camera and (b) mathematical model of a camera.
Figure 6.78 Linearity curve from mvBlueCOUGAR-X104dG.
Figure 6.79 Linearity curve with saturation.
Figure 6.80 Photon transfer response function of mvBlueCOUGAR-X104dG.
Figure 6.81 Influence of nonlinearity on photon transfer response function.
Figure 6.82 Influence of bad ADC on photon response nonlinearity function.
Figure 6.83 Different noise components and their location.
Chapter 7: Smart Camera and Vision Systems Design
Figure 7.1 Workflow diagram demonstrating the iterative nature of typical vision application design.
Figure 7.2 Overview of choices for building vision systems.
Figure 7.3 MATRIX VISION mvBlueGEMINI.
Figure 7.4 Block diagram of a typical smart camera.
Figure 7.5 IP67 and standard Ethernet connectors.
Figure 7.6 BVS-E vision sensor.
Figure 7.7 NI compact vision system.
Chapter 8: Camera Computer Interfaces
Figure 8.1 Components of the GenICam Standard. GenICam version 1.0 was released in late 2006, version 2.0 in 2009, and version 3.0 in 2015 [1].
Figure 8.2 Components of the IIDC2 Standard [1].
Figure 8.3 Interlaced video frame.
Figure 8.4 Composite video signal.
Figure 8.5 (a) BNC connector and (b) S-video connector.
Figure 8.6 Parallel digital timing diagram.
Figure 8.7 Image sensor taps.
Figure 8.8 Paper inspection using a line scan sensor.
Figure 8.9 (a) DVI connector, (b) MDR connector, (c) VHDCI connector, (d) 62-pin high-density DSUB connector, (e) 100-pin SCSI connector, and (f) 12-pin Hirose connector.
Figure 8.10 (a) IEEE 1394 six-pin connector, (b) IEEE 1394 four-pin connector, (c) IEEE 1394 latched six-pin connector, (d) IEEE 1394 nine-pin connector, and (e) IEEE 1394 fiber connector.
Figure 8.11 Asynchronous transfer.
Figure 8.12 Isochronous transfer.
Figure 8.13 Isochronous transfer using large data packets.
Figure 8.14 Isochronous transfer using small data packets.
Figure 8.15 Defining subregions in an image.
Figure 8.16 IIDC isochronous video packet.
Figure 8.17 Common IEEE 1394 network topologies.
Figure 8.18 Camera link timing diagram.
Figure 8.19 Camera link connector.
Figure 8.20 (a) USB Type A connector and (b) USB Type B connector.
Figure 8.21 RJ-45 Ethernet connector.
Figure 8.22 Common Ethernet topologies.
Figure 8.23 Hardware digital interface standard comparison [1].
Figure 8.24 Timeline of common computer buses.
Figure 8.25 Multi-drop bus configuration.
Figure 8.26 Point-to-point bus configuration.
Figure 8.27 PCI Express width interoperability.
Figure 8.28 Computer buses that tolerate deviation from ideal.
Figure 8.29 Layers of typical driver software architecture.
Figure 8.30 Selecting the right acquisition mode.
Figure 8.31 Snap acquisition.
Figure 8.32 Grab acquisition.
Figure 8.33 Sequence acquisition.
Figure 8.34 Ring acquisition.
Figure 8.35 Ring acquisition with processing.
Figure 8.36 Spatial reference of the (0, 0) pixel.
Figure 8.37 Bytes per pixel.
Figure 8.38 RGB and HSL pixel representation.
Figure 8.39 Internal image representation: (1) image, (2) image border, (3) vertical resolution, (4) left alignment, (5) horizontal resolution, (6) right alignment, and (7) line width.
Figure 8.40 Bayer encoding.
Figure 8.41 Nondestructive overlay.
Figure 8.42 Tap configuration.
Figure 8.43 Trigger configuration example.
Figure 8.44 Acquiring an image into the system memory.
Figure 8.45 Acquiring an image into memory on a frame grabber.
Figure 8.46 Sample transformation using LUTs.
Figure 8.47 Region of interest.
Figure 8.48 Effects of shading on image processing.
Chapter 9: Machine Vision Algorithms
Figure 9.1 Different subpixel-precise contours. Contour 1 is a closed contour, while contours 2–5 are open contours. Contours 3–5 meet at a junction point.
Figure 9.2 Examples of linear gray value transformations. (a) Original image. (b) Decreased brightness. (c) Increased brightness. (d) Decreased contrast. (e) Increased contrast. (f) Gray value normalization. (g) Robust gray value normalization.
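The transformations listed in this caption all have the linear form g′ = a·g + b, where a scales contrast and b shifts brightness; a minimal NumPy sketch with parameter values chosen by me (the values used for the figure panels were not preserved in this listing):

```python
import numpy as np

def linear_gray_transform(image, a, b):
    """Apply g' = a * g + b and clip the result back to the 8-bit range."""
    out = a * image.astype(np.float32) + b
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)    # stand-in for a real image
brighter = linear_gray_transform(img, 1.0, 50.0)           # increased brightness
lower_contrast = linear_gray_transform(img, 0.5, 64.0)     # decreased contrast around mid-gray
```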
Figure 9.3 (a) Histogram of the image in Figure 9.2a. (b) Corresponding cumulative histogram with the probability thresholds superimposed.
Figure 9.4 Examples of calibrated density targets that are traditionally used for radiometric calibration in laboratory settings. (a) Density step target (image acquired with a camera with linear response). (b) Twelve-patch ISO 14524 target (image simulated as if acquired with a camera with linear response).
Figure 9.5 (a) Two-dimensional histogram of two images taken with an exposure ratio of 0.5 with a linear camera. (b) Two-dimensional histogram of two images taken with an exposure ratio of 0.5 with a camera with a strong gamma response curve. For better visualization, the 2D histograms are displayed with a square root LUT. Note that in both cases the entries in the 2D histogram lie along a line. Hence, linear responses cannot be distinguished from gamma responses without knowing the exact exposure ratio.
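The last remark of Figure 9.5 can be made explicit with a short calculation. For a linear camera the gray value is proportional to the exposure, \(g = k\,E\,t\), so two images with exposure ratio \(t_2/t_1 = 0.5\) satisfy \(g_2 = 0.5\,g_1\): a line through the origin with slope 0.5. For a camera with a gamma response, \(g = k\,(E\,t)^{\gamma}\), the same two images satisfy \(g_2 = 0.5^{\gamma}\,g_1\): again a line through the origin, now with slope \(0.5^{\gamma}\). Both cases therefore produce a straight line in the 2D histogram, and the slope alone cannot reveal whether \(\gamma = 1\) unless the exposure ratio is known exactly.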
Figure 9.6 (a) Five images taken with a linear camera with exposure times of 32, 16, 8, 4, and 2 ms. (b) Calibrated inverse response curve. Note that the response is linear, but the camera has set a slight offset in the amplifier, which prevents very small gray values from being assumed. (c) Six images taken with a camera with a gamma response with exposure times of 30, 20, 10, 5, 2.5, and 1.25 ms. (d) Calibrated inverse response curve. Note the strong gamma response of the camera.
Figure 9.7 (a) Image of an edge. (b) Horizontal gray value profile through the center of the image. (c) Noise in (a) scaled by a factor of 5. (d) Horizontal gray value profile of the noise.
Figure 9.8 (a) Image of an edge obtained by averaging 20 images of the edge. (b) Horizontal gray value profile through the center of the image.
Figure 9.9 (a) Image of an edge obtained by smoothing the image of Figure 9.7a with a mean filter. (b) Horizontal gray value profile through the center of the image.
Figure 9.10 (a) Frequency response of the mean filter. (b) Image with one-pixel-wide lines spaced three pixels apart. (c) Result of applying the mean filter to the image in (b). Note that all the lines have been smoothed out. (d) Image with one-pixel-wide lines spaced two pixels apart. (e) Result of applying the mean filter to the image in (d). Note that the lines have not been completely smoothed out, although they have a higher frequency than the lines in (b). Note also that the polarity of the lines has been reversed.
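The behavior shown in Figure 9.10 follows from the frequency response of the mean filter. Assuming, purely for illustration, a one-dimensional mean filter of size \(n = 3\) (the size used in the figure is not stated here), the response is

\[ H(f) = \frac{\sin(\pi n f)}{n \sin(\pi f)} = \frac{1 + 2\cos(2\pi f)}{3} . \]

Lines spaced three pixels apart correspond to the frequency \(f = 1/3\), where \(H(1/3) = (1 + 2\cos(2\pi/3))/3 = 0\), so they are smoothed out completely. Lines spaced two pixels apart correspond to \(f = 1/2\), where \(H(1/2) = (1 - 2)/3 = -1/3\): the response is nonzero, so the lines survive, and its negative sign explains the reversed polarity.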
Figure 9.11 (a) One-dimensional Gaussian filter. (b) Two-dimensional Gaussian filter.
Figure 9.12 Images of an edge obtained by smoothing the image of Figure 9.7a. Results of (a) a Gaussian filter and (b) a mean filter of comparable size, and (c) the corresponding gray value profiles. Note that the two filters return very similar results in this example. Results of (d) a larger Gaussian filter and (e) a larger mean filter, and (f) the corresponding profiles. Note that the mean filter turns the edge into a ramp, leading to a badly defined edge, whereas the Gaussian filter produces a much sharper edge.
Figure 9.13 Images of an edge obtained by smoothing the image of Figure 9.7a. (a) Result of a median filter, and (b) the corresponding gray value profile. (c) Result of a larger median filter, and (d) the corresponding profile. Note that the median filter preserves the sharpness of the edge to a great extent.
Figure 9.14 Example of aliasing. (a) Two cosine waves, one with a frequency of 0.25 and the other with a frequency of 0.75. (b) Two cosine waves, one with a frequency of 0.4 and the other with a frequency of 0.6. Note that if both functions are sampled at integer positions, denoted by the crosses, the discrete samples will be identical.
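The aliasing described in Figure 9.14 can be checked numerically. A minimal sketch in Python/NumPy, using the frequencies from the caption and assuming the integers 0-7 as sample positions:

    import numpy as np

    n = np.arange(8)                      # integer sample positions
    f025 = np.cos(2 * np.pi * 0.25 * n)   # cosine with frequency 0.25
    f075 = np.cos(2 * np.pi * 0.75 * n)   # cosine with frequency 0.75
    f040 = np.cos(2 * np.pi * 0.40 * n)   # cosine with frequency 0.4
    f060 = np.cos(2 * np.pi * 0.60 * n)   # cosine with frequency 0.6

    # Both pairs alias onto each other when sampled at integer positions.
    print(np.allclose(f025, f075), np.allclose(f040, f060))   # True True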
Figure 9.15 (a) Image of a map showing the texture of the paper. (b) Fourier transform of (a), displayed with a logarithmic scaling because of its high dynamic range. Note the distinct peaks, which correspond to the texture of the paper. (c) Filter used to remove the frequencies that correspond to the texture. (d) Result of applying the filter (c) to the Fourier transform (b). (e) Inverse Fourier transform of (d). (f, g) Detail of (a) and (e), respectively. Note that the texture has been removed.
Figure 9.16 Affine transformation of an image. Note that integer coordinates in the output image transform to non-integer coordinates in the original image, and hence must be interpolated.
Figure 9.17 (a) A pixel in the output image is transformed back to the input image. Note that the transformed pixel center lies at a non-integer position between four adjacent pixel centers. (b) Nearest-neighbor interpolation determines the closest pixel center in the input image and uses its gray value in the output image. (c) Bilinear interpolation determines the distances to the four adjacent pixel centers and weights their gray values using the distances.
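A minimal sketch of the bilinear interpolation illustrated in Figure 9.17c, written in Python/NumPy; the function name is hypothetical and border handling is omitted for brevity.

    import numpy as np

    def bilinear(img, r, c):
        # Interpolate the gray value at the non-integer position (r, c) from the
        # four adjacent pixel centers, weighted by the distances to them.
        r0, c0 = int(np.floor(r)), int(np.floor(c))
        dr, dc = r - r0, c - c0
        return ((1 - dr) * (1 - dc) * img[r0, c0] +
                (1 - dr) * dc       * img[r0, c0 + 1] +
                dr       * (1 - dc) * img[r0 + 1, c0] +
                dr       * dc       * img[r0 + 1, c0 + 1])

Nearest-neighbor interpolation (Figure 9.17b) would instead simply round r and c to the closest pixel center, which is faster but produces the jagged character edges visible in Figure 9.18d.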
Figure 9.18 (a) Image showing a serial number of a bank note. (b) Detail of (a). (c) Image rotated such that the serial number is horizontal using nearest-neighbor interpolation. (d) Detail of (c). Note the jagged edges of the characters. (e) Image rotated using bilinear interpolation. (f) Detail of (e). Note the smooth edges of the characters.
Figure 9.19 (a) Image showing a serial number of a bank note. (b) Detail of (a). (c) The image of (a) scaled down by a factor of 3 using bilinear interpolation. (d) Detail of (c). Note the different stroke widths of the vertical strokes of the letter H. This is caused by aliasing. (e) Result of scaling the image down by integrating a smoothing filter (in this case a mean filter) into the image transformation. (f) Detail of (e).
Figure 9.20 (a,b) Images of license plates. (c,d) Result of a projective transformation that rectifies the perspective distortion of the license plates.
Figure 9.21 (a) Image of the center of a CD showing a circular bar code. (b) Polar transformation of the ring that contains the bar code. Note that the bar code is now straight and horizontal.
Figure 9.22 (a,b) Images of prints on ICs with a rectangular ROI overlaid in light gray. (c,d) Result of thresholding the images in (a) and (b) with fixed thresholds.
Figure 9.23 (a,b) Images of prints on ICs with a rectangular ROI overlaid in light gray. (c,d) Gray value histogram of the images in (a) and (b) within the respective ROI. (e,f) Result of thresholding the images in (a) and (b) with a threshold selected automatically based on the gray value histogram.
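Figure 9.23 does not state which histogram-based criterion is used to select the threshold automatically. As one widely used example, the following Python sketch selects the threshold that maximizes the between-class variance (Otsu's method) for an 8-bit image; it illustrates automatic threshold selection in general, not necessarily the method used in the figure.

    import numpy as np

    def otsu_threshold(img):
        # Gray value histogram of an 8-bit image, normalized to probabilities.
        hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
        prob = hist / hist.sum()
        best_t, best_var = 0, 0.0
        for t in range(1, 256):
            w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
            if w0 == 0.0 or w1 == 0.0:
                continue
            m0 = (np.arange(t) * prob[:t]).sum() / w0         # mean of class 0
            m1 = (np.arange(t, 256) * prob[t:]).sum() / w1    # mean of class 1
            var_between = w0 * w1 * (m0 - m1) ** 2
            if var_between > best_var:
                best_var, best_t = var_between, t
        return best_t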
Figure 9.24 (a) Image of a print on an IC with a rectangular ROI overlaid in light gray. (b) Gray value histogram of the image in (a) within the ROI. Note that there are no significant minima and only one significant maximum in the histogram.
Figure 9.25 (a) Image showing a small part of a print on an IC with a one-pixel-wide horizontal ROI. (b) Gray value profiles of the image and the image smoothed with a mean filter. Note that the text is substantially brighter than the local background estimated by the mean filter.
Figure 9.26 (a) Image of a print on an IC with a rectangular ROI overlaid in light gray. (b) Result of segmenting the image in (a) with a dynamic thresholding operation based on a mean filter.
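A minimal sketch of dynamic thresholding with a mean filter, as used in Figure 9.26, in Python with SciPy; the mask size and the gray value difference are placeholder values, not the ones used in the figure.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def dynamic_threshold(img, mask_size=31, g_diff=5):
        # Estimate the local background with a mean (uniform) filter and select
        # every pixel that is at least g_diff brighter than that background.
        background = uniform_filter(img.astype(np.float64), size=mask_size)
        return img.astype(np.float64) - background >= g_diff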
Figure 9.27 (a,b) Two images of a sequence of 15 showing a print on the clip of a pen. Note that the letter V in the MVTec logo moves slightly with respect to the rest of the logo. (c) Reference image of the variation model computed from the 15 training images. (d) Standard deviation image, displayed with enhanced contrast for better visibility. (e,f) Minimum and maximum threshold images computed from the reference and standard deviation images.
Figure 9.28 (a) Image showing a logo with errors in the letters T (small hole) and C (too little ink). (b) Errors displayed in white, segmented with the variation model of Figure 9.27. (c) Image showing a logo in which the letter V has moved too high and to the right. (d) Segmented errors.
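Figures 9.27 and 9.28 together describe a variation model: a reference image and a standard deviation image are estimated from training images, threshold images are derived from them, and a test image is flagged wherever it leaves the allowed gray value band. A minimal Python/NumPy sketch of this idea follows; the sensitivity factors a and b are placeholders, not the values used in the figures.

    import numpy as np

    def train_variation_model(images, a=3.0, b=3.0):
        # 'images' is a sequence of aligned gray value images of flawless objects.
        stack = np.stack([img.astype(np.float64) for img in images])
        mean = stack.mean(axis=0)      # reference image
        std = stack.std(axis=0)        # standard deviation image
        t_low = mean - a * std         # minimum threshold image
        t_high = mean + b * std        # maximum threshold image
        return mean, std, t_low, t_high

    def compare_variation_model(img, t_low, t_high):
        # Binary error image: pixels whose gray value leaves the allowed band.
        img = img.astype(np.float64)
        return (img < t_low) | (img > t_high)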
Figure 9.29 Two possible definitions of connectivity on rectangular pixel grids: (a) 4-connectivity and (b) 8-connectivity.
Figure 9.30 Some peculiarities occur when the same connectivity, in this case 8-connectivity, is used for the foreground and background. (a) The single line in the foreground clearly divides the background into two connected components. (b) If the line is very slightly rotated, there is still a single line, but now the background is a single component, which is counterintuitive. (c) The single region in the foreground intuitively contains one hole. However, the background is also a single connected component, indicating that the region has no hole, which is also counterintuitive.
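The difference between the two connectivities of Figure 9.29, and the kind of counterintuitive result discussed in Figure 9.30, can be reproduced with a tiny example: four foreground pixels arranged in a diamond form four separate components under 4-connectivity but a single component under 8-connectivity. The SciPy calls below are standard; the example array is illustrative.

    import numpy as np
    from scipy.ndimage import label

    binary = np.array([[0, 1, 0],
                       [1, 0, 1],
                       [0, 1, 0]], dtype=bool)

    four  = np.array([[0, 1, 0],
                      [1, 1, 1],
                      [0, 1, 0]])         # 4-connectivity structuring element
    eight = np.ones((3, 3), dtype=int)    # 8-connectivity structuring element

    _, n4 = label(binary, structure=four)
    _, n8 = label(binary, structure=eight)
    print(n4, n8)                         # 4 1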
