Camera Image Quality Benchmarking

Jonathan B. Phillips

Description

The essential guide to the entire process behind performing a complete characterization and benchmarking of cameras through image quality analysis

Camera Image Quality Benchmarking contains the basic information and approaches for the use of subjectively correlated image quality metrics and outlines a framework for camera benchmarking. The authors show how to quantitatively compare image quality of cameras used for consumer photography. This book helps to fill a void in the literature by detailing the types of objective and subjective metrics that are fundamental to benchmarking still and video imaging devices. Specifically, the book provides an explanation of individual image quality attributes and how they manifest themselves to camera components, and explores the key photographic still and video image quality metrics. The text also includes illustrative examples of benchmarking methods so that the practitioner can design a methodology appropriate to the photographic usage under consideration.

The authors outline the various techniques used to correlate the measurement results from the objective methods with subjective results. The text also contains a detailed description on how to set up an image quality characterization lab, with examples where the methodological benchmarking approach described has been implemented successfully. This vital resource:

  • Explains in detail the entire process behind performing a complete characterization and benchmarking of cameras through image quality analysis
  • Provides best practice measurement protocols and methodologies, so readers can develop and define their own camera benchmarking system to industry standards
  • Includes many photographic images and diagrammatical illustrations to clearly convey image quality concepts
  • Champions benchmarking approaches that emphasize the importance of perceptually correlated image quality metrics

Written for image scientists, engineers, or managers involved in image quality and evaluating camera performance, Camera Image Quality Benchmarking combines knowledge from many different engineering fields, correlating objective (perception-independent) image quality with subjective (perception-dependent) image quality metrics. 


Page count: 798

Publication year: 2017




Table of Contents

Cover

Title Page

Copyright

About the Authors

Series Preface

Preface

List of Abbreviations

About the Companion Website

Chapter 1: Introduction

1.1 Image Content and Image Quality

1.2 Benchmarking

1.3 Book Content

Summary of this Chapter

References

Chapter 2: Defining Image Quality

2.1 What is Image Quality?

2.2 Image Quality Attributes

2.3 Subjective and Objective Image Quality Assessment

Summary of this Chapter

References

Chapter 3: Image Quality Attributes

3.1 Global Attributes

3.2 Local Attributes

3.3 Video Quality Attributes

Summary of this Chapter

References

Chapter 4: The Camera

4.1 The Pinhole Camera

4.2 Lens

4.3 Image Sensor

4.4 Image Signal Processor

4.5 Illumination

4.6 Video Processing

4.7 System Considerations

Summary of this Chapter

References

Chapter 5: Subjective Image Quality Assessment—Theory and Practice

5.1 Psychophysics

5.2 Measurement Scales

5.3 Psychophysical Methodologies

5.4 Cross-Modal Psychophysics

5.5 Thurstonian Scaling

5.6 Quality Ruler

5.7 Subjective Video Quality

Summary of this Chapter

References

Chapter 6: Objective Image Quality Assessment—Theory and Practice

6.1 Exposure and Tone

6.2 Dynamic Range

6.3 Color

6.4 Shading

6.5 Geometric Distortion

6.6 Stray Light

6.7 Sharpness and Resolution

6.8 Texture Blur

6.9 Noise

6.10 Color Fringing

6.11 Image Defects

6.12 Video Quality Metrics

6.13 Related International Standards

Summary of this Chapter

References

Chapter 7: Perceptually Correlated Image Quality Metrics

7.1 Aspects of Human Vision

7.2 HVS Modeling

7.3 Viewing Conditions

7.4 Spatial Image Quality Metrics

7.5 Color

7.6 Other Metrics

7.7 Combination of Metrics

7.8 Full-Reference Digital Video Quality Metrics

Summary of this Chapter

References

Chapter 8: Measurement Protocols—Building Up a Lab

8.1 Still Objective Measurements

8.2 Video Objective Measurements

8.3 Still Subjective Measurements

8.4 Video Subjective Measurements

Summary of this Chapter

References

Chapter 9: The Camera Benchmarking Process

9.1 Objective Metrics for Benchmarking

9.2 Subjective Methods for Benchmarking

9.3 Methods of Combining Metrics

9.4 Benchmarking Systems

9.5 Example Benchmark Results

9.6 Benchmarking Validation

Summary of this Chapter

References

Chapter 10: Summary and Conclusions

References

Index

End User License Agreement


List of Illustrations

Chapter 1: Introduction

Figure 1.1 Image of first permanent photograph circa 1826 by N. Niépce on its original pewter plate.

Figure 1.2 Enhanced version of first permanent photograph circa 1826 by N. Niépce.

Figure 1.3 Three renditions of a viola. Left: line sketch; middle: colored clip art (Papapishu, 2007); right: photograph. Each shows different aspects of object representation.

Figure 1.4 Example illustrating simultaneous contrast. The center squares are identical in hue, chroma, and lightness. However, they appear different when surrounded by backgrounds with different colors.

Figure 1.5 Example illustrating chromatic adaptation and differences between absolute and relative colorimetry. The fruit basket in the original photo clearly exhibits varying hues. A cyan bias is added to the original photo to generate the middle photo. With chromatic adaptation, this photo with the cyan cast will have perceptible hue differences as well, allowing the observer to note a yellowish hue to the bananas relative to the other fruit colors. However, the bottom photo illustrates that replacing the bananas in the original photo with the cyan-cast bananas (the identical physical color of the bananas in the middle cyan-cast photo) results in a noticeably different appearance. Here, the bananas have an appearance of an unripe green state because chromatic adaptation does not occur.

Figure 1.6 With a thin chromatic border bounded by a darker chromatic border, the internal region is perceived by the HVS to have a faint, light hue similar to the inner chromatic border even though the region has no hue other than the white background on the rest of the page. The regions within the shapes fill in with an orange or green tint due to the nature of the undulating borders and the hue of the inner border.

Figure 1.7 Examples showing how geons combine to form various objects. Far left: briefcase; center left: drawer; center right: mug; far right: pail.

Figure 1.8 An example of an occluded object. Left: the vertices are occluded, making discernment of the object difficult. Right: only segments are occluded. In this right image, the object is more recognizable as a flashlight.

Figure 1.9 An image associated with top-down processing in order to recognize the shape of a Dalmatian exploring the melting snow.

Figure 1.10 Influence of texture on appearance of fake versus real fruit. The fruits on the left in the top panoramic photo are all fake while the fruits on the right are real. Closer inspection of the pear surfaces can be seen in the bottom pair of images. The fake pear is on the left and the real pear is on the right. The texture appearance of the fake pear is composed of red paint drops.

Figure 1.11 Left: the original image; right: after applying a sigma filter similar to one that would be used to reduce image noise (See Chapter 4 for more information on sigma filters.). Note the loss of texture in the hair, skin, and clothing, which lowers overall quality even though edges of the face, eyes, and teeth remain mostly intact.

Figure 1.12 Top: monochrome candy ribbons with low sharpness and bit depth; bottom: colorful candy ribbons with substantial sharpness. Note that the bottom image conveys a greater sense of depth than the top image.

Figure 1.13 Variations in luminance levels and dynamic range for an example scene. (a) Underexposed by 2 f-stops. (b) Normal exposure. (c) Overexposed by 2 f-stops. (d) Normal exposure with localized tonemapping.

Figure 1.14 Random-dot cinematograms. (a) First frame of a two-frame cinematogram. (b) Second frame of a two-frame cinematogram. (c) The same frame shown in (a), with moving dots shown in red. (d) The same frame shown in (b), with moving dots shown in red. (e) A plausible motion hypothesis for a two-frame cinematogram in which the dots move from the positions in black to those in red. (f) Another plausible motion hypothesis for a two-frame cinematogram in which the dots move from the positions in black to those in red.

Figure 1.15 This example benchmark matrix shows the image quality assessment of various scene content and application categories for a consumer camera. Note how the quality varies as these categories change.

Chapter 2: Defining Image Quality

Figure 2.1 Illustration of the difference between global and local attributes. The images in the top and bottom rows have been manipulated in the same way and only differ in size. The leftmost image is the original, unmodified image. In the middle, the original image was modified by reducing the color saturation. To the right, the original image was blurred. In the top row, these modifications are easily seen. In the bottom row, the difference between the leftmost and middle images is as clearly seen as in the top row. The difference between the leftmost and rightmost images is, however, not as obvious. Therefore, color rendition represents a global attribute, while sharpness can be categorized as a local attribute.

Chapter 3: Image Quality Attributes

Figure 3.1 Example of underexposed image (left), well exposed image (middle), and overexposed image (right).

Figure 3.2 Examples of images produced by cameras with a low (left) and high (right) dynamic range. Top: low exposure; bottom: high exposure. As explained in the text, there is no significant difference between these images in terms of image quality.

Figure 3.3 Examples of low exposure images that have been digitally enhanced. Left: Low dynamic range camera; right: high dynamic range camera. In this case, the low dynamic range camera shows considerably less detail due to noise masking, see text.

Figure 3.4 Example of an image with a mid-tone shift, giving an unnatural appearance.

Figure 3.5 Example of flare. Note the loss of contrast in the upper left part of the image, where the flare is most noticeable.

Figure 3.6 Examples of two different renditions of color, each equally realistic.

Figure 3.7 Examples of colorimetrically “correct” rendition of a scene (left), and a preferential rendition with increased color saturation and vividness (right).

Figure 3.8 Example of unacceptable color, such as seen in the green sky.

Figure 3.9 Example of unacceptable white balance, here with a distinct overall blue shift.

Figure 3.10 Examples of optical distortion. Left: pincushion; right: barrel.

Figure 3.11 Example of image distortion due to the use of a rolling shutter. The propeller blades appear to be “floating” in mid-air instead of being attached to the hub of the propeller.

Figure 3.12 Example of luminance shading. The sand in the corners is distinctly darker than in the center of the image.

Figure 3.13 A color shading example in which the center region is greenish and the corners are reddish.

Figure 3.14 Example of blur due to lens aberrations. Note the deterioration in sharpness toward the edges and corners of the image.

Figure 3.15 Illustration of depth of field where the foreground object on the right is strongly out of focus, while structures farther back appear sharp.

Figure 3.16 Images showing the distinction between sharpness and resolution. The left image is clearly sharper than the right image. However, the right image shows more detail in fine structures such as in the segmentation and edges of the reeds.

Figure 3.17 Example of sharpening artifacts. Left: original; right: oversharpened. Note especially the bright “halo” around the mountain range in the far distance, due to excessive sharpening.

Figure 3.18 Example of motion blur. Note that the person in the foreground appears blurrier compared to people in the background. This is mainly explained by the fact that objects farther away from the camera are moving at a lower angular speed, and therefore travel over a smaller portion of the field of view during a given time period, compared with objects closer to the camera.

Figure 3.19 Examples of different types of noise. Top left: white luminance noise; top right: luminance noise; bottom left: column noise; bottom right: chrominance noise.

Figure 3.20 Example of texture blur. Even though the dog's nose as well as structures in the pillow appear sharp, the fur is blurry, giving an unnatural appearance.

Figure 3.21 Example of color fringing. The effect is most clearly seen toward the edges and corners of the image.

Figure 3.22 Example of image defects. Note the light blue defect in the upper left corner which is an example of a pixel defect, and the blurry dark circular objects in the sky which are examples of particles in the optical path.

Figure 3.23 Example of aliasing artifacts. The image to the right is a downsampled and upscaled version of the left image. Notice the patterns appearing in the tatami mat.

Figure 3.24 Example of demosaicing artifacts. Note the colored “zipper” artifacts around sharp edges.

Figure 3.25 JPEG compression artifacts, seen especially around the antennas of the butterfly and as an overall “blockiness.”

Figure 3.26 Example of flicker as seen in the parallel horizontal dark bands.

Figure 3.27 An HDR processed image showing typical tone mapping artifacts, for example, the strong “halos” around the tree.

Figure 3.28 Lens ghost example, seen as faint rings and circles in the upper left of the image emanating from the sun in the lower right.

Chapter 4: The Camera

Figure 4.1 The camera obscura.

Figure 4.2 The principle of the lens. f is the focal length.

Figure 4.3 Ray trace of a thick lens focused at infinity, showing spherical aberration, see text.

Figure 4.4 Images of a point source due to third-order aberrations. Left: spherical; middle: coma; right: astigmatism.

Figure 4.5 Two types of optical distortion. Left: pincushion; right: barrel distortion.

Figure 4.6 Ray trace of a triplet lens as described in a patent by Baur and Freitag (1963). a and b: principal planes; c: exit pupil; d: entrance pupil. Green rays are 550 nm and red rays 650 nm light.

Figure 4.7 Wavelength sensitivity of the human eye.

Figure 4.8 Illustration of vignetting, see text.

Figure 4.9 Mitigating the effect of vignetting by stopping down. In this case, the rays from both objects are equally obstructed by the aperture stop.

Figure 4.10 Through focus MTF. The MTF value for one specific spatial frequency is plotted as a function of focus shift. Red curve: sagittal MTF; blue curve: tangential MTF.

Figure 4.11 Images of the diffraction point spread function. The image shown in the right figure was generated using an f-number twice as large as in the left image.

Figure 4.12 Basic structure of a MOS capacitor.

Figure 4.13 Three-phase readout scheme of a CCD, see text.

Figure 4.14 Schematic of a 4T APS CMOS pixel.

Figure 4.15 Graphical representation of a CMOS image sensor. Yellow boxes each represent one pixel.

Figure 4.16 Timing diagram of the exposure of a rolling shutter CMOS sensor.

Figure 4.17 Timing diagram of the exposure of a global reset CMOS sensor with a mechanical shutter.

Figure 4.18 Spectral sensitivity of the ON Semiconductor KAC-12040 image sensor. Dashed lines: no color filters; full lines: with red, green, and blue color filters.

Figure 4.19 The Bayer pattern.

Figure 4.20 Alternative color filter array pattern including clear pixels.

Figure 4.21 Example photon transfer curve. Three distinct regions can be distinguished: at low signal levels, signal independent dark noise dominates; at intermediate levels, the photon shot noise has the largest influence; at the highest levels, the photo response nonuniformity is the main noise source.
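
The three regions described in this caption follow from adding independent noise sources in quadrature. As a rough illustration only, the following Python sketch evaluates such a simple noise model with assumed, hypothetical sensor parameters (the read noise and PRNU values are illustrative and are not taken from the book or from any specific sensor):

```python
# Minimal sketch of a photon-transfer-style noise model (hypothetical values).
import numpy as np

read_noise = 3.0    # signal-independent dark/read noise in electrons RMS (assumed)
prnu = 0.01         # photo response nonuniformity as a fraction of signal (assumed)

signal = np.logspace(0, 4, 9)            # mean signal in electrons
shot = np.sqrt(signal)                   # photon shot noise (Poisson statistics)
total = np.sqrt(read_noise**2 + shot**2 + (prnu * signal)**2)
snr = signal / total                     # corresponds to the SNR curve of Figure 4.22

for s, n, r in zip(signal, total, snr):
    print(f"signal {s:9.1f} e-   noise {n:7.2f} e-   SNR {r:7.1f}")
```

At low signal the constant read-noise term dominates, at intermediate signal the shot-noise term grows as the square root of the signal, and at the highest signals the PRNU term, which grows linearly with signal, takes over; the same three regimes appear in the SNR curve of Figure 4.22.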

Figure 4.22 Signal to noise ratio of an image sensor as a function of signal value. Just as for the noise versus signal graph in Figure 4.21, three distinct regions can be distinguished in the graph.

Figure 4.23 Illustration of color error introduced by having an incorrect black level correction. Left: black level subtracted before color correction; right: no black level subtraction before color correction.

Figure 4.25 Example of white balancing and color correction. Left: no white balance; middle: white balanced; right: white balanced and color corrected.

Figure 4.24 Geometry for bilinear color interpolation.

Figure 4.26 Example of noise filtering. Left: original image; middle: linear filter; right: sigma filter. Note how sharp edges are retained for the sigma filter, while low contrast texture is smeared out.

Figure 4.27 Example of unsharp masking. Left: no sharpening; right: unsharp masking applied. The two bottom images show crops of the images above. Note how the right image appears significantly sharper, but also introduces “halos” around edges.

Figure 4.28 An example of the blockwise Discrete Cosine Transform. (a) Image blocks in the spatial domain. (b) Image blocks in the frequency domain (via DCT).

Figure 4.29 Frame differences with and without motion compensation. σ² is the signal variance, a measure of the difference between images, see text. (a) Frame n (reference). σ² = 3478. (b) Frame n+1. σ² = 3650. (c) Frame n+2. σ² = 3688. (d) Frame n+3. σ² = 3745. (e) Difference of frames n and n+1. σ² = 1426. (f) Difference of frames n and n+2. σ² = 2265. (g) Difference of frames n and n+3. σ² = 3020. (h) Motion-compensated difference of frames n and n+1. σ² = 205. (i) Motion-compensated difference of frames n and n+2. σ² = 299. (j) Motion-compensated difference of frames n and n+3. σ² = 363.

Chapter 5: Subjective Image Quality Assessment—Theory and Practice

Figure 5.1 The relationship of lightness (L*) versus pitch (Hz) for 16 observers comparing lightness of OECF patches to a series of single pitches. Note the predictable and linear relationship for the range tested with scales plotted in perceptually linear units of L* and log(Hz).

Figure 5.2 A diagram showing an example triplet comparison on the left of stimuli 1, 2, and 3 versus the equivalent of three paired comparisons of the same three stimuli on the right. For the left case, the observer compares all three stimuli at one viewing time, whereas in the right case, three separate comparisons are necessary. Even though observer judgment time is longer for the triplet comparison, an experiment with triplet comparisons can be judged more quickly than the same experiment using separate paired comparisons because fewer presentations are necessary.

Figure 5.3 The diagram demonstrates key aspects of the ISO 20462 Part 3 quality ruler components, including the calibrated scale from 32 to 0, the associated quality categories, and representations of ruler quality levels for a given scene (ISO, 2012).

Figure 5.4 A comparison of subjective evaluation with anchored pairs performed in four different labs with four different sets of equipment. Increased treatment resulted in an increasing amount of texture blur, which was corroborated by the psychometric results. Error bars are standard error values.

Figure 5.5 The relationship between the psychometrically determined JNDs using six expert judges and the ISO 20462 Part 3 calibrated values. The results are for the “girl” scene viewed at 34 inches as compared to the calibrated values (the solid line) for the same conditions, indicating that the modeled values are valid. Each data point for this experiment performed in Lab 2 represents n = 6 judgments and error bars are standard error.

Figure 5.6 JND averages for each of four companies' labs. Labs 1, 2, and 3 followed the softcopy quality ruler approach while Lab 4 followed an alternative anchored paired comparison method. The standard deviation is plotted versus the JND average for each stimulus. Data sets are fitted with a second-order polynomial.

Figure 5.7 95% confidence limits (in units of JNDs) for varying sample sizes at a given standard deviation level.

Chapter 6: Objective Image Quality Assessment—Theory and Practice

Figure 6.1 Example OECF. The red curve shows the transfer curve of the sRGB color space, as described by Eq. (6.15).

Figure 6.2 The visual spectrum. Note that due to limitations of the reproducing medium, the colors corresponding to particular wavelengths only serve as an approximate illustration of the actual color.

Figure 6.3 Examples of wavelength distributions of some common light sources. Top left: red LED; top right: white LED; bottom left: halogen light; bottom right: fluorescent light.

Figure 6.4 The wavelength distribution of the radiance of black bodies with varying temperatures. The plots are normalized to unity peak value.

Figure 6.5 CIE illuminant power spectral distributions. Top left: A; top right: CIE D50 and D65; bottom left: CIE F11; bottom right: CIE E.

Figure 6.6 Reflections from a surface. Left: specular reflection; right: diffuse reflection.

Figure 6.7 The CIE color matching functions x̄(λ), ȳ(λ), and z̄(λ).

Figure 6.8 Example of a CIE xy chromaticity diagram. Included in the diagram are chromaticities of black bodies between 2 000 and 10 000 K (Planckian locus, red line), as well as a selection of CIE standard illuminants.

Figure 6.9 Example distortion charts.

Figure 6.10 Definition of the optical distortion metric. Dashed lines correspond to the undistorted case.

Figure 6.11 Presentation of optical distortion.

Figure 6.12 Definition of the TV distortion metric.

Figure 6.13 Distortion chart corresponding to the data presented in Figure 6.11. The height difference, ΔH, at the corners is clearly very close to zero in this case, leading to negligible TV distortion. However, the distortion is clearly noticeable.

Figure 6.14 The setup used in the veiling glare measurement.

Figure 6.15 Graphical example of structure with varying spatial frequency content. Low spatial frequencies are found at the left end of the image and high spatial frequencies at the right end.

Figure 6.16 Images showing the distinction between sharpness and resolution. The left upper image is clearly sharper than the right upper image. However, in the zoomed in parts, shown in the bottom row, the right image shows more detail in fine structures.

Figure 6.17 Illustration of phase reversal due to negative OTF values. Left: MTF of defocus blur; right: blurred image due to defocus.

Figure 6.18 Change in amplitude and phase of a one-dimensional function passing through a system with a given MTF and PTF.

Figure 6.19 Some examples of MTF curves. Red: diffraction limited lens; green: defocused lens; blue: sharpening filter applied in the image processing.

Figure 6.20 The distinction between sharpness and resolution explained by the MTF. The red curve is the MTF of the imaging system used to produce the left image in Figure 6.16, and the green curve represents the MTF corresponding to the right image in that figure.

Figure 6.21 Example resolution chart.

Figure 6.22 Position and orientation of point spread functions in the image produced by an example lens.

Figure 6.23 Illustration of discrepancies in MTF curves depending on orientation. The right PSF is a copy of the left, but rotated 30 degrees counterclockwise. The bottom graphs show MTFs calculated in the vertical and horizontal directions. The results are evidently different.

Figure 6.24 Example of aliasing. The dashed line represents the reconstructed signal, sampled at the indicated sampling interval.

Figure 6.25 Examples of a band-limited signal that exhibits aliasing (upper curve) and one that does not (lower curve).

Figure 6.26 The system MTF is the combination of several MTFs from separate parts of the camera.

Figure 6.27 The principle of the slanted edge SFR measurement method. (a) Line-by-line sampling of image values across the edge. (b) Edge profiles of each line. (c) Displaced edge profiles yielding an oversampled ESF. (d) Binned ESF. (e) Differentiated edge profile, yielding the LSF. (f) SFR calculated from the Fourier transform of the LSF.
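
As a rough illustration of steps (a) through (f), the following Python sketch runs a much simplified slanted-edge calculation on a synthetic edge. It is not the standardized algorithm (e.g., ISO 12233); the function name, the logistic test edge, and the use of interpolation onto a regular oversampled grid in place of true binning are all simplifying assumptions made here for brevity.

```python
# Simplified slanted-edge SFR sketch; a toy illustration, not the ISO algorithm.
import numpy as np

def slanted_edge_sfr(img, oversample=4):
    rows, cols = img.shape
    x = np.arange(cols)
    # (a)-(b): estimate the edge crossing of each line from the centroid of the
    # derivative of that line's profile
    centers = np.empty(rows)
    for r in range(rows):
        d = np.abs(np.diff(img[r].astype(float)))
        centers[r] = (d * (x[:-1] + 0.5)).sum() / d.sum()
    slope, intercept = np.polyfit(np.arange(rows), centers, 1)  # fitted edge
    # (c): distance of every pixel to the fitted edge gives a scattered, oversampled ESF
    dist = (x[None, :] - (slope * np.arange(rows)[:, None] + intercept)).ravel()
    vals = img.astype(float).ravel()
    order = np.argsort(dist)
    # (d): resample the scattered ESF onto a regular oversampled grid
    grid = np.arange(dist.min(), dist.max(), 1.0 / oversample)
    esf = np.interp(grid, dist[order], vals[order])
    # (e): differentiate to obtain the LSF; apply a window to reduce leakage
    lsf = np.diff(esf) * np.hamming(len(esf) - 1)
    # (f): SFR is the normalized magnitude of the Fourier transform of the LSF
    sfr = np.abs(np.fft.rfft(lsf))
    sfr /= sfr[0]
    freq = np.fft.rfftfreq(len(lsf), d=1.0 / oversample)  # cycles per pixel
    return freq, sfr

# synthetic, slightly slanted and blurred edge as test input
yy, xx = np.mgrid[0:100, 0:100]
edge = 1.0 / (1.0 + np.exp(-(xx - 50 - 0.1 * yy) / 1.5))
freq, sfr = slanted_edge_sfr(edge)
print("SFR at 0.25 cy/px:", np.interp(0.25, freq, sfr))
print("SFR at 0.50 cy/px:", np.interp(0.50, freq, sfr))
```

The slant is what allows the edge to be sampled at many sub-pixel phases, which is why the projected edge profile can be resampled more finely than the native pixel pitch.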

Figure 6.28 A sine modulated Siemens star test pattern.

Figure 6.29 A dead leaves test pattern.

Figure 6.30 Amplification of noise due to the color correction matrix. Left: without CCM; right: with CCM. Note how noise coloration becomes more prominent when a CCM is applied.

Figure 6.31 Examples of noise with different power spectra and autocorrelation functions. Top: white Gaussian noise; middle: noise; bottom: noise filtered with a Gaussian convolution kernel. Left: noise samples; middle: noise power spectrum; right: autocorrelation function. Note that all images have the same variance.

Figure 6.32 Illustration of how noise gets modified in a camera system with nonlinear signal transfer characteristics. A higher slope, as in the shadow region, will amplify the noise considerably, while in the highlights the slope is lower, leading to diminished noise.

Figure 6.33 Spatial frequency characteristics of a highpass filter proposed to correct for image nonuniformities in noise measurements.

Figure 6.34 Calculating the distances between green and red and green and blue pixels for the LCA metric.

Chapter 7: Perceptually Correlated Image Quality Metrics

Figure 7.1 Cone spectral sensitivities calculated from the CIE standard observer and the Hunt–Pointer–Estevez transformation in Eq. (7.1). The red, green, and blue curves describe the sensitivities of the L, M, and S cones, respectively.

Figure 7.2 The top images have been lowpass filtered in the luminance channel in increasing amounts from left to right. In the bottom images, the same amount of blurring was instead applied to the chrominance channels. Notice the distinct difference in appearance between the image sets.

Figure 7.3 Human contrast sensitivity functions plotted.

Figure 7.4 A Campbell–Robson chart.

Figure 7.5 Campbell–Robson charts for the chrominance channels.

Figure 7.6 MTF and CSF curves used to calculate the acutance for a 100% magnification viewing on a computer screen, as discussed in the text.

Figure 7.7 MTF and CSF curves used to calculate the acutance for viewing on an 8″ × 10″ print, as discussed in the text.

Figure 7.8 Plot relating quality JND to edge acutance values. The dashed blue curve is a straight line.

Figure 7.9 Image approximately corresponding to an SQS JND value of 0. The image should be viewed at a distance of 40 cm.

Figure 7.10 The IHIF fitted to the Baxter and Murray (2012) visual noise metric data (red squares) and the JND values of the noisy patches that were used to calibrate the proposed CPIQ metric (green circles).

Figure 7.11 Frame degradations, with PSNR and SSIM values, for a test image. The degradation processes, from left to right, are additive white Gaussian noise (AWGN), quantization of intensity values, and truncation of the 2D DCT computed on pixel blocks. (a) Uncorrupted image. (b) AWGN. PSNR = 30 dB; SSIM = 0.696. (c) 11 intensity levels. PSNR = 30 dB; SSIM = 0.846. (d) 84% DCT truncation. PSNR = 30 dB; SSIM = 0.831. (e) AWGN. PSNR = 24 dB; SSIM = 0.453. (f) 7 intensity levels. PSNR = 25 dB; SSIM = 0.737. (g) 98.5% DCT truncation. PSNR = 25 dB; SSIM = 0.640. (h) AWGN. PSNR = 20 dB; SSIM = 0.290. (i) 4 intensity levels. PSNR = 19 dB; SSIM = 0.561. (j) 99.8% DCT truncation. PSNR = 19 dB; SSIM = 0.530.
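
For reference, PSNR follows directly from the mean squared error, and SSIM combines luminance, contrast, and structure comparisons. The following Python sketch uses the standard definitions but computes SSIM globally over the whole image rather than with the usual local windows; the random test image and noise level are arbitrary illustrative choices, not the data behind this figure.

```python
# Minimal sketch: PSNR and a global (unwindowed) SSIM between two 8-bit images.
import numpy as np

def psnr(ref, test, max_val=255.0):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(max_val**2 / mse)

def ssim_global(ref, test, max_val=255.0):
    # Global statistics only; practical SSIM averages the index over local windows.
    x, y = ref.astype(float), test.astype(float)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)          # arbitrary test image
noisy = np.clip(ref + rng.normal(0.0, 8.0, ref.shape), 0, 255)   # AWGN degradation
print(f"PSNR = {psnr(ref, noisy):.1f} dB, global SSIM = {ssim_global(ref, noisy):.3f}")
```

As the caption's values illustrate, very different degradations can share nearly the same PSNR while their SSIM values separate them.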

Figure 7.12 An overview of the VDP algorithm. Rectangles represent images, and semicircles represent filtering operations. See the main text for details.

Chapter 8: Measurement Protocols—Building Up a Lab

Figure 8.1 Example of three light sources with advertised 2700 K CCT. Note the differences in spectral power distributions for incandescent, compact fluorescent, and LED light sources.

Figure 8.2 Example of lab with lighting at 45° to the normal of the chart surface. This particular setup features multiple light sources and computer-controlled light level adjustment via a DMX interface and also allows for moving the tandem lighting closer or farther away, depending on light level needs.

Figure 8.3 Example of light booth designed specifically for illuminating a chart. Multiple light sources are included and all are set up to provide uniform illumination.

Figure 8.4 Example of a portable light box designed for illuminating a chart. The type of lighting can have various sources such as LED or fluorescent lights.

Figure 8.5 Example of an integrating sphere as the source for illuminating a chart. The type of lighting can have various sources such as LED or tungsten lights.

Figure 8.6 For chart printing quality, the plots include the minimum printed target frequency in cycles/mm with 80% SFR for a given chart size (A series formats) and image sensor resolution. Source: Data for graph from I3A (2009a).

Figure 8.7 Example of a transmissive chart made with neutral density (ND) filters of varying densities. The ND filters are more uniform than other technologies, for example, printing with film recording.

Figure 8.8 A typical camera alignment approach is to use a mirror in the plane of the chart and ensure that the camera's lens reflection in the mirror is centered in the camera viewfinder.

Figure 8.9 Example of real world objects to capture in lab conditions. Note differences in characteristics such as color, texture, gloss, and reflectance.

Figure 8.10 Example of real world objects combined with chart components to capture in lab conditions. This collection is permanently fixed and limited in depth, which functions well for photographing in an objective measurements lab using existing chart lighting.

Figure 8.11 The pictured example timing box can be used to measure metrics such as frame rate and frame exposure time. The multiple tracks of LED arrays allow for moving, visible traces of light to be captured by the camera. The video stream is subsequently analyzed. Note that the fiducials are present to assist autofocus and framing.

Figure 8.12 Shutter trigger examples. Left: mechanical finger example. Source: Reproduced with permission of Image Engineering. Right: capacitive trigger example.

Figure 8.13 Example of a motorized, articulated platform with six degrees of freedom which can vary a camera's orientation and local position.

Figure 8.14 Example of a softcopy viewing lab setup. Note the lighting used to illuminate the spectrally neutral wall behind the monitor without generating front surface reflections on the monitor. The monitor and illuminated wall have similar brightness and the desk surface is a neutral gray.

Figure 8.15 Example set of scenes, captured with four different cameras, to use for subjective measurements. Note the various scene content such as outdoor, indoor, and macro. Differences in global attributes can be noted, while apparent differences in local attributes would require larger magnification.

Chapter 9: The Camera Benchmarking Process

Figure 9.1 Example scene selections from four cameras with varying imaging quality for the outdoor day landscape category of VIQET (VQEG (Video Quality Experts Group) Image Quality Evaluation Tool) analysis. Note that the differences in aspect ratio are native to each camera. Counterclockwise from upper right for each scene: (a) Flip phone; (b) Generation 5 smartphone; (c) Generation 6 smartphone; (d) Basic DSLR. Images were captured handheld.

Figure 9.2 Example scene selections from four cameras with varying imaging quality for the indoor arrangements category of VIQET (VQEG (Video Quality Experts Group) Image Quality Evaluation Tool) analysis. Note that the differences in aspect ratio are native to each camera. Counterclockwise from upper right for top scene and left to right for bottom scene: (a) Flip phone; (b) Generation 5 smartphone; (c) Generation 6 smartphone; and (d) Basic DSLR. Images were captured handheld.

Figure 9.3 Example scene selections from four cameras with varying imaging quality for the outdoor night landmark category of VIQET (VQEG (Video Quality Experts Group) Image Quality Evaluation Tool) analysis. Note that the differences in aspect ratio are native to each camera. Counterclockwise from upper right for each scene: (a) Flip phone; (b) Generation 5 smartphone; (c) Generation 6 smartphone; and (d) Basic DSLR. Images were captured on a tripod.

Figure 9.4 Cropped scene selections from four cameras for the outdoor day landscape (top) and outdoor night landmark (bottom) categories. Images were resized to 1080 resolution, as in the VIQET process, prior to cropping. Counterclockwise from upper right for each scene: (a) Flip phone; (b) Generation 5 smartphone; (c) Generation 6 smartphone; (d) Basic DSLR.

Figure 9.5 Flat field capture with flip phone for color uniformity metric captured under U30 10 lux illumination. Notice the color shifts from a yellowish cast at the top of the image to a bluish cast at the bottom. The objective metric value of maximum chrominance variation is 12.1 for this image.

Figure 9.6 Color chart captured with Generation 6 smartphone under U30 10 lux, TL84 100 lux, and D65 500 lux illumination. The objective metric values of mean chroma level are 86.2, 114, and 108, respectively. Notice the lower chroma level in the 10 lux capture, due in part to the underexposure. ColorChecker Digital SG chart reproduced with permission of X-Rite, Incorporated.

Figure 9.7 Crop of SFR chart captured with Generation 5 smartphone under U30 10 lux (left) and D65 500 lux (right) illumination. The objective metric values of edge acutance are 60.3% and 129%, respectively. The significant differences in edge acuity are due to variables such as scene illuminance level and tuning of image processing.

Figure 9.8 Crop of dead leaves chart captured with Generation 5 smartphone under U30 10 lux (left) and Generation 6 smartphone under D65 500 lux (right) illumination. The objective metric values of texture acutance are 34.4% and 108%, respectively. The significant differences in texture acuity are due to variables such as scene illuminance and tuning of image processing.

Figure 9.9 Crop of OECF chart captured with Generation 5 smartphone under TL84 100 lux (left) and D65 500 lux (right) illumination. The objective metric values of visual noise are 0.72 and 0.86, respectively. This is unexpected as higher illuminance levels typically have lower visual noise. Presumably, the image processing is tuned differently for each illuminance and the noise reduction appears stronger for the 100 lux capture. However, when converted to JNDs, this objective difference is 0.9 JNDs, that is, perceptually small.

Figure 9.10 Crop of dot chart captured with basic DSLR under TL84 100 lux illumination. This particular dot was cropped from the lower right corner and represents the maximum LCD of 1.6 arcminutes.

Figure 9.11 Unmatched crop of dead leaves chart captured with the four cameras of interest. Counterclockwise from upper right: (a) Flip phone; (b) Generation 5 smartphone; (c) Generation 6 smartphone; (d) Basic DSLR. Texture acutance values are 62.0, 34.4, 75.4, and 46.6%, respectively.

Figure 9.12 Crops of 100% magnification of cameras and conditions with highest visual noise metric results. Content varies due to differences in native resolution of cameras. Top left, worst CPIQ visual noise; top right, worst ISO visual noise; bottom left, worst CPIQ visual noise in context of image; bottom right, worst ISO visual noise in context of image. Left images are Generation 5 smartphone and right images are flip phone. All are taken at low light, for example, 10 lux for the charts.

Figure 9.13 People scene from four cameras: full scene to compare global attributes and cropped to compare local attributes. Counterclockwise from upper right for each scene: (a) Flip phone; (b) Generation 5 smartphone; (c) Generation 6 smartphone; (d) Basic DSLR.

Figure 9.14 Edge SFR and texture MTF curves of high-end 21 MP DSLR at 10, 100, and 500 lux capture conditions. Note the ISO speed was fixed at 100 to achieve consistent and high acutance levels. With the onboard sharpness setting at maximum, sharpening is evident in edge SFR values greater than 1. The texture MTF shows consistent and strong texture response over much of the spatial frequency range with minor amounts of sharpening.

Figure 9.15 Edge SFR and texture MTF curves of Generation 6 smartphone camera at 10, 100, and 500 lux capture conditions. The images were captured in automatic mode. Note the evidence of sharpening in both plots, with SFR and MTF values reaching and surpassing 1.4.

Figure 9.16 Comparison of cropped dead leaves chart captured under D65 500 lux. Top left, high-end 21 MP DSLR; top right, Generation 6 smartphone; bottom, reference dead leaves pattern. Note how the captured images both diverge from the reference, but to differing degrees.

Figure 9.17 Comparison of cropped regions of images taken with Generation 6 smartphone, left, and high-end DSLR, right. Note the differences in texture and sharpness in the hardware pieces of an antique bellows camera. Images represent the use case of 100% magnification on a computer monitor.

Figure 9.18 Comparison of CPIQ total quality loss predictions and DxOMark Mobile Photo scores for the 9 CPIQ validation phones (Jin et al., 2017). Note the general trend that total quality loss decreases as the DxOMark Photo score increases, an expected outcome for correlated results.

List of Tables

Chapter 4: The Camera

Table 4.1 Relation between radiometric and photometric units

Table 4.2 Noise types typically encountered in CMOS and CCD sensors

Chapter 5: Subjective Image Quality Assessment—Theory and Practice

Table 5.1 Stevens' law exponents for various modalities. Note the range from 0.33 to 1.0 to 3.5, which includes compressive, linear, and expansive perceptual responses, respectively. Source: Adapted from Stevens 1975. Reproduced with permission of Wiley. Copyright (c) 1975 by John Wiley & Sons, Inc.

Table 5.2 Measurement scales related to psychophysical testing

Table 5.3 Advantages and disadvantages of fundamental psychophysical methods. Based on Table 2 in CPIQ Phase 2 Subjective Evaluation Methodology (I3A, 2009). Adapted and reprinted with permission from IEEE. Copyright IEEE 2012. All rights reserved

Table 5.4 Example ISO 20462 Part 3 JND values for scenes taken with a Canon EOS 1Ds Mark II D-SLR camera to be used for ruler images judged at a viewing distance of 34 inches (from supplemental material for ISO (2012)). Note the sub-JND spacings for the high-quality end of the calibrated scale (rulers with highest JND values)

Table 5.5 Comparison of ITU BT.500 recommendations for viewing setup (International Telecommunication Union, 2012). The Lab condition is for stringent assessment, while the Home condition is slightly more critical than a typical home. Source: Reproduced with permission of ITU.

Table 5.6 Subjective scales used for rating quality or impairment levels as recommended in ITU BT.500 (International Telecommunication Union, 2012). Source: Reproduced with permission of ITU.

Table 5.7 Subjective scale used for rating the comparison of a test clip to a reference clip as recommended in ITU BT.500 (International Telecommunication Union, 2012). Source: Reproduced with permission of ITU.

Chapter 6: Objective Image Quality Assessment—Theory and Practice

Table 6.1 CIE chromaticities and correlated color temperatures of CIE standard illuminants

Chapter 7: Perceptually Correlated Image Quality Metrics

Table 7.1 Example viewing conditions for visual image quality metrics. The last item is an example of a viewing condition likely to become relevant for a broad consumer range in the near future. This last item demonstrates the need to update the standard viewing conditions regularly

Table 7.2 Coefficients defining the luminance and chrominance CPIQ CSFs. Source: Adapted from Johnson and Fairchild (2003).

Table 7.3 Parameters for the original S-CIELAB CSFs. Source: Data from Zhang and Wandell (1997).

Table 7.4 Coefficients defining the luminance and chrominance CSFs for the iCAM model. Source: Data from Reinhard et al. (2008).

Chapter 8: Measurement Protocols—Building Up a Lab

Table 8.1 For the given image quality attributes, some measurements from example charts are presented. Charts reproduced with permission of DxO Labs and Imatest. ColorChecker Classic chart reproduced with permission of X-Rite, Incorporated

Table 8.2 Capture distances (cm) for two sizes of a combination chart with OECF and SFR components. The distances in bold were captured with the specified chart framing as per ISO (2014). The 4x chart was only captured at a far distance. The 4x size and 2x size are closest to A series formats A0 and A2, respectively (Koren, 2016). Source: Data from Koren, 2016.

Table 8.3 Results for SFR acutance and visual noise at different capture conditions. The results for the capture of the 2x-sized chart (400 mm × 610 mm) at the closest distance of 56.5 cm, which was captured with the specified chart framing as per ISO (2014), are compromised compared to the results for captures at farther distances. This provides an example of how measurements can be impacted by, presumably, the print quality of the chart. The results were calculated for a use case of a 4K UHDTV (30 inches, 146 ppi) viewed at 50 cm using Imatest Master 4.5.7 (Koren, 2016). Source: Data from Koren, 2016.

Table 8.4 Comparison of CPIQ texture acutance for varying vertical FOV of the dead leaves pattern (Nielsen, 2017). The results are for the use case of viewing the image at 100% magnification on a 100 ppi monitor from a distance of 60 cm

Chapter 9: The Camera Benchmarking Process

Table 9.1 Comparison of VIQET scores for flip phone, camera phones, and basic DSLR. Note that the variability for each value is ±0.1. Outdoor day and indoor images were captured handheld. Outdoor night images were captured on a tripod.

Table 9.2 Comparison of OM results captured under U30 light at 10 lux. Metrics include chroma level (CL), color uniformity (CU), local geometric distortion (LGD), spatial frequency response (SFR), texture blur (TB), visual noise (VN), and lateral chromatic displacement (LCD)

Table 9.4 Comparison of OM results captured under D65 light at 500 lux. Metrics include chroma level (CL), color uniformity (CU), local geometric distortion (LGD), spatial frequency response (SFR), texture blur (TB), visual noise (VN), and lateral chromatic displacement (LCD)

Table 9.5 Comparison of CPIQ and cross correlation texture blur (TB) acutance results. For each camera, the value is an average of the results from U30 10 lux, TL84 100 lux, and D65 500 lux captures

Table 9.6 Comparison of CPIQ and ISO visual noise (VN) results. Due to the context of the visual aspect of noise, that is, in context of a color photograph or neutral flat field, respectively, the two metrics have different scale strengths

Table 9.7 Comparison of individual and total quality loss (QL) results captured under U30 light at 10 lux. Subjective predictions include chroma level (CL), color uniformity (CU), local geometric distortion (LGD), spatial frequency response (SFR), texture blur (TB), visual noise (VN), and lateral chromatic displacement (LCD)

Table 9.9 Comparison of individual and total QL results captured under D65 light at 500 lux. Subjective predictions include chroma level (CL), color uniformity (CU), local geometric distortion (LGD), spatial frequency response (SFR), texture blur (TB), visual noise (VN), and lateral chromatic displacement (LCD)

Table 9.10 Comparison of DxOMark Mobile scores for the Generation 5 and 6 camera phones. Note that the variability for each value is ±2 for the DxOMark Mobile score. The normalized score for VIQET and the average CPIQ total quality loss values are compared.

Table 9.11 Comparison of SFR and texture acutance values for the use case of 100% magnification on a 100 ppi monitor viewed at 86 cm

Table 9.13 Comparison of SFR and texture acutance values for the use case of a 60-inch 73.43 ppi UHDTV viewed at 200 cm

Table 9.14 VIQET results for the phones used in the CPIQ validation study (Jin, 2017). The variability for each predicted VIQET MOS value is ±0.1 standard error. Many of the results between cameras in each category are statistically the same. Note that Phone 2 was not part of the VIQET study.

Wiley-IS&T Series in Imaging Science and Technology

 

Series Editorial Board:

Susan Farnand

Geoffrey Wolfe

Raja Bala

Gaurav Sharma

Steven J. Simske

Suzanne Grinnan

 

Reproduction of Colour (6th Edition)

R. W. G. Hunt

 

Colorimetry: Fundamentals and Applications

Noboru Ohta and Alan R. Robertson

 

Color Constancy

Marc Ebner

 

Color Gamut Mapping

Ján Morovič

 

Panoramic Imaging: Sensor-Line Cameras and Laser Range-Finders

Fay Huang, Reinhard Klette and Karsten Scheibe

 

Digital Color Management (2nd Edition)

Edward J. Giorgianni and Thomas E. Madden

 

The JPEG 2000 Suite

Peter Schelkens, Athanassios Skodras and Touradj Ebrahimi (Eds.)

 

Color Management: Understanding and Using ICC Profiles

Phil Green (Ed.)

 

Fourier Methods in Imaging

Roger L. Easton, Jr.

 

Measuring Colour (4th Edition)

R.W.G. Hunt and M.R. Pointer

 

The Art and Science of HDR Imaging

John McCann and Alessandro Rizzi

 

Computational Colour Science Using MATLAB (2nd Edition)

Stephen Westland, Caterina Ripamonti and Vien Cheung

 

Color in Computer Vision: Fundamentals and Applications

Theo Gevers, Arjan Gijsenij, Joost van de Weijer and Jan-Mark Geusebroek

 

Color Appearance Models (3rd Edition)

Mark D. Fairchild

 

2.5D Printing: Bridging the Gap between 2D and 3D Applications

Carinna Parraman and Maria V. Ortiz Segovia

 

 

Published in Association with the Society for Imaging Science and Technology

Camera Image Quality Benchmarking

 

Jonathan B. Phillips

Google Inc., USA

 

Henrik Eliasson

Eclipse Optics AB, Sweden

 

 

With contributions on video image quality by Hugh Denman

 

 

This edition first published 2018

© 2018 John Wiley and Sons Ltd

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Jonathan B. Phillips and Henrik Eliasson to be identified as the authors of this work / the editorial material in this work has been asserted in accordance with law.

Registered Offices

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

Editorial Office

The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print-on-demand. Some content that appears in standard print versions of this book may not be available in other formats.

Limit of Liability/Disclaimer of Warranty

While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

The content of the book, including the views and opinions expressed, is solely attributable to the Authors and does not reflect any official policy or practice of Kodak, its subsidiaries or affiliates. Kodak's permission should not imply any approval or verification by Kodak of the factual and opinion contents of the book.

Library of Congress Cataloging-in-Publication Data

Names: Phillips, Jonathan B., 1970- author. | Eliasson, Henrik, author. | Denman, Hugh, 1978- contributor.

Title: Camera image quality benchmarking / by Jonathan B. Phillips, Henrik Eliasson ; with contributions on video image quality by Hugh Denman.

Description: Hoboken, NJ : John Wiley & Sons, 2017. | Includes bibliographical references and index. |

Identifiers: LCCN 2017024315 (print) | LCCN 2017041277 (ebook) | ISBN 9781119054528 (pdf) | ISBN 9781119054511 (epub) | ISBN 9781119054498 (cloth)

Subjects: LCSH: Image processing. | Imaging systems-Image quality.

Classification: LCC TA1637 (ebook) | LCC TA1637 .P483 2017 (print) | DDC 771.3-dc23

LC record available at https://lccn.loc.gov/2017024315

Cover Design: Wiley

Cover Image: © jericho667/Getty Images

About the Authors

Source: Courtesy of Weinberg-Clark Photography

Jonathan B. Phillips is Staff Image Scientist at Google where his responsibilities include overseeing the approach to defining, measuring, and developing image quality for consumer hardware. His involvement in the imaging industry spans more than 25 years, including an image scientist position at NVIDIA and two decades at Eastman Kodak Company where he was Principal Scientist of Imaging Standards. His focus has been on photographic quality, with an emphasis on psychophysical testing for both product development and fundamental perceptual studies. His broad experience has included image quality work with capture, display, and print technologies. He received the 2011 I3A Achievement Award for his work on camera phone image quality and headed up the 2012 revision of ISO 20462 – Psychophysical experimental methods for estimating image quality – Part 3: Quality ruler method. He is a United States delegate to the ISO Technical Committee 42/Working Group 18 on photography and a longstanding member of the IEEE CPIQ (Camera Phone Image Quality) initiative. With sponsorship from Kodak, Jonathan's graduate work was in color science in the Munsell Color Science Lab and the Center for Imaging Science at Rochester Institute of Technology. His undergraduate studies in chemistry and music were at Wheaton College (IL).

Henrik Eliasson received his Master's and PhD degrees in physics from Göteborg University. His thesis work focused on relaxation processes in polymers around the glass transition temperature. He has been working in the optics and imaging industry for the past 17 years, first as a consultant designing optical measurement systems and between 2003 and 2012 as a camera systems engineer at Sony Ericsson/Sony Mobile Communications. There he engineered the camera systems in many of the successful products made by the company. He was also deeply involved in the image quality improvement work and in building up the camera labs as well as designing and implementing new image quality assessment methods. In this role he was the company representative in the CPIQ (Camera Phone Image Quality) initiative, where he was a key contributor in developing many of the image quality metrics. He has also been a Swedish delegate to the ISO Technical Committee 42/Working Group 18 on photography. His experience and expertise in imaging cover many different areas, such as color science, optical measurements, image sensor characterization and measurements, as well as algorithm development and image systems simulations and visualization. He also has a keen interest in photography in general, providing many of the photographs found in this book. Currently, he is working at Eclipse Optics in Sweden as an image sensor and image analysis specialist. Dr. Eliasson is a Senior Member of SPIE.

Series Preface

At the turn of the century, a cellular phone with an on-board camera did not exist and the film camera market had just hit its historic peak. In 1999, digital cameras were introduced, and in the early 2000s cameras were first integrated into mobile phones. By 2015, more than 25% of the world's population were using smartphones. With this explosion of “pocket supercomputers”, the need to understand and evaluate the quality of the pictures captured by digital cameras has increased markedly, and a resource for Camera Image Quality Benchmarking has become essential. In this so-named book, part of the Wiley-IS&T Series in Imaging Science and Technology, Jonathan Phillips and Henrik Eliasson provide information on image quality metrics, how they were developed, why they are needed, and how they are used.

This book provides the framework for understanding the visual quality of digitally captured images. It defines image quality and its attributes, and sketches a detailed perspective on the qualitative and quantitative approaches to the evaluation of captured images. There are many existing sources for learning about the subjective and objective procedures for evaluating image quality; however, this book goes many steps further. It provides the reader with an understanding of the important elements of the camera itself as well as of the physiology and physicality of the human visual system. This awareness of both the human and machine capture systems provides the background needed to understand why the accepted metrics were developed. The book also elucidates why measuring perceived quality has been such an intractable problem to solve.

Additionally, a key contribution of Camera Image Quality Benchmarking is that it provides detailed information on how to set up a lab for efficiently conducting this work. This means describing the testing, including how to select image content and observers when needed. This information is invaluable to those who aim to understand the capabilities of camera prototypes and to evaluate finished products.

The authors have been engaged in the development of camera-captured image quality measurements for many years. Their complementary backgrounds provide them with a somewhat different emphasis. Mr. Jonathan Phillips has generally been focused on subjective and applied aspects of image quality evaluation. He has played an important role in image capture evaluation in industry. As a seasoned Image Scientist, currently at Google, and previously at NVIDIA and Kodak, he has been deeply engaged in the development and evaluation of image quality measurements and the use of these to foster improved capture products. Additionally, he has been a key member of the IEEE Camera Phone Image Quality (CPIQ) initiative and the ISO Technical Committee 42 on photography. Thus, he has been instrumental in the development of international standards for quantifying photographic quality. The research focus of Mr. Phillips' graduate work in Color Science at the Rochester Institute of Technology was on perceptual image quality. His undergraduate studies were in chemistry and music at Wheaton College (IL). His accomplishments include the 2011 Achievement Award from International Imaging Industry Association for his contributions to the CPIQ image quality test metrics. His academic and industrial backgrounds serve as a solid foundation for making this valuable contribution to the Wiley-IS&T Series in Imaging Science and Technology.

Partnering Mr. Phillips' attention to the subjective and applied aspects, Dr. Henrik Eliasson has generally been focused on the objective and theoretical side of image quality measurement. Dr. Eliasson completed his graduate work in Physics at Göteborg University. Since then, he has designed optical measurement systems and, more recently, engineered camera systems for Sony Ericsson/Sony Mobile Communications. His work at Sony Ericsson also involved establishing the camera labs as well as designing and implementing improved image quality evaluation techniques. Currently, he is working as a consultant at Eclipse Optics in Sweden, with a focus on image sensor technology and image analysis. His publications cover a breadth of imaging and image quality topics including optics simulation, white balancing assessment, and image sensor crosstalk characterization. He, like Mr. Phillips, has played an important role in the CPIQ (Camera Phone Image Quality) initiative. He has served as a Swedish delegate in the ISO Technical Committee 42/Working Group 18 on photography. Dr. Eliasson is a Senior Member of SPIE. Together, the two authors bring significant experience in and understanding of the world of Camera Image Quality Benchmarking.

As cameras become ubiquitous for everything from selfies to “shelfies” (inventory management), and from surveillance to purveyance (automated point of sale), image quality assessment needs to become increasingly automated, so the right information is disambiguated and the unnecessary images are discarded. It is hard to imagine a world without digital image capture, and yet we have only begun. This book is sure to maintain its relevance in a world where automated capture, categorization, and archival imaging become increasingly critical.

Susan P. Farnand and Steven J. Simske

Preface

The seed