Foundations of Image Science - Harrison H. Barrett - E-Book


Description

Winner of the 2006 Joseph W. Goodman Book Writing Award!

A comprehensive treatment of the principles, mathematics, and statistics of image science.

In today's visually oriented society, images play an important role in conveying messages. From seismic imaging to satellite images to medical images, our modern society would be lost without images to enhance our understanding of our health, our culture, and our world. Foundations of Image Science presents a comprehensive treatment of the principles, mathematics, and statistics needed to understand and evaluate imaging systems. The book is the first to provide a thorough treatment of the continuous-to-discrete, or CD, model of digital imaging. Foundations of Image Science emphasizes the need for meaningful, objective assessment of image quality and presents the necessary tools for this purpose.

Approaching the subject within a well-defined theoretical and physical context, this landmark text presents the mathematical underpinnings of image science at a level that is accessible to graduate students and practitioners working with imaging systems, as well as well-motivated undergraduate students.
Destined to become a standard text in the field, Foundations of Image Science covers:

* Mathematical Foundations: Examines the essential mathematical foundations of image science
* Image Formation: Models and Mechanisms: Presents a comprehensive and unified treatment of the mathematical and statistical principles of imaging, with an emphasis on digital imaging systems and the use of SVD methods
* Image Quality: Provides a systematic exposition of the methodology for objective or task-based assessment of image quality
* Applications: Presents detailed case studies of specific direct and indirect imaging systems and provides examples of how to apply the various mathematical tools covered in the book
* Appendices: Covers the prerequisite material necessary for understanding the material in the main text, including matrix algebra, complex variables, and the basics of probability theory


Page count: 3377

Publication year: 2013




Contents

Cover

Half Title page

Title page

Copyright page

Dedication

Preface

Organization of the Book

Suggestions for Course Outlines

Prologue

Kinds of Imaging Systems

Objects and Images as Vectors

Imaging as a Mapping Operation

Detectors and Measurement Noise

Image Reconstruction and Processing

Objective Assessment of Image Quality

Probability and Statistics

Chapter 1: Vectors and Operators

1.1 Linear Vector Spaces

1.2 Types of Operators

1.3 Hilbert-Space Operators

1.4 Eigenanalysis

1.5 Singular-Value Decomposition

1.6 Moore-Penrose Pseudoinverse

1.7 Pseudoinverses and Linear Equations

1.8 Reproducing-Kernel Hilbert Space

Chapter 2: Dirac Delta and Other Generalized Functions

2.1 Theory of Distributions

2.2 One-Dimensional Delta Function

2.3 Other Generalized Functions in 1D

2.4 Multidimensional Delta Functions

Chapter 3: Fourier Analysis

3.1 Sines, Cosines and Complex Exponentials

3.2 Fourier Series

3.3 1D Fourier Transform

3.4 Multidimensional Fourier Transforms

3.5 Sampling Theory

3.6 Discrete Fourier Transform

Chapter 4: Series Expansions and Integral Transforms

4.1 Expansions in Orthogonal Functions

4.2 Classical Integral Transforms

4.3 Fresnel Integrals and Transforms

4.4 Radon Transform

Chapter 5: Mixed Representations

5.1 Local Spectral Analysis

5.2 Bilinear Transforms

5.3 Wavelets

Chapter 6: Group Theory

6.1 Basic Concepts

6.2 Subgroups and Classes

6.3 Group Representations

6.4 Some Finite Groups

6.5 Continuous Groups

6.6 Groups of Operators on a Hilbert Space

6.7 Quantum Mechanics and Image Science

6.8 Functions and Transforms on Groups

Chapter 7: Deterministic Descriptions of Imaging Systems

7.1 Objects and Images

7.2 Linear Continuous-to-Continuous Systems

7.3 Linear Continuous-to-Discrete Systems

7.4 Linear Discrete-to-Discrete Systems

7.5 Nonlinear Systems

Chapter 8: Stochastic Descriptions of Objects and Images

8.1 Random Vectors

8.2 Random Processes

8.3 Normal Random Vectors and Processes

8.4 Stochastic Models for Objects

8.5 Stochastic Models for Images

Chapter 9: Diffraction Theory and Imaging

9.1 Wave Equations

9.2 Plane Waves and Spherical Waves

9.3 Green’s Functions

9.4 Diffraction by a Planar Aperture

9.5 Diffraction in the Frequency Domain

9.6 Imaging of Point Objects

9.7 Imaging of Extended Planar Objects

9.8 Volume Diffraction and 3D Imaging

Chapter 10: Energy Transport and Photons

10.1 Electromagnetic Energy Flow and Detection

10.2 Radiometric Quantities and Units

10.3 Boltzmann Transport Equation

10.4 Transport Theory and Imaging

Chapter 11: Poisson Statistics and Photon Counting

11.1 Poisson Random Variables

11.2 Poisson Random Vectors

11.3 Random Point Processes

11.4 Random Amplification

11.5 Quantum Mechanics of Photon Counting

Chapter 12: Noise in Detectors

12.1 Photon Noise and Shot Noise in Photodiodes

12.2 Other Noise Mechanisms

12.3 X-Ray and Gamma-Ray Detectors

Chapter 13: Statistical Decision Theory

13.1 Basic Concepts

13.2 Classification Tasks

13.3 Estimation Theory

Chapter 14: Image Quality

14.1 Survey of Approaches

14.2 Human Observers and Classification Tasks

14.3 Model Observers

14.4 Sources of Images

Chapter 15: Inverse Problems

15.1 Basic Concepts

15.2 Linear Reconstruction Operators

15.3 Implicit Estimates

15.4 Iterative Algorithms

Chapter 16: Planar Imaging with X Rays and Gamma Rays

16.1 Digital Radiography

16.2 Planar Imaging in Nuclear Medicine

Chapter 17: Single-Photon Emission Computed Tomography

17.1 Forward Problems

17.2 Inverse Problems

17.3 Noise and Image Quality

Chapter 18: Coherent Imaging and Speckle

18.1 Basic Concepts

18.2 Speckle in a Nonimaging System

18.3 Speckle in an Imaging System

18.4 Noise and Image Quality

18.5 Point-Scattering Models and Non-Gaussian Speckle

18.6 Coherent Ranging

Chapter 19: Imaging in Fourier Space

19.1 Fourier Modulators

19.2 Interferometers

Epilogue

Appendix A: Matrix Algebra

A.1 Notation and Terminology

A.2 Basic Algebraic Operations

A.3 Matrix Inversion

A.4 Eigenvectors and Eigenvalues

A.5 Determinants

A.6 Traces

A.7 Functions of Matrices

A.8 Definite Matrices and Quadratic Forms

A.9 Differentiation Formulas

A.10 Taylor Expansions

A.11 Matrix and Vector Inequalities

Appendix B: Complex Variables

B.1 Complex Algebra

B.2 Functions of a Complex Variable

B.3 Complex Integration

Appendix C: Probability

Introduction

C.1 Calculus of Probability

C.2 Single Random Variables

C.3 Functions of a Single Random Variable

C.4 Two Random Variables

C.5 Continuous Probability Laws

C.6 Discrete Probability Laws

C.7 Sampling Methods

Bibliography

Index

Foundations of Image Science

WILEY SERIES IN PURE AND APPLIED OPTICS

Founded by Stanley S. Ballard, University of Florida

EDITOR: Bahaa E. A. Saleh

BARRETT AND MYERS · Foundations of Image Science

BEISER · Holographic Scanning

BERGER-SCHUNN · Practical Color Measurement

BOYD · Radiometry and the Detection of Optical Radiation

BUCK · Fundamentals of Optical Fibers

CATHEY · Optical Information Processing and Holography

CHUANG · Physics of Optoelectronic Devices

DELONE AND KRAINOV · Fundamentals of Nonlinear Optics of Atomic Gases

DERENIAK AND BOREMAN · Infrared Detectors and Systems

DERENIAK AND CROWE · Optical Radiation Detectors

DE VANY · Master Optical Techniques

GASKILL · Linear Systems, Fourier Transforms, and Optics

GOODMAN · Statistical Optics

HOBBS · Building Electro-Optical Systems: Making It All Work

HUDSON · Infrared System Engineering

IIZUKA · Elements of Photonics, Volume I: In Free Space and Special Media

IIZUKA · Elements of Photonics, Volume II: For Fiber and Integrated Optics

JUDD AND WYSZECKI · Color in Business, Science, and Industry, Third Edition

KAFRI AND GLATT · The Physics of Moiré Metrology

KAROW · Fabrication Methods for Precision Optics

KLEIN AND FURTAK · Optics, Second Edition

MALACARA · Optical Shop Testing, Second Edition

MILONNI AND EBERLY · Lasers

NASSAU · The Physics and Chemistry of Color: The Fifteen Causes of Color, Second Edition

NIETO-VESPERINAS · Scattering and Diffraction in Physical Optics

OSCHE · Optical Detection Theory for Laser Applications

O’SHEA · Elements of Modern Optical Design

OZAKTAS · The Fractional Fourier Transform

SALEH AND TEICH · Fundamentals of Photonics, Second Edition

SCHUBERT AND WILHELMI · Nonlinear Optics and Quantum Electronics

SHEN · The Principles of Nonlinear Optics

UDD · Fiber Optic Sensors: An Introduction for Engineers and Scientists

UDD · Fiber Optic Smart Structures

VANDERLUGT · Optical Signal Processing

VEST · Holographic Interferometry

VINCENT · Fundamentals of Infrared Detector Operation and Testing

WILLIAMS AND BECKLUND · Introduction to the Optical Transfer Function

WYSZECKI AND STILES · Color Science: Concepts and Methods, Quantitative Data and Formulae, Second Edition

XU AND STROUD · Acousto-Optic Devices

YAMAMOTO · Coherence, Amplification, and Quantum Effects in Semiconductor Lasers

YARIV AND YEH · Optical Waves in Crystals

YEH · Optical Waves in Layered Media

YEH · Introduction to Photorefractive Nonlinear Optics

YEH AND GU · Optics of Liquid Crystal Displays

Copyright © 2004 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4744, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, e-mail: [email protected].

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services please contact our Customer Care Department within the U.S. at (877) 762-2974, outside the U.S. at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print, however, may not be available in electronic format.

Library of Congress Cataloging-in-Publication Data is available.

ISBN 0-471-15300-1

To Cathy, David, Mindy, Ryan, Justin, Matt, Ben, Andrew

Preface

Images are ubiquitous in the modern world. We depend on images for news, communication and entertainment as well as for progress in medicine, science and technology. For better or worse, television images are virtually a sine qua non of modern life. If we become ill, medical images are of primary importance in our care. Satellite images provide us with weather and crop information, and they provide our military commanders with timely and accurate information on troop movements. Biomedical research and materials science could not proceed without microscopic images of many kinds. The petroleum reserves so essential to our economy are usually found through seismic imaging, and enemy submarines are located with sonic imaging. These examples, and many others that readily come to mind, are ample proof of the importance of imaging systems.

While many of the systems listed above involve the latest in high technology, it is not so obvious that there is an underlying intellectual foundation that ties the technologies together and enables systematic design and optimization of diverse imaging systems. A substantial literature exists for many of the subdisciplines of image science, including quantum optics, ray optics, wave propagation, image processing and image understanding, but these topics are typically treated in separate texts without significant overlap. Moreover, the practitioner’s goal is to make better images, in some sense, but little attention is paid to the precise meaning of the word “better.” In such circumstances, can imaging be called a science?

There are three elements that must be present for a discipline to be called a science. First, the field should have a common language, an agreed-upon set of definitions. Second, the field should have an accepted set of experimental procedures. And finally, the field should have a theory with predictive value. It is the central theme of this book that there is indeed a science of imaging, with a well-defined theoretical and experimental basis. In particular, we believe that image quality can be defined objectively, measured experimentally and predicted and optimized theoretically.

Our goal in writing this book, therefore, is to present a coherent treatment of the mathematical and physical foundations of image science and to bring image evaluation to the forefront of the imaging community’s consciousness.

ORGANIZATION OF THE BOOK

There are a number of major themes that weave their way throughout this book, as well as philosophical stances we have taken, so we recommend that the reader begin with the prologue to get an introduction to these themes and our viewpoint. Once this big picture is absorbed, the reader should be ready to choose where to jump into the main text for more detailed reading.

Mathematical Foundations

The first six chapters of this book represent our estimation of the essential mathematical underpinnings of image science. In our view, anyone wishing to do advanced research in this field should be conversant with all of the main topics presented there. The first four chapters are devoted to the important tools of linear algebra, generalized functions, Fourier analysis and other linear transformations. Chapter 5 treats a class of mathematical descriptions called mixed representations, that is, descriptions that mix seemingly incompatible variables such as spatial position and spatial frequency. Chapter 6 presents the basic concepts of group theory, the mathematics of symmetry, which will be applied to the description of imaging systems in later chapters.
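As a small, self-contained taste of the Fourier tools treated in Chapter 3, the sketch below (our own illustration; the array size and functions are invented, not taken from the book) verifies numerically that circular convolution in the spatial domain corresponds to multiplication in the discrete-frequency domain:

```python
import numpy as np

rng = np.random.default_rng(4)

# Discrete convolution theorem: circular convolution of f and h equals the
# inverse DFT of the product of their DFTs. Arrays here are arbitrary.
N = 128
f = rng.normal(size=N)              # arbitrary "object" samples
h = np.exp(-np.arange(N) / 5.0)     # arbitrary "point spread function"

# Direct circular convolution: g[k] = sum_n f[n] h[(k - n) mod N]
g_direct = np.array([np.sum(f * np.roll(h[::-1], k + 1)) for k in range(N)])

# The same result via the FFT convolution theorem
g_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)).real
print(np.max(np.abs(g_direct - g_fft)))
```

The agreement is to floating-point precision; the same identity underlies fast implementations of the shift-invariant imaging models discussed later in the book.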

It was our objective in writing these introductory chapters to present the mathematical foundations of image science at a level that will be accessible to graduate students and well-motivated undergraduates. At the same time, we have attempted to include sufficient advanced material so that the material will be beneficial to established workers in the field. This dual goal requires examining many concepts at different levels of sophistication. We have attempted to do this by providing both elementary explanations of the key points and more detailed mathematical treatments. The reader will find that the level of mathematical rigor is not uniform throughout these chapters or even within a particular chapter. We hope that this approach allows each reader to extract from the book insights appropriate to his or her individual interests and mathematical preparation.

Image Formation: Models and Mechanisms

A quick perusal of the Table of Contents will reveal that a significant portion of the book is devoted to the subject of image formation. We have strived to present a comprehensive and unified treatment of the mathematical and statistical principles of imaging. We hope this serves the image-science community by giving a common language and framework to our many disciplines. Additionally, a thorough understanding of the image-formation process is a prerequisite for the image-evaluation methodology we advocate.

The deterministic analysis of imaging systems begins in Chap. 7, where we present a wide variety of mathematical descriptions of objects and images and mappings from object to image. We argue in Chap. 7 (and briefly also in the Prologue) that digital imaging systems are best described as a mapping from a function to a discrete set of numbers, so much of the emphasis in that chapter will be on such mappings. More conventional mappings such as convolutions are, however, also treated in a unified way. An important tool in Chap. 7 is singular-value decomposition, which is introduced mathematically in Chap. 1.
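The continuous-to-discrete idea can be made concrete with a small numerical sketch. Everything below is invented for illustration (the sizes, the Gaussian detector response and the test object are our own choices, not the book's): a finely sampled stand-in for the object function is mapped to a few detector measurements by a matrix, whose singular-value decomposition then yields a truncated pseudoinverse reconstruction.

```python
import numpy as np

# Hypothetical 1-D system: M detector measurements of an N-sample object,
# each detector having a Gaussian sensitivity profile (illustrative only).
N, M = 200, 64
x = np.linspace(0.0, 1.0, N)            # object sample positions
centers = np.linspace(0.0, 1.0, M)      # detector centers
sigma = 0.02
H = np.exp(-((centers[:, None] - x[None, :]) ** 2) / (2 * sigma**2))
H /= H.sum(axis=1, keepdims=True)       # normalize each detector's sensitivity

# A made-up object and its noise-free data, g = Hf
f = (np.abs(x - 0.3) < 0.05).astype(float) + 0.5 * (np.abs(x - 0.7) < 0.02)
g = H @ f

# The SVD reveals which object components survive the measurement
U, s, Vt = np.linalg.svd(H, full_matrices=False)
print("largest/smallest singular values:", s[0], s[-1])

# Truncated pseudoinverse reconstruction (regularization by truncation)
k = (s > 1e-6 * s[0]).sum()
f_hat = Vt[:k].T @ ((U[:, :k].T @ g) / s[:k])
```

The reconstruction reproduces the data almost exactly, yet f_hat need not equal f: components of the object in the null space of H are simply not measured, which is the central point of the CD viewpoint.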

The deterministic mappings are not a complete description of image formation. Repeated images of a single object will not be identical because of electronic noise in detectors and amplifiers, as well as photon noise, which arises from the discrete nature of photoelectric interactions. In addition, the object itself can often be usefully regarded as random. Object statistics are important in pattern recognition, image reconstruction and evaluation of image quality. Chapter 8 provides a general mathematical framework for the description of random vectors and processes. Particular emphasis is given to Gaussian random vectors and processes, which often arise as the result of the central-limit theorem.
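The role of the central-limit theorem mentioned above can be illustrated with a quick simulation (our own sketch; all parameters are invented): a pixel value modeled as the sum of many small independent, decidedly non-Gaussian contributions is itself nearly Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

# One pixel value = sum of many i.i.d. exponential contributions
# (mean 1, variance 1 each); repeat the experiment many times.
n_contrib, n_repeats = 200, 20000
contrib = rng.exponential(scale=1.0, size=(n_repeats, n_contrib))
pixel = contrib.sum(axis=1)     # repeated noisy realizations of one pixel

# The sum has mean n*mu = 200 and variance n*sigma^2 = 200, and its
# skewness (exactly 2/sqrt(n) here) is already small, i.e. nearly Gaussian.
mu, var = pixel.mean(), pixel.var()
skew = ((pixel - mu) ** 3).mean() / var**1.5
print(mu, var, skew)
```
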

The next two chapters go more deeply into specific mechanisms of image formation. Chapter 9 develops the theory of wave propagation from first principles and treats diffraction and imaging with waves within this framework. Though the objective of the discussion is to develop deterministic models of wave-optical imaging systems, we cannot avoid discussing random processes when we consider the coherence properties of wave fields, so an understanding of the basics of random processes, as presented in Chap. 8, is needed for a full understanding of Chap. 9. The reader with previous exposure to such topics as autocorrelation functions and complex Gaussian random fields can, however, skip Chap. 8 and move directly to Chap. 9.

Chapter 10 is ostensibly devoted to radiometry and radiative transport, but actually it covers a wide variety of topics ranging from quantum electrodynamics to tomographic imaging. A key mathematical tool developed in that chapter is the Boltzmann equation, a general integro-differential equation that is capable of describing virtually all imaging systems in which interference and diffraction play no significant role. The Boltzmann equation describes a distribution function that can loosely be interpreted as a density of photons in phase space, so it is necessary to discuss in that chapter just what we mean by the ubiquitous word photon.
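For orientation, a common time-independent form of such a transport equation, written here in our own notation for the radiance L(r, ŝ) rather than in the book's notation, is

```latex
\hat{\mathbf{s}} \cdot \nabla L(\mathbf{r}, \hat{\mathbf{s}})
  = -\mu_t(\mathbf{r})\, L(\mathbf{r}, \hat{\mathbf{s}})
  + \mu_s(\mathbf{r}) \int_{4\pi} p(\hat{\mathbf{s}}, \hat{\mathbf{s}}')\,
      L(\mathbf{r}, \hat{\mathbf{s}}')\, d\Omega'
  + S(\mathbf{r}, \hat{\mathbf{s}}) ,
```

where μ_t = μ_a + μ_s is the total attenuation coefficient, p is the normalized scattering phase function and S is a source term. The left-hand side describes streaming along direction ŝ; the first term on the right is loss by absorption and out-scattering, and the integral is gain by in-scattering from all other directions.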

In Chap. 10 we discuss only the mean photon density or the mean rate of photoelectric interactions, but in Chap. 11 we begin to discuss fluctuations about these means. In particular, we present there an extensive discussion of the Poisson probability law and its application to simple counting detectors and imaging arrays. Included is a discussion of photon counting from a quantum-mechanical perspective. Many of the basic principles of random vectors and processes enunciated in Chap. 8 are used in Chap. 11.

Chapter 12 goes into more detail on noise mechanisms in various detectors of electromagnetic radiation. The implications of Poisson statistics are discussed in practical terms, and a number of noise mechanisms that are not well described by the Poisson distribution are introduced. A long section is devoted to x-ray and gamma-ray detectors, not only because of their practical importance in medical imaging, but also because they illustrate some important aspects of the theory developed in Chap. 11.

Inferences from Images

With the background developed in Chaps. 1–12, we can discuss ways of drawing inferences from image data. The central mathematical tool we need for this purpose is statistical decision theory, introduced in Chap. 13. This theory allows a systematic approach to estimation of numerical parameters from image data as well as classifying the object that produced a given image, and it will form the cornerstone of our treatment of image quality. In accordance with this theory, we shall define image quality in terms of how well an observer can extract some desired information from an image.
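A minimal numerical sketch of this viewpoint, for the textbook case of a known signal in i.i.d. Gaussian noise (our own illustration; the signal profile and sizes are invented): here the likelihood-ratio observer reduces to a matched filter, and task performance can be summarized by a detectability index d'.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-class task: signal-absent vs. signal-present 1-D "images" in i.i.d.
# Gaussian noise; the ideal observer's test statistic is t = s^T g.
npix, sigma, n = 64, 1.0, 5000
signal = np.zeros(npix)
signal[28:36] = 0.8                       # invented signal profile

g0 = rng.normal(0.0, sigma, (n, npix))            # signal absent
g1 = signal + rng.normal(0.0, sigma, (n, npix))   # signal present
t0, t1 = g0 @ signal, g1 @ signal                 # observer outputs

# Empirical detectability index vs. the theoretical value ||s|| / sigma
d_emp = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t1.var() + t0.var()))
d_theory = np.linalg.norm(signal) / sigma
print(d_emp, d_theory)
```

Defining image quality through a figure of merit such as d' (or the area under an ROC curve) is the task-based approach the book develops in detail.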

Chapter 14 is a nuts-and-bolts guide to objective assessment of image quality for both hardware and software. Particular attention is paid to evaluating the performance of human observers, for whom most images are intended.

Chapter 15 provides a general treatment of inverse problems or image reconstruction, defined as inferring properties of an object from data that do not initially appear to be a faithful image of that object. Considerable attention is given to what information one can hope to extract and what aspects of an object are intrinsically inaccessible from data obtained with a specific imaging system. A wide variety of image-reconstruction algorithms will be introduced, and special attention will be given to the statistical properties of the resulting images.
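One of the simplest iterative reconstruction schemes, the Landweber iteration f ← f + α Hᵀ(g − Hf), can serve as a concrete example (our own sketch with an invented random system matrix, not an algorithm endorsed for any particular modality):

```python
import numpy as np

rng = np.random.default_rng(3)

# Underdetermined indirect-imaging model: 40 measurements of an 80-sample
# object (sizes and matrix are invented for illustration).
M, N = 40, 80
H = rng.normal(size=(M, N)) / np.sqrt(M)
f_true = np.zeros(N)
f_true[10:20] = 1.0
g = H @ f_true                            # noise-free data

alpha = 1.0 / np.linalg.norm(H, 2) ** 2   # step size ensuring convergence
f = np.zeros(N)
for _ in range(2000):
    f += alpha * H.T @ (g - H @ f)        # Landweber update

# Started from zero, the iterate tends to the minimum-norm least-squares
# solution: the data residual shrinks, but null-space components of f_true
# remain unrecoverable, so f need not equal f_true.
print(np.linalg.norm(H @ f - g))
```
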

Applications

Chapters 16–19 are intended as detailed case studies of specific imaging systems, with the goal of providing examples of how various mathematical tools developed earlier in the book can be applied. Two of these chapters (16 and 18) cover direct imaging systems in which the image is formed without the need for a processing or reconstruction algorithm, and two of them (17 and 19) cover indirect systems in which the initial data set is not a recognizable image. By a different dichotomy, two of the chapters (16 and 17) relate to imaging with x rays and gamma rays, and two of them (18 and 19) relate to imaging with light. The key physical and pedagogical difference is that x rays and gamma rays have such short wavelengths that interference and diffraction can be neglected, and the Boltzmann transport equation of Chap. 10 is applicable. With light, the diffraction theory developed in Chap. 9 takes a central role.

Appendices

Three appendices are provided: one on matrix algebra, a second on complex variables and a third on the fundamentals of probability theory. The material contained there is expected to have been seen by most readers during their undergraduate training. In writing the appendices, we tried to provide a self-contained treatment of the prerequisite material necessary for the understanding of the material in the main text.

SUGGESTIONS FOR COURSE OUTLINES

Drafts of this book have been used as text material for three different courses that have been taught at the Optical Sciences Center of the University of Arizona. Each course has been taught several times, and there has been some experimentation with the course outlines as the book evolved.

The first six chapters of the book were developed for a one-semester course called Mathematical Methods for Optics. This course was originally intended for first-year graduate students but has proved to be more popular with advanced students. It is basically an introductory course in applied mathematics with emphasis on topics that are useful in image science. Expected preparation includes calculus and differential equations and an elementary understanding of matrix algebra and complex analysis. Appendices A and B were originally used as introductory units in the course but are now considered to define the prerequisites for the course. The current syllabus covers Chaps. 1–6 of the book. Earlier, however, a more optics-oriented course was offered based on Chaps. 1–3, 9 and 10. For this course it was necessary to assume that the students had some elementary understanding of random processes.

For advanced graduate students, especially ones who will pursue dissertation research in image science, there is a two-course sequence: Principles of Image Science, taught in the Fall semester, and Noise in Imaging Systems, taught in the Spring. The Principles course begins with Chap. 1; this is a review for those who have previously taken the Mathematical Methods course, but there have been no complaints about redundancy. Chapters 7, 9, 10 and 15 are then covered sequentially. Occasionally it is necessary to review material from Chaps. 2–5, but basically these chapters are assumed as prerequisites. Appendices A and B are available for reference.

Noise in Imaging Systems covers Chaps. 8 and 11–14. Appendix C defines the prerequisite knowledge of probability and statistics, but a general acquaintance with Chaps. 1–3 is also presumed. Neither Mathematical Methods nor Principles of Image Science is a formal prerequisite for the Noise course.

Alternatively, a one-year advanced sequence could be taught by covering Chap. 1 and then 7–15 in sequence. Prerequisite material in this case would be defined by Chaps. 2 and 3 and the three appendices. Necessary topics in Chaps. 4–6 could be sketched briefly in class and then assigned for reading.

The applications chapters, 16–19, have not been used in teaching, although they have been reviewed by graduate students working in image science. They could form the basis for an advanced seminar course.

Acknowledgments

The seeds of this project can be found in the interactions of the authors with Robert F. Wagner, who more than anyone else founded the field of objective assessment of image quality, especially in regard to radiological imaging. Without his insights and guidance, neither that field nor this book would have been born.

Many people have read parts of this book and provided invaluable feedback, but two in particular must be singled out. Matthew Myers may be the only person other than the authors who has read every word of the book (to date, we hope!), and Eric Clarkson has read large portions. Both have provided continuing guidance on mathematics, physics and pedagogy; our debt to them is enormous.

A bevy of students at the University of Arizona also struggled through many parts of the book, sometimes in early draft form, and their diligence and insightful feedback have been invaluable. It is almost unfair to make a list of those who have helped in this respect, since we will surely leave off many who should not be overlooked, but we thank in particular Rob Parada, Jim George, Elena Goldstein, Angel Pineda, Andre Lehovich, Jack Hoppin, Kit-Iu Cheong, Dana Clarke, Liying Chen and Bill Hunter. Former students, too, have been very helpful, especially Brandon Gallas, Craig Abbey, John Aarsvold and Jannick Rolland. Colleagues who have provided invaluable review and guidance include Rolf Clackdoyle, Jeffrey Fessler, Charles Metz, Keith Wear, Robert Gagne, Xiaochuan Pan, Mike Insana, Roger Zemp, Steve Moore, Adriaan Walther, Jim Holden, Todd Peterson, Elizabeth Krupinski, Jack Denny, Donald Wilson and Matthew Kupinski.

Staff at the Radiology Department of the University of Arizona, especially Debbie Spargur, Lisa Gelia and Jane Lockwood, have been a continuing source of cheerful and highly competent assistance in myriad details associated with this project. We also thank Brian W. Miller and Meredith Whitaker for their assistance with figures and Bo Huang for his assistance in converting parts of the book from another word processor to LaTeX. The authors have benefited significantly from the help of staff at the Center for Devices and Radiological Health of the FDA as well, especially Phil Quinn and Jonathan Boswell.

Special thanks are owed to Stefanie Obara for her diligence and care in polishing up our LaTeX and producing the final camera-ready text. She describes herself as “anal and proud of it,” and we are proud of her production. She participated very capably in formatting, indexing and preparing the bibliography as well as meticulous editing.

Finally, we thank our loving families, who supported and encouraged us during the many years it took to bring this project to fruition.

HARRISON H. BARRETT
KYLE J. MYERS

Tucson, Arizona July 1, 2003

Prologue

We shall attempt here to provide the reader with an overview of topics covered in this book as well as some of the interrelationships among them. We begin by surveying and categorizing the myriad imaging systems that might be discussed and then suggest a unifying mathematical perspective based on linear algebra and stochastic theory. Next we introduce a key theme of this book, objective or task-based assessment of image quality. Since this approach is essentially statistical, we are led to ruminate on Bayesian and frequentist interpretations of probability. In discussing image quality, probability and statistics, our personal views, developed as we have worked together on imaging issues for two decades, will be much in evidence. The viewpoints presented here are, we hope, more firmly given mathematical form and physical substance in the chapters to follow.

KINDS OF IMAGING SYSTEMS

There are many kinds of objects to be imaged and many mechanisms of image formation. Consequently, there are many ways in which imaging systems can be classified. One such taxonomy, represented by Table I, classifies systems by the kind of radiation or field used to form an image. The most familiar kind of radiation is electromagnetic, including visible light, infrared and ultraviolet radiation. Also under this category we find long-wavelength radiation such as microwaves and radio waves and short-wavelength radiation in the extreme ultraviolet and soft x-ray portions of the spectrum. Of course, the electromagnetic spectrum extends further in both directions, but very long wavelengths, below radio frequencies, do not find much use in imaging, while electromagnetic waves of very short wavelength, such as hard x rays and gamma rays, behave for imaging purposes as particles. Other particles used for imaging include neutrons, protons and heavy ions.

Table I. CLASSIFICATION BY KIND OF RADIATION OR FIELD

Other kinds of waves are also used in various imaging systems. Mechanical waves are used in seismology, medical ultrasound and even focusing of ocean waves. The de Broglie principle tells us that matter has both wave-like and particle-like characteristics. The wave character of things we usually call matter is exploited for imaging in scanning tunneling microscopes and in recent work on diffraction of atoms.

Not only radiation, in the usual sense, but also static or quasistatic fields may be the medium of imaging. Magnetic fields are of interest in geophysics and in biomagnetic imaging, while electric fields are imaged in some new medical imaging modalities.

A second useful taxonomy of imaging systems groups them according to the property of the object that is displayed in the image (see Table II). In other words, what does the final image represent?

Table II. CLASSIFICATION BY PROPERTY BEING IMAGED

In an ordinary photographic camera, the snapshot usually represents the light reflected from a scene. More precisely, the image reaching the film is related to the product of the optical reflectance of the object and its illumination. Other imaging techniques that essentially map object reflectance include radar imaging and medical ultrasound.

In some instances, however, a photograph measures not reflectance but the source strength of a self-luminous source; a snapshot of a campfire, an astronomical image and a fluorescence micrograph are all examples of emission images. The source strength, in turn, is often related to some concentration or density of physical interest. For example, in nuclear medicine one is interested ultimately in the concentration of some pharmaceutical; if the pharmaceutical is radioactive, its concentration is directly related to the strength of a gamma-ray-emitting source.

Other optical properties can also be exploited for imaging. The index of refraction is used in phase-contrast microscopy, while attenuation or transmissivity of radiation is used in film densitometry and ordinary x-ray imaging. The complex amplitude of a wave is measured in many kinds of interferometry and some forms of seismology, and scattering properties are used in medical ultrasound and weather radar. Electrical and magnetic properties such as impedance and magnetization are of increasing interest, especially in medicine.

We might also classify systems by the imaging mechanism. In other words, how is the image or data set formed? Included in this list (see Table III) are simple refraction and reflection, along with the important optical effects of interference and diffraction. Some imaging systems, however, make use of less obvious physical mechanisms, including scattering and shadow casting. Perhaps the least obvious mechanism is what we shall designate as modulation imaging. In this technique, the imaging system actively modulates the properties of the object being imaged in a space-dependent manner. Examples include the important medical modality of magnetic resonance imaging (MRI) and the lesser-known method of photothermal imaging, which originated with Alexander Graham Bell.

Table III. CLASSIFICATION BY IMAGING MECHANISM

The next dichotomy to consider is direct vs. indirect imaging. By direct imaging we mean any method where the initial data set is a recognizable image. In indirect imaging, on the other hand, a data-processing or reconstruction step is required to obtain the image. Examples of direct and indirect imaging systems are provided in Table IV.

Table IV. DIRECT VS. INDIRECT IMAGING

Direct imaging techniques may be divided into serial-acquisition systems, or scanners, in which one small region of the object is interrogated at a time, and parallel-acquisition systems, in which detector arrays or continuous detectors capture many picture elements, or pixels, of the object simultaneously. Hybrid serial/parallel systems are also possible.

Perhaps the most common type of indirect imaging is tomography in all its varied forms, including the now-familiar x-ray computed tomography (CT), emission tomography such as single-photon emission computed tomography (SPECT) and positron emission tomography (PET), as well as MRI and certain forms of ultrasonic and optical imaging. In all of these methods, the data consist of a set of line integrals or plane integrals of the object, and a reconstruction step is necessary to obtain the final image.
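As a toy numerical illustration (not drawn from any particular scanner), the line-integral character of tomographic data can be mimicked by summing a small discrete object along rows and columns; a real CT system records many such projections at many angles and then reconstructs, e.g., by filtered backprojection:

```python
import numpy as np

# A discrete stand-in for tomographic data: a small 2D "object" and two of
# its projections (sets of line integrals along rows and along columns).
obj = np.array([[0., 1., 0.],
                [2., 3., 2.],
                [0., 1., 0.]])

proj_0 = obj.sum(axis=1)    # line integrals along horizontal rays
proj_90 = obj.sum(axis=0)   # line integrals along vertical rays

print(proj_0)               # [1. 7. 1.]
print(proj_90)              # [2. 5. 2.]
```

Neither projection alone determines the object; recovering it from many such views is precisely the reconstruction step referred to above.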

The indirect method of coded-aperture imaging is a shadow-casting method used in x-ray astronomy and nuclear medicine. The shadows represent integrals of the object but here the path of integration depends on the shape of the aperture. As above, tomographic information can be retrieved from the data set following a reconstruction step.

Another principle that leads to specific indirect imaging systems is embodied in the van Cittert–Zernike theorem, relating the intensity distribution of an incoherent source to the coherence properties of the field it produces. Systems that exploit this theorem include the Michelson stellar interferometer and the Hanbury Brown–Twiss interferometer.

The final dichotomy we shall consider is passive vs. active imaging (see Table V). In passive imaging, the system measures radiation that it does not itself supply. Familiar examples include ordinary photography of self-luminous sources or of a reflecting object under natural illumination, as well as astronomical imaging and medical thermography. By contrast, an active imaging system supplies the radiation being imaged. Systems in this category include flash photography, transmission imaging (x rays, microscopy, etc.), radar, active sonar and medical ultrasound.

Table V. PASSIVE VS. ACTIVE IMAGING

OBJECTS AND IMAGES AS VECTORS

As we have just seen, many different physical entities can serve as objects to be imaged. In most cases, these objects are functions of one or more continuous variables. In astronomy, for example, position in the sky can be specified by two angles, so the astronomical object being imaged is a scalar-valued function of two variables, or a two-dimensional (2D) function for short. In nuclear medicine, on the other hand, the object of interest is the three-dimensional (3D) distribution of some radiopharmaceutical, so mathematically it is described as a 3D function. Moreover, if the distribution varies with time—not an uncommon situation in nuclear medicine—then a 4D function is required (three spatial dimensions plus time).

Even higher dimensionalities may be needed in some situations. For example, multispectral imagers may be used on objects where wavelength is an important variable. An accurate object description might then require five dimensions (three spatial dimensions plus time and wavelength).

Sometimes the function is vector-valued. In magnetic resonance imaging, for example, the object is characterized by the proton density and two relaxation times, so a complete object description consists of a 3D vector function of space and time.

Images, too, are often functions. A good example occurs in an ordinary camera, where the image is the irradiance pattern on a piece of film. Even if this pattern is time varying, usually all we are interested in is its time integral over the exposure time, so the most natural description of the image is as a continuous 2D spatial function. Similar mathematics applies to the developed film. The image might then be taken as the optical density or transmittance of the film, but again it is a 2D function. A color image is a vector-valued function; the image is represented by the density of the three color emulsions on the film.

Sometimes images are not functions but discrete arrays of numbers. In the camera example, suppose the detector is not film but an electronic detector such as a charge-coupled device (CCD). A CCD is a set of, say, M discrete detector elements, each of which performs a spatial and temporal integration of the image irradiance. The spatial integral extends over the area of one detector element, while the temporal integration extends over one frame period, typically 1/30 sec. As a result of these two integrations, the image output from this detector is simply M numbers per frame. In this example, the object is continuous, but the image is discrete. In fact, any digital data set consists of a finite set of numbers, so a discrete representation is virtually demanded.

Another example that requires a discrete representation of the image is indirect imaging such as computed tomography (CT). This method involves reconstruction of an image of one or more slices of an object from a set of x-ray projection data. Even if the original projection data are recorded by an analog device such as film, a digital computer is usually used to reconstruct the final image. Again, the use of the computer necessitates a discrete representation of the image.

In this book and throughout the imaging literature, mathematical models or representations are used for objects and images, and we need to pay particular attention to the ramifications of our choice of model. Real objects are functions, but the models we use for them are often discrete. Familiar examples are the digital simulation of an imaging system and the digital reconstruction of a tomographic image; in both cases it is necessary to represent the actual continuous object as a discrete set of numbers. A common way to construct a discrete representation of a continuous object is to divide the space into N small, contiguous regions called pixels (picture elements) or voxels (volume elements). The integral of the continuous function over a single pixel or voxel is then one of N discrete numbers representing the continuous object. As discussed in detail in Chap. 7, many other discrete object representations are also possible, but the pixel representation is widespread.
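A minimal sketch of such a pixel representation, using a hypothetical Gaussian-blob object and midpoint quadrature for the per-pixel integrals (both choices are purely illustrative):

```python
import numpy as np

# A continuous 2D "object": a smooth Gaussian blob on the unit square.
# (Hypothetical example function, not from the text.)
def f(x, y):
    return np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.02)

# Discretize into an N = n*n pixel representation: each coefficient is the
# integral of f over one pixel, approximated here by midpoint quadrature.
n = 8
edges = np.linspace(0.0, 1.0, n + 1)
mid = 0.5 * (edges[:-1] + edges[1:])
area = (1.0 / n)**2
X, Y = np.meshgrid(mid, mid, indexing="ij")
theta = f(X, Y) * area        # N numbers standing in for the continuous object

print(theta.shape)            # (8, 8): the discrete representation
print(theta.sum())            # approximates the integral of f over the square
```

The N numbers in `theta` are an approximation to the continuous object, not the object itself, which is exactly the caution raised in the text.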

There are rare circumstances where the object to be imaged is more naturally described by a discrete set of numbers rather than by a continuous function. For example, in some kinds of optical computing systems, data are input by modulating a set of point emitters such as light-emitting diodes. If we regard the optical computer itself as a generalized imaging system, then this array of luminous points is the object being imaged. If there are N emitters in the array, we can consider the object to be defined by a set of N numbers. Even in this case, however, we are free to adopt a continuous viewpoint, treating the point emitters as Dirac delta functions. After all, the object is not a set of discrete numbers but the radiance distribution at the diode array face. In our view, then, any finite, discrete object representation is, at best, an approximation to the real world.

To summarize to this point, both objects and images can be represented as either continuous functions in some number of dimensions or as sets of discrete numbers. Discrete representations for objects are not an accurate reflection of the real world and should be used with caution, while discrete representations of images may be almost mandatory if a computer is an integral part of the imaging system.

These diverse mathematical descriptions of objects and images can be unified by regarding all of them as vectors in some vector space. A discrete object model consisting of N pixels can be treated as a vector in an N-dimensional Euclidean space, while a continuous object is a vector in an infinite-dimensional Hilbert space. We shall refer to the space in which the object vector resides as object space, denoted , and we shall consistently use the designation f to denote the object vector. Similarly, the space in which the data vector is defined will be called data space and denoted . This space will also be referred to as image space when direct imaging is being discussed.

IMAGING AS A MAPPING OPERATION

The mapping operator can be either linear or nonlinear. For many reasons, linear systems are easier to analyze than nonlinear ones, and it is indeed fortunate that we can often get away with the assumption of linearity. One common exception to this statement is that many detectors are nonlinear, or at best only approximately linear over a restricted range of inputs.

Chapter 1 provides the mathematical foundation necessary to describe objects and images as vectors and imaging systems as linear operators. A particular kind of operator will emerge as crucial to the subsequent discussions: Hermitian operators, better known in quantum mechanics than in imaging. Study of the eigenvectors and eigenvalues of Hermitian operators will lead to the powerful mathematical technique known as singular-value decomposition (SVD). SVD provides a set of basis vectors such that the mapping effect of an arbitrary (not necessarily Hermitian) linear operator reduces to simple multiplication.
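The multiplicative action described here can be checked numerically; in this sketch a small random matrix stands in for an arbitrary (non-Hermitian, non-square) linear imaging operator:

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary linear imaging operator as a matrix H mapping a
# 5-pixel object to 3 measurements.
H = rng.normal(size=(3, 5))

# Singular-value decomposition: H = U @ diag(s) @ Vh.
U, s, Vh = np.linalg.svd(H, full_matrices=False)

f = rng.normal(size=5)        # an object vector

# In the SVD bases, the action of H is just multiplication by the
# singular values: expand f in the rows of Vh, scale, resynthesize in U.
alpha = Vh @ f                # coefficients of f in the object-space basis
g = U @ (s * alpha)           # image = scaled coefficients in data-space basis

print(np.allclose(g, H @ f))  # True: same result as applying H directly
```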

If the object and the image are both continuous functions, is referred to as a continuous-to-continuous, or CC, operator. If, in addition, is linear, the relation between object and image is an integral. Similarly, if both object and image are discrete vectors, is referred to as a discrete-to-discrete, or DD, operator. If this DD operator is linear, the relation between object and image is a matrix-vector multiplication.

While both linear models, CC and DD, are familiar and mathematically tractable, neither is really a good description of real imaging systems. As noted above, real objects are continuous functions while digital images are discrete vectors. The correct description of a digital imaging system is thus a continuous-to-discrete, or CD, operator. While such operators may be unfamiliar, they can nevertheless be analyzed by methods similar to those used for CC and DD operators provided the assumption of linearity is valid.
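A sketch of a linear CD operator under a simplifying assumption: each of M detector elements integrates a continuous 1D object against a sensitivity function that is 1 on its own interval and 0 elsewhere (real detector responses are of course more complicated):

```python
import numpy as np

# Continuous 1D object (hypothetical): f(x) on [0, 1].
f = lambda x: np.sin(np.pi * x) ** 2

# A CD operator: M detector elements, the m-th with sensitivity function
# h_m(x) equal to 1 on its own interval and 0 elsewhere (an idealized
# assumption standing in for a real detector response).
M = 4
x = np.linspace(0.0, 1.0, 4001)          # fine grid for the quadrature
dx = x[1] - x[0]
g = np.empty(M)
for m in range(M):
    h_m = ((x >= m / M) & (x < (m + 1) / M)).astype(float)
    g[m] = np.sum(h_m * f(x)) * dx       # g_m = integral of h_m * f

print(g)          # four discrete measurements of a continuous object
print(g.sum())    # approximates the full integral of f, which is 1/2
```

The object here is a function, but the data are just M numbers: continuous in, discrete out.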

Choice of basis When we describe objects and images as functions, which are vectors in a Hilbert space, we have many options for the basis vectors in this space. One very important basis set consists of plane waves or complex exponentials, and the resulting theory is known broadly as Fourier analysis. When a discrete sum of complex exponentials is used to represent a function, we call the representation a Fourier series, while a continuous (integral) superposition is called a Fourier transform.

A Fourier basis is a natural way to describe many imaging systems. If a spatial shift of the object produces only a similar shift of the image and no other changes, the system is said to be shift invariant or to have translational symmetry. The mapping properties of these systems are described by an integral operator known as convolution, but when object and image are described in the Fourier basis, this mapping reduces to a simple multiplication. In fact, Fourier analysis is equivalent to SVD for linear, shift-invariant systems.
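The reduction of convolution to multiplication can be verified numerically for a discretized, periodic (circulant) system; the blur kernel below is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)

# A shift-invariant (circulant) blur: circular convolution with a kernel h.
N = 64
f = rng.normal(size=N)            # object samples
h = np.exp(-np.arange(N) / 5.0)   # hypothetical blur kernel

# Direct circular convolution...
g_direct = np.array([np.sum(h[(n - np.arange(N)) % N] * f) for n in range(N)])

# ...reduces to pointwise multiplication in the Fourier basis.
g_fourier = np.fft.ifft(np.fft.fft(h) * np.fft.fft(f)).real

print(np.allclose(g_direct, g_fourier))   # True
```

The discrete Fourier vectors here play exactly the role of the SVD basis for this shift-invariant operator, with the transfer-function values as the singular values (up to phase).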

When we use pixels or some other approximate representation of an object, we shall refrain from calling the expansion functions a basis since they do not form a basis for object space . Of course, any set of functions is trivially a basis for some space (the space of all linear combinations of functions in the set), but it is too easy to lose sight of the distinction between true basis functions for objects and the approximate models we construct.

DETECTORS AND MEASUREMENT NOISE

Every imaging system must include a detector, either an electronic or a biological one. Most detectors exhibit some degree of nonlinearity in their response to incident radiation. Some detectors, such as photographic film, are intrinsically very nonlinear, while others, such as silicon photodiodes, are quite linear over several orders of magnitude if operated properly. All detectors, however, eventually saturate at high radiation levels or display other nonlinearities.

Nonlinearities may be either global or local. With respect to imaging detectors, a global nonlinearity is one in which the response at one point in the image depends nonlinearly on the incident radiation at another (perhaps distant) point. An example would be the phenomenon known as blooming in various kinds of TV camera tubes. In a blooming detector, a bright spot of light produces a saturated image, the diameter of which increases as the intensity of the spot increases.

A simpler kind of nonlinearity is one in which the output of the detector for one image point or detector element depends nonlinearly on the radiation level incident on that element but is independent of the level on other detector elements. This kind of nonlinearity is referred to as a local or point nonlinearity. To a reasonable approximation, film nonlinearities are local. A local nonlinearity may be either invertible or noninvertible, depending on whether the nonlinear input-output characteristic is monotonic. If the characteristic is monotonic and known, then it can be corrected in a computer with a simple algorithm.
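A sketch of such a correction, assuming a hypothetical saturating characteristic D(I) = 1 - exp(-I) that is known and strictly monotonic:

```python
import numpy as np

# A local, invertible detector nonlinearity (hypothetical): a saturating,
# strictly monotonic response d = D(I) applied pixel by pixel.
D = lambda I: 1.0 - np.exp(-I)            # monotonic on I >= 0

I_true = np.array([0.1, 0.5, 1.0, 2.0])   # incident levels at four pixels
d = D(I_true)                              # recorded (nonlinear) outputs

# Because D is monotonic and known, invert it by interpolating the
# tabulated characteristic the other way around.
I_tab = np.linspace(0.0, 5.0, 5001)
d_tab = D(I_tab)
I_rec = np.interp(d, d_tab, I_tab)         # corrected, linearized values

print(np.max(np.abs(I_rec - I_true)))      # small interpolation error
```

A nonmonotonic characteristic would map two different input levels to the same output, and no such table lookup could recover the input.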

In imaging, the two main noise sources are photon noise, which arises from the discrete nature of photoelectric interactions, and electronic noise in detectors or amplifiers. Photon noise usually obeys the Poisson probability law and electronic noise is almost always Gaussian.
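A quick simulation of these two noise sources (with arbitrary illustrative numbers) shows their variances adding: for a Poisson variable the variance equals the mean, while Gaussian read noise contributes its own variance:

```python
import numpy as np

rng = np.random.default_rng(2)

# Noiseless mean image (hypothetical): expected photon counts per pixel.
mean_counts = np.full(100_000, 50.0)

# Photon noise: Poisson-distributed counts about the mean.
photon = rng.poisson(mean_counts)

# Electronic noise: additive zero-mean Gaussian read noise.
read_sigma = 3.0
g = photon + rng.normal(0.0, read_sigma, size=mean_counts.shape)

# Poisson variance equals the mean, so the total variance should be
# close to 50 + 3**2 = 59.
print(g.mean(), g.var())
```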

IMAGE RECONSTRUCTION AND PROCESSING

The mapping from object to image is called the forward problem: given an object and knowledge of the imaging system, find the image. We are often interested also in the inverse problem: given an image (or some other data set), learn as much as we can about the object that produced it. Note that we do not say: given the image, find the object. Except in rare and usually highly artificial circumstances, it will not be possible to determine the object exactly.

An inverse problem is fundamental to indirect imaging systems, where an image-reconstruction step is needed in order to produce the final useful image. Even in direct imaging, a post-detection processing step may be used for image enhancement. For example, it may be desirable to smooth the image before display or to manipulate its contrast. We shall refer to all such manipulations, whether for purpose of image reconstruction or enhancement, as post-processing.

With post-processing, we have three vectors to deal with: f, g and . These vectors are defined in, respectively, object space, data space and image or reconstruction space.

OBJECTIVE ASSESSMENT OF IMAGE QUALITY

In scientific or medical applications, the goal of imaging is to obtain information about the object. Aesthetic considerations such as bright colors, high contrast or pleasing form play no obvious role in conveying this information, and subjective impressions of the quality of an image without consideration of its ability to convey information can be very misleading. We adopt the view, therefore, that any meaningful definition of image quality must answer these key questions:

1. What information is desired from the image?
2. How will that information be extracted?
3. What objects will be imaged?
4. What measure of performance will be used?

We refer to the desired information as the task and the means by which it is extracted as the observer. For a given task and observer and a given set of objects, it is possible to define quantitative figures of merit for image quality. We call this approach objective or task-based assessment of image quality.

Tasks Two kinds of information might be desired from an image: labels and numbers. There are many circumstances in which we want merely to label or identify the object. For example, a Landsat image is used to identify crops and a screening mammogram is used to identify a patient’s breast as normal or abnormal.

More and more commonly in modern imaging, however, the goal is to extract quantitative information from images. A cardiologist might want to know the volume of blood ejected from a patient’s heart on each beat, for example, or a military photointerpreter might want to know the number of planes on an airfield or the area of an encampment.

Different literatures apply different names to these two kinds of task. In medicine, labeling is diagnosis and extraction of numbers is quantitation. In statistics, the former is hypothesis testing and the latter is parameter estimation. In radar, detection of a target is equivalent to hypothesis testing (signal present vs. signal absent), but determination of the range to the target is parameter estimation.

We shall use the term estimation to refer to any task that results in one or more numerical values, while a task that assigns the object to one of two or more categories will be called classification. Various parts of the imaging literature discuss pattern recognition, character recognition, automated diagnosis, signal detection and image segmentation, all of which fall under classification, while other parts of the literature discuss metrology, image reconstruction and quantitation of gray levels in regions of interest, all of which are estimation tasks.

This dichotomy is not absolute, however, since often the output of an estimation procedure is used immediately in a classification, for example, when features are extracted from an image and passed to a pattern-recognition algorithm. Also, both aspects may be desired from a single image; radar, for example, is an acronym for Radio Detection and Ranging, implying that we want both to detect a signal (classification) and to determine its range (estimation). Often the very same image-analysis problem may be logically cast as either classification or estimation. In functional magnetic resonance imaging (fMRI), for example, the task can be formulated as detecting a change in signal as a result of some stimulus or estimating the strength of the change.

The conceptual division of tasks into classification and estimation will serve us well when we discuss image quality in Chaps. 13 and 14, but we shall also explore there the interesting cases of hybrid estimation/classification tasks and the implications of performing one kind of task before the other.

Observers We call the means by which the task gets done, or the strategy, the observer. In spite of the anthropomorphic connotation of this term, the observer need not be a human being. Often computer algorithms or mathematical formulas serve as the observer.

In many cases the observer for a classification task is a human, a physician viewing a radiograph or a photointerpreter studying an aerial photo, for example. In these cases the label results from the judgment and experience of the human, and the output is an oral or written classification.

It is, however, becoming increasingly common in radiology and other fields of imaging to perform classification tasks by a computer. If not fully computerized diagnosis, at least computer-aided diagnosis is poised to make an important contribution to clinical radiology, and computer screening of specimen slides is a great help to overworked pathologists. Satellite images generate such an enormous volume of data that purely human classification is not practical. In these cases, the computer that either supplements or supplants the human observer can be called a machine observer.

A special observer known as the ideal observer will receive considerable attention in this book. The ideal observer is defined as the observer that utilizes all statistical information available regarding the task to maximize task performance (see below for measures of performance). Thus the performance of the ideal observer is a standard against which all other observers can be compared.

Estimation tasks using images as inputs are most often performed by computer algorithms. These algorithms can run the gamut from very ad hoc procedures to ones based on stated optimality criteria that have to do with the bias and/or variance of the resulting estimates.

Objects The observer’s strategy will depend on the source of the signal, that is, the parameters of the objects that distinguish one class from another if the task is classification, or the parameter to be estimated if the task is estimation. Table II lists possible sources of signal for a variety of imaging mechanisms.

Given a particular object, there will be a fixed signal and fixed background, or nonsignal component, at the input to the imaging system. Real imaging systems collect data from multiple objects, though, and there will therefore be a distribution of the signal-carrying component and background component across the full complement of objects in each class. The observer’s strategy will depend on these distributions of signal and background in the object space.

The observer must also make use of all available knowledge regarding the image-formation process, including the deterministic mapping from object space to image space described above, and knowledge of any additional sources of variability from measurement noise, to generate as complete a description of the data as possible. The more accurate the observer's knowledge of the properties of the data, the better the observer can design a strategy for performing the task.

Measures of task performance For a classification task such as medical diagnosis, the performance is defined in terms of the average error rate, in some sense, but we must recognize that there are different kinds of errors and different costs attached to them. In a binary classification task such as signal detection, where there are just two alternatives (signal present and signal absent), there are two kinds of error. The observer can assert that the signal is present when it is not, which is a false alarm or false-positive decision; or the assertion can be that the signal is not present when in fact it is, which would be a missed detection or a false-negative decision. The trade-off between these two kinds of error is depicted in the receiver operating characteristic (ROC) curve, a useful concept developed in the radar literature but now widely used in medicine. Various quantitative measures of performance on a binary detection task can be derived from the ROC curve (see Chap. 13).
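A sketch of tracing out an ROC curve by sweeping a decision threshold, under the common (but here merely assumed) model of equal-variance Gaussian test statistics; for that model the area under the curve is Phi(d'/sqrt(2)):

```python
import numpy as np

rng = np.random.default_rng(3)

# Binary detection task: the observer outputs a scalar test statistic,
# assumed here to be Gaussian under each hypothesis.
n = 20_000
t_absent = rng.normal(0.0, 1.0, n)        # signal-absent cases
t_present = rng.normal(1.5, 1.0, n)       # signal-present cases

# Sweep a decision threshold to trace out the ROC curve:
# false-positive fraction vs true-positive fraction.
thresholds = np.linspace(-4.0, 6.0, 201)
fpf = np.array([(t_absent >= c).mean() for c in thresholds])
tpf = np.array([(t_present >= c).mean() for c in thresholds])

# Area under the ROC curve via the trapezoidal rule; for this Gaussian
# model the theoretical value is Phi(1.5 / sqrt(2)), about 0.855.
auc = np.sum((fpf[:-1] - fpf[1:]) * 0.5 * (tpf[:-1] + tpf[1:]))
print(auc)
```

Each threshold corresponds to one trade-off between false alarms and missed detections; the full curve, rather than any single operating point, characterizes the observer.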

Another approach to defining a figure of merit for classification tasks is to define costs associated with incorrect decisions and benefits (or negative costs) associated with correct decisions. Then the value of an imaging system can be defined in terms of the average cost associated with the classifications obtained by some observer on the images.

For an estimation task where the output is only a single number, we can define the performance in terms of an average error, but we must again recognize that there are two types of error: random and systematic. In the literature on statistical parameter estimation, the random error is defined by the variance of the estimate, while systematic error is called bias. Usually in that literature, bias is computed on the basis of a particular model of the data-generation process, but an engineer will recognize that systematic error can also arise because the model is wrong or the system is miscalibrated. Both kinds of bias will be discussed in this book.
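These two error types can be seen in a small simulation: a hypothetical Poisson mean-estimation task with an unbiased estimator and a deliberately miscalibrated (5% gain error) one:

```python
import numpy as np

rng = np.random.default_rng(4)

# Estimation task: estimate the mean count rate lam of a pixel from K
# Poisson samples. Estimator 1 is the sample mean; estimator 2 (a toy
# example) is deliberately miscalibrated by a 5% gain error.
lam, K, trials = 20.0, 25, 10_000
data = rng.poisson(lam, size=(trials, K))

est1 = data.mean(axis=1)          # unbiased
est2 = 1.05 * data.mean(axis=1)   # systematic (gain) error

# Random error = variance of the estimate; systematic error = bias.
for est in (est1, est2):
    bias = est.mean() - lam
    var = est.var()
    print(f"bias = {bias:+.3f}, variance = {var:.3f}")
```

The second estimator illustrates the engineer's point above: its bias comes not from the statistics of the data but from a wrong (miscalibrated) model of the measurement.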

Often more than one quantitative parameter is desired in an estimation task. An extreme example is image reconstruction where an attempt is made to assign one parameter to each pixel in the image. In this case some very subtle issues arise in attempting even to define bias, much less to minimize it. These issues get considerable attention in Chap. 15.

PROBABILITY AND STATISTICS

The figures of merit for image quality just mentioned are inherently statistical. They involve the random noise in the images and the random collection of objects that might have produced the images. In our view, any reasonable definition of image quality must take into account both of these kinds of randomness, so probability and statistics play a central role in this book.

The conventional distinction is that probability theory deals with the description of random data while statistics refers to drawing inferences from the data. From a more theoretical viewpoint, probability theory is essentially deductive logic, where a set of axioms is presented and conclusions are drawn, but these conclusions are, in fact, already inherent in the axioms. Statistics, with this dichotomy, is inductive logic, where the conclusions are drawn from the data, not just the axioms. Thus, as we shall see, we use probability theory to describe the randomness in objects and images. Statistical decision theory is the tool we use in the objective assessment of images formed by an imaging system for some predefined task.

Bayesians, frequentists and pragmatists We expect that our readers will have some previous acquaintance with basic definitions of probability and the associated operational calculus. There are, however, many subtleties and points of contention in the philosophy of probability and statistical inference. The divisions between different schools, classified broadly as frequentists and Bayesians, are profound and often bitter. We do not propose to enter seriously into this fray, but we cannot avoid adopting a point of view and an operational approach in this book. In this section we give a short summary of what this point of view is and why we have adopted it for various problems in image science.

Of the various definitions of probability given in App. C, perhaps the most intuitively appealing is the one that defines the probability of an event as its relative frequency of occurrence in a large number of trials under identical conditions. Because of the historical origins of probability theory, the concept of probability as relative frequency is usually illustrated in terms of games of chance; e.g., the probability of a coin showing heads is the relative number of times it does so in a very large number of trials. In an optical context, it is easy to conceive of an experiment where the number of photoelectric events in some detector, say a photomultiplier, is recorded for many successive one-second intervals. The limit of the histogram of these counts as the number of trials gets very large can be regarded as the probability law for the counts.

To a Bayesian, probability is a measure of degree of belief, an element of inductive logic and not a physical property or an observable quantity. To illustrate this concept and its relation to frequency, we let some well-known Bayesians speak for themselves:

We … regard probability as a mathematical expression of our degree of belief in a certain proposition. In this context the concept of verification of probabilities by repeated experimental trials is regarded as merely [emphasis added] a means of calibrating a subjective attitude. (Box and Tiao, 1992)

… long-run frequency remains a useful notion for some applications, as long as we remain prepared to recognize that the notion must be abandoned if it contradicts our degree of belief …. (Press, 1989)

The essence of the present theory is that no probability, direct, prior, or posterior, is simply a frequency. (Jeffreys, 1939)

The interpretation we shall intend in this book when we use the word probability will depend somewhat on context. We are pragmatists — many times a relative-frequency interpretation will serve our needs well, but we shall not hesitate to use Bayesian methods and a subjective interpretation of probability when it is useful to do so. In particular, we can often present certain conditional probabilities of the data that we can be “reasonably certain” (a measure of belief, to be sure) would be verified by repeated experiments. Other times, as we shall argue below, there is no conceivable way to experimentally verify a postulated probability law, and in those cases we must be content to regard the probabilities in a Bayesian manner.

Our way of resolving this ambivalence will be presented below after we have seen in more detail how some probabilities of interest in imaging can be interpreted as frequencies and others cannot.

Conditional probability of the image data A statistical entity that plays a major role in the description of the randomness in an image is the conditional probability (or probability density) of an image data set obtained from a particular object. Since we usually refer to a data set as g and an object as f, this conditional probability is denoted pr(g|f). Here each component of the image vector g refers to a particular measurement; for example, the component gm could be the measured gray level in the mth pixel on an image sensor. The object f, on the other hand, is best conceptualized as a vector (or function) in an infinite Hilbert space.

The conditional probability pr(g|f) is the basic description of the randomness in the data, and it is paramount to compute it if we want to give a full mathematical description of the imaging system. Moreover, all inferences we wish to draw about the object f, whether obtained by frequentist or Bayesian methods, require knowledge of pr(g|f).

Fortunately, pr(g|f) is usually quite simple mathematically. For example, with so-called photon-counting imaging systems, it follows from a few simple axioms (enumerated in Chap. 11) that each component of g is a Poisson random variable and that different components are statistically independent. As a second example, if there are many independent sources of noise in a problem, the central-limit theorem (derived in Chap. 8) allows us to assert that each component of g is a normal or Gaussian random variable, and often the statistical independence of the different components (conditional on a fixed f) can be justified as well. In these circumstances, we see no difficulty in regarding pr(g|f) as a relative frequency of occurrence. We could easily repeat the image acquisition many times with the same object and accumulate a histogram of data values gm for each sensor element m. If we have chosen the correct model for the imaging system and all experimental conditions (including the object) are held constant, each of these histograms should approach the corresponding marginal probability pr(gm|f) computed on the basis of that model. If we can also experimentally verify the independence of the different values of gm, we will have shown that the histogram, in the limit of a large number of data sets, is well described by the probability law pr(g|f), which in this case is just the product of the individual pr(gm|f). Of course, this limiting frequency histogram will depend on the nature of the object and properties of the imaging system, but the mathematical form should agree with our calculation.
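The repeated-acquisition experiment described here is easy to mimic in simulation; for a single photon-counting element with an assumed mean count lam, the histogram of repeated measurements approaches the Poisson law pr(g_m|f):

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(5)

# Repeat the "same acquisition" many times: one detector element viewing a
# fixed object, with mean count lam (conditional on f, a Poisson variable).
lam, repeats = 4.0, 200_000
g_m = rng.poisson(lam, size=repeats)

# Empirical relative frequencies vs the Poisson probability law pr(g_m|f).
for k in range(9):
    freq = np.mean(g_m == k)
    pmf = exp(-lam) * lam**k / factorial(k)
    print(f"k={k}: frequency {freq:.4f}  vs  pr = {pmf:.4f}")
```

This is exactly the frequency interpretation at work: with the object and system held fixed, the limiting histogram is well described by the computed probability law.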

The conditional probability pr(g|f