New Techniques in Digital Holography

A state-of-the-art presentation of important advances in the field of digital holography, detailing advances related to the fundamentals of digital holography, in-line holography applied to fluid mechanics, digital color holography, digital holographic microscopy, infrared holography, special techniques in full-field vibrometry and inverse problems in digital holography.


Contents

Introduction

1 Basic Fundamentals of Digital Holography

1.1. Digital holograms

1.2. Back-propagation to the object plane

1.3. Numerical reconstruction of digital holograms

1.4. Holographic setups

1.5. Digital holographic interferometry

1.6. Quantitative phase tomography

1.7. Conclusion

1.8. Bibliography

2 Digital In-line Holography Applied to Fluid Flows

2.1. Examples of measurements in flows

2.2. The fractional-order Fourier transform

2.3. Digital in-line holography with a sub-picosecond laser beam

2.4. Spatially partially coherent source applied to the digital in-line holography

2.5. Digital in-line holography for phase objects metrology

2.6. Bibliography

3 Digital Color Holography For Analyzing Unsteady Wake Flows

3.1. Advantage of using multiple wavelengths

3.2. Analysis of subsonic wake flows

3.3. Analysis of a supersonic jet with high-density gradients

3.4. Analysis of a hydrogen jet in a hypersonic flow

3.5. Conclusion

3.6. Acknowledgment

3.7. Bibliography

4 Automation of Digital Holographic Detection Procedures for Life Sciences Applications

4.1. Introduction

4.2. Experimental protocol

4.3. General tools

4.4. Automated 3D detection

4.5. Application

4.6. Conclusions

4.7. Bibliography

5 Quantitative Phase-Digital Holographic Microscopy: a New Modality for Live Cell Imaging

5.1. Introduction

5.2. Cell imaging with quantitative phase DHM

5.3. High-content phenotypic screening based on QP-DHM

5.4. Multimodal QP-DHM

5.5. Resolving neuronal network activity and visualizing spine dynamics

5.6. Perspectives

5.7. Acknowledgments

5.8. Bibliography

6 Long-Wave Infrared Digital Holography

6.1. Introduction

6.2. Analog hologram recording in LWIR

6.3. Digital hologram recording in LWIR

6.4. Typical applications of LWIR digital holography

6.5. Conclusions: future prospects

6.6. Bibliography

7 Full Field Holographic Vibrometry at Ultimate Limits

7.1. Introduction

7.2. Heterodyne holography

7.3. Holographic vibrometry

7.4. Conclusion

7.5. Bibliography

List of Authors

Index

First published 2015 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd

27-37 St George’s Road

London SW19 4EU

UK

www.iste.co.uk

John Wiley & Sons, Inc.

111 River Street

Hoboken, NJ 07030

USA

www.wiley.com

© ISTE Ltd 2015

The rights of Pascal Picart to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2014958258

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 978-1-84821-773-7

Introduction

Holography, the brilliant idea of Dennis Gabor [GAB 48], became “digital” in the early 1970s with the pioneering works of Goodman, Huang and Kronrod [GOO 67, HUA 71, KRO 72]. It took until 1994 for “digital holography” based on array detectors to come about [SCH 94], as a consequence of important developments in two sectors of technology: microtechnological processes made it possible to create image sensors with numerous miniaturized pixels, and the rapid computational treatment of images became accessible with the appearance of powerful processors and an increase in storage capacities. From 1994, holography found a new life in a considerable stimulation of research efforts. About 20 years later, digital holography appears to be a mature topic, covering a wide range of areas such as three-dimensional (3D) imaging and display systems, computer-generated holograms, integral imaging, compressive holography, digital phase microscopy, quantitative phase imaging, holographic lithography, metrology and profilometry, holographic remote sensing techniques and full-field tomography. In addition, besides the visible light classically used, light sources ranging from coherent to incoherent, and from X-ray to terahertz waves, can be considered. Thus, digital holography is a highly interdisciplinary subject with a wide domain of applications: biomedicine, biophotonics, nanomaterials, nanophotonics, and scientific and industrial metrologies.

Thus, as actors of this boom, it seemed natural to us to propose a book devoted to special techniques in digital holography. The coauthors aim to establish a synthetic state of the art of important advances in the field of digital holography. We are interested in detailing advances related to the fundamentals of digital holography, in-line holography applied to particle tracking and sizing, digital color holography applied to fluid mechanics, digital holographic microscopy as a new modality for live cell imaging and life science applications, long-wave infrared holography, and special techniques in full-field vibrometry with detection at the ultimate limits.

The book is organized into seven chapters. Chapter 1 introduces the basic fundamentals of digital holography: the recording of digital holograms, demodulation techniques to separate the diffraction orders, algorithms to reconstruct the complex object wave, and the basic principles of holographic interferometry and phase tomography. Chapter 2 discusses the use of in-line holography for the study of seeded flows; recent developments permit us to apply this technique in many industrial or laboratory situations for velocimetry, particle size measurement or trajectography. In Chapter 3, the coauthors present new approaches in three-color holography for analyzing unsteady flows. Special techniques to visualize and quantitatively analyze flows up to Mach 10 are presented. The in-line approach based on Wollaston prisms is discussed and compared to the holographic Michelson arrangement. Chapter 4 is devoted to the automation of digital holographic detection procedures for life sciences applications. Partially spatially coherent light sources are of interest for reducing the measurement noise; typical applications are detailed. The coauthors describe specific tools linked to numerical propagation that are indispensable to process the information correctly, avoid numerical artifacts and ease further processing. Automated 3D detection methods based on propagation matrices, with both a local and a global approach, are discussed and illustrated on concrete applications. Chapter 5 is devoted to applications of quantitative phase digital holographic microscopy in cell imaging. The most relevant applications in the field of cell biology are summarized. Recent promising applications obtained in the field of high-content screening are presented.
In addition, the important issue of the development of multimodal microscopy is addressed and illustrated through concrete examples, including combination with fluorescence microscopy, Raman spectroscopy and electrophysiology. Chapter 6 presents digital holography in the long-wave infrared domain. Technologies related to sensors and light sources are presented, and digital holographic infrared interferometry is detailed and applied to high-amplitude displacements of industrial aeronautic structures. Examples of non-destructive testing (NDT) are also provided. Chapter 7 presents new techniques in the field of vibration measurement; combined off-axis and heterodyne digital holography experiments are presented. In particular, techniques based on high speed and ultimate sensitivity are described. Examples related to life sciences are presented and detailed.

This book is intended for engineers, researchers and science students at PhD and Master’s degree level, and will supply them with the required basics for entering the fascinating domain of digital holography.

I.1. Bibliography

[GAB 48] GABOR D., “A new microscopic principle” Nature, vol. 161, pp. 777–778, 1948.

[GOO 67] GOODMAN J.W., LAWRENCE R.W., Applied Physics Letters, vol. 11, pp. 77–79, 1967.

[HUA 71] HUANG T.S., Proceedings of the IEEE, vol. 59, pp. 1335–1346, 1971.

[KRO 72] KRONROD M.A., MERZLYAKOV N.S., YAROSLAVSKII L.P., Soviet Physics Technical Physics, vol. 17, pp. 333–334, 1972.

[SCH 94] SCHNARS U., JUPTNER W., Applied Optics, vol. 33, pp. 179–181, 1994.

Introduction written by Pascal PICART.

1

Basic Fundamentals of Digital Holography

The idea of digitally reconstructing the optical wavefront first appeared in the 1960s. The oldest study on the subject dates back to 1967 with the article published by Goodman in Applied Physics Letters [GOO 67]. The aim was to replace the “analog” recording/decoding of the object by a “digital” recording/decoding simulating diffraction from a digital grating consisting of the recorded image. Thus, holography became “digital”, replacing the silver-halide plate with a matrix of the discrete values of the hologram. Then, in 1971, Huang discussed the computer analysis of optical wavefronts and introduced for the first time the concept of “digital holography” [HUA 71]. The works presented in 1972 by Kronrod [KRO 72] historically constituted the first attempts at reconstructing, by calculation, an object coded in a hologram. At that time, 6 h of calculation was required for the reconstruction of a field of 512 × 512 pixels with the Minsk-22 computer, the discrete values being obtained from a holographic plate by 64-bit digitization with a scanner. However, it took until the 1990s for array detector-based digital holography to materialize [SCH 94]. Indeed, there have been important developments in two sectors of technology: since this period, microtechnological processes have resulted in charge coupled device (CCD) arrays with sufficiently small pixels to fulfill the Shannon condition for the spatial sampling of a hologram, and the computational treatment of images has become accessible largely due to the significant improvement in microprocessor performance, in particular their processing units as well as storage capacities.

The physical principle of digital holography is similar to that of traditional holography. However, the size of the pixels in an image detector (CCD or complementary metal oxide semiconductor (CMOS)) is considerably larger than that of the grains of a traditional photographic plate (typically 2–3 μm, compared with some 25 nm). These constraints make it necessary to take into account certain parameters (pixel area, number of pixels and pixel pitch) that were of little concern in analog holography.

This chapter, as an introduction to advanced methods detailed in other chapters, aims at describing the different aspects related to digital holography: the principle of light diffraction, how to record a digital hologram and color holograms, algorithms to reconstruct digital holograms, an insight into the different holographic configurations, special techniques to demodulate the hologram, the basic principle of digital holographic interferometry and a brief discussion on tomographic phase imaging.

1.1. Digital holograms

A digital hologram is an interferometric mixing between a reference wave and a wave from the object of interest. This section presents the basic properties related to a digital hologram.

1.1.1. Interferences between the object and reference waves

Figure 1.1 illustrates the basic geometry for recording a digital hologram. An object wave is coherently mixed with a reference wave, and their interference pattern is recorded in the recording plane H. In digital holography, the recording is performed by using a pixel matrix sensor.

Figure 1.1.Free space diffraction, interferences and notations. For a color version of this figure, see www.iste.co.uk/picart/digiholography.zip

Consider an extended object illuminated with a monochromatic wave. This object diffracts a wave to the observation plane localized at a distance d0. The surface of the object generates a wavefront whose complex amplitude will be written as:

[1.1] A(x, y) = A0(x, y)exp[iψ0(x, y)]

The amplitude A0 describes the reflectivity/transmission of the object and phase ψ0 is related to its surface and shape or thickness and refractive index. Because of the natural roughness of the object, ψ0 is a random variable, uniformly distributed over [−π,+π]. The diffracted field UO at distance d0, and at spatial coordinates (X,Y) of the observation plane, is given by the propagation of the object wave to the recording plane. In the observation plane, this wave can be simply written as:

[1.2] UO(X, Y) = aO(X, Y)exp[iφO(X, Y)]

here aO is the modulus of the complex amplitude and φO is its optical phase. Since the object is naturally rough, the diffracted field at distance d0 is a speckle pattern [DAI 84, GOO 07].
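The interference described above is easy to simulate numerically. The following sketch (all parameter values are arbitrary choices for illustration, not values from the chapter) builds a toy object wave with a uniformly distributed random phase, as produced by a rough object, mixes it with a tilted plane reference wave and records the resulting intensity:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256  # hologram size in pixels (arbitrary)

# Object wave at the recording plane: rough object -> random phase uniformly
# distributed over [-pi, +pi), i.e. a fully developed speckle field.
a_O = np.ones((N, N))                        # unit modulus for simplicity
phi_O = rng.uniform(-np.pi, np.pi, (N, N))
U_O = a_O * np.exp(1j * phi_O)

# Tilted plane reference wave with carrier frequencies (u0, v0) in cycles/pixel.
u0, v0 = 0.15, 0.10
x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
U_r = np.exp(2j * np.pi * (u0 * x + v0 * y))

# Digital hologram: intensity of the coherent sum of the two waves.
H = np.abs(U_O + U_r) ** 2
```

Displaying H and zooming in would reveal the micro fringes and speckle grains shown in Figure 1.2.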

Let us consider Ur, the complex amplitude of the reference wavefront, at the recording plane. We have:

[1.3] Ur(x, y) = ar exp[iφr(x, y)]

where ar is the modulus and φr is the optical phase. The reference wavefront usually comes from a small pinhole: thus, it is a divergent spherical wave, impacting the plane with a non-zero incidence angle. Denoting by (xs, ys, zs) the coordinates of the source point in the hologram reference frame (zs < 0), the optical phase of the reference wave can be written in the paraxial approximation as [GOO 72, GOO 05]:

[1.4] φr(x, y) = −(π/λzs)[(x − xs)² + (y − ys)²]

This optical phase can also be written as:

[1.5] φr(x, y) = 2π(u0x + v0y) − (π/λzs)(x² + y²), with u0 = xs/λzs and v0 = ys/λzs

The hologram results from the interference between the object and reference waves; the recorded intensity is:

[1.6] H(x, y) = |UO + Ur|² = |UO|² + |Ur|² + Ur*UO + UrUO*

This equation can also be written as:

[1.7] H(x, y) = aO² + ar² + 2aOar cos(φO − φr)

Equations [1.6] and [1.7] constitute what is classically called the digital hologram. It includes three orders: the 0-order is composed of the terms |UO|² and |Ur|², the +1 order is the term Ur*UO and the −1 order is the term UrUO*, also called the twin image. Generally, the +1 order is of interest because it is related to the initial object, whereas the −1 order exhibits a symmetry that is due to the Hermitian property of the Fourier operator. Figure 1.2 shows a digital hologram and a zoom on one of its parts.

Figure 1.2.Fine structure of a digital hologram: a) digitally recorded hologram and b) zoom showing micro fringes and speckle grains

As can be seen, the microstructure of a digital hologram is composed of micro fringes, on the one hand, and light grains, on the other hand. These light grains are speckles that are due to the random nature of the light reflected from the object [GOO 85, DAI 84]. Note that in the case where the object is transparent and non-diffusing, the speckle nature of the hologram may disappear.
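The decomposition of the hologram into its three orders can also be checked numerically. The short sketch below (with arbitrary random fields) verifies that the 0, +1 and −1 orders sum exactly to the recorded intensity, and that the ±1 orders are complex conjugates of each other:

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (64, 64)
U_O = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)  # object wave
U_r = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)  # reference wave

H = np.abs(U_O + U_r) ** 2                  # recorded hologram intensity

zero_order = np.abs(U_O) ** 2 + np.abs(U_r) ** 2
plus_one = np.conj(U_r) * U_O               # +1 order: carries the object wave
minus_one = U_r * np.conj(U_O)              # -1 order: the twin image
```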

1.1.2. Role of the image sensor

1.1.2.1. Spatial sampling and Shannon conditions

[1.8]

1.1.2.2. Low-pass filtering

The digital hologram effectively recorded by the sensor is not simply described by equation [1.6]. Indeed, we must take into account the active surface of the pixels, which induces a local spatial integration. The hologram recorded at point (npx, mpy) can thus be written as [PIC 08]:

[1.9] Hr(npx, mpy) = (H ∗ Π)(npx, mpy), where ∗ denotes the 2D convolution

with the even pixel function:

[1.10] Π(x, y) = rect(x/Δx)rect(y/Δy), equal to 1 over the active pixel area Δx × Δy and 0 elsewhere

From equation [1.9], the basic effect can be understood: since the pixel provides a local integration of the micro fringes, the consequence is a blurring of these fringes. Qualitatively, this means that the spatial resolution deteriorates: the pixel applies a low-pass filter to the digital hologram.
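The strength of this low-pass effect is easy to quantify: integration over a rectangular active area multiplies the hologram spectrum by a cardinal sine. A minimal sketch (the pixel width below is an assumed, typical value, not one from the chapter):

```python
import numpy as np

def pixel_mtf(u, dx):
    """Attenuation of a fringe at spatial frequency u (cycles/m) caused by
    integration over a pixel of active width dx (m): the Fourier transform
    of the rectangular pixel function is a cardinal sine."""
    return np.sinc(u * dx)          # np.sinc(x) = sin(pi*x)/(pi*x)

dx = 4.65e-6                        # assumed active pixel width (metres)
for frac in (0.0, 0.25, 0.5, 1.0):  # fringe frequency as a fraction of 1/dx
    print(frac, pixel_mtf(frac / dx, dx))
```

A fringe of exactly one period per pixel (frac = 1.0) is averaged to zero, which is the blurring described above.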

1.1.2.3. Effect of the exposure time

During the recording of the hologram, the pixel receives light for a certain duration, called the exposure time T. The total energy received by the sensor is such that [KRE 96]:

[1.11] E(x, y) = ∫0T H(x, y, t) dt (integration over the exposure time T)

1.1.2.4. Recording digital color holograms

The first digital color holograms appeared in the 2000s with the advent of color detectors. Yamaguchi showed the applicability of digital color holography to the color reconstruction of objects [YAM 02]. Since then, numerous applications have been developed, particularly in the domain of contactless metrology: flow analysis in fluid mechanics [DEM 03, DES 08, DES 12], surface profilometry by two-color microscopy [KUM 09, KUH 07, MAN 08], three-color digital holographic microscopy (DHM) even with low coherence [DUB 12] and multidimensional metrology of deformed objects [KHM 08, TAN 10a, TAN 10b, TAN 11]. There are different approaches for recording digital color holograms, in particular for simultaneously recording the three colors. The simplest method consists of using a monochromatic detector and recording the colors sequentially. This method was proposed by Demoli in 2003 [DEM 03] and is only adapted to the case of objects which vary slowly in time. Figure 1.3 illustrates the different recording strategies. The first possibility consists of using a chromatic filter organized in a Bayer mosaic (Figure 1.3(a)). However, in such a detector, half of the pixels detect green, and only a quarter detect red or blue [YAM 02, DES 11]. The spatial color filter creates holes in the mesh, and therefore a loss of information, which translates into a loss of resolution. For example, Yamaguchi used a detector with 1,636 × 1,238 pixels of size 3.9 × 3.9 μm2 [YAM 02], and his results had a relatively low spatial resolution. The number of pixels for each color was 818 × 619, and the pixel pitch was 7.8 μm. The second possibility consists of using three detectors organized as a “tri-CCD”, the spectral selection being carried out by a prism with dichroic layers (Figure 1.3(b)). Such a detector guarantees a high spatial resolution and a spectral selectivity compatible with the constraints of digital color holography.
Of course, the relative adjustment of the three sensors must be realized with high precision. For example, Desse developed a type of holographic color interferometry for use in fluid mechanics, with three detectors of 1,344 × 1,024 pixels of size 6.45 μm × 6.45 μm [DES 11]. The third possibility consists of using a color detector based on a stack of photodiodes [TAN 10a, TAN 10b, DES 08, DES 11] (http://www.foveon.com, Figure 1.3(c)). The spectral selectivity is relative to the mean penetration depth of the photons in the silicon: blue photons at 425 nm penetrate to around 0.2 μm, green photons at 532 nm to around 2 μm and red photons at 630 nm to around 3 μm. Thus, the construction of junctions at depths at around 0.2, 0.8 and 3.0 μm gives the correct spectral selectivity for color imaging. However, the spectral selectivity is not perfect, as green photons may be detected in the blue and red bands, but the architecture guarantees a maximum spectral resolution since the number of effective pixels for each wavelength is that of the entire matrix. For example, [TAN 10b] uses a stack of photodiodes with 1,060 × 1,414 pixels of size 5 × 5 μm2. One last possibility consists of using a monochromatic detector combined with spatial chromatic multiplexing (Figure 1.3(d)). Each reference wave must have different separately adjusted spatial frequencies according to their wavelengths. The complexity of the experimental apparatus increases with the number of colors. For two-color digital holography, it is acceptable; for three colors, it becomes prohibitive. A demonstration of this approach is given in [PIC 09, MAN 08, KUH 07] and [TAN 11].

Figure 1.3.Recording digital color holograms. For a color version of this figure, see www.iste.co.uk/picart/digiholography.zip

1.1.3. Demodulation of digital holograms

Equations [1.6] and [1.7] describe the digital hologram. The +1 order is of interest because it includes the object wave through the term Ur*UO. Note that the −1 order, UrUO*, is the complex conjugate of the +1 order and also includes the full information on the object wave. The demodulation of the digital hologram consists of retrieving the +1 order from the recording of H. There are mainly two ways to perform the demodulation: using a slightly off-axis geometry at the recording, or using phase-shifting [CUC 99b]. These approaches are detailed in the next sections.

1.1.3.1. Off-axis holograms

Off-axis geometry introduces a spatial carrier frequency, and demodulation restores the full spatial frequency content of the wavefront. In equation [1.5], the phase of the reference wave includes the carrier spatial frequencies of the hologram, (u0,v0). When (u0,v0) ≠ (0,0), there is a slight tilt between the two waves and the holography is off-axis. Practically, the different diffraction terms encoded in the hologram (zero-order wave, real image and virtual image) propagate in different directions, enabling their separation for reconstruction. This configuration was the one employed for the first demonstration of fully numerical recording and reconstruction holography [SCH 94, COQ 95]. In practice, reconstruction methods based on the off-axis configuration usually rely on Fourier methods to filter one of the diffraction terms contained in the hologram (Ur*UO or UrUO*) [CUC 00]. This concept was first proposed by Takeda et al. [TAK 82] in the context of interferometric topography. The method was later extended to smooth topographic measurements for phase recovery [KRE 86] and generalized for use in DHM with amplitude and phase recovery [CUC 99a].

According to equations [1.3]–[1.6], the spatial frequency spectrum exhibits a trimodal distribution related to the three diffraction orders of the hologram (FT and FT−1 denote, respectively, the Fourier transform and the inverse Fourier transform):

[1.12] FT[H](u, v) = C0(u, v) + C1(u − u0, v − v0) + C1*(−u − u0, −v − v0)

where C0 is the Fourier transform of the zero-order and C1 is the Fourier transform of the +1 order. If the three orders are well separated in the Fourier plane, the +1 order can be extracted from the Fourier spectrum. Figure 1.4 illustrates the spectral distribution of the digital hologram in the Fourier domain. The spatial frequencies (u0,v0) localize the useful information, and they must be adjusted to minimize the overlapping of the three diffraction orders. By applying a band-limited filter (of width Δu × Δv) around the spatial frequency (u0,v0), followed by an inverse two-dimensional (2D) Fourier transform, we get the object complex amplitude:

[1.13] O+1(x, y) = [(Ur*UO) ∗ h](x, y)

where the symbol * means convolution and h(x,y) is the impulse response corresponding to the filtering applied in the Fourier domain.

Figure 1.4.Spectral distribution of orders and spectral filtering

The impulse response of the filter is such that:

[1.14] h(x, y) = ΔuΔv sinc(πΔux)sinc(πΔvy)exp[2iπ(u0x + v0y)]

The spatial resolution is then related to 1/Δu and 1/Δv, respectively, in the x-y axis. In addition, the phase recovered with equation [1.13] includes the spatial carrier modulation that has to be removed. This may be achieved by multiplying O+1 by exp[−2iπ(u0x+v0y)].

Note that a filter having a circular bandwidth (instead of a rectangular bandwidth) can also be used [CUC 99a]. In that case, the impulse response of the filter is proportional to a J0 Bessel function.

Then, the optical object phase at the hologram plane can be estimated from the relation:

[1.15] φO(x, y) = arctan(ℑm[O+1(x, y)] / ℜe[O+1(x, y)])

and the object amplitude by:

[1.16] aO(x, y) = √(ℜe[O+1(x, y)]² + ℑm[O+1(x, y)]²)

In equations [1.15] and [1.16], ℑm [...] and ℜe [...], respectively, mean the imaginary and real parts of the complex value.
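The whole off-axis demodulation chain can be sketched numerically. The code below (a toy model with arbitrary parameters, not the chapter's own implementation) records an off-axis hologram of a smooth, non-diffusing phase object, filters a band around the carrier in the Fourier domain, removes the carrier and extracts the phase and amplitude as in equations [1.15] and [1.16]:

```python
import numpy as np

N = 256
x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

# Smooth, non-diffusing phase object (no speckle) and a tilted reference,
# oriented so that the +1 order lands at the carrier (u0, v0).
phi_O = 2 * np.pi * np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (2 * 30**2))
U_O = np.exp(1j * phi_O)
u0 = v0 = 0.25                                  # carrier, cycles/pixel
U_r = np.exp(-2j * np.pi * (u0 * x + v0 * y))

H = np.abs(U_O + U_r) ** 2                      # off-axis digital hologram

# Fourier-domain demodulation: band-limited filter around (u0, v0).
S = np.fft.fft2(H)
fu, fv = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij")
half_width = 0.1                                # filter half-width, cycles/pixel
mask = (np.abs(fu - u0) < half_width) & (np.abs(fv - v0) < half_width)
O_plus1 = np.fft.ifft2(S * mask)

# Remove the residual carrier modulation, then extract phase and amplitude.
O_plus1 *= np.exp(-2j * np.pi * (u0 * x + v0 * y))
phase = np.arctan2(O_plus1.imag, O_plus1.real)
amplitude = np.abs(O_plus1)
```

Note the single acquisition: one hologram H suffices, at the cost of the usable bandwidth set by the filter width.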

The main advantage of this approach is its capability of recovering the complex object wave through only one acquisition. Thus, there is no time spent heterodyning or moving mirrors, and the influence of vibrations is greatly reduced. However, as the diffraction terms are spatially encoded in the hologram, this one-shot capability potentially comes at the cost of usable bandwidth (filter with width Δu × Δv). In addition, the frequency modulation, induced by the angle between the reference and the object waves, has to guarantee the separation of the information contained in the different diffraction terms that are encoded in the hologram while carrying a frequency compatible with the sampling capacity of digital detectors.

However, in the field of microscopy, the microscope objective usually allows us to properly adapt the object wave field to the sampling capacity of the camera. Indeed, the lateral components kx and ky of the wave vector are divided by the magnification factor M of the microscope objective. Practically, when a standard camera with pixels of a few microns is used, a microscope objective with a magnification larger than 20× makes it possible to obtain diffraction-limited resolution even when high numerical apertures (NAs) are considered [MAR 05]. It should also be mentioned that the numerical reconstruction of the object wavefront, particularly its propagation, represents a breakthrough in modern optics and specifically in microscopy [MAR 13]. Indeed, in addition to the possibility of achieving off-line autofocusing [LAN 08, LIE 04a, LIE 04c, DUB 06a] and of extending the depth of focus [FER 05], these numerical reconstruction procedures permit us to mimic complex optical systems as well as to compensate for aberrations [COL 06a, COL 06b], distortions and experimental noise, leading to the development of various simplified and robust interferometric configurations able to quantitatively measure optical path lengths with ultrahigh resolution [MAR 13, LEE 13], in practice down to the subnanometer scale [KUH 08], depending on the wavelength and other parameters including the integration time.

1.1.3.2. Phase-shifting digital holography

In contrast to off-axis digital holography (Fourier domain), the complex amplitude of the object wave can be directly extracted by using phase-shifting methods in the temporal domain [CRE 88, DOR 99]. This approach was described by Yamaguchi in 1997 [YAM 97, YAM 01a, YAM 01b] and leads to the reconstruction of an image free from the zero-order and from the twin image (−1 order). Consider the hologram equation written as:

[1.17] H(x, y) = ar² + aO² + 2araO cos(φO − φr)

Basically, in equation [1.17] we should consider three unknowns: the offset term ar² + aO², the modulation term 2araO and the phase of the cosine function, φO − φr. So, with at least three values for H, we should be able to solve for these three unknowns. This can be done by shifting the phase in the cosine function, by adding a phase modulator to the holographic interferometer. Practically, a piezoelectric transducer (PZT) is used (although other methods do exist) [CRE 88, DOR 99]. The PZT is bonded to a mirror, and applying a small voltage to the PZT slightly moves the mirror, thus shifting the optical phase. With at least three positions of the mirror, the object wave field can be recovered. The robustness of the method increases with the number of phase-shifted holograms. Consider a set of phase-shifted holograms with a phase-shift being an integer fraction of 2π, i.e. 2π/P, with P an integer. We have:

[1.18] Hp(x, y) = ar² + aO² + 2araO cos(φO − φr + 2πp/P), p = 0, 1, ..., P − 1

[1.19] φO − φr = arctan(−Σp Hp sin(2πp/P) / Σp Hp cos(2πp/P))

and the amplitude is calculated by:

[1.20] aO = (1/Par)√[(Σp Hp cos(2πp/P))² + (Σp Hp sin(2πp/P))²]

If the reference wave is plane or spherical, that is, free from aberrations, the phase φO(x,y) may be determined without ambiguity and compensated. The complex wave may be evaluated and the object may be directly reconstructed. Using the conjugate complex wave, we may calculate the twin image.

[1.21] UO ∝ (H1 − H3) + i(H4 − H2), for four holograms H1, ..., H4 shifted by 0, π/2, π and 3π/2
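As a concrete instance of the method, the sketch below simulates four holograms phase-shifted by 0, π/2, π and 3π/2 and recovers the phase and amplitude with the conventional four-step combination (a toy model with arbitrary fields; the sign conventions are those of this sketch and may differ from other presentations):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 128
a_r = 1.0                                    # reference amplitude
a_O = rng.uniform(0.2, 1.0, (N, N))          # object amplitude
dphi = rng.uniform(-np.pi, np.pi, (N, N))    # phase difference phi_O - phi_r

# Four holograms, the reference phase being shifted by 0, pi/2, pi, 3*pi/2.
H1, H2, H3, H4 = (
    a_r**2 + a_O**2 + 2 * a_r * a_O * np.cos(dphi + s)
    for s in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)
)

# Four-step reconstruction: the three unknowns (offset, modulation, phase)
# are solved from the four intensity measurements.
phase = np.arctan2(H4 - H2, H1 - H3)
amplitude = np.sqrt((H4 - H2) ** 2 + (H1 - H3) ** 2) / (4 * a_r)
```

Here the recovery is exact because the four holograms share the same object field; in practice, object motion between exposures degrades the result.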

1.1.3.3. Parallel phase-shifting

In the technique of phase-shifting, both the single-shot and the real-time capability of digital holography are lost because of the sequential recording of holograms: four holograms are sequentially recorded by using reference waves with different phase retardations, such as 0, π/2, π and 3π/2. Although the phase-shifting method achieves low-noise images, it is unsuitable for the instantaneous measurement of moving objects. Even though off-axis digital holography is one candidate for instantaneously obtaining only the first-order diffracted wave, it has some drawbacks: a high-resolution image sensor is required to record the spatial carrier fringes and the spatial bandwidth has to be judiciously occupied (see Figure 1.4). In parallel phase-shifting digital holography, the four kinds of phase-shifting are simultaneously applied to the reference wave, in segments consisting of 2 × 2 pixels of the image sensor; the technique thus implements four phase-shifting processes by using a spatial division-multiplexing technique. The four holograms required for phase-shifting interferometry are numerically generated from a single hologram recorded with this reference wave. The recording process of the technique is schematically illustrated in Figure 1.5 [AWA 06a, AWA 06b].

Figure 1.5.Implementation of parallel phase-shifting digital holography, phase-shifting array device and the distribution of the reference wave for parallel four-step phase-shifting (from [AWA 06a])

A phase-shifting device array is placed in the reference beam in the holographic interferometer. The array device is a segmented array with a 2 × 2 cell configuration that generates the periodic four-step phase distributions 0, π/2, π and 3π/2. The array device can be implemented by using a glass plate with a periodic four-step thickness. The array device is imaged onto the image sensor so that the phase distribution of the reference wave at the image sensor plane corresponds with the arrangement of pixels in the image sensor. The size of the imaged cells at the image sensor is the same as that of the pixels. Thus, the image sensor captures a hologram recorded with the reference wave containing the four-step phase distributions. The pixels containing the same phase-shift are extracted from the recorded hologram. For each phase-shift, the extracted pixels are relocated in another 2D image at the same addresses at which they were located before being extracted. The values of the pixels not relocated in the 2D image are simply linearly interpolated by using the adjacent pixel values in the reconstruction process. By carrying out this relocation and interpolation for the four phase-shifts, four holograms H1, H2, H3 and H4 are obtained. Then, the amplitude and phase of the complex object field can be calculated using the conventional algorithm [1.21].
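The extraction step can be sketched as follows (a toy simulation with arbitrary, smooth object parameters; for brevity the four sub-holograms are kept at half resolution instead of being relocated and linearly interpolated back to the full grid, so the recovery is only approximate):

```python
import numpy as np

N = 64
x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

# Smooth toy object so that neighbouring pixels carry nearly the same phase
# (a requirement of the mosaic approach); parameters are arbitrary.
a_O = 0.7
dphi = 1.0 * np.sin(2 * np.pi * x / N) * np.cos(2 * np.pi * y / N)

# Periodic 2x2 phase-shift mosaic (0, pi/2 / pi, 3*pi/2) imaged on the sensor.
shift = np.zeros((N, N))
shift[0::2, 1::2] = np.pi / 2
shift[1::2, 0::2] = np.pi
shift[1::2, 1::2] = 3 * np.pi / 2

# Single recorded mosaic hologram (reference amplitude set to 1).
H = 1 + a_O**2 + 2 * a_O * np.cos(dphi + shift)

# Extract the four interleaved sub-holograms, one per phase-shift.
H1 = H[0::2, 0::2]; H2 = H[0::2, 1::2]
H3 = H[1::2, 0::2]; H4 = H[1::2, 1::2]

phase = np.arctan2(H4 - H2, H1 - H3)   # conventional four-step algorithm
```

The residual error comes from the fact that the four samples of each 2 × 2 cell see slightly different object phases, which is why the method requires the object field to vary slowly across the mosaic.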

1.1.3.4. Heterodyne digital holography

In a heterodyne digital holographic scheme, the reference beam is dynamically phase-shifted with respect to the object field. This shift produces time-varying interferograms at the sensor plane. Generally, the phase-shift is linear in time (frequency shift). The hologram in the detector plane results from the interference of the object wave with the δf-shifted reference wave, as described in equation [1.22]:

[1.22] H(x, y, t) = |UO(x, y) + Ur(x, y)exp(2iπδf t)|²

[1.23]

Combining off-axis holography with heterodyning permits us to reach shot-noise-limited detection and to achieve the ultimate sensitivity of digital holography [ATL 07, ATL 08, GRO 07, GRO 08].
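As a toy sketch of the temporal demodulation (assumed values, not from the chapter): if the frequency shift equals one quarter of the frame rate, the reference phase advances by π/2 between consecutive frames, so four frames realize a four-step phase-shifting sequence from which the object field is demodulated:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 64
U_O = rng.uniform(0.2, 1.0, (N, N)) * np.exp(
    1j * rng.uniform(-np.pi, np.pi, (N, N))
)                                             # toy object field

# Reference shifted by df = fs/4: phase advance of pi/2 per frame, i.e.
# four consecutive frames carry the shifts 0, pi/2, pi, 3*pi/2.
frames = [np.abs(U_O + np.exp(2j * np.pi * n / 4)) ** 2 for n in range(4)]
I1, I2, I3, I4 = frames

# Temporal demodulation: recovers the object field (up to a factor of 4).
O_est = (I1 - I3) + 1j * (I2 - I4)
```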

1.2. Back-propagation to the object plane

The previous sections have discussed the basics of digital hologram recording and demodulation. In order to discuss the digital reconstruction of the object wave at the object plane (and not necessarily at the sensor plane), this section presents the basics of the scalar diffraction of light. The algorithms used to back-propagate the object field estimated at the sensor plane are based on this approach.
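The reconstruction algorithms themselves are detailed in section 1.3; as a preview, one standard scalar-diffraction scheme is the angular spectrum method, sketched below (wavelength, pitch and distance are arbitrary choices; this is one common scheme, not necessarily the algorithm adopted in this chapter):

```python
import numpy as np

def angular_spectrum(U, wavelength, dx, d):
    """Propagate the sampled complex field U over a distance d (metres)
    with the angular spectrum transfer function; dx is the pixel pitch."""
    N = U.shape[0]
    f = np.fft.fftfreq(N, dx)                        # spatial frequencies
    fx, fy = np.meshgrid(f, f, indexing="ij")
    arg = 1.0 / wavelength**2 - fx**2 - fy**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * d) * (arg > 0)       # evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(U) * transfer)

# Back-propagation is simply propagation over -d: a forward/backward
# round trip restores the initial field.
rng = np.random.default_rng(5)
N, wl, dx, d = 128, 633e-9, 5e-6, 0.05
U0 = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
U_back = angular_spectrum(angular_spectrum(U0, wl, dx, d), wl, dx, -d)
```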

1.2.1. Monochromatic spherical and plane waves

A monochromatic light wave can be represented by a scalar field of the form:

[1.24] E(P, t) = U(P) exp(−2jπνt)

where U(P) is the complex amplitude at the observation point P(x, y, z) and ν is the frequency of the light wave. Let us begin with the definition of a spherical wave. If the point source of a spherical wave is at the origin of a Cartesian coordinate system, the complex amplitude of a spherical wave can, therefore, be expressed by [GOO 72, COL 70, YAR 85]:

[1.25] U(P) = (A/r) exp(jkr), with r = √(x² + y² + z²) and k = 2π/λ

We note that the amplitude is proportional to the inverse of the distance r between the point source and the observation point. When the center of the spherical wave is at point (xc, yc, zc), instead of the origin, the expressions are identical, with r replaced by:

[1.26] r = √((x − xc)² + (y − yc)² + (z − zc)²)

For a plane wave propagating in a homogeneous medium, the wavefront is perpendicular to the propagation direction. The plane wave can be written as:

[1.27] U(P) = A exp(j(kx x + ky y + kz z)), with kx² + ky² + kz² = k²

Figure 1.6 illustrates the concept of spherical and plane waves. Figure 1.6(a) shows a point source A that emits a divergent spherical wavefront Σ (see [1.25]). In a homogeneous medium, rays are perpendicular to Σ, and the wavefront deforms when propagating to the right (to the left for a convergent wavefront). When the point source tends to infinity, the spherical wave tends to a plane wave, as illustrated in Figure 1.6(b). In this case, the rays become parallel and the beam propagates without any deformation.
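This limiting behavior can be checked numerically. The short sketch below (parameter values are arbitrary) samples the phase of the spherical wave [1.25] across a small aperture and shows that the phase curvature shrinks as the source recedes, so the field tends to a plane wave.

```python
import numpy as np

wavelength = 633e-9                  # He-Ne wavelength (illustrative)
k = 2 * np.pi / wavelength
x = np.linspace(-1e-3, 1e-3, 201)    # 2 mm aperture

def spherical_phase_span(z_source):
    """Peak-to-peak phase of exp(jkr)/r across the aperture at distance z."""
    r = np.sqrt(x**2 + z_source**2)
    phase = k * (r - z_source)       # remove the common on-axis phase k*z
    return phase.max() - phase.min()

# The farther the point source, the flatter the wavefront over the aperture
near, far = spherical_phase_span(0.1), spherical_phase_span(10.0)
print(near, far)   # the residual curvature decreases roughly as 1/z
```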

Figure 1.6.Illustration of spherical and plane waves. For a color version of this figure, see www.iste.co.uk/picart/digiholography.zip

1.2.2. Propagation equation

The wave aspect of light is described by the classical theory of electromagnetism, by Maxwell’s equations [BOR 99, GOO 72, LAU 10, YAR 85]. In this chapter, we consider the case of a homogeneous medium. After some mathematics, Maxwell’s equations can be reduced to this propagation equation:

[1.28] ∇²E − (1/c²) ∂²E/∂t² = 0

where E is the electric field and c is the velocity of light in the medium. The operator ∇² is the Laplacian. Note that [1.28] is also valid for the magnetic field B.

1.2.3. Angular spectrum transfer function

Substituting [1.24] into [1.28], we obtain an equation which is independent of time t, known as the Helmholtz equation:

[1.29] (∇² + k²) U = 0, with k = 2πν/c = 2π/λ

This equation can be solved in the Fourier domain. We suppose that z is the distance between the initial and observation planes, and that U(x,y,0) and U(x, y, z) are the respective complex amplitudes of these two planes. Moreover, in the frequency space, their spectral functions are G0(u,v) and Gz(u,v), respectively, (u,v) being the spatial frequencies associated with the spatial coordinates (x, y). These two functions are defined by:

[1.30] G0(u, v) = ∫∫ U(x, y, 0) exp(−2jπ(ux + vy)) dx dy

[1.31] Gz(u, v) = ∫∫ U(x, y, z) exp(−2jπ(ux + vy)) dx dy

The demonstration will not be provided in this chapter; a general solution to the differential equation can be expressed with the Fourier components of U(x,y,0) and U(x, y, z) according to:

[1.32] Gz(u, v) = G0(u, v) exp(2jπz √(1/λ² − u² − v²))

Then the complex field at distance z can be obtained by:

[1.33] U(x, y, z) = ∫∫ Gz(u, v) exp(2jπ(ux + vy)) du dv

So, we have a relation between the spectrum of the wave in the initial plane and that obtained in the observation plane. This relation shows that, in the frequency space, the spectral variation in complex amplitude caused by the propagation of light over the distance z is represented by its multiplication by a phase-delay factor:

[1.34] H(u, v) = exp(2jπz √(1/λ² − u² − v²))

According to the theory of linear systems, the process of diffraction can be regarded as a transformation of the light field by a linear optical system, and the phase-delay factor can be interpreted as its transfer function in the frequency space. This interpretation of the propagation of light is called the propagation of the angular spectrum, and the associated transfer function [1.34] is called the angular spectrum transfer function. Figure 1.7 illustrates this approach.

Figure 1.7.Scheme of the diffraction by the angular spectrum

Figure 1.7 means that the field U(x,y,z) can be considered as a superposition of plane waves of amplitude Gz(u,v)dudv propagating in a direction whose cosines are (λu, λv, √(1 − λ²u² − λ²v²)). From the diffraction of the angular spectrum, [1.34] means that the elementary waves satisfying u² + v² > 1/λ² are attenuated by the propagation, i.e. all the components satisfying this relation only exist in a zone very close to the initial plane. These components of the angular spectrum are, therefore, called “evanescent waves”. As the components of the observation plane must satisfy the relation u² + v² < 1/λ², propagation in free space can be considered as an ideal low-pass filter of radius 1/λ in the frequency space. Consequently, on the condition that we can obtain the spectrum of U(x,y,0), the spectrum in the observation plane, U(x, y, z) can be expressed by relation [1.32]. Using the direct and inverse Fourier transforms (FT and FT⁻¹), the diffraction calculation process can be described as:

[1.35] U(x, y, z) = FT⁻¹{ FT{U(x, y, 0)} exp(2jπz √(1/λ² − u² − v²)) }
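The FT → transfer function → FT⁻¹ scheme maps directly onto FFTs. The sketch below is a generic implementation with illustrative parameters, not code from the book: it propagates a field by the angular spectrum method, attenuating the evanescent components, and checks that propagating forward and then backward restores the input.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, z, dx):
    """Propagate complex field u0 (N x N, pixel pitch dx) over distance z."""
    n = u0.shape[0]
    f = np.fft.fftfreq(n, d=dx)
    uu, vv = np.meshgrid(f, f, indexing="ij")
    arg = 1.0 / wavelength**2 - uu**2 - vv**2
    # Propagating components get a phase delay; evanescent ones decay
    h = np.where(arg >= 0,
                 np.exp(2j * np.pi * z * np.sqrt(np.abs(arg))),
                 np.exp(-2 * np.pi * abs(z) * np.sqrt(np.abs(arg))))
    return np.fft.ifft2(np.fft.fft2(u0) * h)

# Gaussian test beam, well sampled (spectrum far below the 1/lambda cutoff)
n, dx, wl = 128, 5e-6, 633e-9
xs = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(xs, xs)
u0 = np.exp(-(X**2 + Y**2) / (2 * (20 * dx) ** 2)).astype(complex)

u1 = angular_spectrum_propagate(u0, wl, 1e-3, dx)
u_back = angular_spectrum_propagate(u1, wl, -1e-3, dx)
print(np.max(np.abs(u_back - u0)))  # round trip is numerically exact here
```

Because |H(u,v)| = 1 on the propagating band, the total intensity of a band-limited field is conserved, which is a convenient sanity check for any implementation.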

1.2.4. Kirchhoff and Rayleigh–Sommerfeld formulas

There also exist two more solutions to the Helmholtz equation: Kirchhoff’s formula and that of Rayleigh–Sommerfeld. Using the coordinates shown in Figure 1.8 which represents the relationship between the initial plane and the observation plane, these two formulas are written in the same mathematical expression [GOO 05]:

[1.36] U(x, y, d0) = (1/jλ) ∫∫ U(X, Y, 0) (exp(jkr)/r) K(θ) dX dY

where

[1.37] r = √((x − X)² + (y − Y)² + d0²)

and θ is the angle between the normal at point (X,Y,0), and the vector MP from point (X,Y,0) to point (x, y, d0) (see Figure 1.8), K(θ) is called the obliquity factor and its three different expressions correspond to three different formulations [GOO 05].

Figure 1.8.Relation between the initial diffraction plane and the observation plane. For a color version of this figure, see www.iste.co.uk/picart/digiholography.zip

Even though there exist certain inconsistencies [GOO 05, BOR 99], Kirchhoff’s formula gives results in remarkable agreement with experiment, and it is for this reason that it is widely applied in practice. Furthermore, since the angle θ is often small in experimental configurations, the obliquity factors of the three formulations are roughly equal to unity. Thus, the Kirchhoff, Rayleigh–Sommerfeld and angular spectrum formulas are considered as equivalent representations of diffraction. The derivations of the Kirchhoff and Rayleigh–Sommerfeld approaches are presented in detail in [GOO 05]. Readers who would like to go into these aspects in greater detail are invited to familiarize themselves with these approaches.

1.2.5. Fresnel approximation and Fresnel diffraction integral

The equations proposed previously are complicated because of the presence of a square root in the complex exponentials. In practice, problems of diffraction quite often concern paraxial propagation, and to simplify the theoretical analysis, we generally use Fresnel’s approximation. Let d0 be the diffraction distance; expanding the square root in [1.34] to the first order leads to:

[1.38] exp(2jπd0 √(1/λ² − u² − v²)) ≈ exp(jkd0) exp(−jπλd0(u² + v²))

Given that a product of spectra in the frequency space corresponds to a convolution in the spatial domain, expression [1.35] can be written in the form of a convolution (* means convolution):

[1.39] U(x, y, d0) = U(x, y, 0) * FT⁻¹{ exp(2jπd0 √(1/λ² − u² − v²)) }

Substituting [1.38] into [1.39] and knowing that the inverse Fourier transform of the approximated transfer function [1.38] admits an analytical expression, we have:

[1.40] U(x, y, d0) = U(x, y, 0) * [ (exp(jkd0)/(jλd0)) exp(jπ(x² + y²)/(λd0)) ]

In [1.40], we recognize a convolution of U(x,y,0) with the impulse response of free space propagation that will be denoted as h(x, y, d0):

[1.41] h(x, y, d0) = (exp(jkd0)/(jλd0)) exp(jπ(x² + y²)/(λd0))

Equation [1.41] can also be written as:

[1.42] U(x, y, d0) = (exp(jkd0)/(jλd0)) ∫∫ U(X, Y, 0) exp(jπ[(x − X)² + (y − Y)²]/(λd0)) dX dY

Equation [1.42] constitutes Fresnel’s diffraction integral. Note that this approximation consists of replacing spherical wavelets (see [1.25]) by quadratic waves (parabolic surface approximation). Developing the quadratic terms in the exponential of [1.42] leads us to:

[1.43] U(x, y, d0) = (exp(jkd0)/(jλd0)) exp(jπ(x² + y²)/(λd0)) ∫∫ [U(X, Y, 0) exp(jπ(X² + Y²)/(λd0))] exp(−2jπ(xX + yY)/(λd0)) dX dY

Thus, with the exception of multiplicative phase and amplitude factors which are independent of X and Y, we can calculate the function U(x, y, d0) by carrying out the Fourier transform of:

[1.44] U(X, Y, 0) exp(jπ(X² + Y²)/(λd0))

This transformation must be evaluated at the frequencies (u, v) = (x/λd0, y/λd0) to guarantee a correct spatial scale in the observation plane. The calculation of the two Fresnel diffraction integrals is relatively simple compared to the formulas which rigorously satisfy the Helmholtz equation. In the regime of paraxial propagation, this approximation is relatively precise. By defining the Fresnel transfer function [GOO 05] as:

[1.45] HF(u, v) = exp(jkd0) exp(−jπλd0(u² + v²))

the Fresnel approximation can be expressed by:

[1.46] U(x, y, d0) = FT⁻¹{ FT{U(x, y, 0)} exp(jkd0) exp(−jπλd0(u² + v²)) }

This expression is analogous to the angular spectrum formulation [1.35], but the difference is related to the different transfer functions of the two formulas.
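The closeness of the two formulations in the paraxial regime can be verified numerically. The sketch below (illustrative parameters, not from the book) propagates the same smooth field with the angular spectrum transfer function and with the Fresnel transfer function, then compares the results.

```python
import numpy as np

n, dx, wl, d0 = 128, 10e-6, 633e-9, 0.05
k = 2 * np.pi / wl

xs = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(xs, xs)
u0 = np.exp(-(X**2 + Y**2) / (2 * (200e-6) ** 2)).astype(complex)

f = np.fft.fftfreq(n, d=dx)
U, V = np.meshgrid(f, f)
rho2 = U**2 + V**2

# Exact angular spectrum transfer function (all sampled frequencies propagate)
h_as = np.exp(2j * np.pi * d0 * np.sqrt(1.0 / wl**2 - rho2))
# Fresnel (paraxial) transfer function: first-order expansion of the root
h_fr = np.exp(1j * k * d0) * np.exp(-1j * np.pi * wl * d0 * rho2)

u_as = np.fft.ifft2(np.fft.fft2(u0) * h_as)
u_fr = np.fft.ifft2(np.fft.fft2(u0) * h_fr)

diff = np.max(np.abs(u_as - u_fr)) / np.max(np.abs(u_as))
print(diff)  # tiny: the two formulations agree in the paraxial regime
```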

The next section discusses the use of the theoretical basics of wave propagation to numerically reconstruct the object wave at the object plane (which is not necessarily the same as the recording plane).

1.3. Numerical reconstruction of digital holograms

1.3.1. Discrete Fresnel transform

1.3.1.1. Algorithm

[1.47]

Note that in off-axis holography, the different diffraction terms encoded in the hologram (zero-order wave, real image and virtual image) are propagated in different directions, thus enabling their separation for reconstruction. This means that equation [1.47] can be directly used with an off-axis hologram (replace UO by H in [1.47]) to calculate the propagated field at distance dr. In this case, the reconstructed field appears as illustrated in Figure 1.9.

Figure 1.9.Structure of the reconstructed field of view calculated from an off-axis hologram by using the discrete Fresnel transform

Figure 1.10.Diagram of the reconstruction with the discrete Fresnel transform

In addition, the sampling of the quadratic phase that multiplies the input data (H or UO) must fulfill the Shannon condition. This means that the minimal distance drmin that can be put into the algorithm must fulfill this relation [MAS 03, MAS 99, LI 07]:

[1.48]

Thus, the discrete Fresnel transform cannot be calculated for distances shorter than drmin.
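A minimal single-FFT implementation makes the distance constraint concrete. The sketch below is illustrative only: it assumes a square N × N hologram of pitch px, and uses drmin = N·px²/λ as one commonly used form of the Shannon condition (the exact relation [1.48] is not reproduced here). Reconstruction is refused below that bound.

```python
import numpy as np

def discrete_fresnel_transform(h, wavelength, dr, px):
    """Single-FFT Fresnel reconstruction of a square hologram h (pitch px)."""
    n = h.shape[0]
    dr_min = n * px**2 / wavelength        # assumed form of relation [1.48]
    if abs(dr) < dr_min:
        raise ValueError(f"distance below Shannon limit {dr_min:.3f} m")
    xs = (np.arange(n) - n / 2) * px
    X, Y = np.meshgrid(xs, xs)
    chirp = np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * dr))
    spec = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(h * chirp)))
    # Output plane is sampled with pitch lambda*dr/(n*px)
    xo = (np.arange(n) - n / 2) * wavelength * dr / (n * px)
    Xo, Yo = np.meshgrid(xo, xo)
    pref = np.exp(2j * np.pi * dr / wavelength) / (1j * wavelength * dr)
    return pref * np.exp(1j * np.pi * (Xo**2 + Yo**2) / (wavelength * dr)) * spec

n, px, wl = 256, 10e-6, 633e-9
xs = (np.arange(n) - n / 2) * px
X, Y = np.meshgrid(xs, xs)
field = np.exp(-(X**2 + Y**2) / (2 * (150e-6) ** 2)).astype(complex)

rec = discrete_fresnel_transform(field, wl, 0.05, px)   # 0.05 m > dr_min
peak = np.unravel_index(np.argmax(np.abs(rec)), rec.shape)
print(peak)   # the diffracted Gaussian stays centered
```

With these values dr_min ≈ 0.040 m, so asking for a reconstruction at 0.02 m raises the error, while 0.05 m is accepted.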

1.3.1.2. Spatial resolution in the reconstructed plane

The computation of the reconstructed field using a finite number of pixels induces a truncation effect. Mathematically, we have to consider the filtering function of the 2D discrete Fourier transform, which limits the achievable spatial resolution in the reconstructed plane. It is given by [PIC 08]:

[1.49]

1.3.1.3. Effect of defocus and depth of focus

Although digital holography is not a conventional imaging method, it exhibits some similarities with classical imaging. In particular, the reconstructed images include a depth of focus. The perfect focus is obtained if the spatial resolution reaches its theoretical limits. The contributions to the degradation of the spatial resolution will not be discussed in this section; the reader may refer to [YAM 01a, PIC 08, PIC 12]. However, to determine the focal depth of the reconstructed image, we can require the width of the defocusing function to be approximately equal to ρx. If the perfect image distance is di, the full depth of focus on both sides of the perfect image plane is given by:

[1.50]

Thus, the focal depth in digital holography is proportional to the square of the angular aperture of the sensor as seen from the object [YAM 01a].

Figure 1.11.Reconstructed images in and out the depth of focus

1.3.1.4. Effect of zero-padding

Figure 1.12.Illustration of the effect of zero-padding

1.3.2. Reconstruction with convolution

1.3.2.1. Basic algorithm

Figure 1.13.Diagram of the reconstruction with convolution and the angular spectrum transfer function

At any distance dr from the recording plane, the reconstructed object field can be calculated according to the algorithm in Figure 1.13, in which UO