SCIENCES

Image, Field Director – Laure Blanc-Féraud

Imagery in Life Sciences, Subject Head – Françoise Peyrin

Unconventional Optical Imaging for Biology

Coordinated by

Corinne Fournier

Olivier Haeberlé

First published 2024 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
www.iste.co.uk

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com

© ISTE Ltd 2024
The rights of Corinne Fournier and Olivier Haeberlé to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s), contributor(s) or editor(s) and do not necessarily reflect the views of ISTE Group.

Library of Congress Control Number: 2023940645

British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-78945-132-0

Introduction

Corinne FOURNIER1 and Olivier HAEBERLÉ2

1Laboratoire Hubert Curien, CNRS, IOGS, Télécom Saint-Étienne, Université Jean Monnet, Saint-Étienne, France

2IRIMAS, Université de Haute-Alsace, Mulhouse, France

I.1. Context

Optical imaging in biological systems has undergone spectacular developments in recent years, providing a quantity and a quality of information that, until about 20 years ago, was merely a dream for physicists, biologists and physicians.

The extraordinary progress made can be explained by a combination of contributions: in the physics of image formation, in instrumentation with the arrival of new sensors of remarkable performance, and finally in the extraordinary computing power and memory capacity of current computers. To illustrate these remarks, we can refer to two spectacular examples (not addressed in this book).

In fluorescence microscopy, the diffraction barrier has been “broken” in a spectacular way thanks to the development of STED (Stimulated Emission Depletion) microscopy first, followed by the invention of pointillist microscopies. STED microscopy exploits the temporal properties of the fluorescence phenomenon, which is not instantaneous, to induce stimulated emission. The latter is controlled by the system, being spatially, temporally, spectrally and directionally separable from the spontaneous fluorescence emission. Proposed as early as 1994, this technique has only recently become widespread, with the emergence of inexpensive and high-performance lasers and detectors. Pointillist microscopies exploit the statistical properties of fluorescence emission, which is random in nature. By processing large numbers of individual images, each containing few fluorescent spots, it is possible to detect and localize these spots individually, and thus to reconstruct an image with improved resolution. These techniques have now even made imaging of living specimens possible; this is due mainly to the quality of fluorescent labeling and the high photon yield of fluorophores, but especially to the extreme sensitivity of modern cameras and the speed of image processing allowed by current computers.

Another field that has undergone remarkable development is imaging through scattering media. A large body of techniques has been developed or improved in recent years. The most astounding is perhaps the one that makes use of scattering matrices. It is based on the fact that, in an imaging system, the Green’s matrix (the set of Green’s functions linking each light-emitting source to each detector pixel) contains all the information accessible through linear scattering processes. Modern experimental means (large, highly sensitive sensors, spatial light modulators with many degrees of freedom, and powerful computers) now make it possible to characterize scattering media experimentally by accurately measuring the scattering matrix. Operations that were until recently unimaginable can now be performed; these include the characterization of surface or volume scattering, the separation of single- and multiple-scattering contributions and even focusing through a scattering medium, which paves the way to both transmission and reflection imaging.

This book is concerned with other areas of biological imaging that, like these two examples, have been driven by advances in instrumentation and computer science, and that have made possible concepts that were until recently purely theoretical.

I.2. Unconventional imaging

Unconventional imaging, as opposed to conventional imaging, gives access to physical quantities (such as opacity, refractive index, wave polarization, the chemical composition of an object and so on) that are not directly accessible to conventional optical systems, whose sensors can only measure intensity.

The use of particular optical setups, combined with computer processing of the captured images/signals, enables the reconstruction of these physical quantities. This is also referred to as computational imaging. Typical unconventional imaging modalities are polarimetry, interferometry, holography and phase imaging, as well as hyperspectral imaging. Unconventional imaging can also be used to miniaturize conventional imaging systems, reduce their cost or improve their quantitative capabilities. This type of imaging requires the co-design of the optical system, the sensors and the signal and image processing algorithms. The large variety of information obtained through unconventional imaging allows for improved detection, quantitative characterization and classification of imaged objects. These systems are used in many fields of biological imaging, namely endoscopy, microscopy, skin or through-skin imaging and the imaging of fast phenomena.

I.3. Book contents

In this book, we focus on various unconventional imaging modalities developed for the biomedical field; these include wavefront analysis imaging, digital holography, optical nanoscopy, endoscopy and single-pixel imaging. The first six chapters are complementary and address phase imaging.

Chapter 1 presents quantitative phase microscopy using a wavefront analyzer. Wavefront analyzers are particular devices that are capable of measuring a wave without making use of holography (more details in the following chapters). In this chapter, the notion of wave phase is first recalled, followed by an introduction to the notion of phase object in the field of biology. The particular modality of quadriwave lateral shearing interferometry is described in detail and applied to solve major problems in biological imaging, such as dry mass measurements, the study of fast biological phenomena and birefringence measurements.

Chapter 2 presents digital holography through a detailed presentation of the principle of interferometry used in holography, based on superposing a reference wave onto the wave diffracted by the object. Introduced by Gabor in electron microscopy, holography is in fact best known for its applications in optics. The most commonly used holographic configurations are described. Finally, the digital processing of the acquired data, aimed at extracting the amplitude and phase characteristics of the measured wavefront, is detailed. This chapter serves as an introduction to Chapter 4, which deals with digital holography in the configuration known as in-line holography, as well as to Chapter 5, which presents tomographic diffractive microscopy.

Chapter 3 is concerned with a general methodology for numerical reconstruction using an approach based on “inverse problems”. This approach estimates the parameters of interest of the imaged objects from unconventional images. It is presented in a very general way and can thus be applied to different modalities of unconventional imaging. The chapter presents the simple case of likelihood maximization between data and linear imaging models, and then addresses the problem of nonlinear phase reconstruction (phase retrieval), with or without taking into account a priori knowledge about the reconstructed objects. It contains examples of in-line digital holography in connection with the next chapter.

Chapter 4 is dedicated to in-line holographic microscopy. It presents the experimental configurations of holographic microscopy used in the biomedical field along with their specificities. The problem of numerical reconstruction is addressed by comparing different historical and state-of-the-art approaches with experimental images. As a follow-up to Chapter 3, approaches based on “inverse problems” are used and their potential for holographic microscopy is discussed.

Chapter 5 gives an extension of holography, with application to transmission microscopy. Tomographic diffractive microscopy is a technique coupling holography with a scanning of the illumination on the specimen, or with a rotation of the specimen itself. This technique allows much more information to be acquired, resulting in a 3D reconstruction of the distribution of optical indices in the observed specimen. The complex refractive index (refraction and absorption) thus provides a contrast enabling many applications on specimens without preparation (no staining, no fluorescent tagging), which now finds uses in many fields (such as the study of stem cells, yeasts, bacterial growth and so on).

Chapter 6 describes another phase microscopy technique, known as white light interferometric microscopy. Based on a particular configuration (the Linnik interferometer), it differs from the techniques described in the previous chapters in that it is a reflection technique. The demodulation of the reflected signal leads to unequaled measurement precision, in the nanometric range. However, the lateral resolution remains that of a conventional optical microscope. A recent extension involves combining microsphere-assisted microscopy with interferometry, paving the way for surface nanometrology. These techniques are now also used for biological imaging, in particular cellular or tissue imaging.

Chapter 7 addresses white light and fluorescence endoscopic imaging. Following a presentation of the principles of endoscopy, particular emphasis is given to the problem of the 3D reconstruction of hollow organs observed through this technique. Endoscopic images have specific characteristics requiring dedicated approaches: low contrast of the structures to be observed, delicate fluorescent tagging, and strong distortions of the images, induced by the characteristics of the optical system and by the hollow geometry of the organs. These result in specific difficulties in performing 3D image registration.

Finally, Chapter 8 is dedicated to single-pixel imaging. Thanks to the optical modulation of scenes by known patterns and to advanced digital reconstruction algorithms, it is possible to reconstruct an image from a series of point measurements (on a single pixel). This type of imaging has developed considerably over the last 15 years and makes it possible to design efficient, low-cost systems for different modalities such as infrared, terahertz or fluorescence imaging. This chapter presents the instrumental aspects and the mathematical foundations of single-pixel computational imaging. In addition, it covers conventional numerical reconstruction techniques as well as the latest approaches based on deep learning.

The purpose of this book is thus to introduce the reader to these new unconventional imaging techniques through a detailed presentation of their principles and implementations, supported by illustrations of application examples drawn mainly from the studies of the various research groups involved in their development.

1Quantitative Phase Microscopy Using Wavefront Analysis

Serge MONNERET1, Julien SAVATIER1 and Pierre BON2

1Aix-Marseille Université, CNRS, Centrale Marseille, Institut Fresnel, France

2Université de Limoges, CNRS, XLIM, France

1.1. Introduction

The observation of transparent samples using optical techniques may seem inherently difficult, since such samples have very little influence on the light transmitted through them. Consequently, solutions more elaborate than simple imaging are needed to reveal the structures of such samples. Interest in such techniques is particularly heightened in biological imaging, where the observation of living biological cells placed in an aqueous medium yields very low contrast, whereas the observation of subcellular structures is of major relevance for the understanding of living organisms. The first techniques developed were based on the use of interferometry to create contrast, but at the cost of relatively complex systems and/or without allowing measurements (contrast enhancement only). We present here a technique based on a wavefront measurement, producing a map of the optical thickness of the imaged sample from which a set of measurements of interest can be inferred.

1.2. Description of the principles used in phase imaging

Consider an infinite translucent medium (e.g. glass with a refractive index n0 = 1.5) through which a sinusoidal monochromatic light wave of wavelength λ0 propagates, as shown in Figure 1.1. Assume that this block of glass contains two internal zones. The first one, in green in the figure, is absorbing and has the same index n0. The second one, in blue, is transparent but has index n = n0 + Δn. If this object is imaged, the green zone, which makes photons disappear by absorption, will create a zone darker than the rest of the image and will consequently be detectable by creating an absorption contrast. On the other hand, the blue zone will not be visible, because the photons that travel through it all reach the detector. Nevertheless, since the refractive index of the blue zone differs from that of the surrounding medium, these photons will not take the same time to reach the detector, arriving later or earlier than the others according to the sign of Δn.

Figure 1.1. Amplitude and phase in optical imaging

Let us call Δ the spatial shift of the wave due to passing through the entire blue zone (Figure 1.1). In this simple one-dimensional case, the wave incident on the sample can be written as A cos(k0 x) exp(−iωt), where k0 = 2π/λ0 is the wavenumber and ω is the angular frequency. The output wave will then take the form:

$$A \cos\!\big(k_0 (x - \Delta)\big)\, e^{-i\omega t} = A \cos(k_0 x - \Phi)\, e^{-i\omega t}, \quad \text{with } \Phi = k_0 \Delta \qquad [1.1]$$

The value of Φ is called the phase of the wave; hence, imaging techniques that generate contrast from the spatial distribution of the parameter Δ are called phase imaging. It should be noted that the value of Δ can be identical for a zone of low index and large thickness and for another zone of higher refractive index but smaller thickness. Consequently, it is the optical thickness (the product of the refractive index and the geometrical thickness) that defines Δ, and that will thus be the contrast parameter in phase imaging. Figure 1.2 illustrates the importance, but also the limitations, of phase imaging: it is impossible to decouple the optical index from the thickness, since a thinner object with a higher refractive index than another can ultimately produce the same phase shift.

Figure 1.2. Wave surface deformation when traversing an inhomogeneous object

(source: Bon 2011)

The time shift associated with Δ is tiny (3 fs for every 10 μm crossed, for a relative refractive index variation of 0.1) and no current conventional detector can resolve it. Consequently, if they are completely translucent, the red and orange inserts of the sample of Figure 1.2 will be totally undetectable in an imaging system such as a microscope equipped with a conventional camera. On the other hand, making the wave from the sample interfere with a reference wave creates an interference contrast that is sensitive not to local absorption, but to the parameter Δ and thus to the local optical thickness. Zernike’s phase contrast techniques (Zernike 1935), now conventional, as well as Nomarski’s contrast techniques (Nomarski 1955), also called differential interference contrast (DIC), are all based on this general principle. Holographic microscopy is probably the most successful version of this principle: it gives access to quantitative information, but requires dedicated microscopes (Cuche et al. 1999).
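As a quick sanity check of these orders of magnitude, the time shift follows directly from Δt = Δn·h/c. A minimal computation, using the values quoted above:

```python
c = 3e8            # speed of light in vacuum (m/s)
delta_n = 0.1      # relative refractive index variation
h = 10e-6          # traversed thickness (m)

opd = delta_n * h  # optical path difference: 1 um
delta_t = opd / c  # associated time shift
print(f"OPD = {opd*1e9:.0f} nm, time shift = {delta_t*1e15:.1f} fs")
# -> OPD = 1000 nm, time shift = 3.3 fs: far beyond any detector's temporal resolution
```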

Consider now a monochromatic plane wave of wavelength λ0 passing through a phase object considered here as non-diffracting. Its scalar electromagnetic field can be written, after traversal of the object and in a reference plane perpendicular to its mean wave vector:

$$E(x, y) = a(x, y)\, \exp\!\left( i\, \frac{2\pi}{\lambda_0}\, W(x, y) \right) \qquad [1.2]$$

with a(x, y) the amplitude of the field and W(x, y) the distance between the wavefront perturbed by the object and the reference plane wave. Figure 1.3 gives a representation of this surface.

Figure 1.3. Representation of the W function in a one-dimensional example

(source: Bon 2011)

When illuminating a phase object with an incident plane wave (pictured in Figure 1.2), the wavefront W after traversing the object is directly related to the optical path difference distribution that characterizes this object:

$$W(x, y) = \int_0^{h} \big( n(x, y, z) - n_{\mathrm{med}} \big)\, \mathrm{d}z \qquad [1.3]$$

where h is the height of the sample, n(x, y, z) is the refractive index distribution and nmed is the refractive index of the surrounding medium.

W(x, y) = Δ(x, y) = OPD(x, y) is called the optical path difference in the literature. We will systematically use this notation in the rest of this chapter.

The knowledge of the distribution OPD(x, y) thus gives a complete picture of the optical thickness of the object traversed by an incident plane wave. In other words, if we consider that the inhomogeneities within the sample introduce local photon delays with respect to one another, sending a plane wave onto this sample produces a wavefront distorted by propagation within these inhomogeneities. The technique proposed here consists of simply measuring the wavefront after perturbation by the sample, using a wavefront sensor, and then deducing the optical thickness map of this sample, the shape of the incident wavefront being known beforehand. This technique has been made possible by the emergence of wavefront sensors with sufficient resolution to produce information useful to biologists.
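To make equation [1.3] concrete, the following sketch evaluates this forward model for an illustrative synthetic sample: a spherical “cell” of refractive index 1.37 immersed in a medium of index 1.33 (all geometric values are assumptions chosen for illustration only):

```python
import numpy as np

n_cell, n_med = 1.37, 1.33
radius = 5e-6                              # cell radius (m), assumed

x = np.linspace(-10e-6, 10e-6, 256)
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2

# For a sphere, the integral of (n - n_med) dz in equation [1.3] reduces to
# (n - n_med) times the chord length 2*sqrt(R^2 - x^2 - y^2) through the sphere.
chord = 2 * np.sqrt(np.clip(radius**2 - r2, 0.0, None))
W = (n_cell - n_med) * chord               # OPD map W(x, y), in meters

print(f"peak OPD = {W.max()*1e9:.0f} nm")  # ~400 nm at the cell center
```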

1.3. Quadriwave lateral shearing interferometry for high spatial resolution wavefront analysis

Wavefront analysis using quadriwave lateral shearing interferometry (QLSI) is a technique that can be used to meet the objectives presented above.

It is based on the measurement of the gradient of the wavefront simultaneously in two orthogonal directions. To this end, four replicas of the electromagnetic field shifted from each other in the two directions of the analysis plane (Figure 1.4) will be generated by a specific diffraction mask. The interference between these replicas will create an intensity modulation (interference fringes forming what will be called an interferogram, recorded by the sensor of a digital camera) carrying the information of the phase gradients. Post-acquisition analysis of the recorded interferograms then allows for the calculation of the shape of the incident wavefront on the diffractive mask.

Figure 1.4. Separation of the incident wavefront into four symmetric replicas along two perpendicular axes by diffraction on a modified Hartmann mask (MHM)

(source: Aknoun 2014)

1.3.1. Generation of incident field replicas

As we have just stated, quadriwave lateral shearing interferometry is based on a measurement of wavefront gradients along two perpendicular directions, from the analysis of the interference pattern produced on the sensor of a camera by four replicas of this wavefront slightly inclined along these two directions.

The optical element making possible the generation of these four diffraction orders is a sinusoidal phase grating of transmittance t(x, y) in the two spatial directions. Equation [1.4] describes this transmittance for a grating of period p along x and y; Figure 1.5 presents this theoretical phase grating (Figure 1.5(a)) as well as its diffraction orders (Figure 1.5(c)):

$$t(x, y) = \cos\!\left( \frac{2\pi x}{p} \right) \cos\!\left( \frac{2\pi y}{p} \right) \qquad [1.4]$$

Figure 1.5. Bisinusoidal phase grating (a) and modified Hartmann grating (b). Graphs (c) and (d) represent the diffracted amplitude distribution in the different orders of these two gratings, respectively

(source: Bon 2011)

Building such a grating is nevertheless difficult. The chosen solution is based on a three-level approximation (+1; 0; −1) of the transmittance function (Figure 1.5(b)). This grating is called a modified Hartmann mask (MHM); it can be seen as the combination of a matrix of square apertures (called the Hartmann mask) and a phase checkerboard shifting one hole out of two by π. Figure 1.5(d) shows the orders diffracted by an MHM. The choice of a duty cycle of 2/3 suppresses orders 3 and −3. The addition of the pure phase grating suppresses, by destructive interference, all even orders, including the zeroth order representing the field not diffracted by the grating. An MHM is therefore a good practical approximation of the optimal bisinusoidal phase grating: it concentrates 90% of the diffracted energy in the four desired orders, the rest being distributed among odd orders higher than 3. Nevertheless, its main drawback is that it is not as transparent as the optimal two-dimensional phase grating.
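This order structure can be checked numerically. Below is a minimal scalar model of an MHM (square apertures with duty cycle 2/3, multiplied by a 0/π checkerboard); the sampling parameters are illustrative assumptions, and the sketch simply verifies the suppression of the zeroth and even orders and the concentration of the diffracted energy in the four (±1, ±1) orders:

```python
import numpy as np

N = 66        # samples per aperture period (multiple of 3 for a 2/3 duty cycle)
cells = 8     # aperture periods per side; the checkerboard period spans 2 cells

# One aperture period: open over the central two-thirds, opaque elsewhere.
open_1d = (np.abs(np.arange(N) - (N - 1) / 2) < N / 3).astype(float)
aperture = np.outer(open_1d, open_1d)

mask = np.zeros((cells * N, cells * N))
for i in range(cells):
    for j in range(cells):
        sign = (-1) ** (i + j)                 # the pi phase checkerboard
        mask[i*N:(i+1)*N, j*N:(j+1)*N] = sign * aperture

# Far-field order energies are the squared moduli of the Fourier coefficients.
spectrum = np.abs(np.fft.fft2(mask) / mask.size) ** 2
total = spectrum.sum()

# The four working orders sit at FFT bins (+-cells/2, +-cells/2), i.e. at the
# (+-1, +-1) orders of the checkerboard period.
k = cells // 2
four = sum(spectrum[sx * k, sy * k] for sx in (1, -1) for sy in (1, -1))

print(f"zero order:            {spectrum[0, 0] / total:.3f}")  # ~0 by design
print(f"four (+-1,+-1) orders: {four / total:.3f}")            # bulk of the energy
```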

The π phase shift of the checkerboard is obtained by increasing the optical path traveled, that is, the grating is locally thicker to introduce the desired delay. Relation [1.5] links the difference in thickness e to the optical index n of the material used to create this π phase shift:

$$e = \frac{\lambda}{2\,(n - 1)} \qquad [1.5]$$

This relation shows the obvious dependence of the etching thickness on the operating wavelength of the grating. When the phase shift is not exactly π, the even orders reappear, in particular the zeroth order. These orders interfere with the useful orders and create additional frequencies in the interferogram, degrading the signal-to-noise ratio. It is however possible to find mask-to-detector distances that minimize the influence of the parasitic orders (Primot and Guérineau 2000): in these planes, the interferogram is similar to the one that would be obtained for a phase shift of π. In practice, when the light is not monochromatic, e should be chosen so as to create a phase shift of π for the central wavelength of the source, with a mask/detector distance close to the Talbot distance of the Hartmann mask for this central wavelength (Velghe 2006).

This operating point makes it possible to obtain an achromatic assembly operating regardless of the wavelength spectrum of the source, including white light. This is a major advantage of the technique applied to microscopy, since it enables us to use the conventional illumination of the microscope to perform the measurement (halogen source sufficiently spatially coherent to generate the interferograms, used in a Köhler assembly with a reduced aperture diaphragm).
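As a numerical illustration of relation [1.5], the etching depth for a hypothetical fused-silica mask (n ≈ 1.46) designed around a 550 nm central wavelength:

```python
wavelength = 550e-9   # design (central) wavelength (m), an assumed value
n = 1.46              # mask material index, assumed fused silica

e = wavelength / (2 * (n - 1))   # relation [1.5]: etching depth for a pi shift
print(f"etching thickness e = {e*1e9:.0f} nm")   # ~598 nm
```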

1.3.2. Determination of the incident wavefront

As we have seen above, after a few millimeters of propagation, the beams are slightly separated and then form interference fringes whose pitch is determined by the angle between the propagation directions. If the incident beam is a perfectly plane wave, the interferogram recorded by a camera is a regular two-dimensional array of light points. If the wavefront presents local modulations, this regular mesh is locally deformed. The study of these distortions using spectral analysis methods allows us to find the phase spatial gradients.

More precisely, the intensity distribution at the level of the interferogram, after propagation along a distance z after passing through the mask (placed in the plane z = 0), is given by (Bon 2009):

$$I(x, y, z) = \frac{I_0}{4} \left\{ 1 + \cos\!\left[ \frac{2\pi}{p} \left( x - z\, \frac{\partial W}{\partial x} \right) \right] \right\} \left\{ 1 + \cos\!\left[ \frac{2\pi}{p} \left( y - z\, \frac{\partial W}{\partial y} \right) \right] \right\} \qquad [1.6]$$

where I0 corresponds to the maximum intensity on the diffraction mask output. This equation corresponds to a perfect mask generating only the four desired diffraction orders.

This equation shows that the various wavefront gradients are encoded as a frequency modulation around the characteristic spatial frequencies of the interferogram, that is, the natural spatial frequencies of the MHM.

Let us now consider an arbitrary wave arriving at the MHM at normal average incidence. If we merely consider a single dimension to simplify the equations, the intensity recorded at the detector can be written as:

$$I(x, z = L) = \frac{I_0}{2} \left\{ 1 + \cos\!\left[ \frac{2\pi}{p} \left( x - L\, \frac{\partial W}{\partial x} \right) \right] \right\} \qquad [1.7]$$

where L is the distance between the MHM and the camera sensor. This coefficient, which multiplies the wavefront gradient term, is important: it can be used to tune the sensitivity to gradients (the larger L, the more sensitive the sensor, but the smaller its dynamic range).

The gradient along the x-direction of the wavefront can be extracted by demodulation in the Fourier space of the interferogram. We call |νint| the maximum spatial frequency of the intensity information i0. We call |νmod| the support in Fourier space of the information modulated around the carrier frequency and 1/p the fundamental frequency of the interferogram. Tpix refers to the sampling period of the matrix detector (it is assumed that the pixels are square), θ refers to the orientation of the interferogram with respect to the sensor pixel matrix and Δν = 2/αp refers to the bandwidth for the extraction of the phase information, α being a parameter used to simplify the computations. All of these notations are represented in Figure 1.6.

Figure 1.6. Definition of the various supports in Fourier space

(source: Bon 2011)

Various conditions must be met to ensure that the bandwidth is sufficient, without spectrum aliasing, to accurately reconstruct the phase and intensity signal. First, the main frequency of the grating cos(θ)/p must be lower than the Nyquist frequency 1/2Tpix in order to sample the signal correctly; all the frequencies of the bandwidth around this main frequency must also satisfy the Nyquist criterion. Finally, the last condition stems from the necessary non-overlap between the intensity information and the modulated phase information, reflected by the non-overlap of the red and orange domains of Figure 1.6. It should be noted that for essentially transparent samples, such as those usually studied in cell biology, the energy (especially at high spatial frequencies) will be lower in the intensity channel than in the phase channel.

In the end, a choice of α = 2 is often made in the reconstruction algorithms to best separate the bandwidth of the frequency support from the intensity distribution of the samples. This choice implies that the phase and intensity images resulting from the reconstruction have 16 times fewer pixels than the images initially produced in the sensor plane. More precisely, with such a value of α, a sensor area of 4 x 4 pixels is used to produce one local phase measurement. The lower limit that would just satisfy the Nyquist criterion would be to sample the interferogram with 2.73 x 2.73 pixels. Some recent commercial systems use a reconstruction algorithm based on 3 x 3 pixel sampling (SID4 SC8, Phasics, France). Finally, equation [1.6] shows that the interferogram also gives access to the gradients in the (x + y) and (x – y) directions. This redundant information can be used to increase the signal-to-noise ratio of the wavefront measurement (Velghe et al. 2005).

The most commonly used method to quantitatively reconstruct the wavefront employs the discrete Fourier transform, which leads to artifacts when the phase objects overlap the edges of the image, which is common in cell biology. To address this shortcoming, a simple approach has been proposed, based on the duplication and antisymmetrization of the derivative data, in the direction of the derivative, before integration (Volkov et al. 2002; Bon et al. 2012a). This approach completely erases edge effects by enforcing continuity and differentiability at the edge of the image.

It is thus possible to extract the OPD gradients from the Fourier analysis of an interferogram recorded on a CCD or CMOS sensor. The gradients are finally integrated to obtain a two-dimensional OPD map. Moreover, since the phase information is encoded in the interferogram by a frequency modulation, the phase and intensity are determined independently. The intensity image is simply extracted by applying a low-pass filter to the interferogram (a fringe removal operation). This property offers an advantage over conventional phase contrast methods such as Nomarski/DIC or Zernike contrast, where it is very difficult to attribute the intensity modulations visible in the images either to a spatial variation of the sample absorption or to OPD gradients.
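The processing chain of this section can be summarized in a short numerical sketch: simulate an interferogram with the model of equation [1.6], demodulate the two carriers in Fourier space to recover the wavefront gradients, then integrate them by Fourier integration. The fringe period, propagation distance and test wavefront below are illustrative assumptions, and the integration step shown assumes a wavefront vanishing at the image edges (otherwise, the gradients should first be antisymmetrized as described above):

```python
import numpy as np

N = 512
p = 16.0          # interferogram fringe period (pixels), assumed
z = 40.0          # mask-to-sensor distance (same arbitrary pixel units)

x = np.arange(N)
X, Y = np.meshgrid(x, x, indexing="ij")

# Test wavefront: a smooth Gaussian bump kept well inside the field.
W = 0.5 * np.exp(-((X - N/2)**2 + (Y - N/2)**2) / (2 * 60.0**2))
gx, gy = np.gradient(W)

# Interferogram of equation [1.6] for a perfect four-order mask.
I0 = 1.0
I = (I0/4) * (1 + np.cos(2*np.pi/p * (X - z*gx))) \
           * (1 + np.cos(2*np.pi/p * (Y - z*gy)))

def demodulate(img, carrier_bin, axis, halfwidth):
    """Bring one carrier of the interferogram to DC and low-pass around it."""
    F = np.roll(np.fft.fft2(img), -carrier_bin, axis=axis)
    b = np.fft.fftfreq(img.shape[0]) * img.shape[0]      # bin indices
    U, V = np.meshgrid(b, b, indexing="ij")
    F[(U**2 + V**2) > halfwidth**2] = 0.0
    return np.angle(np.fft.ifft2(F))                     # modulated phase

carrier = int(round(N / p))                              # frequency 1/p -> bin N/p
phi_x = demodulate(I, carrier, axis=0, halfwidth=N/(2*p))
phi_y = demodulate(I, carrier, axis=1, halfwidth=N/(2*p))

# The modulated phase is -(2*pi/p) * z * grad(W); invert for the gradients.
gx_r = -phi_x * p / (2 * np.pi * z)
gy_r = -phi_y * p / (2 * np.pi * z)

# Fourier integration of the two gradients (here without the edge-handling
# antisymmetrization of Volkov et al. 2002, since W vanishes at the borders).
f = np.fft.fftfreq(N)
U, V = np.meshgrid(f, f, indexing="ij")
denom = 2j * np.pi * (U + 1j * V)
denom[0, 0] = 1.0                                        # avoid division by zero
W_rec = np.real(np.fft.ifft2(np.fft.fft2(gx_r + 1j * gy_r) / denom))
W_rec += W.mean() - W_rec.mean()                         # fix the unknown offset

print(f"rms reconstruction error: {np.std(W_rec - W):.1e}")
```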

1.3.3. Wavefront sensor implementation

The technology of wavefront measurement using quadriwave lateral shearing interferometry has until now been exclusively commercialized by Phasics (Saint-Aubin, France), a company that holds the exploitation rights. The various products developed over the last 20 years have followed the technological evolutions of cameras (including the shift from CCD to sCMOS cameras, and the development of sensors for specific spectral domains such as the infrared), making it possible to address ever wider application fields. Nonetheless, some applications still require very specific cameras (EMCCD in very low light, ultrafast cameras for the measurement of dynamic samples, etc.) to acquire interferograms. The proposed solution (Berto et al. 2012), now commercialized (SID4Element, Phasics, France), consists of using an imaging optical system as a relay to project the image of an MHM, at the appropriate location and magnification, inside the camera best suited to the targeted application. Consequently, it is possible to acquire interferograms under very low light to achieve phase images in full-field coherent Raman microscopy, or to use a fast camera to follow fast-evolving samples (see section 1.5.3).

1.4. Using a wavefront sensor in microscopy

Most non-tissue biological objects (isolated biological cells in culture) appear very faint in intensity-based imaging (otherwise known as absorption imaging). In this case, the incident light exits the sample having undergone a phase shift only (hence the name phase objects sometimes used), and therefore with a modified wavefront. Placing a wavefront sensor at the sample output should thus allow the optical thickness of the sample to be mapped, and thus a useful contrast to be generated.

1.4.1. Necessary approximations

We consider a simple model for interpreting the measured optical path difference values, overlooking the imaging and filtering effects of the microscope objective. The first approach that can be followed for interpreting the measurements consists of considering that, to first order, the measured phase shift is due only to photons slowing down when passing through the sample, and that no change in the direction of propagation occurs during traversal. Refraction at the interfaces and diffraction by the object are not taken into account: refraction is negligible for small index differences or when the interfaces are orthogonal to the propagation of light; diffraction becomes visible when the objects become small relative to the resolution limit of the microscope. Under these assumptions, the wavefront sensor directly measures the optical path difference within the sample, according to the diagram in Figure 1.3.

This assumption implies that all photons propagate in the same direction, that of the optical axis of the system: the illumination is said to be plane-wave illumination. In practice, all of the measurements are performed under a quasi-plane-wave illumination regime obtained by restricting the diameter of the aperture diaphragm of the microscope’s Köhler illumination.

It would thus seem that the simple measurement of the wavefront shape at the sample output provides a map of the optical thickness of the sample. In practice, and in particular inside a microscope, it is unrealistic to generate a perfectly plane wavefront incident on the sample. Consequently, we must also record the shape of this incident wavefront – a step called “reference analysis” – performed at the beginning of the experiment. It is a priori not necessary to repeat this step as long as the experimental conditions are not modified (temperature, microscope settings, non-uniform conditions at the sample level). The modifications of the wavefront due to the sample, relative to this reference wavefront, then give the sought local optical thickness map.

1.4.2. Experimental configuration

Figure 1.7 shows the conventional experimental configuration for quantitative phase imaging by wavefront measurement using a wavefront sensor based on QLSI technology. This sensor, consisting of the MHM and a CCD camera, is placed on the video output port of a microscope. All of the images presented in the following sections were obtained with different commercial sensors, whose type will be specified in each case. The CCD sensor of the wavefront sensor is placed in the focal plane of the microscope tube lens so that the sample is imaged on the detector. In fact, the sensor essentially consists of a camera inside which the MHM is inserted. Consequently, its use is identical to that of a conventional camera in terms of handling and mounting on the microscope (C mount). The measurement of the wavefront of the light incident on the sensor (illumination beam, in green, plus components due to diffraction by the object, in red) is then carried out according to the methodology explained previously. It is also represented in Figure 1.8, which specifically illustrates the process of quantitative imaging of biological samples.

This figure, taken from Aknoun (2014), summarizes the main steps of the reconstruction of an optical thickness map for a biological sample (here, a living cell of the COS-7 line, derived from the kidney of an African green monkey). The recorded interferogram, visible in the figure, shows how the inhomogeneities present in the sample disturb the regular alignment of the interference fringes. A spectral demodulation of this interferogram in Fourier space then makes it possible to recover the intensity image and to obtain the phase gradients due to the sample in two perpendicular directions. Integration of these two gradients finally reconstructs a quantitative phase map of the object. Since the local modulation of the fringe spacing is caused only by the local phase shift of the wavefront, and not by its intensity, the phase and intensity measurements are completely uncorrelated. Throughout the process, the commercial algorithms used (SID4Bio software, Phasics, France) optimally take into account the preliminary recording of a reference interferogram, acquired from an empty part of the sample under the same conditions. As such, only the OPD map due to the sample is displayed, independently of the characteristics of the light beam, which may vary according to the conditions under which the optical system is used. It should be noted that the native light source of the microscope (white halogen lamp, or white LED depending on the microscope used) is employed to acquire the images, owing to the achromatic character of the technique. However, it is sometimes useful to use spectral bandpass filters or LEDs of different colors to perform certain measurements in better defined spectral domains, when the application requires it (typically, sensitivity to sample chromaticity).

Figure 1.7. Conventional optical configuration for quantitative phase imaging by QLSI

1.5. Applications to biological imaging

1.5.1. High-contrast imaging without labeling

The first contribution of this imaging technique is to offer remarkable phase contrast, without halo (characteristic of Zernike phase contrast when used without an apodized mask) or unidirectional gradient (characteristic of the Nomarski/DIC process). Biological samples show sufficient local refractive index inhomogeneities to reveal their internal structure, even when the traversed thicknesses are small. The OPD sensitivity, of the order of 0.1 nm, is sufficient to clearly characterize the plasma membrane, mitochondria, different types of vesicles (lysosomes, peroxisomes, endosomes, at least a good part of each of them) and of course the nucleus and chromosomes during cell division. It is also possible to distinguish components of the cytoskeleton, such as microtubules and some actin stress fibers, at least in fairly thin areas of the cell away from the nucleus (Bon et al. 2014). This contrast quality, even without exploiting the quantification inherent to the technique, is already very useful in cell biology to assess the state of the cells under the microscope and the results of different treatments (drugs, temperature, illumination at different powers and wavelengths) (Figure 1.9). Similarly, by following the cellular and intracellular dynamics over time, at rates that can reach several Hz, the cellular metabolism can be evaluated.

Figure 1.8. Main steps for reconstructing an OPD distribution from an interferogram recorded by a wavefront sensor

(source: Aknoun 2014)

1.5.2. Dry mass measurement in living biological cells

In some specific cases, an optical path difference measurement can provide information about the amount of material present in the sample. For amorphous or crystalline solids, a linear increase in the density of the material translates into a linear increase in the real part of the refractive index. Similarly, in the case of a solution, increasing the mass concentration of a solute in a solvent produces a proportional increase in the refractive index of the solution, with a coefficient that depends on the solute. Nevertheless, even for biological cells containing a large number of constituents (proteins, lipids, sugars, nucleic acids, etc.), it has been demonstrated that, for most eukaryotic cells, a single generic coefficient relating the refractive index of the solution to the solute mass is valid (Barer 1952).

Consequently, the cell dry mass m, defined as the sum of the masses of all its non-aqueous constituents, can be directly calculated from the integral of the distribution of the cell OPD over its projected surface:

$$m = \frac{1}{\alpha} \iint_{S} \mathrm{OPD}(x, y)\, \mathrm{d}x\, \mathrm{d}y \qquad [1.8]$$

where the mass is given in picograms and the integral, corresponding to a volume, is calculated in μm³. The coefficient α = 0.18 μm³·pg⁻¹ is the one most commonly used by the scientific community dealing with quantitative phase measurement; it varies only for some particular cell types. For example, it is equal to 0.19 μm³·pg⁻¹ for red blood cells, given their high hemoglobin content.
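A minimal numerical sketch of equation [1.8], using a synthetic OPD map and a crude threshold segmentation (the pixel size, threshold and cell shape are illustrative assumptions; the algorithm described below additionally flattens the residual background):

```python
import numpy as np

alpha = 0.18                  # specific refraction increment (um^3/pg)
pixel_area = 0.1 * 0.1        # assumed pixel size at the sample plane (um^2)

# Synthetic OPD map (um): a smooth blob standing in for a cell.
N = 256
y, x = np.mgrid[:N, :N]
opd = 0.3 * np.exp(-((x - N/2)**2 + (y - N/2)**2) / (2 * 40.0**2))

# Crude segmentation by thresholding (hypothetical threshold); the residual
# OPD level between cells is used to calibrate the background ("flattening").
cell = opd > 0.01
background = np.median(opd[~cell])

# Integral of equation [1.8] over the segmented surface, in um^3.
volume = ((opd - background) * cell).sum() * pixel_area
print(f"dry mass = {volume / alpha:.0f} pg")
```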

Figure 1.9. COS-7 cells in culture at 37°C. Quantitative phase image taken with the SID4 HD prototype (2560 x 2160 phase pixels), 100X, NA = 1.3. Scale bar: 10 μm

The integration of the OPD over the cell surface requires automatic segmentation of the cell edge to define this surface. This operation is facilitated by the fact that the images are of good quality and, above all, free of halo around the cells. Even in the thinnest areas of the cell edge, where the OPD corresponds only to the plasma membrane, composed of a phospholipid bilayer folded over itself, the contrast generated is sufficient for the segmentation algorithm to correctly identify the cell edges. This algorithm, developed jointly with Frédéric Galland (Institut Fresnel, Phyti team), allows not only this edge detection but also a calibration of the OPD levels. Indeed, it includes a so-called background flattening step, that is, the possibly local measurement of the residual OPD level between the cells (defined as the image “background”), which is then used to calibrate the dry mass. The reference analysis step is in fact not sufficient to ensure a zero residual, given the sensitivity of the technique. The developed algorithm thus allows the computation of dry mass values that are very accurate and close to biological reality (Aknoun et al. 2015).

Figure 1.10. Thirty-four-hour cell cycle monitoring of COS-7 cells

COMMENTS ON FIGURE 1.10.– (a) Quantitative phase image taken with SID4Bio, 40X NA 0.75, 37°C and 5% CO2, scale bar: 20 μm. Automatically segmented cells; the cell designated by a is monitored. (b) Same field of view 15 h later; daughter cell b is monitored. (c) Dry mass monitoring over time, 1 point every 30 s. Cell a in black, daughter cell b in red. (d) Magnification of the plateau of cell a represented in (c). (e) Representation of the same points in a “projected area versus dry mass” space. (f) Images (raw on the left-hand side and filtered with a high-pass frequency filter on the right-hand side) of cell a at five time points, denoted 1 to 5 in insets (d) and (e).