Fluorescence Microscopy
Description

A comprehensive introduction to advanced fluorescence microscopy methods and their applications.
This is the first title on the topic designed specifically to allow students and researchers
with little background in physics to understand both microscopy basics and novel light microscopy techniques. The book is written by renowned experts and pioneers
in the field, who take an intuitive rather than formal approach. It always keeps the nonexpert reader in mind, making even unavoidably complex theoretical concepts readily accessible. All commonly used methods are covered.
A companion website with additional references, examples and video material makes
this a valuable teaching resource:
http://www.wiley-vch.de/home/fluorescence_microscopy/


Pages: 757

Year of publication: 2013




Table of Contents

Related Titles

Title Page

Copyright

Preface

List of Contributors

Chapter 1: Introduction to Optics and Photophysics

1.1 Interference: Light as a Wave

1.2 Two Effects of Interference: Diffraction and Refraction

1.3 Optical Elements

1.4 The Far-Field, Near-Field, and Evanescent Waves

1.5 Optical Aberrations

1.6 Physical Background of Fluorescence

1.7 Photons, Poisson Statistics, and AntiBunching

References

Chapter 2: Principles of Light Microscopy

2.1 Introduction

2.2 Construction of Light Microscopes

2.3 Wave Optics and Resolution

2.4 Apertures, Pupils, and Telecentricity

2.5 Microscope Objectives

2.6 Contrast

2.7 Summary

Acknowledgments

References

Chapter 3: Fluorescence Microscopy

3.1 Features of Fluorescence Microscopy

3.2 A Fluorescence Microscope

3.3 Types of Noise in a Digital Microscopy Image

3.4 Quantitative Fluorescence Microscopy

3.5 Limitations of Fluorescence Microscopy

3.6 Current Avenues of Development

References

Further Reading

Recommended Internet Resources

Fluorescent spectra database

Chapter 4: Fluorescence Labeling

4.1 Introduction

4.2 Principles of Fluorescence

4.3 Key Properties of Fluorescent Labels

4.4 Synthetic Fluorophores

4.5 Genetically Encoded Labels

4.6 Label Selection for Particular Applications

4.7 Conclusions

References

Chapter 5: Confocal Microscopy

5.1 Introduction

5.2 The Theory of Confocal Microscopy

5.3 Applications of Confocal Microscopy

Acknowledgments

References

Chapter 6: Fluorescence Photobleaching and Photoactivation Techniques

6.1 Introduction

6.2 Basic Concepts and Procedures

6.3 Fluorescence Recovery after Photobleaching (FRAP)

6.4 Continuous Fluorescence Microphotolysis (CFM)

6.5 Confocal Photobleaching

6.6 Fluorescence Photoactivation and Dissipation

6.7 Summary and Outlook

References

Chapter 7: Förster Resonance Energy Transfer and Fluorescence Lifetime Imaging

7.1 General Introduction

7.2 FRET

7.3 Measuring FRET

7.4 FLIM

7.5 Analysis and Pitfalls

7.6 Summary

References

Chapter 8: Single-Molecule Microscopy in the Life Sciences

8.1 Encircling the Problem

8.2 What Is the Unique Information?

8.3 Building a Single-Molecule Microscope

8.4 Analyzing Single-Molecule Signals: Position, Orientation, Color, and Brightness

8.5 Learning from Single-Molecule Signals

Acknowledgments

Chapter 9: Super-Resolution Microscopy: Interference and Pattern Techniques

9.1 Introduction

9.2 Structured Illumination Microscopy (SIM)

9.3 Spatially Modulated Illumination (SMI) Microscopy

9.4 Application of Patterned Techniques

9.5 Conclusion

9.6 Summary

Acknowledgments

References

Chapter 10: STED Microscopy

10.1 Introduction

10.2 The Concepts behind STED Microscopy

10.3 Experimental Setup

10.4 Applications

10.5 Summary

References

Index

Related Titles

Salzer, R.

Biomedical Imaging

Principles and Applications

2012

ISBN: 978-0-470-64847-6

Sauer, M., Hofkens, J., Enderlein, J.

Handbook of Fluorescence Spectroscopy and Imaging

From Single Molecules to Ensembles

2011

ISBN: 978-3-527-31669-4

Goldys, E. M.

Fluorescence Applications in Biotechnology and Life Sciences

2009

ISBN: 978-0-470-08370-3

The Editor

Prof. Dr. Ulrich Kubitscheck

Rheinische Friedrich-Wilhelms-Universität Bonn

Institute of Physical and Theoretical Chemistry

Wegelerstr. 12

53115 Bonn

Germany

Cover

The cover graphic was created by Max Brauner, Hennef, Germany.

Limit of Liability/Disclaimer of Warranty:

While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty can be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Card No.: applied for

British Library Cataloguing-in-Publication Data

A catalogue record for this book is available from the British Library.

Bibliographic information published by the Deutsche Nationalbibliothek

The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at <http://dnb.d-nb.de>.

© 2013 Wiley-VCH Verlag GmbH & Co. KGaA, Boschstr. 12, 69469 Weinheim, Germany

Wiley-Blackwell is an imprint of John Wiley & Sons, formed by the merger of Wiley's global Scientific, Technical, and Medical business with Blackwell Publishing.

All rights reserved (including those of translation into other languages). No part of this book may be reproduced in any form — by photoprinting, microfilm, or any other means — nor transmitted or translated into a machine language without written permission from the publishers. Registered names, trademarks, etc. used in this book, even when not specifically marked as such, are not to be considered unprotected by law.

Print ISBN: 978-3-527-32922-9

ePDF ISBN: 978-3-527-67160-1

ePub ISBN: 978-3-527-67161-8

mobi ISBN: 978-3-527-67162-5

oBook ISBN: 978-3-527-67159-5

Preface

What is this book?

This book is both a high-level textbook and a reference for researchers applying high-performance microscopy. It provides a comprehensive yet compact account of the theoretical foundations of light microscopy, the large variety of specialized microscopic techniques, and the quantitative utilization of light microscopy data. It enables the user of modern microscopic equipment to fully exploit the complex instrumental features with knowledge and skill. These diverse goals were approached by recruiting a collective of leading scientists as authors. We applied a stringent internal reviewing process to achieve homogeneity, readability, and a satisfying coverage of the field. Also, we took care to reduce redundancy as far as possible.

Why this book?

There are now numerous books on light microscopy on the market. On closer inspection, however, many of them are written at an introductory level with regard to the physics behind these mostly demanding techniques, or they are collections of review articles on advanced topics. Books that introduce widespread techniques such as fluorescence resonance energy transfer, stimulated emission depletion, or structured illumination microscopy together with the required basics and theory are relatively rare. Even basic optical theory, such as the Fourier theory of optical imaging or topics such as the sine condition, is seldom introduced from scratch. With this book, we have tried to fill this gap.

Is this book for you?

The book is aimed at advanced undergraduate and graduate students of the biosciences and at researchers entering the field of quantitative microscopy. As such readers come from across the natural sciences, that is, physics, biology, chemistry, and biomedicine, we addressed the book to this broad readership. Readers will definitely profit from a sound knowledge of physics and mathematics, which allows diving much deeper into the material. However, all authors are experienced in teaching university and summer courses on light microscopy and have explored for many years how best to present the required knowledge. Hopefully, you will find that they came upon good solutions. In case you see room for improvement or encounter mistakes, please let me know.

How should you read the book?

Generally, there are two approaches. Students who require in-depth knowledge should begin at their current level of knowledge, either with Chapter 1 (Introduction to Optics and Photophysics) or Chapter 2 (Principles of Light Microscopy). Beginners should initially omit advanced topics, for example, the section on differential interference contrast (Section 2.6.4). In principle, the book is readable without the “boxes”; however, they help in developing a good understanding of theory and history. Readers should then proceed through Chapter 3 (Fluorescence Microscopy), Chapter 4 (Labeling Techniques), and Chapter 5 (Confocal Microscopy). Chapters 6–10 cover advanced topics and special techniques and should be studied according to interest and requirement.

Alternatively, readers familiar with the subject may certainly skip the introductory Chapters 1–3 and advance directly to the more specialized chapters. In order to maintain the argumentation in these chapters, we repeated certain basic topics in their introductions.

Website of the book

There is a Web site supporting this book (http://www.wiley-vch.de/home/fluorescence_microscopy/). Here, lecturers will find all figures in JPG format for use in courses and additional illustrative material such as movies.

Personal remarks on the history of this book

I saw one of the very first commercial laser scanning microscopes in the lab of Tom Jovin at the Max Planck Institute of Biophysical Chemistry in Göttingen during the late 1980s and was immediately fascinated by the images of that instrument. In the early 1990s, confocal laser scanning microscopes began to spread across biological and biomedical labs. At that time, they usually required truly dedicated scientists for proper operation, filled a small laboratory, and were governed by computers as big as refrigerators. The required image processing demanded substantial investments. That was when Reiner Peters and I noticed that biologists and medical scientists needed an introduction to the physical background of optics, spectroscopy, and image analysis to cope with the new techniques. Hence, we offered a lecture series entitled “Microscopes, Lasers and Computers” at the Institute of Medical Physics and Biophysics of the University of Münster, Germany, which was very well received. We began to write a book on microscopy containing the material we had presented, one that would not be as comprehensive as Jim Pawley's “Handbook” but would offer an accessible path to modern quantitative microscopy. We invested almost one year in this enterprise, but then gave up … in view of numerous challenging research topics that kept us busy, the insight into the dimension of the task, and the reality of career requirements. We realized we could not do it alone.

In 2009, Reiner Peters, now at The Rockefeller University in New York, organized a workshop on “Watching the Cellular Nanomachinery at Work” and gathered some of the current leaders in microscopy to report on their latest technical and methodical advances. On this occasion, he noted that the book that had been in our minds 15 years earlier was still missing … Like many years before, I was excited by the idea of creating this book, and together we directly approached the lecturers of the meeting and asked for introductory book chapters in their respective fields of expertise. Luckily, a number of them responded positively and began the struggle for an introductory text. Unfortunately, Reiner could not continue as an editor of the book due to other obligations, so I had to finish our joint project on my own. Here is the result, and I hope very much that the authors succeeded in transmitting their ongoing fascination for microscopy. To me, microscopy appears as a century-old tree that entered another phase of growth about 40 years ago, and since then has shown almost every year a new branch with a surprising and remarkable technique offering exciting and fresh scientific fruits.

Acknowledgments

I would like to thank some people who contributed directly and indirectly to this book. First of all, I would like to name Prof. Dr. Reiner Peters. As mentioned, he invited me to the first steps to teach microscopy and to the first attempt to write this book. Finally, he launched the initiative to create this book as an edited work. Furthermore, I would like to thank all the authors, who invested a lot of their expertise, time, and energy in writing, correcting, and finalizing their respective chapters. All are much respected colleagues, and some of them became friends during this project. Also, I thank some people who were previously collaborators or colleagues and helped me to learn more and more about microscopy: Prof. Dr. Reinhard Schweitzer-Stenner, Dr. Donna Arndt-Jovin, Prof. Dr. Tom Jovin, Dr. Thorsten Kues, and Prof. Dr. David Grünwald. Likewise, I gratefully acknowledge Brinda Luiz from Laserwords, India, for excellent project management, and the commissioning editor and the project editor responsible at Wiley-VCH, Dr. Gregor Cicchetti and Anne du Guerny respectively, who sincerely supported this project and not only showed professional patience when yet another delay occurred but also pushed when required. Last but not least, I would like to thank my collaborator Dr. Jan Peter Siebrasse and my wife Martina, who patiently listened to my concerns, when still another problem occurred.

Bonn, March 2013

Ulrich Kubitscheck

List of Contributors

Roman Amberger
Heidelberg University
Applied Optics and Information Processing
Kirchhoff-Institute for Physics
Im Neuenheimer Feld 227
69120 Heidelberg
Germany
Markus Axmann
Department of New Materials and Biosystems
Max Planck Institute for Intelligent Systems
Heisenbergstrasse 3
70569 Stuttgart
Germany
Gerrit Best
Heidelberg University
Applied Optics and Information Processing
Kirchhoff-Institute for Physics
Im Neuenheimer Feld 227
69120 Heidelberg
Germany
Daniel Bélanger
Université du Québec à Montréal
Département de Chimie
case postale 8888
succursale centre-ville
Montréal
Québec H3C 3P8
Canada
and
Heidelberg University Hospital
Department of Ophthalmology
Im Neuenheimer Feld 400
69120 Heidelberg
Germany
Joerg Bewersdorf
Yale University
Department of Cell Biology
333 Cedar Street
New Haven
CT 06510
USA
and
Yale University
Department of Biomedical Engineering
333 Cedar Street
New Haven
CT 06510
USA
Christoph Cremer
Heidelberg University
Applied Optics and Information Processing
Kirchhoff-Institute for Physics
Im Neuenheimer Feld 227
69120 Heidelberg
Germany
and
The Jackson Laboratory
Institute for Molecular Biophysics
600 Main Street
Bar Harbor
Maine 04609
USA
and
Institute of Molecular Biology gGmbH (IMB)
Ackermannweg 4
55128 Mainz
Germany
Jurek W. Dobrucki
Jagiellonian University
Division of Cell Biophysics
Faculty of Biochemistry
Biophysics and Biotechnology
ul. Gronostajowa 7
30-387 Kraków
Poland
Travis J. Gould
Yale University
Department of Cell Biology
333 Cedar Street
New Haven
CT 06510
USA
Achim Hartschuh
Ludwig-Maximilians-Universität München
Center for Nanoscience (CeNS)
Physical Chemistry
Department of Chemistry
Butenandtstrasse 5-13
Gerhard-Ertl-Building
81377 Munich
Germany
Rainer Heintzmann
Institute of Physical Chemistry
Abbe Center of Photonics
Friedrich Schiller University Jena
Helmholtzweg 4
07743 Jena
Germany
and
Institute of Photonic Technology
Microscopy Research Department
Albert-Einstein Strasse 9
07745 Jena
Germany
and
King's College London
Randall Division of Cell and Molecular Biophysics
NHH, Guy's Campus
London SE1 1UL
UK
Ulrich Kubitscheck
Rheinische Friedrich Wilhelms-Universität Bonn
Institute for Physical and Theoretical Chemistry
Department of Biophysical Chemistry
Wegeler Strasse 12
53115 Bonn
Germany
Don C. Lamb
Ludwig-Maximilians-Universität München
Center for Nanoscience (CeNS)
Physical Chemistry
Department of Chemistry
Butenandtstrasse 5-13
Gerhard-Ertl-Building
81377 Munich
Germany
and
Ludwig-Maximilians-Universität München
Munich Center for Integrated Protein Science (CiPSM)
Butenandtstrasse 5-13
D-81377 Munich
Germany
and
University of Illinois at Urbana-Champaign
Department of Physics
1110 West Green Street
Urbana, IL 61801
USA
Josef Madl
Great Lakes Energy Institute
Institute of Biology II and Center for Biological Signaling Studies (BIOSS)
79104 Freiburg
Germany
Nikolaus Naredi-Rainer
Ludwig-Maximilians-Universität München
Center for Nanoscience (CeNS)
Department of Chemistry
Physical Chemistry
Butenandtstrasse 5-13
Gerhard-Ertl-Building
81377 Munich
Germany
Gerd Ulrich Nienhaus
Karlsruhe Institute of Technology (KIT)
Institute of Applied Physics and Center for Functional Nanostructures
Wolfgang-Gaede-Strasse 1
D-76131 Karlsruhe
Germany
and
University of Illinois at Urbana-Champaign
Department of Physics
1110 West Green Street
Urbana, IL 61801
USA
Karin Nienhaus
Karlsruhe Institute of Technology (KIT)
Institute of Applied Physics and Center for Functional Nanostructures
Wolfgang-Gaede-Strasse 1
D-76131 Karlsruhe
Germany
Patrina A. Pellett
Yale University
Department of Cell Biology
333 Cedar Street
New Haven
CT 06510
USA
and
Yale University
Department of Chemistry
225 Prospect Street
New Haven
CT 06510
USA
Reiner Peters
Rockefeller University
1230 York Avenue
New York, NY 10065
USA
Jens Prescher
Ludwig-Maximilians-Universität München
Center for Nanoscience (CeNS)
Department of Chemistry
Physical Chemistry
Butenandtstr. 5-13
Gerhard-Ertl-Building
81377 Munich
Germany
Gerhard J. Schütz
Vienna University of Technology
Institute of Applied Physics
Wiedner Hauptstrasse 8-10
1040 Wien
Austria
Fred S. Wouters
University Medicine Göttingen
Laboratory for Molecular and Cellular Systems
Department of Neuro- and Sensory Physiology
Centre II
Physiology and Pathophysiology
Humboldtallee 23
37073 Göttingen
Germany

1

Introduction to Optics and Photophysics

Rainer Heintzmann

In this chapter, we first introduce the properties of light as a wave by discussing interference, which explains the laws of refraction, reflection, and diffraction. We then discuss light in the form of rays, which leads to the laws of lenses and the ray diagrams of optical systems. Finally, the concept of light as photons is addressed, including the statistical properties of light and the properties of fluorescence.

For a long time, it was believed that light travels in straight lines, called rays. With this theory, it is easy to explain brightness and darkness, and effects such as shadows or even the fuzzy boundary of shadows caused by the extent of the sun in the sky. In the seventeenth century, it was discovered that light can sometimes “bend” around sharp edges, a phenomenon called diffraction. To explain diffraction, light has to be described as a wave. In the beginning, this wave theory of light – based on Christiaan Huygens' (1629–1695) work and expanded (in 1818) by Augustin Jean Fresnel (1788–1827) – was not accepted. Poisson, one of the judges evaluating Fresnel's entry in a science competition, tried to ridicule it by showing that Fresnel's theory predicts a bright spot, now called Poisson's spot, in the middle of the dark shadow behind a round disk object, which he obviously considered wrong. Another judge, Arago, then showed that this spot is indeed seen when the measurement is done carefully enough. This was a phenomenal success of the wave description of light. In addition, there was also the corpuscular theory of light (Pierre Gassendi *1592, Sir Isaac Newton *1642), which described light as particles. With Einstein's explanation of the photoelectric effect, the existence of these particles, called photons, could no longer be denied: the effect clearly shows that a minimum energy per particle is required, as opposed to a minimum strength of an electric field. Such photons can even be “seen” directly as individual spots when imaging a very dim light distribution with a very sensitive film or with modern image-intensified or EMCCD cameras.

Since then, the description of light has maintained this dual (wave and particle) nature. When light interacts with matter, one often has to consider its quantum (particle) nature. The propagation of these particles, however, is described by Maxwell's wave equations of electrodynamics, which identify oscillating electric fields as the waves responsible for what we call light. The wave–particle duality of light still surprises with interesting phenomena and is an active field of research (known as quantum optics). It is expected that exploiting related phenomena will form the basis of future devices such as quantum computers, quantum teleportation, and even microscopes based on quantum effects.

To understand the behavior of light, the concepts of waves are often required. Therefore, we start by introducing an effect that is observed only when the experiment is designed carefully: interference.

Understanding the basic concepts detailed below requires only a minimal amount of mathematics; previous knowledge of complex numbers is not necessary. For quantitative calculations, however, the concepts of complex numbers will be needed.

1.1 Interference: Light as a Wave

Figure 1.1 Interference. (a) In the interferometer of Mach–Zehnder type, the beam is split equally into two paths by a beam splitter and, after reflection, rejoined with a second beam splitter. If the optical path lengths of the split beams are adjusted to be exactly equal, constructive interference results in the right path, whereas the light in the other path cancels by destructive interference. (b) Destructive interference. If the electric fields of two interfering light waves (top and bottom) are always of opposite value (π phase shift), the waves cancel and the result is a zero electric field and, thus, also zero intensity. This is termed destructive interference.

The explanation of this effect of interference lies in the wave nature of light: brightness plus brightness can indeed yield darkness (called destructive interference) if the two superimposed electromagnetic waves always oscillate in opposite directions, that is, if they have opposite phases (Figure 1.1b). The frequency ν is given as the reciprocal of the time between two successive maxima of this oscillation. The amplitude of a wave is given by how much the electric field oscillates while the wave is passing by. The square of this amplitude is what we perceive as the irradiance or brightness (sometimes also referred to, slightly inaccurately, as the intensity) of the light. If one of the two waves is blocked, the destructive cancellation ceases and we obtain 25% brightness, as ordinarily expected when the remaining 50% of the light is again split in two.

Indeed, if we remove our hand and instead delay one of the two waves by only half a wavelength (e.g., by introducing a small amount of gas into only one of the two beam paths), the relative phase of the waves changes, which can invert the situation: we observe constructive interference (on the top side) where we previously had destructive interference, and darkness where previously there was light.
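This dependence of intensity on relative phase can be checked numerically. The following minimal Python sketch (not from the book; amplitude and period are arbitrary choices) superposes two equal-amplitude waves with an adjustable phase shift and compares time-averaged intensities:

```python
import numpy as np

# Superpose two equal-amplitude waves with a relative phase shift phi and
# compare the time-averaged intensity, which is proportional to the
# squared electric field.
A = 1.0                              # field amplitude (arbitrary units)
omega = 2 * np.pi                    # angular frequency for a unit period
t = np.linspace(0.0, 1.0, 100_000)   # one full oscillation period

def mean_intensity(phi):
    """Time-averaged intensity of two superposed waves with phase shift phi."""
    field = A * np.sin(omega * t) + A * np.sin(omega * t + phi)
    return np.mean(field ** 2)

i_single = np.mean((A * np.sin(omega * t)) ** 2)   # one wave alone
i_constructive = mean_intensity(0.0)               # waves in phase
i_destructive = mean_intensity(np.pi)              # pi phase shift

print(i_constructive / i_single)  # 4: amplitudes add, intensity quadruples
print(i_destructive / i_single)   # 0: the fields cancel everywhere
```

Note that blocking one wave leaves a single beam with intensity i_single, one quarter of the constructive-interference intensity, matching the 25% brightness argument above.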

The device described above is called a Mach–Zehnder interferometer. Such interferometers are extremely sensitive measurement devices, capable of detecting a sub-nanometer relative delay between the light waves passing through the two arms of the interferometer, caused, for example, by a very small concentration of gas in one arm.

Sound is also a wave, but in this case, instead of the electromagnetic field, it is the air pressure that oscillates at a much slower rate. In the case of light, it is the electric field oscillating at a very high frequency. The electric field is also responsible for hair clinging to a synthetic jumper, which has been electrically charged by friction or walking on the wrong kind of floor with the wrong kind of socks and shoes. Such an electric field has a direction not only when it is static, as in the case of the jumper, but also when it is dynamic, as in the case of light. In the latter case, the direction corresponds to the direction of polarization of the light, which is discussed later.

Waves, such as water waves, are more commonly observed in nature. Although these are only a two-dimensional analogy to the electromagnetic wave, their crests (the top of each wave) are a good visualization of what is referred to as a phase front. Phase fronts in three-dimensional electromagnetic waves are the surfaces of equal phase (e.g., a local maximum of the electric field). Similar to what is seen in water waves, such phase fronts travel along with the wave at the speed of light. The waves we observe close to the shore can serve as a 2D analogy to what is called a plane wave, whereas the waves seen in a pond when we throw a stone into the water are a two-dimensional analogy to a spherical wave.

When discussing the properties of light, one often omits the detail that the electric field is a vectorial quantity and instead talks about the scalar “amplitude” of the wave. This is a sloppy but very convenient way of describing light when polarization effects do not matter for the experiment under consideration. Light is called a transverse wave because, in vacuum and in homogeneous isotropic media, the electric field is always oriented perpendicular to the local direction of propagation of the light. However, this is merely a crude analogy to waves in media such as sound, where the particles of the medium actually move. In the case of light, no movement of matter is necessary for its description, as the oscillating quantity is the electric field, which can even propagate in vacuum.

The frequency ν (measured in hertz, i.e., oscillations per second; see also Figure 1.1b) at which the electric field vibrates defines the color of the light. Blue light has a higher frequency, and thus a higher energy hν per photon, than green, yellow, red, and infrared light. Here, h is Planck's constant and ν is the frequency of the light. Because the speed of light in vacuum does not depend on its color, the vacuum wavelength λ is short for blue light (∼450 nm) and gets longer for green (∼520 nm), yellow (∼580 nm), red (∼630 nm), and infrared (∼800 nm) light, respectively. Note also that the same wave theory of light governs all wavelength ranges of the electromagnetic spectrum, from radio waves through microwaves, terahertz waves, infrared, visible, ultraviolet, vacuum-ultraviolet, and soft and hard X-rays to gamma rays.
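The relation between wavelength, frequency, and photon energy E = hν can be made concrete using the approximate wavelengths quoted above; a minimal Python sketch (not from the book, using standard values of the constants):

```python
# Frequency nu = c/lambda and photon energy E = h*nu for the approximate
# vacuum wavelengths quoted in the text.
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light in vacuum, m/s
eV = 1.602e-19  # one electronvolt in joules

colors = {"blue": 450e-9, "green": 520e-9, "yellow": 580e-9,
          "red": 630e-9, "infrared": 800e-9}

photon_energy_ev = {}
for color, wavelength in colors.items():
    nu = c / wavelength                     # frequency in Hz
    photon_energy_ev[color] = h * nu / eV   # photon energy in eV
    print(f"{color:9s} nu = {nu:.2e} Hz, E = {photon_energy_ev[color]:.2f} eV")
```

A blue photon carries roughly 2.8 eV, an infrared photon only about 1.5 eV, illustrating why blue light is the more energetic end of the visible spectrum.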

In many cases, we deal with linear optics, where all amplitudes have the time dependency exp(iωt). Therefore, this time-dependent term is often omitted, and one concentrates only on the spatial dependency, keeping in mind that each phasor always rotates with time.

1.2 Two Effects of Interference: Diffraction and Refraction

We now know the important effect of constructive and destructive interference of light, explained by its wave nature. As discussed below, the wave nature of light is capable of explaining two further aspects of light: diffraction and refraction. Diffraction is a phenomenon seen when light interacts with a very fine (often periodic) structure such as a compact disk (CD). The light emerges at different angles depending on its wavelength, giving rise to the colorful appearance of light diffracted from the surface of a CD. Refraction, on the other hand, refers to the effect whereby light rays seem to change their direction when the light passes from one medium to another. This is seen, for example, when trying to look at a scene through a glass full of water.

Even though these two effects may look very different at first glance, both are ultimately based on interference, as discussed here. Diffraction is most prominent when light illuminates structures (such as a grating) with a feature size (grating constant) similar to the wavelength of light. In contrast, refraction (e.g., the bending of light rays caused by a lens) dominates when the different media (such as air and glass) have constituents (molecules) that are much smaller than the wavelength of light (homogeneous media), but the homogeneous regions themselves are much larger in feature size (e.g., the size of a lens) than the wavelength.

To describe diffraction, it is useful to first consider the light as emitted by a point-like source. Let us look at an idealized source, which is infinitely small and emits only a single color of light. Such a source emits a spherically diverging wave. In vacuum, the energy flows outward through any surface around the source without being absorbed; thus, spherical shells at different radii must carry the same integrated intensity. Because the surface area of these shells increases with the square of the distance from the source, the light intensity decreases with the inverse square of the distance, such that energy is conserved.
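This energy-conservation argument is easy to verify numerically; the following sketch assumes an illustrative 1 mW point source (a value not from the text):

```python
import math

# Energy conservation for a spherical wave: the same total power P flows
# through every spherical shell around the source, so the intensity
# (power per unit area) falls off as 1/r^2.
P = 1e-3  # emitted power in watts (illustrative 1 mW source)

def intensity(r):
    """Intensity on a spherical shell of radius r, in W/m^2."""
    return P / (4 * math.pi * r ** 2)

# Doubling the distance quarters the intensity:
print(intensity(2.0) / intensity(1.0))
# Every shell carries the same total power, 4*pi*r^2 * I(r) = P:
print(4 * math.pi * 5.0 ** 2 * intensity(5.0))
```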

To describe diffraction, Christiaan Huygens had an ingenious idea: to find out how a wave will continue on its path, we determine it at a certain border surface and can then place virtual point emitters everywhere on this surface, letting the light of these emitters interfere. The resulting interference pattern reconstitutes the original wave beyond that surface. This “Huygens' principle” nicely explains why parallel waves stay parallel, as we find constructive interference only in the direction following the propagation of the wave. Strictly speaking, one would also find a backward-propagating wave; however, when Huygens' idea is formulated in a more rigorous way, the backward-propagating wave is avoided. Huygens' principle is very useful when trying to predict what happens when a wave hits a structure with a feature size comparable to the wavelength of light, for example, a slit aperture or a diffraction grating. In Figure 1.4, we consider the example of diffraction at a grating with the smallest repetition distance D, designated the grating constant. As seen from the figure, circular waves corresponding to Huygens' wavelets originate at each aperture and join to form new wave fronts, that is, plane waves oriented in various directions. These directions of constructive interference need to fulfill the following condition (Figure 1.4):

D sin α = Nλ

with N denoting the integer order of diffraction, that is, the number of wavelengths λ of path difference that yields the same phase (constructive interference) at angle α with respect to the incident direction. Note that the angle α of the diffracted waves depends on the wavelength and thus on the color of the light. In addition, note that the crests of the waves form connected lines (indicated as dash-dotted lines), which are called phase fronts or wave fronts, whereas the dashed lines perpendicular to these phase fronts can be thought of as corresponding to the light rays of geometrical optics.

A CD is a good example of such a diffractive structure. White light is a mixture of light of many wavelengths. Thus, illuminating the CD with a white light source from a sufficient distance will cause only certain colors to be diffracted from certain places on the disk into our eyes. This leads to the observation of the beautiful rainbow-like color effect when looking at it.

Huygens' idea can be slightly modified to explain what happens when light passes through a homogeneous medium. Here, the virtual emitters are replaced with the existing molecules inside a medium. However, contrary to Huygens' virtual wavelets, in this case, each emission from each molecule is phase shifted with respect to the incoming wave (Figure 1.5). This phase shift depends on the exact nature of the material. It stems from electrons being wiggled by the oscillating electric field. The binding to the atomic nuclei will cause a time delay in this wiggling. These wiggling electrons constitute an accelerated charge that will radiate an electromagnetic wave.

Even though each scattering molecule generates a spherical wave, the superposition of all the waves scattered from molecules at random positions interferes fully constructively only in the forward direction. Thus, each very thin layer of molecules generates another parallel wave, which differs in phase from the original wave. The sum of the original sinusoidal wave and the interfering wave of scattered light results in a forward-propagating parallel wave with sinusoidal modulation, lagging, however, slightly in phase (Figure 1.5). In a dense medium, this phase delay accumulates with every new layer of material throughout the medium, giving the impression that the wave has "slowed down" in this medium. It can also be seen as an effectively modified wavelength λmedium inside the medium. This change in the wavelength is conveniently described by the refractive index (n):

n = λvacuum/λmedium

with the wavelength in vacuum λvacuum.

This is a convenient way of summarizing the effect a homogeneous medium has on the light traveling through it. Note that the temporal frequency of vibration of the electric field does not depend on the medium.

However, the refractive index may itself depend on the frequency of the light, n(ν), and thus on the color or wavelength of the light; this effect is called dispersion.

Now that we understand how the interference of the waves scattered by the molecules of the medium can explain the change in effective wavelength, we can use this concept to explain the effect of refraction, the apparent bending of light rays at the interface between different media. Requiring the phase fronts to be continuous across the interface between two media with refractive indices n1 and n2 yields (Figure 1.6)

n1 sin α1 = n2 sin α2

where α1 and α2 are the angles between the direction of propagation of the plane wave and the line orthogonal to the medium's surface, called the surface normal.

Figure 1.6 Refraction as an effect of interference. This figure derives Snell's law of refraction from the continuity of the electric field over the border between the two different materials.

This is Snell's famous law of refraction, which forms the foundation of geometrical optics.
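Snell's law, n1 sin α1 = n2 sin α2, can be turned directly into a small refraction calculator. A minimal sketch (function and parameter names are illustrative, not from the book):

```python
import math

def refract(n1, n2, alpha1_deg):
    """Apply Snell's law, n1*sin(alpha1) = n2*sin(alpha2).
    Returns the refracted angle alpha2 in degrees, or None if no
    transmitted wave exists (total internal reflection)."""
    s = n1 * math.sin(math.radians(alpha1_deg)) / n2
    if abs(s) > 1.0:
        return None  # sin(alpha2) would exceed 1: total internal reflection
    return math.degrees(math.asin(s))

# Air (n = 1.0) into glass (n = 1.52): the ray bends toward the normal.
a_glass = refract(1.0, 1.52, 45.0)   # about 27.7 degrees
# Glass into air beyond the critical angle (about 41.1 degrees):
a_tir = refract(1.52, 1.0, 50.0)     # None: totally internally reflected
```

Note that the critical angle emerges naturally: going from the denser to the less dense medium, sin α2 reaches 1 before α1 does, beyond which no transmitted ray exists.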

We can now move to the concept of light rays as commonly drawn in geometrical optics. Such a ray represents a parallel wave with a cross section small enough to look like a ray, but large enough not to show substantial broadening by diffraction. In practice, the beam emitted by a laser pointer serves as a good example.

Box 1.2 Polarization

As described above, light in a homogeneous isotropic medium is a transverse electromagnetic wave. This means that the vector of the electric field is perpendicular to the direction of propagation. Above, we simplified this model by introducing a scalar amplitude, that is, we ignored the direction of the electric field. In many cases, we do not care much about this direction; however, there are situations where the direction of the electric field is important.

One example is the excitation of a single fluorescent dye that is fixed in orientation, for example, integrated rigidly into the cell membrane and oriented perpendicular to it. Such a fluorescent dye has a preferred orientation of the oscillating electric field of the light wave with which it can best be excited; this direction is referred to as its transition dipole moment. If the electric field oscillates perpendicularly to this direction, the molecule cannot be excited.

When referring to the way the electric field in a light wave oscillates, one specifies its mode of polarization.

For a wave of a single color, every vector component of the electric field Ex, Ey, and Ez, oscillates at the same frequency, but each component can have an independent phase and magnitude. This means that we can use the same concept of the complex-valued amplitude to describe each field vector component.

There are a variety of possible configurations, but two of them are noteworthy: linear and circular polarization. If we consider a homogeneous isotropic medium, the electric field vector is always perpendicular to the direction of propagation. Therefore, we can orient the coordinate system at every spatial position such that the field vector has no Ez component, that is, the polarization lies in the XY-plane.

If the Ex and Ey components now oscillate simultaneously with no phase difference, we obtain linear polarization (Figure 1.7). This means that there is a specific fixed orientation in space toward which the electric field vector points. It is worth noting that the directions that differ by 180° are actually identical directions except for a global phase shift of the oscillation by π.

Figure 1.7 Various modes of polarization. Shown here are the paths that the electric field vector describes. (a,b) Two different orientations of linearly polarized light: (a) linear along X and (b) linear 45° to X. (c) An example of left circular polarization. (d) An example of elliptically polarized light. Obviously, other directions and modes of polarization (e.g., right circular) are also possible.

In contrast, let us consider the other extreme in which the Ex and Ey components are 90° out of phase, that is, when Ex has a maximum, Ey is 0, and vice versa. In this case, the orientation of the electric field vector describes a circle, and thus, this is termed circular polarization.

Intermediate phase differences (not 0°, 90°, 180°, or 270°) yield elliptic polarization. Note that we again obtain linear polarization if the field vectors are 180° out of phase (Figure 1.7).
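The distinction between linear, circular, and elliptic polarization can be made algorithmic using the complex amplitudes of the Ex and Ey components introduced above. A small sketch (function name and tolerance thresholds are our own):

```python
import cmath
import math

def polarization_state(ex, ey):
    """Classify polarization from the complex amplitudes of Ex and Ey.
    Linear: one component vanishes, or the phases differ by 0 or 180 deg.
    Circular: 90 deg phase difference with equal magnitudes.
    Everything else: elliptic."""
    if abs(ex) < 1e-12 or abs(ey) < 1e-12:
        return "linear"                      # only one component oscillates
    dphi = cmath.phase(ey) - cmath.phase(ex)             # relative phase
    dphi = (dphi + math.pi) % (2.0 * math.pi) - math.pi  # wrap to (-pi, pi]
    rem = abs(dphi) % math.pi
    if min(rem, math.pi - rem) < 1e-9:
        return "linear"                      # in phase or 180 deg out of phase
    if math.isclose(abs(dphi), math.pi / 2.0, abs_tol=1e-9) \
            and math.isclose(abs(ex), abs(ey)):
        return "circular"                    # 90 deg shift, equal magnitudes
    return "elliptic"
```

For example, equal amplitudes in phase give 45° linear polarization, a 90° phase shift with equal amplitudes gives circular polarization, and unequal amplitudes with a 90° shift give an ellipse, matching the cases of Figure 1.7.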

We now discuss a number of effects where polarization of light plays a significant role.

Materials such as glass reflect a certain amount of light at their surface, even though they are essentially 100% transparent once the light is inside the material. At perpendicular incidence, about 4% of the incident light is reflected, while at other angles, this amount varies. More noticeably, at oblique angles, the amount and the phase of the reflected light strongly depend on the polarization of the incident light. This effect is summarized by the Fresnel reflection coefficients (Figure 1.8). There is a specific angle at which the component of the electric field parallel to the plane of incidence (the p-component; the plane in which the incident and reflected beams lie) is entirely transmitted. This angle is called the Brewster angle (Figure 1.8a). Thus, the light reflected at this angle is 100% polarized in the direction perpendicular to the plane of the beams (s-polarization, from the German term senkrecht for perpendicular). In addition, note that the Fresnel coefficients for the glass–air interface predict a range of total internal reflection (Figure 1.8b).

Figure 1.8 An example of the reflectivity of an air–glass interface (a) as described by the Fresnel coefficients dependent on the direction of polarization. Parallel means that the polarization vector is in the plane that the incident and the reflected beam would span, whereas perpendicular means that the polarization vector is perpendicular to this plane. The incident angle of 0° means the light is incident perpendicular to the interface. The Brewster angle is the angle at which the parallel polarization is perfectly transmitted into the glass (no reflected light), and (b) depicts the corresponding situation on the glass to air interface, which leads to the case of total internal reflection for a range of supercritical angles.
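The curves in Figure 1.8 follow from the Fresnel equations. A minimal sketch (assuming non-magnetic media; the function name is our own) reproduces the ~4% normal-incidence reflection, the Brewster angle, and total internal reflection:

```python
import math

def fresnel_reflectance(n1, n2, theta_deg):
    """Intensity reflectances (Rs, Rp) of a planar interface between
    media n1 and n2, from the Fresnel equations for s- (perpendicular)
    and p- (parallel) polarization."""
    t1 = math.radians(theta_deg)
    s = n1 * math.sin(t1) / n2
    if abs(s) > 1.0:
        return 1.0, 1.0     # supercritical angle: total internal reflection
    t2 = math.asin(s)       # refraction angle from Snell's law
    rs = (n1 * math.cos(t1) - n2 * math.cos(t2)) / \
         (n1 * math.cos(t1) + n2 * math.cos(t2))
    rp = (n2 * math.cos(t1) - n1 * math.cos(t2)) / \
         (n2 * math.cos(t1) + n1 * math.cos(t2))
    return rs ** 2, rp ** 2

# Normal incidence on glass: both polarizations reflect about 4%.
rs0, rp0 = fresnel_reflectance(1.0, 1.52, 0.0)
# At the Brewster angle, arctan(n2/n1) ~ 56.7 deg, the p-reflectance vanishes.
brewster_deg = math.degrees(math.atan(1.52 / 1.0))
rs_b, rp_b = fresnel_reflectance(1.0, 1.52, brewster_deg)
```

Evaluating the same function with n1 = 1.52 and n2 = 1.0 for angles above roughly 41° returns total reflection, reproducing the supercritical regime of Figure 1.8b.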

There are crystalline materials that are not isotropic, that is, their molecules or unit cells have an asymmetry that leads to different refractive indices for different polarization directions (birefringence). Especially noteworthy is the fact that an input beam entering such a material will usually be split into two beams traveling in different directions inside the birefringent crystal. By cutting crystal wedges along different directions and joining them together, one can make a Wollaston or a Nomarski prism, in which the beams leaving the crystal are slightly tilted with respect to each other for p- and s-polarization. Such prisms are used in a microscopy mode called differential interference contrast (DIC) (Chapter 2).

For the concept of a complex amplitude, we introduced the phasor diagram as an easy way to visualize it. If we want to extend this concept to full electric field vectors and the effects of polarization, we need to draw a phasor for each of the oscillating electric field components. The relative phase difference (the angle of the phasors to the real axis) determines the state of polarization.

When dealing with high-performance microscope objectives, light will often be focused onto the sample at very high angles. This usually poses a problem for methods such as polarization microscopy, because for some rays, the polarization is distorted for geometrical reasons and because of the influence of the Fresnel coefficients. Especially affected are the rays at high angles positioned at 45° to the orientation defined by the linear polarization. This effect is seen, for example, when trying to achieve light extinction in a DIC microscope (without a sample): a "Maltese cross" becomes visible, stemming from such rays with insufficient extinction.

There are ways to overcome this problem of high-aperture depolarization. One such possibility is an ingenious design of a polarization rectifier based on an air-meniscus lens (Inoué and Hyde, 1957). Another possibility to correct the depolarization is the use of appropriate programmable spatial light modulators in the conjugate back focal plane of an objective.

1.3 Optical Elements

In this section, we consider the optical properties of several optical elements: lenses, mirrors, pinholes, filters, and chromatic reflectors.

1.3.1 Lenses

Here, we analyze a few situations to understand the general behavior of lenses. In principle, we could use Snell's law to calculate the shape of an ideal lens to achieve its focusing function. However, this would be outside the scope of this chapter, and we simply state that a lens with refractive index n and radii of curvature R1 and R2 (each positive for a convex surface) focuses parallel light to a point at the focal distance f as given by Hecht (2002)

1/f = (n − 1)[1/R1 + 1/R2 − (n − 1)d/(nR1R2)]

with d denoting the thickness of the lens measured at its center on the optical axis. The above equation is called the lensmaker's equation for air. If the lenses are thin and the radii of curvature are large, the term containing d/(R1R2) can be neglected, yielding the equation for "thin" lenses:

1/f = (n − 1)(1/R1 + 1/R2)
This approximation is usually made and the rules of geometrical optics as stated below apply.
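Under the sign convention used above (both radii positive for convex surfaces), the thin-lens formula 1/f = (n − 1)(1/R1 + 1/R2) is easy to evaluate. A sketch with an illustrative BK7-like example (numbers are our own):

```python
def thin_lens_focal_length(n, r1, r2):
    """Focal length from the thin-lens lensmaker's equation,
    1/f = (n - 1) * (1/R1 + 1/R2), with both radii of curvature
    positive for convex surfaces. Radii and the returned focal
    length share the same unit."""
    return 1.0 / ((n - 1.0) * (1.0 / r1 + 1.0 / r2))

# Symmetric biconvex lens in BK7-like glass (n ~ 1.52),
# both surfaces with a 100 mm radius of curvature:
f_mm = thin_lens_focal_length(1.52, 100.0, 100.0)   # about 96.2 mm
```

A plano-convex lens can be treated by letting one radius go to infinity, which simply removes its 1/R term.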

The beauty of geometrical optics is that one can construct ray diagrams and can graphically work out what happens in an optical system. In Figure 1.9a, it can be seen how all the rays parallel to the optical axis focus at the focal distance of the lens, as this is the definition of the focus of a lens. The optical axis refers to the axis of symmetry of the lenses as well as to the general direction of propagation of the rays. A spherically converging wave is generated behind the lens, which then focuses at the focal point of the lens. In Figure 1.9b, it can be seen that this is also true for parallel rays entering the lens at an oblique angle, as they also get focused in the same plane. What can also be seen here are two more basic rays used for geometrical construction of ray diagrams. The ray going through the center of a thin lens is always unperturbed. This is easily understood, as the material is oriented at the same angle on its input and exit side. For a thin lens, this “slab of glass” has to be considered as infinitely thin, thus yielding no displacement effect of this central ray and we can draw the ray right through the center of the lens.

Figure 1.9 Focus of a lens under parallel illumination. (a) Illumination parallel to the optical axis. (b) Parallel illumination beams tilted with respect to the optical axis, leading to a focus in the same plane but at a distance from the optical axis.

The other important ray used in this geometrical construction is the ray passing through the front focal point of the lens. In geometrical optics of thin lenses, lenses are always symmetrical; thus, the front focal distance of a thin lens is the same as the back focal distance.

The principle of optical reciprocity states that one can always reverse the direction of the light rays in geometrical optics and obtain identical ray paths. Strictly speaking, this is not true for every optical setup: absorption, for example, will not turn into amplification when the rays are propagated backward.

However, from this principle, it follows that if an input ray parallel to the optical axis always goes through the image focal point on the optical axis, a ray (now directed backward) going through such a focal point on the optical axis will end up being parallel to the optical axis on the exit side of the lens.

Parallel light is often referred to as coming from sources at infinity, as this is the limiting case scenario when moving a source further and further away. Hence, a lens focuses an image at infinity to its focal plane because each source at an “infinite distance” generates a parallel wave with its unique direction. Thus, a lens will “image” such sources to unique positions in its focal plane. The starry night sky is a good example of light sources at almost infinite distance. Therefore, telescopes have their image plane at the focal plane of the lens.

In a second example (Figure 1.10), we now construct the image of an object at a finite distance. As we consider this object (with its height S) to be emitting the light, we can draw any of its emitted rays. The trick now is to draw only those rays for which we know the way they are handled by the lens:

A ray parallel to the optical axis will go through the focus on the optical axis (ray 1).

A ray going through the front focal point on the optical axis will end up being parallel to it (ray 2).

A ray going through the center of the lens will pass through it unperturbed (ray 3).

Figure 1.10 A single lens imaging an object as an example for drawing optical ray diagrams. Ray 1 starts parallel and will thus be refracted toward the back focal point by the lens, ray 2 goes through the front focal point and will thus be parallel, and ray 3 is the central ray, which is unaffected by the lens. By knowing that this idealized lens images a point (the tip of S) to a point (the tip of –MS), we can infer the refraction of rays such as ray 4.

Furthermore, we trust that lenses are capable of generating images, that is, if two rays cross each other at a certain point in the space, all rays will cross at this same point. Thus, if such a crossing is found, we are free to draw other rays from the same source point through this same crossing (ray 4).

In Figure 1.10, we see that the image is flipped over and generally has a different size (MS) compared to the original, with M denoting the magnification. We find two conditions from similar triangles as shown in the figure: the magnification is M = Di/Do, and combining the triangle relations yields the imaging equation of a single lens:

1/Do + 1/Di = 1/f

where Do is the distance from the lens to the object in front of it, Di is the distance from the lens to the image, and f is the distance from the lens to its focal plane (where it would focus parallel rays).
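The imaging equation 1/Do + 1/Di = 1/f and the magnification M = Di/Do can be combined into a small helper (names and the example numbers are illustrative):

```python
def image_position(f, d_object):
    """Solve the thin-lens imaging equation 1/Do + 1/Di = 1/f for the
    image distance Di and return (Di, M) with magnification M = Di/Do.
    All distances share one unit; a negative Di indicates a virtual
    image on the object side of the lens."""
    if d_object == f:
        raise ValueError("object in the focal plane: image at infinity")
    d_image = 1.0 / (1.0 / f - 1.0 / d_object)
    return d_image, d_image / d_object

# Object 150 mm in front of a 100 mm lens: a real, inverted image
# forms 300 mm behind the lens, magnified twofold.
d_i, m = image_position(100.0, 150.0)
```

Placing the object at 2f reproduces the familiar unit-magnification case, and an object inside the focal distance yields a negative (virtual) image distance, as in a magnifying glass.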

Using the aforementioned ingredients of optical construction, very complicated optical setups can be treated with ease. In this case, however, it is often necessary to consider virtual rays that do not physically exist. Such rays are, for example, drawn backward through a system of lenses, ignoring the lenses completely, with the sole purpose of determining where a virtual focal plane would lie (i.e., the rays after this lens system look as if they came from this virtual object). Then, new rays can be drawn from this virtual focal plane, adhering to the principal rays that are useful for the construction of image planes in the following lenses.

1.3.2 Metallic Mirror

In the examples of refraction (Figure 1.5), we considered so-called dielectric materials, in which the electrons are bound to their nuclei; this binding leads to a phase shift of the scattered wave with respect to the incident wave. For metals, the situation is different, as the valence electrons are essentially free (forming a so-called electron gas). As a result, they oscillate such that they emit a wave phase shifted by 180°, which interferes destructively with the incident wave along the direction of propagation. The reason for this 180° phase shift is that the conducting metal will always compensate for any electric field inside it by moving charges. Thus, at the interface, there is no transmitted wave, but only a wave reflected from the surface. Huygens' principle can also be applied here to explain the law of reflection, which states that the wave leaves the mirror at the same angle to the mirror normal, but on the opposite side of the normal.

1.3.3 Dielectric Mirror

A dielectric mirror consists of a stack of thin transparent layers with alternating high and low refractive indices; when the partial reflections from all the interfaces interfere constructively, very high reflectivities can be achieved for a designed range of wavelengths and angles. With such multilayer coatings, the opposite effect can also be achieved: the usual 4% reflection at each air–glass and glass–air surface of optical elements (e.g., each lens) can be significantly reduced by coating the lenses with layers of well-defined thickness and refractive indices, the so-called antireflection (AR) coating. When viewed at oblique angles, such coated surfaces often have a blue oily shimmer, as one may have seen on photographic camera lenses.
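The effect of an AR coating can be estimated with the standard single-layer quarter-wave formula, R = ((n0·ns − nc²)/(n0·ns + nc²))², where n0 is the ambient index, ns the substrate index, and nc the coating index. This textbook result is not derived in the book, and the numbers below are illustrative:

```python
import math

def quarter_wave_reflectance(n_coating, n_substrate, n_ambient=1.0):
    """Normal-incidence reflectance of a single quarter-wave AR layer:
    R = ((n0*ns - nc^2) / (n0*ns + nc^2))^2.
    R vanishes for the ideal coating index nc = sqrt(n0 * ns)."""
    num = n_ambient * n_substrate - n_coating ** 2
    den = n_ambient * n_substrate + n_coating ** 2
    return (num / den) ** 2

# Uncoated glass reflects ~4% per surface; a quarter-wave MgF2 layer
# (n ~ 1.38) reduces this to ~1.3%; the ideal index sqrt(1.52) ~ 1.23
# would suppress the reflection completely.
r_uncoated = ((1.0 - 1.52) / (1.0 + 1.52)) ** 2
r_mgf2 = quarter_wave_reflectance(1.38, 1.52)
r_ideal = quarter_wave_reflectance(math.sqrt(1.52), 1.52)
```

Because no robust solid material has an index as low as 1.23, practical single-layer coatings (such as MgF2) leave a residual reflection, which is why high-end optics use multilayer AR stacks.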

1.3.4 Pinholes

Pinholes are, for example, used in confocal microscopy to define a small area transmitting light to the spatially integrating detector. Typically, these pinholes are precisely adjustable in size, which can be achieved by translating several blades positioned behind each other. Another important application of a pinhole is to generate a uniformly wide, clean laser beam of Gaussian shape. For more details, see Section A.2.

Pinholes can be bought from several manufacturers, but it is also possible to make a pinhole using a kitchen tin foil. The tin foil is tightly folded a number of times (e.g., four times). Then a round sharp needle (e.g., a pin is fine, but do not use a syringe) is pressed into that stack, penetrating a few layers, but not going right through it. After removing the needle, the tin foil is unfolded, and by holding it in front of a bright light source, the smallest pinhole in the series of generated pinholes can be identified by eye. This is typically a good size for being used in a beam expansion telescope for beam cleanup.

1.3.5 Filters

There are two different types of filters (and combinations thereof). Absorption filters (sometimes called color glass filters) consist of a thick piece of glass in which a strongly absorbing material is embedded. An advantage is that a scratch will not significantly influence the filtering characteristics. However, a problem is that the spectral edge of the transition between absorption and transmission is usually not very steep and the transmission in the transmitted band of wavelengths is not very high. Note that when one uses the term "wavelengths" loosely, as in this case, one usually refers to the corresponding wavelength in vacuum and not the wavelength inside the material. Therefore, high-quality filters are always coated on at least one side with a multilayer structure of dielectric materials. These coatings can be tailored precisely to reflect exactly one range of wavelengths while efficiently transmitting another. A problem here is that, because such coatings work owing to interference, there is an inherent angular and wavelength dependence. In other words, a filter placed at a 45° angle will transmit a different range of wavelengths than one placed at normal incidence. In a microscope, the position in the field of view in the front focal plane of the objective lens defines the exact angle under which the light leaves the objective. Because such filters are typically placed in the space between the objective and the tube lens (the infinity space), this angle is also the angle of incidence on the filter. Thus, there could potentially be a position-dependent color sensitivity owing to this effect. However, because fluorescence spectra are usually rather broad and the angular spread for a typical field of view is only about ±2°, this effect can be safely neglected in fluorescence microscopy.
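The angular dependence can be quantified with the commonly used tilt formula for interference filters, λ(θ) = λ0·sqrt(1 − (sin θ/n_eff)²), where n_eff is an effective index of the coating stack. Both the formula and the value n_eff = 2.0 are standard rules of thumb, not taken from the book:

```python
import math

def tilted_center_wavelength(lambda0_nm, theta_deg, n_eff=2.0):
    """Blue-shifted center wavelength of an interference filter tilted
    by theta: lambda(theta) = lambda0 * sqrt(1 - (sin(theta)/n_eff)^2)."""
    s = math.sin(math.radians(theta_deg)) / n_eff
    return lambda0_nm * math.sqrt(1.0 - s * s)

# A 525 nm bandpass filter: the +/-2 deg spread over the field of view
# shifts the passband by well under 0.1 nm (negligible), whereas using
# the same coating at 45 deg incidence shifts it by some tens of nm.
shift_2deg = 525.0 - tilted_center_wavelength(525.0, 2.0)
shift_45deg = 525.0 - tilted_center_wavelength(525.0, 45.0)
```

This back-of-the-envelope estimate supports both claims in the text: the field-of-view spread is harmless, while 45° operation changes the passband substantially.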

It is important to reduce background as much as possible. Background can stem from residual fluorescence or Raman scattering generated even in glass. For this reason, optical filters always need to have their coated side pointing toward the incident light. Because the coatings usually do not reach completely to the edge of the filter, one can determine the coated side by careful inspection. When building home-built setups, one also has to ensure that no light can pass through an uncoated part of a filter, as this would have disastrous effects on the suppression of unwanted scattering.

1.3.6 Chromatic Reflectors

Similarly, chromatic reflectors are designed to reflect one range of wavelengths (usually shorter than a critical wavelength) and transmit another. Such reflectors are often called dichroic mirrors, which is a confusing and potentially misleading term, as the physical effect "dichroism," which refers to a polarization-dependent absorption, has nothing to do with it. Chromatic reflectors are manufactured by multilayer dielectric coating. The comments above about the wavelength and angular dependence apply equally well here; even more so, because at 45° incidence, the angular dependence is much stronger and the spectral edges are much softer than at angles closer to normal incidence. This has recently led microscope manufacturers to redesign setups with the chromatic reflectors operating closer to normal incidence (with an appropriate redesign of the chromatic reflectors).

1.4 The Far-Field, Near-Field, and Evanescent Waves

Consider again the amplitude distribution generated by a pointlike source, which falls off as r−1 with the distance r from the source. At a distance of several wavelengths from the center, this amplitude can be well described by a superposition of plane waves, each with the same wavelength. Close to the source, however, this approximation cannot be made: to synthesize the r−1-shaped amplitude distribution, we also need waves whose lateral wavelength components are smaller than the total wavelength. This is possible only by using an imaginary value for the axial (z) component of the wave vector, which forces these waves to decay exponentially with distance (hence the name "evanescent" components). Surprisingly, with this trick, a complete synthesis of the amplitude A remains possible, for instance, in the half-space to one side of the source. This synthesis is called the Weyl expansion. The proximity of the source thus influences the electric field in its surroundings. In contrast, a spherically focused wave in a homogeneous medium remains a far-field wave with a single wavelength and generates quite a different amplitude pattern than the emitted wave discussed above, which again shows the importance of the near-field waves in the vicinity of sources of light, inhomogeneities, or absorbers.
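The exponential decay of the evanescent components can be made quantitative. For a field component with lateral period p, the axial wave-vector component is kz = sqrt(k² − kx²), with k = 2π/λ and kx = 2π/p; for p < λ, kz becomes imaginary and the field decays as exp(−z/d) with decay length d = 1/sqrt(kx² − k²). A sketch (the function name is ours):

```python
import math

def evanescent_decay_length(wavelength, lateral_period):
    """1/e axial decay length of a field component with the given
    lateral period. kz = sqrt(k^2 - kx^2) is imaginary when the
    lateral period is smaller than the wavelength; the decay length
    is then 1/sqrt(kx^2 - k^2). Returns None for propagating
    (far-field) components. Any length unit may be used."""
    k = 2.0 * math.pi / wavelength
    kx = 2.0 * math.pi / lateral_period
    if kx <= k:
        return None                  # real kz: the component propagates
    return 1.0 / math.sqrt(kx * kx - k * k)

# Detail with a 100 nm lateral period, probed with 500 nm light, is
# carried only by components decaying within a few tens of nanometers.
d_nm = evanescent_decay_length(500.0, 100.0)   # about 16 nm
```

This illustrates why sub-wavelength detail is confined to the immediate vicinity of the source: a few tens of nanometers away, the corresponding evanescent components have essentially vanished.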

Another important point is that the near field does not dissipate energy. The oscillating electrical field can be considered as “bound” to the emitter or inhomogeneity, as long as it remains in a homogeneous medium with no absorbers present.

A few examples of such near-field effects are given as follows: