This book brings together experts in the field who present material on a number of important and growing topics, including lighting, displays, and solar concentrators. The first chapter provides an overview of the field of nonimaging and illumination optics, including terminology, units, definitions, and descriptions of the optical components used in illumination systems. The next two chapters provide material within the theoretical domain, including étendue, étendue squeezing, and the skew invariant. The remaining chapters focus on growing applications. Nonimaging optics is an evolving field, and the editor plans to update the technological progress every two to three years. The editor, John Koshel, is one of the most prominent experts in this field, and he is the right person to perform the task.
Page count: 533
Year of publication: 2012
Table of Contents
COVER
IEEE PRESS
TITLE PAGE
COPYRIGHT PAGE
DEDICATION
PREFACE
CONTRIBUTORS
GLOSSARY
PARAMETERS
ACRONYMS
CHAPTER 1 INTRODUCTION AND TERMINOLOGY
1.1 WHAT IS ILLUMINATION?
1.2 A BRIEF HISTORY OF ILLUMINATION OPTICS
1.3 UNITS
1.4 INTENSITY
1.5 ILLUMINANCE AND IRRADIANCE
1.6 LUMINANCE AND RADIANCE
1.7 IMPORTANT FACTORS IN ILLUMINATION DESIGN
1.8 STANDARD OPTICS USED IN ILLUMINATION ENGINEERING
1.9 THE PROCESS OF ILLUMINATION SYSTEM DESIGN
1.10 IS ILLUMINATION ENGINEERING HARD?
1.11 FORMAT FOR SUCCEEDING CHAPTERS
CHAPTER 2 ÉTENDUE
2.1 ÉTENDUE
2.2 CONSERVATION OF ÉTENDUE
2.3 OTHER EXPRESSIONS FOR ÉTENDUE
2.4 DESIGN EXAMPLES USING ÉTENDUE
2.5 CONCENTRATION RATIO
2.6 ROTATIONAL SKEW INVARIANT
2.7 ÉTENDUE DISCUSSION
CHAPTER 3 SQUEEZING THE ÉTENDUE
3.1 INTRODUCTION
3.2 ÉTENDUE SQUEEZERS VERSUS ÉTENDUE ROTATORS
3.3 INTRODUCTORY EXAMPLE OF ÉTENDUE SQUEEZER
3.4 CANONICAL ÉTENDUE-SQUEEZING WITH AFOCAL LENSLET ARRAYS
3.5 APPLICATION TO A TWO FREEFORM MIRROR CONDENSER
3.6 ÉTENDUE SQUEEZING IN OPTICAL MANIFOLDS
3.7 CONCLUSIONS
APPENDIX 3.A GALILEAN AFOCAL SYSTEM
APPENDIX 3.B KEPLERIAN AFOCAL SYSTEM
CHAPTER 4 SMS 3D DESIGN METHOD
4.1 INTRODUCTION
4.2 STATE OF THE ART OF FREEFORM OPTICAL DESIGN METHODS
4.3 SMS 3D STATEMENT OF THE OPTICAL PROBLEM
4.4 SMS CHAINS
4.5 SMS SURFACES
4.6 DESIGN EXAMPLES
4.7 CONCLUSIONS
CHAPTER 5 SOLAR CONCENTRATORS
5.1 CONCENTRATED SOLAR RADIATION
5.2 ACCEPTANCE ANGLE
5.3 IMAGING AND NONIMAGING CONCENTRATORS
5.4 LIMIT CASE OF INFINITESIMAL ÉTENDUE: APLANATIC OPTICS
5.5 3D MIÑANO–BENITEZ DESIGN METHOD APPLIED TO HIGH SOLAR CONCENTRATION
5.6 KÖHLER INTEGRATION IN ONE DIRECTION
5.7 KÖHLER INTEGRATION IN TWO DIRECTIONS
APPENDIX 5.A ACCEPTANCE ANGLE OF SQUARE CONCENTRATORS
APPENDIX 5.B POLYCHROMATIC EFFICIENCY
ACKNOWLEDGMENTS
CHAPTER 6 LIGHTPIPE DESIGN
6.1 BACKGROUND AND TERMINOLOGY
6.2 LIGHTPIPE SYSTEM ELEMENTS
6.3 LIGHTPIPE RAY TRACING
6.4 CHARTING
6.5 BENDS
6.6 MIXING RODS
6.7 BACKLIGHTS
6.8 NONUNIFORM LIGHTPIPE SHAPES
6.9 ROD LUMINAIRE
ACKNOWLEDGMENTS
CHAPTER 7 SAMPLING, OPTIMIZATION, AND TOLERANCING
7.1 INTRODUCTION
7.2 DESIGN TRICKS
7.3 RAY SAMPLING THEORY
7.4 OPTIMIZATION
7.5 TOLERANCING
INDEX
Copyright © 2013 by the Institute of Electrical and Electronics Engineers. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data:
Koshel, R. John, author.
Illumination engineering : design with nonimaging optics / John Koshel.
pages cm
ISBN 978-0-470-91140-2 (hardback)
1. Optical engineering. 2. Lighting. I. Title.
TA1520.K67 2013
621.36–dc23
2012020167
To my family
Dee, Gina, Abe, Lucy, Tanya, Fred, Frankie, and Trudy
PREFACE
This book was started some time ago. At first I thought to write an introductory book in the field of illumination engineering, but the book has since evolved to have more breadth. The original title was “Advanced Nonimaging/Illumination Optics,” but that title just did not convey the intent of the publication. I felt it important to have both “nonimaging” and “illumination” in the final title. Illumination, as discussed in the first chapter, indicates a light distribution from a source as used or detected by an observer. “Nonimaging” denotes that the imaging constraint is not required, but rather, as will be discussed in detail, pertains to the efficient transfer of radiation from a source to a target. The difference is subtle, but in the simplest sense, illumination demands an observer, while nonimaging optics strives to obtain a desired distribution and/or efficiency. Thus, I struggled for a better title, but just prior to submission, I came upon “Illumination Engineering: Design with Nonimaging Optics.” Of course, not all illumination systems use nonimaging design principles (e.g., a number of projection systems use Köhler illumination, which is based on imaging principles). Additionally, not all illumination systems must have an “observer,” but they have a target (e.g., solar power generation uses a photovoltaic cell as the target). Therefore, the terms “illumination” and “nonimaging” are a bit broad in their presentation herein. Such a broad interpretation is appropriate because, as will be seen, most nonimaging principles, such as the edge-ray, have imaging as the base of their design methods. Also, for illumination, consider that the source is “illuminating” the target rather than providing “illumination” for an observer. Thus, the major focus of this book is the use of nonimaging optics in illumination systems.
While the field of illumination is old, only recently have individuals started researching the utility of nonimaging optics to provide a desired distribution of radiation with high transfer efficiency. Increasingly, our society faces environmental and energy use issues, so optimally designed illumination/nonimaging optics are especially attractive. Additionally, solar power generation uses a number of nonimaging optics design methods to provide the power that we can use to drive our illumination systems. In essence, we have the potential “to have our cake and eat it too”: with nonimaging/illumination optics, we can create and use the electrical power we use in our daily lives. Nonimaging optics will only gain in importance as the field and technology advance. One might say the illumination field is comparable to the lens design field of the early 20th century, so there is large room for improvement.
There is a wide breadth of topics that could be considered for this book, from design methods to sources to applications to fabrication. Ten years ago most of these topics were barely present in the literature, with only one book actually addressing the topic of nonimaging optics. There was a bit more literature in the illumination field. However, the illumination books primarily dealt with the application of lighting, which limited their scope to design methods and suggestions rather than theoretically developed design principles. Fortunately, this dearth of literature is being remedied, especially with the burgeoning growth of solid-state lighting and solar power generation. This book introduces a number of topics that have not been pursued much in the literature while also expanding on the fundamental limits. In the first chapter, I present a discussion of the units, design method, design types, and a short history. In the second chapter, I focus solely on the topic of étendue through its conservation, and its extension to the skew invariant. Étendue is the limit of what is possible with optics; therefore, it is imperative that the reader understand the term and what it implies. There is a wealth of theory presented in that chapter, including a number of proofs, while I also use an example to build the reader’s understanding of étendue. The proofs are geared to readers who come from varied physics backgrounds, from radiometry to thermodynamics to ray tracing. Chapter 3, by Pablo Benítez, Juan Carlos Miñano, and José Blen, continues the treatment of étendue by looking at its squeezing into a desired phase space. In Chapter 4, Juan Carlos Miñano, Pablo Benítez, Aleksandra Cvetkovic, and Rubén Mohedano continue by presenting methods to develop freeform optics that provide high efficiency into a desired distribution.
Next, two application areas are presented: solar concentrators by Julio Chaves and Maikel Hernández (Chapter 5) and lightpipes (Chapter 6) by William Cassarly and Thomas Davenport. Chapter 5 investigates nonimaging optics that provide high efficiency and high uniformity at the solar cell with less demanding tolerances, which are necessary in light of tracking limitations. Chapter 6 highlights lightpipes and lightguides that are used frequently in our lives without our even knowing it. Lightpipes and lightguides appear in our car dashboards, laptop displays, indicator lights on a wide range of electronics, and so forth. Finally, this book ends with a lengthy discussion of sampling requirements (i.e., spatial distribution pixelization), ray tracing needs (i.e., ray and distribution sampling), optimization methods, and tolerancing. Also note that some of the material is repeated from one chapter to another; in particular, the concept of étendue appears in virtually all chapters, and the topic of freeform optics is prevalent in both Chapters 4 and 5. I did not want to limit the presentations of the chapter writers, so in some sense, each chapter is self-contained. However, as alluded to above, étendue is the driving force herein. By no means do I think this book presents the complete story. There are numerous areas that are not addressed herein: source modeling, projectors, color, fabrication, and measurement. Therefore, it is expected that future editions or new volumes will be released to meet the demands. There are other sources of literature that can be sought to address some of these needs, but I expect future editions/volumes of this book will expand greatly upon the available literature. I welcome suggestions from the readers on what should be added in future editions and/or volumes.
A number of individuals have helped with the writing of this book. First and foremost have been my employers during the writing of this book. I started while at Lambda Research Corporation, but finished while I was at Photon Engineering, LLC. Ed Freniere and Rich Pfisterer, at those companies, respectively, encouraged the writing of the book. This book would not have been possible without access to the optical analysis software from each of these two firms. I also received encouragement and feedback from individuals at the College of Optical Sciences at the University of Arizona. Dean Jim Wyant and Professor José Sasián encouraged me while providing feedback on the material. Additionally, I need to thank the individuals at Wiley/IEEE Press and Toppan Best-set Premedia Limited who had to persevere through the slow process of my writing: Taisuke Soda, Mary Hatcher, Christine Punzo, and Stephanie Sakson. All strongly enticed me to finish on time, someday, before the Sun burned out—so thank you for enduring my continued tardiness. My biggest sources were my students at Optical Sciences, where I teach a dual undergraduate-graduate course on illumination engineering. The students helped by being the first to see some of the material—finding errors and typos while challenging me to convey my points better. I have had over 50 students since this became a for-credit course, and around 70 more when it was a no-credit seminar course. My students learned that 95% of the time when I asked a question, the answer was étendue. However, although I told my students I was still learning how to apply the concepts of étendue, or in other words, I was still learning étendue—they likely did not believe me. I believe étendue is a fickle entity; fully grasping it is a lifelong task. I may never fully understand all of its nuances, but I feel this book assists me, and I trust it will assist the readers, on this journey.
Future volumes/editions will expand upon the topic of étendue, especially how it is applied to applications. Finally, I need to thank my family for the many days and nights I was not able to do anything. I dedicate this effort to all of you.
R. John Koshel
The cover shows two designs entered in the first Illumination Design Problem of the 2006 International Optical Design Conference.* The goal was to transform the emitted light from a square emitter into a cross pattern with the highest efficiency possible. Bill Cassarly (see Chapter 6) developed a method based initially on imaging principles, while Julio Chaves developed a solely nonimaging approach that uses rotationally asymmetric transformers: that is, an asymmetric lightpipe array for the outer regions and a bulk lightpipe in the central region. In 2010, the second IODC Illumination Design Problem was presented, and the third competition will be held in 2014. I encourage the readers to consult the literature to learn more about these design challenges.
Note
* P. Benítez, 2006 IODC illumination design problem, SPIE Proc. of the Intl. Opt. Des. Conf. 2006 6342, 634201V (2006). Society of Photo-Optical Instrumentation Engineers, Bellingham, WA.
CONTRIBUTORS
Pablo Benítez, Universidad Politécnica de Madrid, Cedint, Madrid, Spain, and LPI, Altadena, California
José Blen, LPI Europe, SL, Madrid, Spain
William Cassarly, Synopsys, Inc., Wooster, Ohio
Julio Chaves, LPI Europe, SL, Madrid, Spain
Aleksandra Cvetkovic, LPI Europe, SL, Madrid, Spain
Maikel Hernández, LPI Europe, SL, Madrid, Spain
R. John Koshel, Photon Engineering, LLC, Tucson, Arizona, and College of Optical Sciences, the University of Arizona, Tucson, Arizona
Juan C. Miñano, Universidad Politécnica de Madrid, Cedint, Madrid, Spain, and LPI, Altadena, California
Rubén Mohedano, LPI Europe, SL, Madrid, Spain
GLOSSARY

Term — Description (First Use)

2D — Two-dimensional (Chapter 1)
3D — Three-dimensional (Chapter 1)
BEF — Brightness-enhancing film (Chapter 6)
BRDF — Bidirectional reflectance distribution function (Chapter 1)
BSDF — Bidirectional scattering distribution function (Chapter 1)
BTDF — Bidirectional transmittance distribution function (Chapter 1)
CAD — Computer-aided design (Chapter 1)
CCFL — Cold cathode fluorescent lamp (Chapter 6)
CGS — Centimeter–gram–second unit system (Chapter 1)
CIE — Commission Internationale de l’Éclairage (Chapter 1)
CPC — Compound parabolic concentrator (Chapter 1)
CPV — Concentrating photovoltaic (Chapter 4)
D — Downward direction (Chapter 4)
DCPC — Dielectric compound parabolic concentrator (Chapter 6)
DNI — Direct normal irradiance (Chapter 4)
DSMTS — Dielectric single-mirror two-stage concentrator (Chapter 5)
DTIRC — Dielectric total internal reflection concentrator (Chapter 5)
ECE — Economic Commission for Europe (Chapter 1)
ED — Edge-ray design (Chapter 1)
ES — Étendue squeezing (Chapter 3)
FMVSS — Federal Motor Vehicle Safety Standards (Chapter 1)
FOV — Field of view (Chapter 1)
FWHM — Full width, half maximum (Chapter 6)
H — Horizontal direction (Chapter 3); also designates the origin in the horizontal direction (Chapter 4)
HCPV — High-concentration photovoltaic (Chapter 4)
IR — Infrared (Chapter 1)
L — Leftward direction (Chapter 4)
LCD — Liquid crystal display (Chapter 2)
LED — Light-emitting diode (Chapter 1)
MKS — Meter–kilogram–second unit system (Chapter 1)
NERD — Nonedge-ray design (Chapter 1)
NURBS — Nonuniform rational B-spline (Chapter 4)
PMMA — Poly(methyl methacrylate), or acrylic plastic (Chapter 6)
POE — Primary optical element (Chapter 3)
PV — Photovoltaic (Chapter 4)
R — Rightward direction (Chapter 4)
RGB — Red–green–blue (color diagrams) (Chapter 6)
RMS — Root mean square (Chapter 1)
RXI — Refraction–reflection–TIR concentrator (Chapter 3)
SAE — Society of Automotive Engineers (Chapter 1)
SI — Système Internationale (Chapter 1)
SLM — Spatial light modulator (Chapter 2)
SMS — Simultaneous multiple surfaces (Chapter 1)
SOE — Secondary optical element (Chapter 3)
TED — Tailored edge-ray design (Chapter 1)
TERC — Tailored edge-ray concentrator (Chapter 5)
TIR — Total internal reflection (Chapter 1)
U — Upward direction (Chapter 4)
UHP — Ultra-high pressure (arc lamp) (Chapter 3)
V — Vertical direction (Chapter 3); also designates the origin in the vertical direction (Chapter 4)
XR — Reflection–refraction concentrator (Chapter 4)
XX — Reflection–reflection concentrator (Chapter 3)
CHAPTER 1
INTRODUCTION AND TERMINOLOGY
John Koshel
This chapter introduces the reader to a number of terms and concepts prevalent in the field of illumination optics. I establish the system of units used throughout this book. The fields of nonimaging and illumination optics are fundamentally grounded in these units; therefore, the reader must be well versed in them and in how to design, analyze, and measure with them. Next, I give an overview of the field and important parameters that describe the performance of an illumination system. The next chapter on étendue expands upon this treatment by introducing terms that are primarily focused on the design of efficient illumination systems.
Until recently the field of optical design was synonymous with lens or imaging system design. However, within the past decade, the field of optical design has included the subfield of illumination design. Illumination is concerned with the transfer of light, or radiation in the generic sense,* from the source(s) to the target(s). Light transfer is a necessity in imaging systems, but those systems are constrained by imaging requirements. Illumination systems can ignore the “imaging constraint” in order to transfer the light effectively. Thus, the term nonimaging optics is often used. In the end, one may classify optical system design into four subdesignations:
Imaging Systems. Optical systems with the imaging requirement built into the design. An example is a focal-plane camera.
Visual Imaging Systems. Optical systems developed with the expectation of an overall imaging requirement based upon integration of an observation system. Examples include telescopes, camera viewfinders, and microscopes that require human observers (i.e., the eye) to accomplish imaging.
Visual Illumination Systems. Optical systems developed to act as a light source for subsequent imaging requirements. Examples include displays and lighting, and extend to the illuminators used in photocopiers.
Nonvisual Illumination Systems. Optical systems developed without the imaging criterion imposed on the design. Examples include solar concentrators, optical laser pump cavities, and a number of optical sensor applications.
The latter two systems comprise the field of illumination engineering. Imaging systems can be employed to accomplish the illumination requirements, but these systems are best suited for specific applications. Examples include critical and Köhler illumination used, for example, in the lithography industry, but as this book shows, there are a number of alternative methods based on nonimaging optics principles. This book focuses on these nonimaging techniques in order to transfer light effectively from the source to the target, but imaging principles are used at times to improve upon such techniques. Additionally, I place no requirement on an observer within the system, but as you will discover, most illumination optics are designed with observation in mind, including the human eye and optoelectronic imaging, such as with a camera. Neglecting the necessary visualization and its characteristics often has a detrimental effect on the performance of the illumination system. This last point also raises the issue of subjective perception in illumination system design. This factor is not a focus of this book, but it is discussed where it drives the development of some systems.
I use the remainder of this chapter to discuss:
A short history of the illumination field
The units and terminology for illumination design and analysis
The important factors in illumination design
Standard illumination optics
The steps to design an illumination system
A discussion of the difficulty of illumination design, and
The format used for the chapters presented herein.
Note that I typically use the terms illumination and nonimaging interchangeably, but, in fact, illumination is a generic term that includes nonimaging and imaging methods for the transfer of light to a target.
The history of the field of illumination and nonimaging optics is long, but until recently it was mostly accomplished by trial and error. Consider Figure 1.1, which shows a timeline of the development of sources and optics for use in the illumination field [1]. Loosely, the field of illumination optics starts with the birth of a prevalent light source on Earth—the Sun. While the inclusion of the Sun in this timeline may at first appear facetious, the Sun is becoming of increasing importance in the illumination and nonimaging optics communities. This importance is borne out of daylighting systems, solar thermal applications, and solar energy generation. The use, modeling, and fabrication of sources constitute one of the largest components of the field of illumination design. Increasingly, LED sources are supplanting traditional sources since LEDs provide the potential for more efficient operation, color selection, long lifetimes, and compact configurations. It is only in the past 60 years that nonimaging optical methods have been developed. The illumination industry, both in design and source development, is currently burgeoning, so vastly increased capabilities are expected in the next few decades.
Figure 1.1 A timeline of the history of illumination and nonimaging optics. On the left is the approximate date of inception for the item on the right. Items listed in blue are illumination optic design concepts, those leading with LED are new solid-state lighting sources, and the remainder are other types of sources.
As with any engineering or scientific discipline, the use of units is imperative in the design and modeling of illumination systems. It is especially important to standardize the system of units in order to disseminate results. There are essentially two types of quantities used in the field of illumination:
Radiometric Terms. Deterministic quantities based on the physical nature of light. These terms are typically used in nonvisual systems; and
Photometric Terms. Quantities based on the human visual system such that only visible electromagnetic radiation is considered. This system of units is typically used in visual systems.
Radiometric and photometric quantities are connected through the response of the eye, which has been standardized by the International Commission on Illumination (Commission Internationale de l’Éclairage; CIE) [2, 3]. Both of these sets of terms can be based on any system of units, including English and metric; however, the metric system, standardized at the Fourteenth General Conference on Weights and Measures in 1971, is defined by the International System (Système Internationale; SI) [4]. The units for length (meter; m), mass (kilogram; kg), and time (second; s) provide an acronym for this system of units: MKS. There is an analogous system that uses the centimeter, gram, and second, denoted CGS. This book uses the MKS standard for the radiation quantities, though it often makes use of terms, especially length, in non-MKS units, such as the millimeter. In the next two subsections, the two sets of terms are discussed in detail. In the section on photometric units, the connection between the two systems is presented.
Radiometry is a field concerned with the measurement of electromagnetic radiation. The radiometric terms as shown in Table 1.1 are used to express the quantities of measurement* [5]. The term radiant is often used before a term, such as radiant flux, to delineate between like terms from the photometric quantities; however, the accepted norm is that the radiometric quantity is being expressed if the word radiant is omitted. Additionally, radiometric quantities are often expressed with a subscript “e” to denote electromagnetic. Omission of this subscript still denotes a radiometric quantity.
TABLE 1.1 Radiometric Terms and their Characteristics
Radiometric quantities are based on the first term in the table, radiant energy (Qe), which is measured in the SI unit of joules (J). The radiant energy density (ue) is radiant energy per unit volume measured in the SI units of J/m3. The radiant flux or power (Φe or Pe) is the energy per unit time, thus it is measured in the SI unit of J/s or watts (W). There are two expressions for the radiant surface flux density, the radiant exitance and the irradiance. The radiant exitance (Me) is the amount of flux leaving a surface per unit area, while the irradiance (Ee) is the amount of flux incident on a surface per unit area. Thus, the exitance is used for source emission, scatter from surfaces, and so forth, while irradiance describes the flux incident on detectors and so forth. Both of these terms are measured in SI units of watts per square meter (W/m2). Radiant intensity (Ie) is defined as the power radiated per unit solid angle, thus it is in the SI units of watts per steradian (W/sr). Note that many describe the radiant intensity as power per unit area (i.e., radiant flux density), but this is expressly incorrect.* Many texts use the term intensity when irradiance is the correct terminology, but some excellent texts use it correctly, for example, Optics by Hecht [6]. Modern texts in the field of illumination use it correctly [7]. For a thorough discussion on this confusion, see the article by Palmer [8]. Finally, the radiance (Le) is the power per unit projected source area per unit solid angle, which is in the SI units of watts per meter squared per steradian (W/m2/sr). In Sections 1.4–1.6, the quantities of irradiance, intensity, and radiance, respectively, are discussed in more detail.
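To make the distinction between intensity (W/sr) and irradiance (W/m2) concrete, consider a hypothetical isotropic point source. A minimal sketch in Python (the function names here are my own, not from the text):

```python
import math

def intensity_isotropic(power_w: float) -> float:
    """Radiant intensity (W/sr) of an isotropic point source of given power.

    An isotropic source spreads its flux uniformly over the full sphere,
    which subtends 4*pi steradians.
    """
    return power_w / (4.0 * math.pi)

def irradiance_at(intensity_w_sr: float, distance_m: float) -> float:
    """Irradiance (W/m^2) on a surface normal to the line of sight,
    via the inverse-square law for a point source."""
    return intensity_w_sr / distance_m ** 2

# Hypothetical 100 W isotropic emitter observed 2 m away:
I = intensity_isotropic(100.0)   # about 7.96 W/sr
E = irradiance_at(I, 2.0)        # about 1.99 W/m^2
```

Note how the intensity is a property of the source alone, while the irradiance depends on where the receiving surface sits, which is exactly why conflating the two terms causes confusion.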
When one of the quantities listed in Table 1.1 is provided as a function of wavelength, it is called a spectral quantity. For example, when the intensity has a spectral distribution, it is called the spectral radiant intensity. The notation for the quantity is modified with either a λ subscript (e.g., Ie,λ) or by denoting the quantity is a function of wavelength (e.g., Ie(λ)). The units of a spectral quantity are in the units listed in Table 1.1, but are per wavelength (e.g., nm or μm). In order to compute the value of the radiometric quantity over a desired wavelength range, the spectral quantity is integrated over all wavelengths
$f_e = \int_0^\infty h(\lambda)\, f_e(\lambda)\, d\lambda$    (1.1)
where h(λ) is the filter function that describes the wavelength range of importance, fe is the radiant quantity (e.g., irradiance), and fe(λ) is the analogous spectral radiant quantity (e.g., spectral irradiance).
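Equation (1.1) can be evaluated numerically for tabulated spectral data. The following sketch uses a trapezoidal approximation with a hypothetical flat spectral irradiance and an ideal bandpass filter (all values are illustrative, not from the text):

```python
def integrate_spectral(wavelengths, spectral_values, filter_fn):
    """Trapezoidal approximation of Eq. (1.1):
    f_e = integral of h(lambda) * f_e(lambda) d(lambda)."""
    total = 0.0
    for i in range(len(wavelengths) - 1):
        w0, w1 = wavelengths[i], wavelengths[i + 1]
        y0 = filter_fn(w0) * spectral_values[i]
        y1 = filter_fn(w1) * spectral_values[i + 1]
        total += 0.5 * (y0 + y1) * (w1 - w0)
    return total

# Hypothetical flat spectral irradiance of 0.01 W/m^2/nm over 400-700 nm,
# filtered by an ideal bandpass h(lambda) passing only 500-600 nm:
lams = list(range(400, 701))            # 1 nm steps
spectral_E = [0.01] * len(lams)         # W/m^2/nm
bandpass = lambda lam: 1.0 if 500 <= lam <= 600 else 0.0
E = integrate_spectral(lams, spectral_E, bandpass)  # near 1 W/m^2
```

The integrated irradiance is about 1 W/m2 (0.01 W/m2/nm over a 100 nm passband), with small trapezoidal edge effects at the filter boundaries.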
The photometric terms are applied to the human visual system, so only the visible spectrum of 360–830 nm adds to the value of a term. Due to the variability of the human eye, a standard observer is used, which is maintained by the CIE. Table 1.2 shows the quantities analogous to the radiometric terms of Table 1.1. The term luminous is used before a quantity, such as luminous flux, to delineate between radiometric and photometric quantities. Additionally, photometric quantities are expressed with a subscript “ν” to denote visual.
TABLE 1.2 Photometric Terms and Their Characteristics
Luminous energy (Qν) is measured in the units of the talbot (T), which is typically labeled as lumen-s (lm-s). The luminous energy density (uν) is in the units of T/m3. Once again, lm-s is typically used for the talbot. The luminous flux is provided in the units of lumens (lm). The two luminous flux surface density terms are for a source, the luminous exitance (Mν), and for a target, the illuminance (Eν). Both terms have the units of lux (lx), which is lumens per meter squared (lm/m2). The luminous intensity (Iν) is measured in candela (cd), which is lumens per steradian (lm/sr). Note that the candela is one of the seven SI base units [9]. The definition for the candela was standardized in 1979, which per Reference [9] states that it is “the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 × 1012 hertz and that has a radiant intensity in that direction of 1/683 watt per steradian.” This definition may appear arbitrary, but it builds on previous definitions, thus providing a degree of consistency over the lifetime of the candela unit. The standardization of the candela for luminous intensity provides further reason for the need to correct the misuse of the terms intensity and irradiance/illuminance. The luminance (Lν) is the photometric analog of the radiometric radiance. It is in the units of the nit (nt), which is lumens per meter squared per steradian (lm/m2/sr). There are several other photometric units for luminance and illuminance that have been used historically. Table 1.3 provides a list of these quantities. Note that these units are not accepted SI units, and are in decreasing use. In Sections 1.4–1.6, the quantities of illuminance, intensity, and luminance, respectively, are discussed in more detail.
TABLE 1.3 Alternate Units for Illuminance and Luminance

Illuminance:
  Foot-candle (fc) = lm/ft²
  Phot (ph) = lm/cm²

Luminance:
  Apostilb (asb) = (1/π) cd/m²
  Foot-lambert (fL) = (1/π) cd/ft²
  Lambert (L) = (1/π) cd/cm²
  Stilb (sb) = cd/cm²
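The conversions to SI units (lux and nit) follow directly from the unit definitions in Table 1.3. A minimal sketch, assuming only the standard foot-to-meter factor:

```python
import math

FT_TO_M = 0.3048  # exact definition of the international foot

def footcandle_to_lux(fc: float) -> float:
    """1 fc = 1 lm/ft^2; convert the area to m^2 (about 10.76 lx per fc)."""
    return fc / FT_TO_M ** 2

def footlambert_to_nit(fl: float) -> float:
    """1 fL = (1/pi) cd/ft^2; convert to cd/m^2 (about 3.43 nt per fL)."""
    return fl / (math.pi * FT_TO_M ** 2)

def stilb_to_nit(sb: float) -> float:
    """1 sb = 1 cd/cm^2 = 10,000 cd/m^2."""
    return sb * 1.0e4
```

For example, a desktop illuminance of 50 fc corresponds to roughly 538 lx, which is why legacy lighting specifications in foot-candles still need conversion for SI-based design software.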
As with radiometric terms, spectral photometric quantities describe the distribution of the quantity as a function of the wavelength. By integrating the spectral luminous quantity over wavelength with a desired filter function, one finds the total luminous quantity over the desired spectral range
$f_\nu = \int_0^\infty h(\lambda)\, f_\nu(\lambda)\, d\lambda$    (1.2)
Conversion between radiometric and photometric units is accomplished by taking into account the response of the CIE standard observer. The functional form is given by
$f_\nu(\lambda) = K(\lambda)\, f_e(\lambda)$    (1.3)
where fν(λ) is the spectral photometric quantity of interest, fe(λ) is the analogous spectral radiometric term, and K(λ) is the luminous efficacy, which is a function of wavelength, λ, and has units of lm/W. The luminous efficacy describes the CIE observer response to visible electromagnetic radiation as a function of wavelength. The profile of K(λ) is dependent on the illumination level, because of the differing response of the eye’s detectors. For example, for light-adapted vision, that is, photopic vision, the peak in the luminous efficacy occurs at 555 nm. For dark-adapted vision, that is, scotopic vision, the peak in the luminous efficacy is at 507 nm.* Equation (1.3) is often rewritten as
fν(λ) = CV(λ)fe(λ) (1.4)
where V(λ) is the luminous efficiency,† which is a unitless quantity with a range of values between 0 and 1, inclusive, and C is a constant dependent on the lighting conditions.‡ For photopic vision, C = Cp = 683 lm/W, and for scotopic vision, C = Cs = 1700 lm/W. Note that the constant for photopic conditions is in agreement with the definition of the candela as discussed previously. The difference between the two lighting states is due to the response of the cones, which are not saturated for typical light-adapted conditions, and the rods, which are saturated for light-adapted conditions. The lumen is realistically defined only for photopic conditions, so for scotopic cases the term “dark lumen” or “scotopic lumen” should be used. Light levels between scotopic and photopic vision are called mesopic and involve a combination of these two states. Figure 1.2 shows the luminous efficiency of the standard observer for the two limiting lighting conditions as a function of wavelength. Note that while similar, the response curves do not have the same shape; light-adapted vision is broader than dark-adapted vision.
Figure 1.2 Luminous efficiency for photopic and scotopic conditions.
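Equation (1.4) is easy to exercise numerically. The Python sketch below converts a monochromatic radiant flux to luminous flux; the function name and the abbreviated V(λ) lookup table are illustrative (the table holds a few rounded photopic values, not the full CIE tabulation):

```python
# Sketch of Equation (1.4): f_v(lambda) = C * V(lambda) * f_e(lambda).
# The V(lambda) table below holds a few approximate photopic luminous-efficiency
# values for illustration; real work should use the full CIE tabulation.
C_PHOTOPIC = 683.0  # lm/W, consistent with the 1979 candela definition

V_PHOTOPIC = {  # wavelength (nm) -> approximate V(lambda)
    450: 0.038,
    510: 0.503,
    555: 1.000,   # peak of light-adapted (photopic) response
    610: 0.503,
    650: 0.107,
}

def luminous_flux(radiant_flux_w, wavelength_nm):
    """Luminous flux (lm) of a monochromatic source of given radiant flux (W)."""
    return C_PHOTOPIC * V_PHOTOPIC[wavelength_nm] * radiant_flux_w

# A 1 W monochromatic source at the 555 nm peak yields 683 lm;
# the same radiant flux at 650 nm yields far fewer lumens.
print(luminous_flux(1.0, 555))  # 683.0
print(luminous_flux(1.0, 650))
```

The same radiant flux thus produces very different luminous fluxes depending on where it falls on the V(λ) curve, which is why radiometric and photometric quantities must not be interchanged casually.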
To calculate the luminous quantity when the spectral radiant distribution is known, one must integrate over wavelength using the luminous efficacy and the filter function as weighting terms. Using Equations (1.1) and (1.3), we arrive at
fν = ∫₀^∞ K(λ)h(λ)fe,λ(λ)dλ (1.5)
where the limits of integration are set to 0 and ∞, since the functional forms of K(λ) and h(λ) take into account the lack of response outside the visible spectrum and the wavelength range(s) of interest, respectively. Alternatively, one can determine the radiometric quantity with knowledge of the analogous photometric spectral distribution; however, only the value over the visible spectrum is calculated.
Luminous or radiant intensity describes the distribution of light as a function of angle—more specifically a solid angle. The solid angle, dΩ, in units of steradians (sr) is expressed by a cone with its vertex at the center of a sphere of radius r. As shown in Figure 1.3, this cone subtends an area dA on the surface of the sphere. For this definition, the solid angle is given by
dΩ = dA/r² = sin θ dθ dϕ (1.6)
where, in reference to Figure 1.3, θ is the polar angle labeled as θ0 and ϕ is the azimuthal angle around the central dotted line segment. By integrating the right-hand side of Equation (1.6) over a right-circular cone, one finds
Ω = ∫₀^2π ∫₀^θ0 sin θ dθ dϕ = 2π(1 − cos θ0) (1.7)
where θ0 is the cone half angle as shown in Figure 1.3. A cone that subtends the entire sphere (i.e., θ0 = π) has a solid angle of 4π and one that subtends a hemisphere (i.e., θ0 = π/2) has a solid angle of 2π. Intensity is a quantity that cannot be directly measured via a detection setup, since detectors measure flux (or energy). A direct conversion to flux density is found by dividing through by the area of the detector while assuming the distribution is constant over the detector surface, but intensity requires knowledge of the angular subtense of the detector at the distance r from the source point of interest. Additionally, one must account for the overlap of radiation for an extended source when measurements are made in the near field. In the far field, where caustics are negligible, the intensity distribution can be inferred immediately from the flux density.
Figure 1.3 Geometry of sphere that defines solid angle.
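Equation (1.7) can be checked in a few lines of code. A minimal Python sketch (the function name is illustrative):

```python
import math

def cone_solid_angle(theta0):
    """Solid angle (sr) of a right-circular cone of half angle theta0, Eq. (1.7):
    Omega = 2*pi*(1 - cos(theta0))."""
    return 2.0 * math.pi * (1.0 - math.cos(theta0))

# Hemisphere (theta0 = pi/2) subtends 2*pi sr; the full sphere (theta0 = pi), 4*pi sr.
print(cone_solid_angle(math.pi / 2))  # ≈ 2π ≈ 6.2832
print(cone_solid_angle(math.pi))      # ≈ 4π ≈ 12.566
```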
Illuminance and irradiance are the photometric and radiometric quantities, respectively, for the surface flux density on a target.* The material presented in this section is also applicable to the radiant and luminous exitances. These terms describe the spatial distribution of light since they integrate the luminance or radiance over the angular component. Detectors operate in this mode in the sense that the power incident on the detector surface is dependent on its area.† Figure 1.4 depicts the measurement of the flux density at a distance of r from a uniform point source by a detector of differential area dA. The subtense angle, θ, is measured between the normal to the detector area and the line of sight from the source to the detector, which is simply the line segment joining the source (S) and the center of the detector (T). Using Equation (1.6), the detector area subtends an elemental solid angle from the point S of
dΩ = dAproj/r² = dA cos θ/r² (1.8)
Figure 1.4 Measurement of the irradiance on a target area of dA at a distance r from a point source of flux output dΦ. The target is oriented at θ with respect to the line segment joining points S and T.
where the differential projected area, dAproj, will be discussed in detail in Section 1.6, Equation (1.10). Note that for the detector oriented at π/2 with respect to the line segment joining S and T, the elemental solid angle is 0. Conversely, for an orientation along the line joining the two entities, the elemental solid angle is at a maximum since the cosine term is equal to 1.
To find, for example, the irradiance due to a point source, we substitute for dA in its expression from Table 1.1 with that from Equation (1.8). The resulting equation contains the expression for the intensity, I = dΦ/dΩ, which upon substitution gives
E = I cos θ/r² (1.9)
Thus, for a point source, the flux density incident on the target decreases as the inverse of the distance squared between the two objects, which is known as the inverse-square law. The cosine factor accounts for the orientation of the target with respect to the line segment joining S and T.
Unlike experimental measurement of the intensity, it is easy to measure the flux areal density. With knowledge of the detector area and the power measured, one has determined the flux density at this point in space. This detection process integrates over all angles, but not all detectors measure uniformly over all angles, especially due to Fresnel reflections at the detector interface. Thus, special detector equipment, called a cosine corrector, is often included in the detection scheme to compensate for such phenomena. By measuring the flux density at a number of points in space, such as over a plane, the flux density distribution is determined. Additionally, unlike the intensity distribution, which remains constant with the distance from the optical system, the irradiance distribution on a plane orthogonal to the nominal propagation direction evolves as the separation between the optical system and plane is changed. Simply put, near the optical system (or even an extended source), the rays are crossing each other such that the local spatial (i.e., irradiance) distribution evolves with distance. In the far field, where the crossing of rays (e.g., caustics) is negligible, the irradiance distribution has the form of the intensity distribution. Thus, one is in the far field when these two distributions have little difference between their shapes. This point is discussed in more detail in Chapter 7.
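The inverse-square law of Equation (1.9) is easily demonstrated numerically. A Python sketch (the function name is illustrative):

```python
import math

def irradiance(intensity_w_sr, r_m, theta_rad=0.0):
    """Irradiance (W/m^2) on a target from a point source, Eq. (1.9):
    E = I * cos(theta) / r^2 (inverse-square law with target tilt)."""
    return intensity_w_sr * math.cos(theta_rad) / r_m ** 2

E1 = irradiance(100.0, 1.0)  # 100 W/sr point source, 1 m away, normal incidence
E2 = irradiance(100.0, 2.0)  # doubling the distance quarters the irradiance
print(E2 / E1)  # 0.25
```

Tilting the target toward π/2 drives the irradiance to zero, as the cosine factor in Equation (1.9) dictates.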
Luminance and radiance are fundamental quantities of any emitter. As is shown in the next chapter, these terms are conserved in a lossless system, which sets the ultimate limit on the performance of a system design. Another term often used for luminance or radiance is brightness; however, within the vision community, brightness refers to the response of an actual observer to a prescribed area of an object* [10]. The radiance distribution† from a source describes the emission from each point on its surface as a function of angle. Thus, knowing the radiance distribution, one can ascertain the propagation of radiation through a known optical system. The result is that the intensity and flux density distributions, and for that matter, the radiance distribution can be determined at arbitrary locations in the optical system. In conclusion, radiance provides the best quantity to drive the design process of an illumination system for two reasons:
An accurate source model implies that an accurate model of an illumination system can be made. The process of developing an accurate source model is a focus of the next chapter.
Radiance is conserved, which provides the limit on system performance while also providing a comparison of the performances of different systems. The implications of conservation of radiance are presented in detail in Chapter 2.
The radiance distribution is measured as a function of source flux per projected unit area (i.e., spatial considerations), dAs,proj, per unit solid angle (i.e., angular considerations), dΩ. The projected area takes into account the orientation of the target surface with respect to the source orientation, which is shown in Figure 1.5.‡ Pursuant to this figure, the elemental projected area is given by
dAs,proj = dAs cos θ (1.10)
where dAs is the actual area of the source and θ is the view angle. Substituting Equation (1.10) into the expression for the radiance
L = d²Φ/(dΩ dAs,proj) = d²Φ/(dΩ dAs cos θ) (1.11)
Figure 1.5 Depiction of dAs,proj of source element dAs along a view direction θ.
In connection with radiance, there are two standard types of distributions: Lambertian and isotropic. These radiance distributions are described in more detail in the next two subsections.
When a source or scatterer is said to be Lambertian, it implies its emission profile does not depend on direction,
L(r, θ, ϕ) = Ls(r) (1.12)
where r is the vector describing the points on the surface, θ is the polar emission angle, and ϕ is the azimuthal emission angle. Thus, only the dependence on the areal projection modifies the radiance distribution. Substituting Equation (1.12) into Equation (1.11) while using Equation (1.6) for the definition of the elemental solid angle and then integrating over angular space, one obtains
Mlam = Ls ∫₀^2π ∫₀^π/2 cos θ sin θ dθ dϕ (1.13)
The solution of this gives
Mlam = πLs (1.14)
where the “lam” subscript denotes a Lambertian source. The exitance for a Lambertian source or scatterer is always the product of π and its radiance, Ls. Similarly, one can integrate over the area to determine the intensity
Ilam(θ) = cos θ ∫D Ls dAs
Integrating provides
Ilam(θ) = Is cos θ (1.15)
where D is the surface of the source and
Is = ∫D Ls dAs (1.16)
Thus, the intensity for a Lambertian source changes as a function of the cosine of the view angle. For a spatially uniform planar emitter of area As, the intensity distribution is
Ilam(θ) = LsAs cos θ (1.17)
Figure 1.6 shows the normalized intensity profile for an emitter described by Equation (1.17). Note that as the angle is increased with respect to the normal of the surface (i.e., θ = 0 degrees), less radiation is seen due to the projection characteristics. In other words, as the polar angle increases, the projected area decreases, resulting in a reduction in intensity. It is an important point that while Lambertian means independent of direction, it does not preclude the reduction in intensity due to the projection of the emission area.
Figure 1.6 Plot of normalized intensity profiles for Lambertian and isotropic spatially uniform planar sources.
An isotropic source or scatterer is also said to be uniform, but such a source compensates for its projection characteristics. Thus, the cosine dependencies shown in Equations (1.15) and (1.17) are compensated within the radiance function by
L(r, θ, ϕ) = Ls(r)/cos θ (1.18)
The exitance and intensity for such a source are
Miso = 2πLs (1.19)
and
Iiso(θ) = Is (1.20)
where the “iso” subscripts denote isotropic and Is is as defined in Equation (1.16). For the spatially uniform planar emitter of area As, the intensity distribution is given by
Iiso(θ) = LsAs (1.21)
Therefore, an isotropic source has twice the exitance of an analogous Lambertian source. It also has a constant intensity profile as a function of angle for a planar emitter. The normalized isotropic intensity profile of Equation (1.21) is also shown in Figure 1.6. Realistically, only point sources can provide isotropic emission; no independent source provides isotropic illumination, but the combination of tailored optics and an emitter can approximate it.
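The exitance results of Equations (1.14) and (1.19) can be verified by numerically integrating the radiance over the hemisphere. A Python sketch using midpoint-rule integration (function and variable names are illustrative):

```python
import math

def exitance(radiance_fn, n=2000):
    """Numerically integrate L(theta)*cos(theta) over the hemisphere:
    M = int_0^{2pi} int_0^{pi/2} L(theta) cos(theta) sin(theta) dtheta dphi."""
    dtheta = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta  # midpoint rule in theta
        total += radiance_fn(theta) * math.cos(theta) * math.sin(theta) * dtheta
    return 2.0 * math.pi * total  # the phi integration is trivial

Ls = 1.0
M_lam = exitance(lambda th: Ls)                  # Lambertian: L independent of angle
M_iso = exitance(lambda th: Ls / math.cos(th))   # isotropic: L = Ls / cos(theta)
print(M_lam / math.pi)        # ≈ 1.0 -> M_lam = pi * Ls   (Eq. 1.14)
print(M_iso / (2 * math.pi))  # ≈ 1.0 -> M_iso = 2*pi * Ls (Eq. 1.19)
```

The factor-of-two difference between the two exitances falls out of the integration directly, confirming the statement above.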
The transfer of light from the source to the target typically involves two important parameters: transfer efficiency and the distribution at the target. Transfer efficiency is of particular importance due to the increasing need for energy efficiency—that is, electricity costs are increasing, and concerns over environmental effects are rising. However, often counter to efficiency requirements is the agreement of the achieved distribution with the desired one. For example, uniformity can easily be achieved by locating a bare source a great distance from the target plane, albeit at the expense of efficiency. Therefore, in the case of uniformity, there is a direct trade between these two criteria, such that the illumination designer must develop methods to provide both through careful design of the optical system. Lesser criteria include color, volume requirements, and fabrication cost. As an example, the design of a number of illumination optics is driven by the reduction of cost through the process time to manufacture the overall system. So not only must the designer contend with trades between efficiency and uniformity, but also with cost constraints and other important parameters. The end result is a system that can be quickly and cheaply fabricated with little compromise on efficiency and uniformity. The selection and functional form of the important criteria for an illumination design provide a merit function that is used to compare one system with another. The development of a merit function based on the material presented here is discussed in detail in Chapter 7.
Transfer efficiency, ηt, is defined as the ratio of flux at the target, Φtarget, to that at the input, which is most often the flux emitted from the source, Φsource:
ηt = Φtarget/Φsource (1.22)
This simple definition includes all emission points, all angles, and the entire spectrum from the source. Criteria based on spatial position, angular domain, and spectrum can be used to select limited ranges of the emission from the source. The spectral radiance, Le,λ, or spectral luminance, Lν,λ, appear in terms* that replace the flux components in Equation (1.22),
ηt = [∫∫∫ htarget(r, Ω, λ)Ltarget,λ dAproj dΩ dλ] / [∫∫∫ hsource(r, Ω, λ)Lsource,λ dAproj dΩ dλ] (1.23)
where htarget and hsource are filters denoting the functional form for position (r, which has three spatial components such as x, y, and z), angle (Ω, which is the solid angle and composed of two components such as θ, the polar angle, and ϕ, the azimuthal angle), and spectrum (λ) for the target and source, respectively. Ltarget,λ and Lsource,λ denote the spectral radiance or luminance for the target and source, respectively. Note that htarget and hsource almost always have the same form for the spectrum, but the position and angular aspects can be different. These filter and radiance functions can be quite complex analytically, thus they are often approximated with experimental measurements or numerical calculations. The end result is that solving Equation (1.23) can be rather cumbersome for realistic sources and target requirements, but it provides the basis of a driving term in illumination design, étendue, and the required conservation of this term. It is shown in Chapter 2 that the étendue is related to radiance; thus, Equation (1.23) expresses a fundamental limit on the efficiency of the transfer of radiation from the source to the target.
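A simple Monte Carlo estimate illustrates Equations (1.22) and (1.23) for a purely angular filter: a Lambertian emitter radiating into a target cone of half angle θ0, for which the analytic transfer efficiency is sin²θ0. A Python sketch (names and the sampling scheme are illustrative):

```python
import math
import random

def lambertian_capture_fraction(theta0, n_rays=200_000, seed=1):
    """Monte Carlo estimate of the transfer efficiency (Eq. 1.22) from a
    Lambertian emitter into a cone of half angle theta0: the angular filter
    h_target keeps only rays with theta <= theta0. Analytic answer: sin^2(theta0)."""
    rng = random.Random(seed)
    captured = 0
    for _ in range(n_rays):
        # Sample theta from the Lambertian (cosine-weighted) distribution:
        # p(theta) ~ cos(theta) sin(theta)  =>  theta = asin(sqrt(u)).
        theta = math.asin(math.sqrt(rng.random()))
        if theta <= theta0:
            captured += 1
    return captured / n_rays

theta0 = math.radians(30.0)
print(lambertian_capture_fraction(theta0))  # ≈ sin^2(30°) = 0.25
```

Even this toy case shows why a tight angular requirement at the target costs efficiency: only a quarter of the Lambertian emission falls within a 30° cone.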
The uniformity of the illumination distribution defines how the modeled or measured distribution agrees with the objective distribution. The illumination distribution is measured in at least one of three quantities:
Irradiance or illuminance: measured in flux/unit area
Intensity: measured in flux/steradian, or
Radiance or luminance: measured in flux/steradian/unit area.
For the radiometric quantities (i.e., irradiance, radiant intensity, and radiance), the unit of flux is the watt (see Table 1.1). For the photometric quantities (i.e., illuminance, luminous intensity, and luminance) the unit of flux is the lumen (see Table 1.2). These quantities are described in depth in Sections 1.3–1.6. Other quantities have been used to express the illumination distribution, but they are typically hybrids or combinations of the three terms provided above.
Uniformity is determined by comparing the sampled measurement or model with the analogous one of the goal distribution. Note that the irradiance/illuminance and intensity distributions can be displayed with two orthogonal axes, while the radiance and luminance quantities require a series of depictions to show the distribution. There are a multitude of methods to determine the uniformity, including:
Peak-to-valley variation of the distribution, Δf:
Δf = fmax − fmin (1.24)
Variance of the distribution compared with the goal, σ²:
σ² = [1/(nm − 1)] Σi Σj (fmodel,ij − fgoal,ij)² (1.25)
Standard deviation of the distribution, σ: the square root of the variance provided in Equation (1.25).
where the f terms denote the selected quantity that defines uniformity (e.g., irradiance or intensity). The model and goal subscripts denote the sampled model and goal distributions, respectively. The terms i and j are the counters for the m by n samples, respectively, over the two orthogonal axes. The term root mean square (RMS) deviation is found by taking the square root of an analogous form of the bias-corrected variance expressed in Equation (1.25). For the RMS variation, σrms, the factor in front of the summation signs in Equation (1.25) is replaced with 1/nm.* In all cases, a value of 0 for the uniformity term (e.g., Δf or σ2) means that the uniformity, or agreement with the target distribution, is perfect for the selected sampling. The choice of the uniformity metric is dependent on the application, accuracy of the modeling, and ease of calculation. The RMS variation of the selected quantity is the standard method of calculating the uniformity of a distribution.
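The three metrics can be computed directly from a sampled distribution. A Python sketch (the function name is illustrative, and peak-to-valley is taken here as the maximum minus the minimum of the model samples):

```python
def uniformity_metrics(f_model, f_goal):
    """Peak-to-valley (Eq. 1.24), bias-corrected variance vs. the goal
    (Eq. 1.25), and RMS variation (prefactor 1/(nm-1) replaced with 1/nm)
    for an m x n sampled distribution given as nested lists."""
    flat_m = [v for row in f_model for v in row]
    flat_g = [v for row in f_goal for v in row]
    nm = len(flat_m)
    pv = max(flat_m) - min(flat_m)
    sq = [(a - b) ** 2 for a, b in zip(flat_m, flat_g)]
    var = sum(sq) / (nm - 1)     # bias-corrected variance vs. the goal
    rms = (sum(sq) / nm) ** 0.5  # RMS variation
    return pv, var, rms

model = [[1.0, 2.0], [1.0, 1.0]]  # one sample deviates from the goal
goal = [[1.0, 1.0], [1.0, 1.0]]   # perfectly uniform target distribution
pv, var, rms = uniformity_metrics(model, goal)
print(pv)  # 1.0
```

A perfectly matching model would return 0 for all three metrics, in agreement with the statement above.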
There are many types of optics that can be used in the design of an illumination system. These types can essentially be broken down into five categories: refractive optics (e.g., lenses), reflective optics (e.g., mirrors), total internal reflection (TIR) optics (e.g., lightpipes), scattering optics (e.g., diffusers), and hybrid optics (e.g., catadioptric Fresnel elements or LED pseudo collimators). In the next five subsections, I discuss each of these optic types in more detail. Each of these optic types has different utility in the field of illumination design, and this book uses separate chapters to discuss and delineate them. As a general rule of thumb, reflective optics provide the most “power” to spread light, but at the expense of tolerance demands and typically higher absorption losses. Refractive optics allow more compact systems to be built, but at the expense of dispersion, an increased number of elements, and higher fabrication costs due to alignment issues and postproduction manufacturing demands. TIR optics provide in theory the optimal choice, except when one includes potential leakage due to both scattering and failure to satisfy the critical-angle condition at all interfaces. Scatter is typically employed not as a method to provide a critical illumination distribution, but rather as a means to make the distribution better match uniformity goals at the expense of efficiency. Scatter is also used for subjective criteria, such as the look and feel of the optical system. Hybrid optics that employ reflection, refraction, scatter, and TIR are becoming more prevalent in designs since they tend to provide the best match with regard to optimization goals.
Refractive optics are a standard tool in optical design, primarily used for imaging purposes, but they can also be used in illumination systems. There is a variety of refractive optics used in illumination systems. In fact, the types of refractive optics are too numerous to discuss here, but they range from standard imaging lenses to arrays of pillow optics to protective covers. Examples of refractive optics in use in illumination systems include: (i) a singlet lens for a projector headlamp, (ii) a pillow lens array for transportation applications, (iii) a nonimaging Fresnel lens for display purposes, and (iv) a protective lens for an automotive headlight. The first three examples use refraction to assist in obtaining the target illumination distribution, but (iv) has minimal, if not negligible, impact on the distribution of light at the target. The primary purpose of the latter lens is to protect the underlying source and reflector from damage due to the environment.
Image-forming, refractive optics are not optimal for illumination applications in the sense that they do not maximize concentration, which is defined in the next chapter. The reason for the limitation is aberrations as the f-number is decreased. This topic for imaging systems has been investigated in detail [11]. In Reference [11], it is pointed out that the theoretical best is the Luneburg lens, which is a radial-gradient lens shaped as a perfect sphere with a refractive index range from 1.0 to 2.0, inclusive. This lens is impossible to manufacture due to the index range spanning that of vacuum to high-index flint glass, such as Schott LaSF35. Realistic lenses, such as an f/1 photographic objective or an oil-immersed microscope objective, are examples of imaging lenses that provide the best performance from a concentration point of view. These two examples do not provide optimal concentration, and they also suffer in that their sizes are small; in other words, while their f/# might be small, the focal length and thus the diameter of the clear aperture are also small. However, there are cases when nonoptimal imaging, refractive systems are preferred over those that provide optimal concentration. An example is a pillow lens array to provide homogeneity over a target. Each of the pillow lenses creates an aberrated image of the source distribution on the target plane, thus the overlap of these distributions due to the lens array can be used to create an effectively uniform distribution. Additionally, nonimaging lenses are an option, but this type of lens almost always involves other phenomena, especially TIR, to accomplish their goals. These optics are treated in Section 1.8.5 on hybrid optics. Refractive, nonimaging lenses use, in part, imaging principles to great effect to transfer the light from the source to the target.
Two examples are the nonimaging Fresnel lens, which has been discussed in the literature [12], and the catadioptric lens used to collimate the output from an LED as well as possible. The latter is a focus of Chapter 7 on optimization and tolerancing of nonimaging optics. Nonimaging, refractive designs make great use of tailoring, where tailoring is defined as the design of the optical surface to provide a prescribed distribution at the target in order to meet design goals. Tailoring is discussed further in the next chapter and the succeeding chapters of this book. The systems and principles presented here and others are described in more depth in Chapters 4–6.
Refractive optics use materials of differing indices of refraction, n, to alter the propagation path of light from the source to the target. Refraction, displayed in Figure 1.7, is governed by the law of refraction, often called Snell’s law,
n sin θ = n′ sin θ′ (1.26)
where θ and θ′ are the angles of incidence and refraction, respectively, and n and n′ are the indices of refraction in source (i.e., object) and target (i.e., image) spaces, respectively. In other words, the unprimed notation denotes the quantities prior to refraction, and the primed notation indicates the quantities after refraction. In illumination systems, three-dimensional (3D) ray tracing is needed, thus the vector form is preferred,
n′(r′ × a) = n(r × a) (1.27)
where a is the surface normal into the surface as shown in Figure 1.7, and r and r′ are the unit vectors depicting the rays. Equation (1.27) also implies that r, r′, and a are coplanar, which is a necessary condition for Snell’s law. However, this equation is not tractable for ray tracing, thus a different form is employed. First, take the cross product of the surface normal a with the two sides of Equation (1.27) to give, after rearrangement of the terms,
n′r′ = nr + [n′(a·r′) − n(a·r)]a (1.28)
Figure 1.7 Schematic of refraction, reflection, and total internal reflection at an optical surface. The incident ray vector is shown by r in index n. The refracted ray vector is r′ in index n′. The reflected and TIR ray vectors are r″ in index n. For TIR to occur n > n′, and θ ≥ θc.
Next, we note that a·r = cos θ and a·r′ = cos θ′, which upon substitution gives
n′r′ = nr + (n′ cos θ′ − n cos θ)a (1.29)
Finally, this expression is rewritten in component form with the use of the direction cosines r = (L, M, N) in incident space, r′ = (L′, M′, N′) in refraction space, and a = (aL, aM, aN). In component form, Snell’s law for ray-tracing purposes is written
n′L′ = nL + (n′ cos θ′ − n cos θ)aL, n′M′ = nM + (n′ cos θ′ − n cos θ)aM, n′N′ = nN + (n′ cos θ′ − n cos θ)aN (1.30)
Note that the refraction angle θ′ must be found prior to the determination of the direction cosines after refraction. Snell’s law as given in Equation (1.26) is used for this purpose
θ′ = sin⁻¹[(n/n′) sin θ] (1.31)
Substitution of Equation (1.31) into Equation (1.30) and dividing through by n′ provides the ray path after refraction for the purposes of numerical ray tracing. The only caveat to the implementation of Equation (1.30) is that r = (L, M, N), r′ = (L′, M′, N′), and a = (aL, aM, aN) are all unit vectors.
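The refraction procedure of Equations (1.30) and (1.31) translates directly into code. A Python sketch (the function name is illustrative; it returns None when the critical-angle condition discussed in Section 1.8.3 prevents refraction):

```python
import math

def refract(r, a, n, n_prime):
    """Refract unit ray r at a surface with unit normal a (pointing into the
    surface), per Eqs. (1.30)-(1.31): n' r' = n r + (n' cos t' - n cos t) a.
    Returns None when total internal reflection occurs."""
    cos_t = sum(ri * ai for ri, ai in zip(r, a))       # a . r = cos(theta)
    sin_tp_sq = (n / n_prime) ** 2 * (1.0 - cos_t ** 2)
    if sin_tp_sq > 1.0:
        return None  # no real refraction angle: total internal reflection
    cos_tp = math.sqrt(1.0 - sin_tp_sq)
    g = n_prime * cos_tp - n * cos_t
    # Component form of Eq. (1.30), divided through by n':
    return tuple((n * ri + g * ai) / n_prime for ri, ai in zip(r, a))

# 30 deg incidence in air onto n' = 1.5 glass, surface normal along +z:
th = math.radians(30.0)
r = (math.sin(th), 0.0, math.cos(th))
rp = refract(r, (0.0, 0.0, 1.0), 1.0, 1.5)
# Snell: n sin(theta) = n' sin(theta')  =>  sin(theta') = 0.5 / 1.5
print(abs(rp[0] - 0.5 / 1.5) < 1e-12)  # True
```

The refracted direction cosines come out already normalized, consistent with the unit-vector caveat above.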
Reflective optics are also a standard tool in optical design, but admittedly imaging design uses them less than refractive ones. In illumination design, reflective optics have been more prevalent than refractive ones. The primary reason is that it is easier to design optics that provide optimal concentration between the input and output apertures, and the tolerance demands for nonimaging systems are less than for imaging ones. There is a wealth of reflective optics in use in illumination designs, from conic reflectors, such as spherical, parabolic, elliptical, and hyperbolic, to edge-ray designs (EDs) [13], such as the compound parabolic concentrator (CPC), to tailored edge-ray designs (TEDs) [13, 14]. The conic reflectors typically involve imaging requirements, and illumination designs use these properties to capture the light emitted by a source. Nonimaging designs use the edge-ray principle to transfer the light optimally from source to target. EDs are optimal in two dimensions (2D), called troughs, but 3D designs, called wells, do not transfer some skew rays from the source to the target. ED assumes a constant acceptance angle with uniform angular input, but TED uses a functional acceptance angle. Thus, TED accommodates realistic sources while providing tailoring of the distribution at the target dependent on requirements and the source characteristics. Systems employing the edge-ray principle are based on their parent conic designs, thus there are two classes: elliptical and hyperbolic.
Tailored designs are currently at the forefront of reflector design technology in the nonimaging optics sector. Note that tailored designs started within the realm of reflective illumination optics, but they are now present in all sectors of nonimaging optics, from refractive to hybrid. Tailored reflectors use one of two methods: discrete faceted or continuous/freeform reflectors. The former is comprised of pseudo-independent, reflective segments to transfer the light flux. The freeform design can be thought of as a faceted design but made up of an infinite number of individual segments. The designs are typically smooth, although discontinuities in the form of cusps (i.e., the first derivative is discontinuous) are sometimes employed, but steps are not allowed since that is the feature of faceted designs. While tailored designs employ optics principles in order to define their shape, the chief goal is to maintain a functional acceptance angle profile. Recently, applications that are not based entirely on the optics of the problem have dictated that the edge-ray can limit performance of the overall system. These systems use interior rays to motivate the reflector development, thus this domain is called nonedge-ray reflector design (NERD) [15]. One application is optical pumping of gain material, which introduces some imaging properties in order to obtain more efficient laser output at the expense of transfer efficiency [16].
Examples of reflective illumination designs include: (i) a luminaire for room lighting, (ii) a faceted headlight, (iii) a faceted reflector coupled to a source for projection and lighting applications (i.e., MR16), and (iv) a reflector coupled to an arc source for emergency warning lighting. For luminaires for architectural lighting, the shape of the reflector is either a conic or an arbitrary freeform surface. In both of these cases, there is minimal investment in the design of the reflector; rather, a subjective look and feel is the goal of the design process. In recent years, luminaire designs for specific architectural applications, such as wall-wash illumination, have integrated tailoring in order to provide better uniformity and a sharp cutoff in the angular distribution. Increasingly, especially with the development of solid-state lighting, luminaire reflectors employ some level of tailoring in order to meet the goals of uniformity while also maintaining high transfer efficiency. They must also meet marketing requirements that drive their appearance in both the lit and unlit states. For example, the headlight shown in (ii) must adhere to stringent governmental illumination standards (e.g., ECE [European] or FMVSS/SAE [US]), but must also conform to the shape of the car body and provide a novel appearance.
Specular reflectors employ either bulk reflective materials, such as polished aluminum or silver; reflective materials deposited on substrates; or dielectric thin-film coating stacks deposited on polished substrates. For reflective deposition, the coatings are typically aluminum, chromium, or even silver and gold. The substrate is typically an injection-molded plastic. For dielectric coatings, the substrates can be absorbing, reflective, or transmissive. Dielectric coatings on reflective substrates protect the underlying material but also can be used to enhance the overall reflectivity. Dielectric coatings placed on absorbing or transmissive substrates can be used to break the incident spectrum into two components so that the unwanted light is removed from the system. An example is a hot mirror that uses a dichroic placed on an absorbing substrate. The visible light is reflected by the coating, while the infrared (IR) light is absorbed by the substrate. In all cases, the law of reflection can be used to explain the behavior of the reflected rays. For reflection, which is also shown in Figure 1.7, the incident ray (r) angle, θ, is equal to that of the reflected ray (r″) angle, θ″,
θ″ = θ (1.32)
Note that we update the prime notation used for refraction to a double prime notation to denote reflection. The development employed in the previous section can be used to determine the direction cosine equations for the propagation of a reflected ray by using the law of refraction with n′ = n″ = −n and r′ = −r″.* Using this formalism, Snell’s law for reflection gives θ = −θ″, and Equations (1.28) and (1.29) upon reflection give
nr″ = nr − n(cos θ″ + cos θ)a (1.33)
and
r″ = r − 2a cos θ (1.34)
The direction cosines are then written,
L″ = L − 2aL cos θ, M″ = M − 2aM cos θ, N″ = N − 2aN cos θ (1.35)
Thus, the reflection equations are essentially the same as Equations (1.30) and (1.31), and optical ray-tracing software often implements Equation (1.30) with the caveat that a negative index of refraction is used along with a reversed reflection vector.
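The reflection result of Equations (1.34) and (1.35) is even simpler to implement than refraction. A Python sketch (the function name is illustrative):

```python
def reflect(r, a):
    """Reflect unit ray r about unit surface normal a, per Eqs. (1.34)-(1.35):
    r'' = r - 2 (a . r) a, i.e., L'' = L - 2 a_L cos(theta), etc."""
    cos_t = sum(ri * ai for ri, ai in zip(r, a))  # a . r = cos(theta)
    return tuple(ri - 2.0 * cos_t * ai for ri, ai in zip(r, a))

# A ray heading into a surface whose normal is along +z has its z component
# flipped, while the transverse components are unchanged:
print(reflect((0.6, 0.0, 0.8), (0.0, 0.0, 1.0)))  # (0.6, 0.0, -0.8)
```

No trigonometric evaluation is needed, which is one reason ray tracers fold reflection into the refraction routine via the negative-index convention noted above.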
TIR optics use the “frustration” of refraction to propagate light within the higher-index material, which traps the light. Examples of TIR illumination optics include (i) large-core plastic optical fibers, (ii) lightpipes, (iii) lightguides for display applications, and (iv) brightness enhancement film. TIR is not standard in imaging design, being used sparingly in components such as prisms and fibers. In illumination optics, TIR optics are gaining in popularity, especially in hybrid form as discussed in Section 1.8.5. This popularity gain is due to real and perceived benefits, including higher transfer efficiency, assistance with homogenization, compact volume, and guiding of the light along the component’s length. Unfortunately, turning these benefits into realistic TIR optics requires a sizable investment in design and fabrication.
The higher transfer efficiency is due to the 100% Fresnel reflection as long as the angle of incidence is at or greater than the critical angle, θc. The critical angle is governed by Snell’s law
sin θc = n′/n (1.36)
where this angle describes the point at which light that would refract from the higher index material (n) into a lower-index material (n
