Lithography




Table of Contents

Foreword

Introduction

Chapter 1. Photolithography

1.1. Introduction

1.2. Principles and technology of scanners

1.3. Lithography processes

1.4. Immersion photolithography

1.5. Image formation

1.6. Lithography performances enhancement techniques

1.7. Contrast

1.8. Bibliography

Chapter 2. Extreme Ultraviolet Lithography

2.1. Introduction to extreme ultraviolet lithography

2.2. The electromagnetic properties of materials and the complex index

2.3. Reflective optical elements for EUV lithography

2.4. Reflective masks for EUV lithography

2.5. Modeling and simulation for EUV lithography

2.6. EUV lithography sources

2.7. Conclusion

2.8. Appendix: Kramers–Kronig relationship

2.9. Bibliography

Chapter 3. Electron Beam Lithography

3.1. Introduction

3.2. Different equipment, its operation and limits: current and future solutions

3.3. Maskless photolithography

3.4. Alignment

3.5. Electron-sensitive resists

3.6. Electron–matter interaction

3.7. Physical effect of electronic bombardment in the target

3.8. Physical limitations of e-beam lithography

3.9. Electrons energy loss mechanisms

3.10. Database preparation

3.11. E-beam lithography equipment

3.12. E-beam resist process

3.13. Bibliography

Chapter 4. Focused Ion Beam Direct-Writing

4.1. Introduction

4.2. Main fields of application of focused ion beams

4.3. From microfabrication to nanoetching

4.4. The applications

4.5. Conclusion

4.6. Acknowledgements

4.7. Bibliography

Chapter 5. Charged Particle Optics

5.1. The beginnings: optics or ballistics?

5.2. The two approaches: Newton and Fermat

5.3. Linear approximation: paraxial optics of systems with a straight optic axis, cardinal elements, matrix representation

5.4. Types of defect: geometrical, chromatic and parasitic aberrations

5.5. Numerical calculation

5.6. Special cases

5.7. Appendix

5.8. Bibliography

Chapter 6. Lithography Resists

6.1. Lithographic process

6.2. Photosensitive resists

6.3. Performance criteria

6.4. Conclusion

6.5. Bibliography

First published 2011 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc. Adapted and updated from Lithography published 2010 in France by Hermes Science/Lavoisier © LAVOISIER 2010

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd

27-37 St George's Road

London SW19 4EU

UK

www.iste.co.uk

John Wiley & Sons, Inc.

111 River Street

Hoboken, NJ 07030

USA

www.wiley.com

© ISTE Ltd 2011

The rights of Stefan Landis to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Cataloging-in-Publication Data

Landis, Stefan.

 Lithography / Stefan Landis.

   p. cm.

 Summary: “Lithography is now a complex tool at the heart of a technological process for manufacturing micro and nanocomponents. A multidisciplinary technology, lithography continues to push the limits of optics, chemistry, mechanics, micro and nano-fluids, etc. This book deals with essential technologies and processes, primarily used in industrial manufacturing of microprocessors and other electronic components”-- Provided by publisher.

 Includes bibliographical references and index.

 ISBN 978-1-84821-202-2 (hardback)

 1. Microlithography. I. Title.

 TK7872.M3L36 2010

 621.3815'31--dc22

2010040731

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 978-1-84821-202-2

Foreword1

“An image is a pure creation of spirit.” (Pierre Reverdy)

Today, in a world of eternal representation, we are the observers of the theater of the grand image for as far as the eye can see, a theater which incessantly unfolds in the marvelous recording box that is our brain. Though we see them, the touch and even the substance of illustrations sometimes escape us completely, so much so that we can almost not differentiate between representative illusion and the physical reality of beings and things. Yet, the representation of the world in our eyes is not the same as the one that we want to transpose, to put into images. There, the reality of that which is visible is captured by our brains, which makes copies which are sometimes faithful, sometimes untrue. To produce these images we have, since the dawn of mankind, resorted to sometimes extremely complex alchemies, where invention has struggled with various materials, as a result of which we have been able to leave behind our illustrated drawings, the prints of our lives and of our societies.

For some 32,000 years man has not stopped etching, doodling, drawing, copying, painting, reproducing – for nothing, for eternity – producing millions of infinite writings and images which are the imperishable memory of his genius. How did he do it, with which materials, on what, and why? The alchemy of representation, in its great complexity, deserves to be slowed down, so that we can try to understand, for example, how today's images reach us in a kind of gigantic whirlwind, whereas 200 years ago these things were still rather sober. Or how else could we go from an image that we can look at, to an image that is difficult to see, or to one that we cannot even see with the naked eye? Whereas now we throw things away, in the past images were preciously preserved. Are the images which we do try to preserve today not the same as the ones we were preserving yesterday?

It is amongst the cavemen that that which I call the image maker can first be outlined. Collating their visions, their dreams, their beliefs on cave walls, these first imagicians undoubtedly bequeathed to us the only widely known account of this period. In their wake, we will be able to better evaluate the formal evolution of the visual representation of nature and things, this inevitable invention in which we endeavor to capture the spirit through an artefact.

Man had to train long and hard to finally tame and durably transmit the images of the world which surrounded him. The techniques employed across the ages to make and convey these images, the materials, the pigments, the bindings, the instruments and the mediums, either natural, chemical or manufactured, not only conditioned the appearance of the image itself but also its durability.

Cave paintings, coins, palaces, churches, are just some of the mediums which have left us with invaluable visual evidence of more or less remote pasts, sometimes essential for putting together the history of humanity. If we consider the manufacturing and the trading of images from the beginning, and in its totality, we can distinguish two major periods: the longest, the pre-photographic; and the post-photographic, which began in the first half of the 19th Century, and which is therefore extremely recent. Admittedly, our eyes can see but they cannot take photographs. The images that they collect are transitory fragments in a “bandwidth”, a time kept in the memory, often lost, far from any material existence, and for which any attempt at verbal transcription is on this side of reality. For other animals, sight is part of a sub-conscious effort to survive. For man, by contrast, sight is a conscious irreplaceable instrument, appreciating the outside world, which is an integral part of his own physical and mental development. For us, to see is natural. However, representing what we see calls upon a certain kind of initiation. How were the first painters of history introduced to engraving and drawing? How were they able to find or invent the tools and materials needed to succeed?

The tools, materials and shapes are precisely the three essential ingredients needed to build, needed to formalize the representation of the visible. Footprints on sand, for example, undoubtedly the first prints left by man, were already kinds of natural images of the body, and most probably were the root of the original idea to make images. The tool here was man's own foot, with its shape, using a soft and flexible material, a support able to keep an image. Thus, without any doubt, the earth and sand were among the first image mediums, even before other sketches came to cover other materials, and other surfaces.

The various attempts leading to the reproduction and spreading of visible images or texts, little by little, drove man to develop very clever techniques, sometimes born out of chance, or sometimes by increasingly elaborate research. The first stone engravings (from before 600 BC) precede, by a long time, the first examples of wood engravings (c. 200 AD), or metal engravings made by a direct method, then etchings, or the invention of typographical characters, and, finally, lithography itself, which has been, from the 19th Century onwards, a practically irreplaceable means of reproduction, and remains an essential part of the book and publicity industries, even today.

The document media have also diversified and evolved incessantly since the beginning. Stone, bone or ivory, terracotta, glass, skins, leaves, wood, parchment, paper, celluloid, vinyl, are just some of the aids bequeathed to us, with greater or lesser clarity or brittleness, the precious evidence of life and the history of mankind.

In 1796, 43 years before the invention of photography, the lithographic reproduction technique was invented by Aloïs Senefelder in Germany. Developed through the first half of the 20th Century, it brought, without question, the most important graphic revolution in the worlds of text reproduction and printed images. In this respect, we can consider two very great periods in the history of print: one, the pre-lithographic period, and the other which began with lithography in all of its forms. Here, two distinct lithographic fields start to truly develop: on one side, the advanced forms of the graphics industry (and the photolithographic industry); and, on the other side, a completely innovative form of artistic expression, now freed from the technical constraints of engraving and now able to devote itself with joy to those much freer forms of graphics, with drawings made (or transferred) directly onto the lithographic support itself. These two domains participated, together, in the technical developments which led finally to the offset printing methods used overwhelmingly today and which profit from these most advanced technologies.

As far as the photographic reproduction of images was concerned, one major challenge was the faithful reproduction of half-tones. This problem was solved in 1884 by Meisenbach, the inventor of the linear screen which was quickly applied to typographical image reproduction and then successively to photo-lithography and to offset printing. This photographic support itself already contained the seeds and the “secret” of the visibility of half-tones, incorporating the smoothness of the granular nature even of photosensitive emulsions. But to print them, it was necessary to find a way of transcribing them in a printing matrix, initially in black and white, and then later in color. An interesting characteristic is that the various screens which we have just alluded to, in particular the finest or ultra-fine (higher than 80 lines/cm) or the most recent digital grids forming an ultra-fine grid of random dots, have always tried to more or less blend in, until made invisible to the naked eye. The printed images our eyes can see are actually optical illusions. Today, if we look closely at a beautiful reproduction of an engraving by Dürer, or at a painting by Velázquez, for example, it is impossible to distinguish the dots from the printing screens which they are made from. Already in the 19th Century, commercial chromolithography had used clever methods to create half-tones, either with the proper matrix granulation (stones or granulated metal), or by dots, drawn very finely with a feather, which simultaneously allowed the ranges and mixtures of the colors, of which there are some sublime examples. In the art field, it is nowadays necessary to use a microscope with a magnification of ×30 to determine the true nature of a printing technique.

Even in the first half of the 20th Century, we saw the first steps of a very new aid to knowledge. Indeed, 1936 and the publication of a founding article by Alan Turing, “On computable numbers with an application to the Entscheidungsproblem”, is the true starting point of the creation of programmable computers. But it was especially from the 1980s that the use of computers was democratized and, little by little, became essential to the world of information and imagery. From then on, texts and images have been created by each and everyone, with no need to be preserved in a physical, material way, but instead held on other media which we would not have dared to even imagine 30 years earlier. The image, which is still the product of another optical illusion, while keeping its own graphic originality, from now on needs no hardware support to be visible. It has its own light, can be modified at will, engraved, printed, and sent to the entire world with the single touch of a button. The image, in this case, is created in all its subtleties of color and light, not by a material screen, but by something which replaces it virtually, a succession of dots invisible to the eye (pixels) which are now at the origin of texts and images digitally recorded on our computers.

During the second half of the 20th Century, the American Jack Kilby invented the very first integrated circuit (in 1958), another artefact in the service of knowledge transmission which is at the root of modern data processing, and the mass production of electronic chips with integrated transistors began not much later. For his work and his roughly 60 patents, Kilby received the Nobel Prize for Physics in 2000. All these circuits are used in a more or less direct way nowadays, in information recording and image handling and storage. The big family of integrated circuits and microprocessors continues to move forward, and with them has come another new technology, microscopic photolithography, which makes new plate sensitization techniques possible and, thanks to the use of masks and light beams, the engraving of circuit supports in smaller and smaller micro-relief (such as, for example, the various chip-cards with integrated circuits, whether analog or digital).

At the beginning of the third millennium, another “image” architecture was already on the horizon, in a nanosphere with still vague contours, which curiously made us swing from a visible optical illusion towards an invisible physical reality. Indeed, from micro-photolithography to polymeric nanostructured materials by nanolithographic printing, the miniaturization of three-dimensional engraved spaces took a giant leap forward. Micro-dimensions are already virtually invisible to the naked eye; nano-dimensions will need a scanning electron microscope to be seen.

Lithography has thus exceeded the old domains of printed texts and of the “macro-image” with which we were more familiar, to reach other limits, in a new nano-imagery resolutely emerging from a dream world.

Ultra-miniaturized circuits, texts and images can, from now on, be conceived in infinitesimal spaces, and it may even be possible to think that millions of images, for example, could in the future easily be stored in less than one square meter of recording space.

However, we still know little about the stability and perennial nature of these digital media. How will the enormous mass of documentation recorded each day, all the images and mixed texts, be preserved? What will become of them in the coming centuries? We, who have already benefitted from many “recordings” of the past, also have a shared responsibility for the way in which we leave our imprints for future generations. From now on, we dare to hope, copying and the successive multiplication of documents will allow a kind of systematic and unlimited preservation of writings and images for the future.

Jörge DE SOUSA NORONHA

1 Foreword written by Jörge DE SOUSA NORONHA.

Introduction

Implications of Lithography1

The microelectronic industry is remarkable for its exponential growth over recent decades. At the heart of this success is “Moore's law”, a simple technical and economic assessment according to which it is always possible to integrate more and more functions into a circuit at reduced costs. This observation, made in the mid-1960s, has been transformed into a passionate obligation to fulfill its own prophecy, and has focused the efforts of an entire generation of microelectronics researchers and engineers.

Anyone talking about greater integration density is thinking about increasing our capacity to precisely define and place increasingly smaller components, building and using materials to support them. Lithography is succeeding in this arena, using increasingly sophisticated techniques, and is essential to the progress of the semiconductor industry because it allows a reduction in the size of patterns as well as an increase in the integration density of the integrated circuits at an economically acceptable cost.

The issue of dimension is considered so central to all microelectronic improvements that the industry names each generation of the process, or each technology node, after a dimension which characterizes the technology; often, the half-pitch of the densest interconnection level is used. For a 45 nm technology, for example, the minimum period of the interconnection pattern is 90 nm. Doubling the integration density of a circuit means shrinking its linear dimensions by a factor of about 0.7 (1/√2): the typical nominal dimensions of advanced technologies follow one another at this rate, from 90 nm to 65 nm, then 45 nm, 32 nm, 22 nm, etc.
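As a quick numerical check of this scaling rule, the short sketch below (plain Python, purely illustrative) applies the 0.7 linear shrink repeatedly and shows how the integration density doubles at each step; the starting 90 nm half-pitch is the only value taken from the text.

```python
# Illustrative check of the 0.7x linear-shrink rule quoted above:
# doubling the density (halving the area of a feature) corresponds to
# multiplying its linear dimensions by 1/sqrt(2) ~= 0.707.
import math

shrink = 1 / math.sqrt(2)          # ~0.707 linear shrink per generation
node = 90.0                        # starting half-pitch in nm

for generation in range(5):
    density_gain = (90.0 / node) ** 2   # relative integration density vs the 90 nm node
    print(f"{node:5.1f} nm node  ->  {density_gain:4.1f}x the 90 nm density")
    node *= shrink
```

The printed sequence (90, 63.6, 45, 31.8, 22.5 nm) is close to the commercial node names 90, 65, 45, 32 and 22 nm quoted above.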

From a very simplistic point of view, the fabrication of integrated circuits concatenates and alternates two types of processing on the wafer (Figure I.1); either:

– a functional layer is deposited over the whole wafer and then localized by a lithographic process, the extra material being removed from the non-selected areas (subtractive process): this is the case, for example, for contact holes through an insulating layer; or

– a specific area is defined where a technological process is locally applied, the confinement system being removed at the end of the step (additive process): this is the case for ion implantation or localized electro-deposition.

The efficiency of the lithographic process depends on only a few fundamental parameters:

– the capability of printing even the smallest patterns, or resolution;

– the precise alignment of each layer of a circuit;

– the capacity to obtain repeatable patterns, of a controlled geometrical shape;

– the capacity to control fabrication costs as a function of the products' typology.

A greater integration density implies that the very smallest patterns must be manufacturable, hence the focus on the ultimate resolution of lithography techniques. Patterns of around ten nanometers no longer surprise anyone, and even atomic resolution is now achievable under today's most sophisticated experimental conditions.

Optical lithography remains the preferred choice for production. Although it was repeatedly predicted that it would be abandoned once the physical limits of the micron, and then of 100 nm, were crossed, it remains today the preferred technique for mass production at 32 nm, thanks to the numerous innovations of the past 20 years.

In optical lithography, a polymer layer called a photosensitive resist is deposited on a wafer. This resist is composed of a matrix which is transparent to the exposure wavelength and contains photosensitive compounds. When the image of the patterns from a mask is projected onto the wafer (and onto the photosensitive resist), the areas exposed are submitted to a photochemical reaction which, if completed correctly, enables the dissolution of the resist in those areas (in the case of positive resists), or prevents dissolution (in the case of negative resists). We can therefore obtain perfectly delimited areas for which the substrate is bare, and have areas still protected by the resist, allowing a subsequent local treatment. At the end of the process, the resist is removed from the wafer. During the fabrication of integrated circuits, this step is repeated several dozen times, hence the central role of lithography in microelectronics.

Figure I.1.A localized process using lithography can be (a) subtractive (by locally removing non-functional material), or (b) additive (by forcing the local treatment of the wafer where it is required)

In order to understand simply how this technique reaches its highest resolution, we can refer to the standard formula giving the resolution, R:

R = k1 · λ / NA

in which λ is the wavelength of the exposure light, NA the numerical aperture of the projection optics and k1 a factor depending on the technological process. Each of these factors corresponds to a way of improving the image resolution.
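As a purely illustrative application of this formula, the following sketch evaluates R for a few parameter sets; the k1, wavelength and NA values are common textbook numbers assumed here, not figures taken from this book.

```python
# Resolution R = k1 * lambda / NA for a few illustrative settings.
# The values below are typical textbook numbers, used only to show the trend.
def resolution_nm(k1: float, wavelength_nm: float, na: float) -> float:
    return k1 * wavelength_nm / na

settings = [
    ("248 nm, NA 0.60, k1 0.5", 0.5, 248.0, 0.60),
    ("193 nm, NA 0.93, k1 0.4", 0.4, 193.0, 0.93),
    ("193 nm immersion, NA 1.35, k1 0.3", 0.3, 193.0, 1.35),
]

for label, k1, wl, na in settings:
    print(f"{label:36s} -> R = {resolution_nm(k1, wl, na):5.1f} nm")
```

Each of the three levers named in the text (smaller λ, larger NA, smaller k1) visibly pushes R downwards.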

The first transition came in the 1990s with the use of deep ultraviolet excimer lasers, first with 248 nm (with a KrF laser) and then 193 nm (with an ArF laser), and allowed feature size resolution below the 0.1 µm limit to be reached. However, this evolution required major changes in either projection optics (use of CaF2 in addition to quartz) or in the choice of the transparent matrix of the photosensitive resist.

Reducing the k1 parameter then appeared very promising. This is achieved first by improving the resist process, for example by increasing its contrast through nonlinear phenomena or by controlling the diffusion of the photosensitive compound. By optimizing the illumination (annular, quadripolar, etc.), it is also possible to gain resolution and process control, but often at the price of favoring certain pattern shapes or orientations.

It has been, above all, by mastering diffraction phenomena, and thus influencing the phase of the exposure light, that progress has been the most spectacular: it is now acknowledged that it is possible to go beyond the Rayleigh criterion and print patterns even smaller than the exposure wavelength. From laboratory curiosities, these techniques have become the workhorse of the microelectronics industry and are known as “Resolution Enhancement Techniques”.

In a very schematic manner, for a given illumination and resist process, we will try to calculate what the patterns and phase-differentiated areas on the mask should be in order to obtain an image on the wafer which matches the image initially conceived by the circuit designers. These inverse calculations are extremely complex and demand very powerful computers (in some cases taking up to several days, which affects the cycle time of prototypes of new circuits). The goal is to take into account the proximity effects between neighboring patterns (hence a combinatorial explosion of the calculation time) using the most precise optical models possible (and, as technologies improve, it is important to take into account not only intensity and phase but also light polarization). The resulting pattern on the mask becomes particularly complex, and the cost of a mask set for a new circuit can exceed several million dollars for the most advanced technologies, which can become a major obstacle for small production volumes.

Despite this complexity, it is increasingly difficult to find a solution for arbitrary patterns (called random logic patterns, even though this term is inappropriate). The idea arose to simplify the problem by grouping the most periodic patterns (which are therefore easier to process) and obtaining the desired design on the wafer through multiple exposures. This approach, despite its significant production cost, has become common in the most advanced technologies.

The numerical aperture (NA) of the projection tool has also been increased, even though we know that a higher NA comes at the expense of depth of field. As NA has increased over recent years, the size of the exposed field has decreased. This is why patterns were “photo-repeated”, by repeating the exposure of a field a few centimeters in size over the entire wafer (the tool used is called a photo-repeater or “stepper”); the exposed area was then reduced a little further by scanning a light slit over the exposure field (using a tool called a “scanner”). Unfortunately, lithography remained limited by the numerical aperture, which cannot exceed 1 in air.

Researchers then returned to older optical knowledge: by adding a layer of liquid (with a higher index than air) between the last lens of the exposure tool and the resist, this limit can be exceeded. This “immersion lithography” was not established without difficulties: the defect density generated by the process was at first high, not to mention the increased complexity of the lithographic tool. The major difficulties encountered with 157 nm lithography, combined with the need to keep decreasing dimensions, nevertheless made this technique viable, and it is starting to be used for mass production.

The next step was to increase the refractive index of the liquid to above that of water, and that of the projection systems (the lenses) to above that of quartz. However, as in the case of 157 nm, this approach is blocked by major material problems, and the future of this path beyond the resist-water-quartz system seems highly uncertain.

Many believe that a major decrease in the exposure wavelength would significantly relax the constraints that apply to lithography. Hence there has been a unique worldwide effort to develop extreme ultraviolet (EUV) lithography at the 13.5 nm wavelength. However, despite an enormous effort over the past two decades, this technology remains blocked by major problems of source power and by the lack of industrial facilities able to produce defect-free masks. Initially foreseen for introduction at the 90 nm technology node, it is now struggling to address 22 nm technologies. As a result, aspects initially considered peripheral, such as high numerical aperture optics, have come back to the forefront, even though other technological problems remain unresolved for industrial manufacturing.

Complexity has considerably increased the cost of lithography for the fabrication of integrated circuits for the most advanced technologies. The newest immersion scanners, in addition to their environment (resist coating track, metrology) easily cost over $50 million each, and it would not be surprising if a price of $100 million was reached with EUV, hence the large amount of research into alternative technologies to optical lithography in order to either significantly decrease the cost or to address very specific applications that do not necessarily need the most advanced lithographic tools.

One alternative technique was established a long time ago: electron beam (often called “e-beam”) lithography. This technique is not limited by wavelength or by depth of field, thus making it very attractive. The absence of a mask is an additional advantage when looking at the never ending increase of mask prices, especially in the case of small volume production. The disadvantage of this technique is that pattern printing can only be achieved sequentially (the electron beam writes in the resist pixel after pixel), which does not allow high enough productivity for mass production. In addition, e-beam can no longer claim its superiority in terms of resolution and alignment precision because of the continuous progress of optical lithography. However, new projects are being developed, among which is the idea of massively multiplying the number of independently controlled beams (tens of thousands of beams is the number stated): productivity would then increase significantly, with the prospect of it being applied to small volume production. In addition to this application, electron beam lithography remains a preferred tool for research activities that can combine flexibility, dimension control and affordable price. It can also be used to precisely repair circuits (or to print specific patterns on demand), using either an electron or an ion beam.
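To make the productivity argument concrete, here is a rough order-of-magnitude estimate of single-beam write time; every number (pixel size, flash rate, pattern coverage) is an assumption chosen for illustration, not a figure from this book.

```python
# Order-of-magnitude estimate of single-beam e-beam write time for one wafer.
# All parameters are illustrative assumptions.
import math

wafer_diameter_mm = 300
pixel_nm = 20                      # assumed pixel (spot) size
flash_rate_hz = 100e6              # assumed exposure (flash) rate
coverage = 0.5                     # assumed fraction of pixels actually exposed

wafer_area_nm2 = math.pi * (wafer_diameter_mm * 1e6 / 2) ** 2
pixels = coverage * wafer_area_nm2 / pixel_nm ** 2
hours = pixels / flash_rate_hz / 3600
print(f"~{pixels:.2e} exposed pixels -> ~{hours:.0f} hours per wafer with one beam")
# Multiplying the number of independently controlled beams by ~10,000, as the
# projects mentioned above propose, would bring this into the range of minutes.
```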

Other alternative techniques offer interesting prospects for precise applications:

– nanoimprint lithography, similar to the techniques used to fabricate CDs or DVDs from a master. This enables nanoscale resolutions to be achieved, and could emerge as a contender technology if there were only one lithographic level. It has also been shown that this technique could be used to print three-dimensional patterns. The stacking of dozens of layers in integrated circuits is still to be demonstrated industrially, in particular in terms of alignment precision and defect density due to fabrication;

– near-field lithography is still the ideal tool when aiming for ultimate resolution (potentially positioning atoms one by one). In its current state it suffers from the same intrinsic limitation as electron beam lithography (low productivity), as well as from difficult tuning when reaching ultimate resolutions, but this technique could open up real prospects with tip arrays of the Millipede type;

– X-ray lithography was, for a long period after the major efforts of the 1980s, not considered adequate to become an industrial technique. The weakness of the sources (unless synchrotrons, which are huge experimental systems, are used), the difficulty of fabricating transparent masks and the absence of reduction optics have heavily handicapped the future of this technique. However, it remains useful for specific applications (such as the LIGA technique1), given its great depth of field, which can be exploited in microsystems.

A special note should be made about self-organization techniques. These rely on a simple fact: nature seems to be able to generate complex structures from apparently simple reactions. More specifically, local interactions can induce unexpected or even complex emerging behaviors: this is called self-organization. Convincing examples of periodic structures generated by these techniques are regularly reported in the scientific literature; however, it is hard to see how to exploit this approach to produce future low-cost microprocessors. Two directions are currently being pursued:

– the use of these phenomena to locally improve process quality. For example, the use of resists based on copolymers could help reduce the line roughness of lithographic patterns; and

– the notion of “directed self-assembly” or “templated self-assembly”, which is the most important direction for more complex structures. This is about defining and implementing boundary conditions that, using local self-organization forces, can generate the desired complex structures.

Finally, it is important to remember that the fabrication cost aspect of these emerging technologies remains completely speculative, since the technical solutions to be implemented on an industrial scale are still unknown.

This focus on ultimate resolution as the connecting thread of this book should not hide other technical elements that are also critical to lithography's success. Popular accounts often forget that the capacity to precisely overlay two patterns contributes greatly to the capacity to integrate many components in a circuit. Indeed, if patterns are misaligned, an area around the pattern has to be freed to ensure the functionality of the circuit, thus reducing the integration density (Figure I.2). Achieving alignment with a precision equal to a fraction of the minimum pattern size (a few nm), and measuring it, represents a challenge that lithography has so far been able to meet.

Figure I.2. The precise alignment of patterns printed at different lithographic levels influences the component integration density of a circuit. For example, in the case of the command electrode of a transistor: (a) with significant misalignment, the command electrode could possibly no longer control the active zone of the component. (b) In order to avoid this situation, the electrode's size is increased. As a result, neighboring electrodes must be moved apart, thus degrading the integration density

The functionality of a circuit depends on the precision with which the patterns are printed on the wafer. Metrology is therefore a key element in mastering the production yield, while the demands regarding precision, information integrity and measurement speed keep growing. Previously, optical microscopy techniques were enough to measure, in a relative way, the two critical parameters of a lithographic step, namely the dimension of the pattern and its alignment with respect to the underlying layers. As dimensions have decreased, standard optical techniques have been replaced by different approaches:

– the use of an electron beam microscope (and more recently near-field techniques) enabled a natural extension to the smallest dimensions;

– light scattering from periodic patterns (scatterometry, for example) gave access to more complete information on the dimensions and shape of the patterns, even though the interpretation of the results remains uncertain. A move towards shorter wavelengths (SAXS with X-rays, for example) opens up new perspectives (as well as some advantages, for example substrate transparency).

However, the challenges to be met keep increasing. A relative measurement is no longer sufficient to guarantee a circuit's performance, and the possibility of absolute metrology at the nanometer scale remains an open question. In addition, the shape of the pattern is increasingly complex: a three-dimensional measurement is essential, at least for mass production, even if the techniques used are still in the embryonic stages. Finally, the proximity effects between patterns make the measurement indicators less representative of the complexity of a circuit: the metrology of a statistical collection of meaningful objects in a complex circuit is a field of research that is still wide open.

It is important to mention a technical field which, even if not part of lithography in the strictest sense, is connected to it to a large extent: the measurement of physical defects in a production process. Indeed, the analysis and measurement of defectivity is interesting in two different aspects:

– for defects with an identified physical signature, techniques similar to lithography can be applied because it concerns acquiring an image with optical techniques (in the broad meaning, including charged particle beams) and treating it in order to extract meaningful information; and, additionally,

– lithography is unique in the way that, in the case of the detection of a defect during this step, it is usually possible to rework the wafer and thus avoid the permanent etching of the defect into the circuit.

In conclusion, lithography has undergone several decades of unimaginable progress, by-passing unproven physical limits thanks to the ingenuity of microelectronics researchers and engineers. Even if questions emerge about the economic viability of dimension decrease at all costs, major steps forward are expected during the coming years, either in terms of the solutions reached, the integration density or capacity to produce cheap complex structures.

1 Introduction written by Michel BRILLOUËT.

1 LIGA is a German acronym for Lithographie, Galvanoformung, Abformung (Lithography, Electroplating, Molding).

Chapter 1

Photolithography1

1.1. Introduction

Since the beginning of the microelectronics industry, optical lithography has been the preferred technique for mass fabrication of integrated circuits, as it has always been able to meet the requirements of microelectronics, such as resolution and high productivity.

Optical lithography has also adapted to technology changes over time, and it is expected to remain usable down to the 45 nm, 32 nm [ITR] and maybe even the 22 nm technology nodes (Figure 1.1).

The principle of this technique is to transfer the image of patterns inscribed on a mask onto a silicon wafer coated with a photoresist (Figure 1.2). The image is optically reduced by a factor M, where M is the projection optics reduction factor, which generally equals 4–5. The different elements of a lithography tool are detailed below.

However, due to the continuous decrease of chip dimensions, the tools used in optical lithography have now become very complex and very expensive. It is thus necessary to consider using low-cost alternative techniques in order to reach the resolutions forecast in the International Technology Roadmap for Semiconductors (ITRS) (Figure 1.1).

Figure 1.1. ITRS 2007 roadmap for photolithography [ITR]

An optical projection lithography tool consists of a light source, a mask (or reticle) containing the drawing of the circuit to be made, and an optical system designed to project the reduced image of that mask onto the photoresist coating the substrate (Figure 1.2). The purpose of this chapter is to introduce the principles and performance of optical lithography, as well as alternative techniques known as “new generation” techniques.

During exposure, the resist is chemically altered, but only in the areas where light is received. It then undergoes a baking process which makes the exposed zones either sensitive or insensitive to the development step.

In the case of a “positive” photoresist, the exposed part is dissolved. There are also “negative” photoresists for which only the non-exposed zones are soluble in the developer solution. The resist is therefore structured like the patterns present on the mask: this will define the device's future process level.

Figure 1.2. Diagram of a scanner

Thus the patterns defined can then be transferred to the material underneath during an etch process step. The resist that remains after the development step is used as an etch mask: the areas protected by the resist will not be etched. This is also used for selective ion implant in the open areas. All these steps are shown in Figure 1.3.

Figure 1.3. Sequence of lithography and etch technology steps

1.2. Principles and technology of scanners

1.2.1. Illumination

Illumination consists of a source and a condenser. The source must be powerful, as it sets the exposure time for a given dose and thus helps determine the tool's throughput, which is a major economic factor. It must work at a wavelength for which photoresists have been optimized. Furthermore, it has to be quasi-monochromatic, as the optics are only efficient within a very narrow spectral range.

In order to improve the performance (such as the resolution) of lithography tools, as discussed below, it is necessary to reduce the source wavelength. To meet these criteria, different sources have been used over time, from mercury vapor lamps (436 nm g-line, 405 nm h-line and 365 nm i-line) to ultraviolet-emitting lasers and, up to the present day, deep ultraviolet radiation at 248 nm and 193 nm. The source is followed by a condenser made of a set of lenses, mirrors, filters and other optical elements. Its role is to collect and filter the light emitted by the source and to focus it at the entrance pupil of the projection optics (Figure 1.2). This type of illumination, called “Köhler” illumination, has the characteristic of imaging the source in the pupil of the projection lens rather than on the mask, as is the case with critical or Abbe-type illumination. This ensures good illumination uniformity on the mask.

It will be seen later that the illumination geometry (circular, annular, bipolar) of such a projection lithography system can vary to improve the imaging performance. This is the widely used concept of partial coherence which is part of the image-shaping process.

1.2.2. The mask or reticle

The mask is a critical part of the lithography tool, as the patterns defined on it are to be reproduced on the wafer. The quality of the integrated circuits directly depends on the mask set used, in terms of dimensions, flatness, drawing precision and defect control. The mask manufacturing process is an important aspect of the technology.

As stated in the ITRS for 32 nm technology node masks (expected in 2013) (Figure 1.4), it is predicted that CD uniformity (in other words, the uniformity of the printed pattern dimensions) will have to be controlled to within 1 nm, and that the defect size will have to be kept below 20 nm. In addition, pattern drawing on the mask becomes increasingly complex as the diffraction limit gets closer.

Figure 1.4. Extract from the ITRS recommendations for masks (http://www.itrs.net)

These days, in order to improve the performance of lithography, Optical Proximity Corrections (OPCs) are made by optimizing the shape of the patterns on the mask.

As will be mentioned later, this is part of a whole set of reticle enhancement techniques (RETs). Thus the cost of a mask becomes an important parameter that must not be neglected in the final cost of a chip. As many masks as there are levels (several dozens) are required, and this is why much effort has been put into developing new maskless lithography techniques.

The simplest masks used in lithography are binary masks. They consist of a substrate made of a material transparent at the exposure wavelength, typically 6 inch square, ¼ inch thick fused silica plates for the 193 nm and 248 nm wavelengths. The patterns are etched into a chrome layer a few tens of nanometers thick, which is absorbent at those wavelengths.

The mask is composed of either transparent or absorbent areas, hence the term “binary”. It is an amplitude mask, that is to say it only alters the amplitude of the wave going through it. That way, the electric field amplitude that goes through the silica does not change, whereas the field amplitude going through the chrome equals zero after the mask.

There is another type of mask that uses both the amplitude and the phase of the wave in the image-shaping process: the phase shift mask (PSM). This type of mask was first introduced in 1982 to improve lithographic performance [LEV 82]. Like those of a binary mask, the patterns of a PSM are made of chrome on a transparent fused silica substrate. In the case of a PSM, a material is added whose role is to shift the phase of the incident wave. There are two types of phase shift mask: the alternating phase shift mask, in which the phase shifting material and the chrome coexist, and the attenuated phase shift mask, in which the pattern is designed to attenuate the amplitude and shift the phase of the wave going through it. The attenuated PSM is typically used as an RET. How this type of mask impacts lithographic performance is explained later.
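As a side note, the 180° phase shift used in such masks is commonly obtained by tuning the thickness of a transparent layer; a minimal sketch of that standard relation follows, with an assumed refractive index for fused silica at 193 nm.

```python
# Thickness of a transparent layer giving a 180-degree phase shift:
# t = lambda / (2 * (n - 1)), a standard relation; the index value is an assumption.
def pi_shift_thickness_nm(wavelength_nm: float, n: float) -> float:
    return wavelength_nm / (2.0 * (n - 1.0))

# Fused silica has n ~= 1.56 at 193 nm (typical literature value, assumed here).
t = pi_shift_thickness_nm(193.0, 1.56)
print(f"~{t:.0f} nm of silica etched or added gives a pi phase shift at 193 nm")
```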

1.2.3. Projection optics

Projection lithography was developed in the 1970s along with the development of efficient refractive lenses, in other words optical elements working in transmission. Previously, images were made by contact or proximity printing with scale 1 masks. Projection lithography introduced the reduction factor M, which today typically equals 4. Having a reduction factor greater than 1 is an advantage, as it does not require the mask patterns to be the same size as the printed patterns, which relaxes some of the constraints on the mask manufacturing process.

Since their creation, projection optics have become increasingly complex in order to improve their performance, whilst increasing their numerical aperture: they are now composed of more than 40 elements and can be up to 1 m high and with a weight of approximately 500 kg (Figure 1.5). In fact, just like the wavelength, the numerical aperture is an important parameter which, as will be studied later, preconditions the resolution of the lithography tool.

Figure 1.5. Examples of projection optics and of a typical scanner (open)

Let us introduce here the concept of numerical aperture. The numerical aperture of a lens or an imaging device is defined as follows:

NA = nim · sin(θmax)

where nim is the refractive index of the medium and θmax the maximum half angle of the light cone, both taken on the image or the object side, depending on whether the numerical aperture is seen from the image or the object side, as represented in Figure 1.6. Indeed, an optical element has two numerical apertures, linked to each other by the lens magnification: one on the image side and one on the object side.

The object and image numerical apertures are proportional. Their ratio equals M, the reduction factor of the projection optics:

NAimage / NAobject = M

Figure 1.6. Definition of object and image numerical apertures

When the lens is in air, according to the relationship above, its numerical aperture is determined only by its collection angle and therefore depends on its diameter. It is a genuine technological challenge for optical engineers to make high quality lenses that are free of aberrations and transparent at the illumination wavelengths. Many improvements have been achieved in this field and it is now possible to find very efficient lenses with a very high numerical aperture (greater than 0.8). Later in this chapter, it will be shown that the emergence of immersion lithography encouraged the development of even more complex lenses with higher and higher refractive indexes, leading to higher numerical apertures.
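A minimal numerical illustration of the two relations above (NA = n · sinθ, and the factor M linking object- and image-side apertures); the half angle and the value of M are assumptions chosen for illustration:

```python
import math

# NA = n * sin(theta): numerical aperture from the refractive index of the
# medium and the half angle of the light cone (illustrative values only).
def numerical_aperture(n: float, half_angle_deg: float) -> float:
    return n * math.sin(math.radians(half_angle_deg))

M = 4.0                                                      # typical reduction factor
na_image = numerical_aperture(n=1.0, half_angle_deg=53.0)    # wafer (image) side, in air
na_object = na_image / M                                     # mask (object) side: NA_image / NA_object = M

print(f"image-side NA  ~ {na_image:.2f}")    # ~0.80
print(f"object-side NA ~ {na_object:.2f}")   # ~0.20
```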

1.2.4. Repeated projection and scanning projection

A 200 mm wafer usually holds about 70 exposure fields, each one corresponding to the image of the mask. To cover a whole wafer, it is necessary to reproduce the image of the mask several times. This is called “photorepeating”.

There are two kinds of lithography tool used for the photorepeating step. The first, known as a “stepper”, reproduces the reduced image of the mask on the field. The wafer is then moved in two directions to expose the other fields. The second tool, called a “scanner”, was invented later. This is the tool used today. With this type of tool, the mask image is projected through a slit during the synchronous scanning of the mask and the substrate. It allows large dimension fields in the scanning direction without needing to change the optical system (Figure 1.6).

However, this system can produce some difficulties, such as vibration and synchronization issues between the mask and the wafer.

The typical features of the most advanced scanners are summarized in Table 1.1.

Table 1.1. Typical scanner features

1.3. Lithography processes

One should not forget that all these considerations about theoretical resolution shrinkage do not take into account the technological feasibility of the lithographic process. In fact, the smaller the printed dimensions, the harder it is to control the CDs (critical dimensions). In practice, defocus has a strong impact on the patterns and increases the sensitivity to other process errors. In the same way, a dose setting error degrades the patterns and can bring them out of specification. A process tolerance criterion is usually defined, for instance requiring the “on wafer” CD to vary by at most ±10% around the target CD. This defines a focus range, the depth of focus, and a dose range, the exposure latitude. In microelectronics, an imaging process is usually characterized by simultaneously varying focus and dose in order to evaluate the process depth of focus (DOF) and exposure latitude.

From these curves, the process window can be deduced, that is, the focus and dose ranges for which the CDs obtained meet the predefined specifications. Plotting the exposure latitude as a function of the DOF or the defocus gives a good representation of the coupled effects of defocus and dose on the lithography process.

The best configuration is obtained with a wide exposure latitude and a high DOF, as this ensures a larger process window. However, decreasing the dimensions makes the process window shrink. At first this problem was avoided by improving focus control or substrate flatness. Now, parameters that influence the imaging process have to be modified to get past such constraints. Improving the photoresists helped improve the process windows at first but, as the process becomes less tolerant, resolution or reticle enhancement techniques must be used.
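To make the process-window idea concrete, here is a small sketch that applies the ±10% CD criterion to a toy CD(dose, focus) model; the quadratic model and every coefficient in it are assumptions for illustration, not data from this chapter.

```python
import numpy as np

# Toy CD model: CD grows quadratically with defocus and decreases with dose.
# All coefficients are illustrative assumptions.
def cd_nm(dose_mj_cm2: float, focus_um: float) -> float:
    return 120.0 * (30.0 / dose_mj_cm2) + 400.0 * focus_um ** 2

target_cd, tolerance = 120.0, 0.10           # +/-10% criterion on the target CD

doses = np.linspace(25.0, 36.0, 45)          # mJ/cm^2
focuses = np.linspace(-0.4, 0.4, 81)         # um

# Mark every (dose, focus) pair whose CD stays within +/-10% of the target.
in_spec = np.array([[abs(cd_nm(d, f) - target_cd) <= tolerance * target_cd
                     for f in focuses] for d in doses])

# Crude process-window summary: usable focus span at the best dose,
# and usable dose span at the best focus.
best_dose_idx = in_spec.sum(axis=1).argmax()
best_focus_idx = in_spec.sum(axis=0).argmax()
dof = np.ptp(focuses[in_spec[best_dose_idx]]) if in_spec[best_dose_idx].any() else 0.0
latitude = np.ptp(doses[in_spec[:, best_focus_idx]]) if in_spec[:, best_focus_idx].any() else 0.0
print(f"depth of focus ~ {dof:.2f} um, exposure latitude ~ {latitude:.1f} mJ/cm^2")
```

Shrinking the tolerance or steepening the model immediately shrinks both spans, which is the process-window squeeze described above.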

The lithographic stack is usually made of several layers: in addition to the photoresist coating, an extra anti-reflective layer must sometimes be coated to prevent stationary waves from forming inside the resist. Also, a protective top coating is sometimes added onto the resist film to optimize coupling of the light in the photoresist.

Figure 1.7. Example of Bossung curves for a 120 nm pattern made with 193 nm lithography

1.3.1. Anti-reflective coating

The anti-reflective layer, also called a Bottom Anti-Reflective Coating (BARC), is very often an organic layer intended to minimize the reflectivity of the underlying stack. Indeed, silicon reflectivity in UV is very high, which is why light going through the resist during exposure is reflected and can interfere with the incident light, creating stationary waves inside the resist film that degrade the pattern profiles. Moreover, that layer is also used as an adhesion promoter, which makes the photoresist adhere better on the substrate during the develop and etch steps.

This layer also levels the topography of the substrate. On bare silicon without BARC, the adhesion promoter generally used is hexamethyldisilazane (HMDS).

1.3.2. Resists

Photoresists are organic polymers sensitive to the radiation of the exposure tool. During the lithography step, they undergo, consecutively:

– a spin coating on the substrate. Film thickness is determined by the spin speed and the viscosity of the resist dilution in a solvent;

– baking after coating, or a PAB (Post Apply Bake), the goal of which is to evaporate the residual solvent out of the resist film and to compact the film;

– exposure;

– baking after exposure, or PEB (post exposure bake), the aim of which is to trigger the deprotection reaction for “chemical amplification” resists. It is also used to reduce the impact of stationary waves, whichever resist type is used;

– development, carried out in a basic aqueous solution.

Most resists are made of a copolymer matrix, a photo-acid compound, a basic compound limiting the effects of acid diffusion, and a solvent affecting the resist viscosity.
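The coat-bake-expose-bake-develop sequence described above is often captured as a process recipe; a minimal sketch, with entirely hypothetical parameter values, might look like this:

```python
# Hypothetical 193 nm resist recipe, following the sequence described above.
# Every value is an illustrative assumption, not a recommendation.
recipe = {
    "spin_coating":       {"spin_speed_rpm": 1500, "target_thickness_nm": 150},
    "post_apply_bake":    {"temperature_c": 110, "time_s": 60},   # evaporate residual solvent
    "exposure":           {"wavelength_nm": 193, "dose_mj_cm2": 30},
    "post_exposure_bake": {"temperature_c": 115, "time_s": 60},   # trigger deprotection
    "development":        {"developer": "TMAH 2.38%", "time_s": 45},  # basic aqueous solution
}

for step, params in recipe.items():
    print(f"{step:18s} {params}")
```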

Two categories of photoresist, with different chemistry and ways of working, can be distinguished:

– “novolak” resists, which contain a novolak polymer soluble in aqueous basic media (NaOH, KOH, TMAH) and diazonaphthoquinone (DNQ), a compound that is insoluble in those media. This mix is therefore not very soluble in its natural state. However, after exposure at a wavelength between 300 and 450 nm, the DNQ undergoes several intermediate reactions and produces an acid compound that is soluble in a basic solution. The exposed resist is then removed by the developer. This type of resist was used for g-line and i-line generation scanners;

– acid-catalyzed DUV resists. With the advent of deep ultraviolet emitting sources, more sensitive and less absorbent photoresists had to be developed. This was when the concept of “chemical amplification” resists was invented [ITO 97]. These resists contain a polymer matrix, protecting groups that prevent the non-exposed polymer from dissolving in the developer solution, a photosensitive compound called a photoacid generator (PAG) and a basic compound whose role is to limit the effects of acid diffusion during the PEB. During exposure, the PAG produces an acid which, under the effect of a high temperature baking step, catalyzes a chemical reaction that removes the protecting groups from the polymer matrix. In this way, the polymer becomes soluble in a basic aqueous developer solution. This reaction is called a deprotection reaction.

Figure 1.8. Drawing of the principle of a chemical amplification positive resist

This reaction is said to be “catalytic”, as the acid is regenerated during the reaction (Figure 1.8). With a longer baking step, the acid can diffuse further and catalyze the deprotection of a larger number of protecting groups. In order to offer better lithographic performance and better resistance to etching, these resists must meet strict requirements in terms of transparency, etch resistance and substrate adhesion; this is why they are composed of different functional groups that separately meet those requirements. This type of resist has a major advantage compared to novolak resists: it is more sensitive and has a higher contrast.
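A minimal numerical sketch of this behavior (acid diffusing during the PEB while catalytically deprotecting the polymer) is given below; the diffusion length, amplification factor and initial acid profile are all illustrative assumptions.

```python
import numpy as np

# 1D toy model of a chemically amplified resist during the PEB: the acid
# profile is smoothed by diffusion (Gaussian blur) and then drives a
# first-order, catalytic deprotection of the polymer.
x = np.linspace(-200, 200, 401)          # position across a line edge, nm
acid = (x < 0).astype(float)             # assumed acid after exposure: 1 in the exposed area

def gaussian_blur(profile, sigma_nm, dx_nm=1.0):
    kernel_x = np.arange(-4 * sigma_nm, 4 * sigma_nm + dx_nm, dx_nm)
    kernel = np.exp(-kernel_x**2 / (2 * sigma_nm**2))
    kernel /= kernel.sum()
    return np.convolve(profile, kernel, mode="same")

sigma = 10.0                             # assumed acid diffusion length during the PEB, nm
k_amp = 4.0                              # assumed amplification factor (rate constant x bake time)

acid_after_peb = gaussian_blur(acid, sigma)
deprotected = 1.0 - np.exp(-k_amp * acid_after_peb)   # catalysis: one acid deprotects many groups

# Position where half of the protecting groups are removed ~ printed edge position.
edge = x[np.argmin(np.abs(deprotected - 0.5))]
print(f"printed edge sits near x = {edge:.0f} nm (nominal mask edge at x = 0)")
```

The longer the bake (larger sigma and k_amp), the further the printed edge drifts from the nominal one, which is why the base compound is added to limit acid diffusion.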

Figure 1.9.Response curve of a positive resist

A photoresist can be characterized by its contrast, obtained from the curve of remaining resist thickness after development as a function of the energy provided during exposure. An example of such a curve is shown in Figure 1.9. For low energy values, the thickness remains about constant and equals the initial thickness. The energy value E0, namely the “dose to clear”, is the energy above which all the resist film is removed. We can note that, around E0, the curve is linear and the slope of that line is the resist's contrast, given by:

γ = |d(TE/T0) / d(log10 E)|, evaluated around E0

where TE is the remaining resist thickness, T0 the initial thickness and E the energy provided. γ, which generally varies between 4 and 6, is a criterion expressing the ability of the resist to print a quality image.
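As an illustration of how γ could be extracted from a measured contrast curve, the sketch below fits the steep part of a synthetic dose-thickness curve; all the data points are invented for the example.

```python
import numpy as np

# Synthetic contrast-curve data for a positive resist: normalized remaining
# thickness versus exposure dose (mJ/cm^2). Purely illustrative numbers.
dose = np.array([5.0, 8.0, 10.0, 12.0, 14.0, 16.0, 18.0])
thickness_norm = np.array([1.00, 0.98, 0.85, 0.55, 0.25, 0.05, 0.00])

# Fit the steep, nearly linear part of the curve against log10(dose);
# the magnitude of the slope is the resist contrast gamma.
steep = (thickness_norm > 0.05) & (thickness_norm < 0.95)
slope, intercept = np.polyfit(np.log10(dose[steep]), thickness_norm[steep], 1)
gamma = abs(slope)

# E0 ("dose to clear") is where the fitted line reaches zero remaining thickness.
e0 = 10 ** (-intercept / slope)
print(f"gamma ~ {gamma:.1f}, dose to clear E0 ~ {e0:.1f} mJ/cm^2")
```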

Finally, it is worth pointing out that very little acid is generated during exposure. This is why, before the deprotection reaction, some of the acid at the resist surface can be neutralized by amines present in the air, preventing the top of the resist from developing and creating T-shaped profiles. This problem can be solved by filtering the clean room air or by using a protective layer on top of the resist.

1.3.3. Barrier layers or “top coating”

A barrier layer is an organic film coated directly after the resist. It can be developed in water during a pre-developing step or directly with the developer. This layer's role is to protect the photoresist from amine contamination, but it can also be used as an anti-reflective layer or Top Anti-Reflective Coating (TARC).

The next section explains why this layer plays such an important role in immersion lithography. However, today's resist manufacturers are seeking to formulate resists capable of working without these protective barrier layers, which make the process more complex, add process steps and, furthermore, can be a major source of defects, degrading the efficiency of the fabricated device.

1.4. Immersion photolithography

1.4.1. Immersion lithography

Immersion lithography is an optical lithography technique that consists of filling the gap between the projection optics of the scanner and the wafer with a fluid whose refractive index is greater than that of air. This index matching minimizes refraction phenomena at the interfaces. In this section, we will see how it improves the performance of a “dry” 193 nm lithography tool. Immersion lithography is now considered the next generation technique for the upcoming 45 nm and 32 nm technology nodes.

The use of an immersion fluid in optics has been known for more than 150 years in the field of microscopy. It actually began in the 1840s when Amici [BLA 82] invented the immersion technique with water, then with glycerin and cedar oil; by filling the space between the object and the glass plate with a fluid having an index similar to that of glass, Amici could observe better quality images. The index adaptation between the three media reduces interface refraction phenomena and allows more light in. In 1880, Abbe developed the first immersion objective microscope. Immersion for lithography applications was only considered 100 years later by Takanashi [TAK 84] in 1984, and in 1985 by Taberelli [TAB 85] who imagined a lithography tool in which the lens-to-wafer void is filled with a liquid with a refraction index similar to that of the resist. In 1987, Lin considered immersion as a way of increasing the depth of focus of already existing lithography tools, rather than as a way of improving resolution [LIN 87]. In 1989, the first experimental immersion tests were demonstrated. Kawata et al. [KAW 89, KAW 92] showed the printing of sub-200 nm wide lines with a laboratory lithography system based on an inverted microscope, working in the visible spectrum with oil immersion. A few years later, in 1992, Owen et al. [OWE 92] suggested widening 193 nm lithography by using a 0.7 numerical aperture optical system in an oil immersed configuration, thus bringing the numerical aperture up to 1.05; they predicted that up to 125 nm lines would be printed thanks to this technique. Apart from numerous works on interferometric immersion lithography at various wavelengths, development in the 1980s and at the beginning of the 1990s hardly progressed until the major contribution of Lin, who put immersion lithography back on track for industrial applications in 2002 [LIN 02] and demonstrated the first immersion lithography tool concept in 2003 with ASML and Nikon.

1.4.2. Resolution improvement

Since the development of the first transistor, players in the microelectronics field have sought to increase the number of transistors per chip in order to improve the performance of integrated circuits. Optical lithography plays a major part in the pursuit of that goal, as it is the pattern defining step: all the subsequent steps depend on it. Increasing the transistor density on a chip means decreasing the period of the patterns, and therefore the resolution limit given by the Rayleigh equation: R = k1 λ / NA.

The main trend in lithography is to reduce the k1 constant, the wavelength, and to increase the numerical aperture of the optical device. Since the beginning of lithography, the wavelength went from the visible to ultraviolet, which is today used in manufacturing. In this way, mercury lamps were replaced by excimer lasers which allowed the wavelength to keep on decreasing. Many improvements were made to those lasers, particularly in terms of spectrum bandwidth and repetition rate increase, which allowed them to be introduced in production lithography systems. Today's manufacturing uses a wavelength of 193 nm.

In terms of wavelength, the next generation technology was logically expected to be 157 nm optical lithography, targeting 100 nm and 70 nm generation components, which, in the late 1990s, was expected to bridge the gap to the “new generation techniques”. However, it was abandoned in spite of these expectations, as its development faced too many blocking points concerning the performance of existing fluorine lasers, the transparency of optical materials at that wavelength, reticles, resists, and the numerical aperture of scanners working at that wavelength [ROT 99]. The industrialization of 193 nm immersion technology made such a development unnecessary.

Improvement in transistor integration was also achieved by increasing the numerical aperture of scanners. The development of highly efficient projection objectives (in terms of aberrations) enabled their diameter, and therefore the numerical aperture, to be increased. The decrease in pattern period was also achieved by reducing the k1 factor. This constant, which depends on the imaging process, diminished as the diffraction limit was approached, thanks to the use of higher-resolution resists that allow high quality patterns to be imaged even close to the diffraction limit.

Figure 1.10 shows the evolution of the k1 factor over time.

Figure 1.10. Evolution of the k1 factor over the past 25 years [BRU 97]

According to the second Rayleigh equation, it is preferable to keep on improving the resolution by lowering the wavelength rather than by increasing the NA or by reducing k1, so as not to reduce the DOF which would make the process more difficult.

However, it is becoming increasingly difficult to keep reducing the UV wavelength. The technology currently envisaged is extreme UV, but it requires major changes in terms of tools (source, vacuum projection optics, reticles) and materials (optics, resists). There is an alternative that could allow us to get around these difficulties: immersion lithography. By introducing an index fluid between the optical system and the silicon wafer, it is possible to increase the system's numerical aperture while keeping the 193 nm exposure tool infrastructure. It will be seen later that this is a promising technique, which is attracting widespread interest and is already being used industrially.

1.4.3. Relevance of immersion lithography

The aim of this section is to explore the capabilities of immersion lithography and to describe the consequences for lithographic performance of introducing an index fluid. Special attention must be paid to the definitions given for resolution and depth of focus in immersion lithography. These two quantities are typically determined by the Rayleigh equations, which express resolution and depth of focus as functions of the wavelength λ and the output numerical aperture (NA) of the projection lens. They are given by the following equations:

R = k1 λ / NA and DoF = k2 λ / NA²

where R is the resolution, and DoF is depth of focus; k1 and k2 are constants determined by imaging parameters such as the illumination type, the mask type, and the resist.

NA is defined as:

NA = n sin θ

where n is the refractive index of the output medium and θ the maximum half angle of the light cone exiting the lens. NA depends on the aperture of the output pupil and on the pupil-to-image distance. The numerical aperture is a critical imaging parameter as, according to the previous equation, it determines the most essential property of the system: its resolution.

With the introduction of immersion, new expressions of resolution and DoF were calculated by Lin [LIN 04]. The resolution remains unchanged, but the depth of focus was modified:

R = k1 λ0 / (n sin θ) and DoF = k3 λ0 / (n sin²(θ/2))

We can see that immersion has the potential of improving resolution. Indeed, sinθ equals 1 at maximum. k1 and λ0 being fixed by the process and illumination conditions, R is ultimately limited to k1λ0 and k1λ0/n, respectively, in “dry” and “immersion” cases.
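Finally, a small sketch comparing the “dry” and “immersion” limits discussed above; the k1 value and half angle are assumptions, and n = 1.44 is the commonly quoted refractive index of water at 193 nm.

```python
import math

# Ultimate resolution R = k1 * lambda0 / (n * sin(theta)) for dry vs water immersion.
# k1 and theta are illustrative assumptions; n = 1.44 is the usual value for water at 193 nm.
wavelength_nm = 193.0
k1 = 0.3
theta_deg = 70.0                      # assumed maximum half angle in the output medium

def ultimate_resolution(n_medium: float) -> float:
    return k1 * wavelength_nm / (n_medium * math.sin(math.radians(theta_deg)))

print(f"dry (n = 1.00):            R ~ {ultimate_resolution(1.00):.1f} nm")
print(f"water immersion (n = 1.44): R ~ {ultimate_resolution(1.44):.1f} nm")
# As sin(theta) tends towards 1, these tend towards the dry and immersion limits
# k1*lambda0 and k1*lambda0/n mentioned above (~58 nm and ~40 nm with k1 = 0.3).
```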