This book introduces researchers and students to the physical principles which govern the operation of solid-state devices whose overall length is smaller than the electron mean free path. In quantum systems such as these, electron wave behavior prevails, and transport properties must be assessed by calculating transmission amplitudes rather than microscopic conductivity. Emphasis is placed on detailing the physical laws that apply under these circumstances, and on giving a clear account of the most important phenomena. The coverage is comprehensive, with mathematics and theoretical material systematically kept at the most accessible level. The various physical effects are clearly differentiated, ranging from transmission formalism to the Coulomb blockade effect and current noise fluctuations. Practical exercises and solutions have also been included to facilitate the reader's understanding.
Page count: 518
Year of publication: 2013
Table of Contents
Chapter 1: Introduction
1.1. Introduction and preliminary warning
1.2. Bibliography
Chapter 2: Some Useful Concepts and Reminders
2.1. Quantum mechanics and the Schrödinger equation
2.2. Energy band structure in a periodic lattice
2.3. Semi-classical approximation
2.4. Electrons and holes
2.5. Semiconductor heterostructure
2.6. Quantum well
2.7. Tight-binding approximation
2.8. Effective mass approximation
2.9. How good is the effective mass approximation in a confined structure?
2.10. Density of states
2.11. Fermi-Dirac statistics
2.12. Examples of 2D systems
2.13. Characteristic lengths and mesoscopic nature of electron transport
2.14. Mobility: Drude model
2.15. Conduction in degenerate materials
2.16. Einstein relationship
2.17. Low magnetic field transport
2.18. High magnetic field transport
2.19. Exercises
2.20. Bibliography
Chapter 3: Ballistic Transport and Transmission Conductance
3.1. Conductance of a ballistic conductor
3.2. Connection between 2D and 1D systems
3.3. A classical analogy
3.4. Transmission conductance: Landauer’s formula
3.5. What if the device length really does go down to zero?
3.6. A smart experiment which shows you everything
3.7. Relationship between the Landauer formula and Ohm’s law
3.8. Dissipation with a scatterer
3.9. Voltage probe measurements
3.10. Comment about the assumption that T is constant
3.11. Generalization of Landauer’s formula: Büttiker’s formula
3.12. Non-zero temperature
3.13. The integer quantum Hall effect
3.14. Exercises
3.15. Bibliography
Chapter 4: S-matrix Formalism
4.1. Scattering matrix or S-matrix
4.2. S-matrix combination rules
4.3. A simple example: the S-matrix of a Y-junction
4.4. A more involved example: a quantum ring
4.5. A final more complex example: solving the 2D Schrödinger equation
4.6. Exercises
4.7. Bibliography
Chapter 5: Tunneling and Detrapping
5.1. Introduction
5.2. Single barrier tunneling
5.3. Two coherent devices in series: resonant tunneling
5.4. Physical meaning of the terms appearing in the resonant transmission probability
5.5. Tunneling current
5.6. Resonant tunneling in the real world
5.7. Discrete state coupled to a continuum
5.8. Fano resonance
5.9. Fano resonance in a quantum-coherent device
5.10. Fano resonance in the real world
5.11. Exercises
5.12. Bibliography
Chapter 6: An Introduction to Current Noise in Mesoscopic Devices
6.1. Introduction
6.2. Ergodicity and stationarity
6.3. Spectral noise density and Wiener-Khintchine theorem
6.4. Measured power spectral density
6.5. Shot noise in the classical case
6.6. Why the shot noise formula is not valid in a macroscopic conductor
6.7. Classical example 1: a game with cannon balls
6.8. Classical example 2: cars and anti-cars
6.9. Quantum shot noise
6.10. Bibliography
Chapter 7: Coulomb Blockade Effect
7.1. Introduction
7.2. Energy balance when charging capacitors
7.3. Coulomb blockade in a two-terminal device
7.4. Coulomb blockade in a single-electron transistor
7.5. Single-electron turnstile
7.6. Coulomb blockade in the real world
7.7. Exercises
7.8. Bibliography
Chapter 8: Specific Interference Effects
8.1. Classical Lagrangian with a magnetic field
8.2. Classical Lagrangian without a magnetic field
8.3. Phase shift due to a magnetic field
8.4. Aharonov-Bohm effect in mesoscopic rings
8.5. 1D localization
8.6. Weak localization
8.7. Universal conductance fluctuations
8.8. Bibliography
Chapter 9: Graphene and Carbon Nanotubes
9.1. Introduction
9.2. Graphene band structure
9.3. Integer quantum Hall effect in graphene
9.4. Carbon nanotube band structure
9.5. Carbon nanotube bandgap
9.6. Carbon nanotube density of states and effective mass
9.7. Electron transport in and quantum dots from carbon nanotubes
9.8. Exercises
9.9. Bibliography
Chapter 10: Appendices
10.1. The uncertainty principle
10.2. Crystalline lattice; some definitions and theorems
10.3. The harmonic oscillator
10.4. Stationary perturbation theory
10.5. Method of Lagrange multipliers
10.6. Variational principle
10.7. Wiener-Khintchine theorem
10.8. Binomial probability law
10.9. Random Poisson process
10.10. Transformation of the Cartesian wavevector coordinates into transverse and parallel components
10.11. Useful physical constants
Solutions to Exercises
Exercise 2.19.1
Exercise 2.19.2
Exercise 2.19.3
Exercise 2.19.4
Exercise 3.14.1
Exercise 3.14.2
Exercise 3.14.3
Exercise 3.14.4
Exercise 3.14.5
Exercise 5.11.1
Exercise 5.11.2
Exercise 5.11.3
Exercise 5.11.4
Exercise 5.11.5
Exercise 5.11.6
Exercise 7.7.1
Exercise 7.7.2
Exercise 7.7.3
Exercise 9.8.1
Exercise 9.8.2
Exercise 9.8.3
Index
First published in Great Britain and the United States in 2008 by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
6 Fitzroy Square
London W1T 5DX
UK

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA

www.iste.co.uk
www.wiley.com

© ISTE Ltd, 2008
The rights of Thierry Ouisse to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Cataloging-in-Publication Data
Ouisse, Thierry.
Electron transport in nanostructures and mesoscopic devices / Thierry Ouisse.
p. cm.
Includes bibliographical references and index.
ISBN 978-1-84821-050-9
1. Electron transport. 2. Nanostructured materials--Electric properties. 3. Nanostructures--Electric properties. 4. Mesoscopic phenomena (Physics) I. Title.
QC176.8.E4O95 2008
530.4’1--dc22
2008008768
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN: 978-1-84821-050-9
Matter stability and the way in which rigid crystalline or amorphous arrays of atoms can be formed are ruled by two pillars of physics: electromagnetism and quantum mechanics; nothing else, provided that we admit the existence of elementary constituents such as atomic nuclei without having to derive their internal structure from first principles (in which case we also need to add nuclear forces to our toolbox). The postulates and basic equations of these two theories can be written on a couple of pages, and everything can be derived from them1. If the world were ruled by classical mechanics, it would simply be impossible to obtain stable atoms2 or stable chemical bonding to ensure the existence of matter as we all experience it in our everyday life. Thus, it is something of a misnomer to say that we are going to study quantum devices as opposed to devices which would not be quantum. Everything is ruled by quantum mechanics, from the insulating or conducting character to the color of any piece of matter or object that you can see inside the room where you are now reading this introduction (see also Figure 1.1). To understand our macroscopic world, we often feel that once we admit the existence of stable matter, we can content ourselves with using Newton's second law of motion and classical gravitational forces. An aeronautics engineer does not put too much quantum mechanics in his calculations, but this is certainly no longer the case if we want to justify the way in which electrons, and therefore the electrical current, behave in a bulk semiconductor. Without a periodic atomic lattice and quantum mechanics, we could not find free electrons able to carry a current in a p-n junction, or in the channel of the transistors which form the integrated circuits inside our computers.
Thus, the reason why the devices under study in this book are called quantum is that we can straightforwardly apply to them the basic quantum effects that students are accustomed to calculating in an introductory quantum mechanics course.
Figure 1.1.The ubiquitous character of quantum mechanics
In nanostructures, electrons can be confined in potential wells narrow enough to obtain energy quantization along the confining direction. Their dimension is small enough to probe the dual wave-particle nature of the electron in a straightforward manner, because the electron wave function phase can be kept coherent over the whole device length. Thus, it becomes possible to observe wave interference effects just by measuring the average current which can be passed through such components, and particle-like properties from current noise data. As once stated by the physicist Esaki, this looks like some kind of “do-it-yourself” quantum mechanics: you are not required to become a specialist in group theory and irreducible representations, or in field-theoretic methods, to get in touch with the essence of the topic (see also Figure 1.2). In addition, other specific effects, although not quantum-mechanical, are also due to reduced dimensions: if you can inject a few electrons into a nanostructure, and if the capacitance between this nanostructure and the rest of the world is very small, you can probe effects which are due to charge granularity (we cannot divide the electron charge), and which are known as Coulomb blockade. Such effects are the subject of intensive research in R&D laboratories, because many people hope to put them to good use to produce new types of memories and devices that are smaller, faster and require less operating power. The aim of this book is to give an introduction to the basic concepts which govern the conduction mechanisms taking place in such small devices.
Figure 1.2.The quantum garage
Many (not to say most) of the phenomena described in this book usually take place at quite low temperatures, or in devices not yet (and, for some of them, never to be) used in industry. The physics described here is not useful for understanding how industrial semiconductor devices behave in most applications right now, with the notable exception of resonant tunneling. Nevertheless, today's silicon (Si) Metal-Oxide-Semiconductor Field-Effect Transistors (MOSFETs) definitely exhibit non-stationary and ballistic transport effects. Explaining these effects requires us to use some of the concepts developed in this book, even if the high electric fields involved in MOSFET operation make the application of such concepts much more complicated than what is described in this introduction. At room temperature, the electron mean free path in silicon is in the 5-10 nm range, not far from the 45 nm channel length of current CMOS technology, and integrated chips using a 32 nm process technology were already demonstrated by the Intel Corporation in 2007. Figure 1.3 shows a picture of a 20 nm channel length prototype MOSFET produced in 2006 by LETI-CEA. Thus, even at room temperature, some commercial electronic devices are close to the ballistic regime. These industrial MOSFETs are fabricated with incredibly high reproducibility in order to form extremely complex integrated circuits (and as a side note, such precision and reproducibility are actually far from being achieved in most research laboratories working in the realm of mesoscopic physics and nanostructures, or with semiconductors more exotic and physically more appealing than silicon). Device modeling based on ballistic properties has thus become an active research field, even in the case of silicon devices (see, e.g., [NAT 94] for one of the pioneering Si papers).
In addition, mesoscopic effects are important in four respects:
(i) they are often of great physical significance, and give a deep and straightforward insight into some of the most striking implications of quantum mechanics (for instance, they provide unambiguous and clear demonstrations of the dual nature of the electron, particle and wave);
(ii) although often observed at low temperatures or high magnetic fields, they are very useful for extracting physical parameters of (nano)structures actually used in applications;
(iii) some of the effects are already used in (e.g. resonant tunneling) or potentially useful for (e.g. Coulomb blockade) applications;
(iv) although still difficult to engineer, devices made from graphene or carbon nanotubes exhibit truly ballistic and quantum-coherent effects even at room temperature. Thus, it is quite possible that not only ballistic, but also quantum-coherent effects may be present in electronic applications in the near future.
Figure 1.3. A transmission electron microscope view of a planar double-gate MOSFET fabricated by LETI-CEA with a 20 nm channel length; reproduced by permission after J. Widiez et al., IEEE Transactions on Nanotechnology, vol. 5, p. 643 (2006), copyright © 2006 IEEE ([WID 06])
As a consequence, in most of the largest semiconductor companies, and in a very large number of university labs, intensive research work is devoted to such structures. Scarcely applied though it may seem at first sight, this field of activity is in fact the leading edge of semiconductor research.
This book is designed to be accessible to the independent reader, and to students without a strong background in solid-state physics (e.g. coming from engineering disciplines). As a matter of fact, this book is an attempt to answer the following question: what must be taught to students starting from scratch to make them understand the bases of electron transport in mesoscopic devices? A professor placed in such a situation soon realizes that a good deal of solid-state physics and quantum mechanics is required. This explains the inclusion of chapters which are usually absent from the more specialized, already-existing books, and marks the difference between them and this one. In addition, to follow the classification once given by J.M. Ziman, this book does not fall into the category of a “treatise” but into that of a “textbook”, with the purpose of introducing and explaining concepts. The text has been written with the aim of being as self-contained as possible, and is based on an oral course delivered in an international European master's degree program involving three technical universities (Grenoble INP, EPF Lausanne and Politecnico di Torino). It is a deliberate choice of the author to keep in the book the spirit of the oral course, and this is the reason why the reader should not be surprised to sometimes be addressed in a somewhat familiar way3.
Assimilating the quantum-mechanical rules summarized at the very beginning of the book suffices to derive any subsequent result, but should by no means be considered enough to master quantum mechanics itself. Hence, and despite the fact that the text remains at an introductory level, a complete understanding of the course probably requires some prior knowledge and maturation of the basic quantum-mechanical concepts. A reader not acquainted with this field will certainly feel the need to consult more authoritative manuals, given the innumerable questions, either technical or fundamental, that a concise and incomplete presentation of quantum mechanics must arouse in any normally constituted mind. Some knowledge of solid-state and semiconductor physics certainly helps as well, but all concepts useful for understanding the book can in principle be found in the book itself, and since this book is an introduction dedicated to a broad audience, some of you are probably already acquainted with the required solid-state physics notions. Those experienced in solid-state physics can simply skip most of the reminders which make up Chapter 2. Besides, many of those reminders are not always rigorously demonstrated. All undemonstrated or heuristically derived quantum-mechanical formulae can be found, rigorously derived, in a self-contained, encyclopedic textbook: [COH 77]. Solid-state physics has its self-contained book too: [ASH 76]. For bulk semiconductor physics and transport, an advanced and remarkably complete textbook was written by Ridley [RID 82], but it is not essential for understanding this book. Finally, there are books specifically devoted to mesoscopic electron transport, which can be of great support for a better understanding or for gaining more information (the list below is not exhaustive): [BEE 91], [KEL 95], [DAT 95] and [FER 97].
The book closest in spirit to this course is the one by Datta [DAT 95]. It includes many exercises and also contains more advanced formalisms (e.g. Green's functions) and discussions, which are not necessarily required at this introductory level. The book by Kelly [KEL 95] presents a very large amount of data and also deals with aspects which are either more technological or closer to applications.
This book is an introduction, and as such a number of important aspects have been omitted, mainly those which imply the use of mathematical concepts too involved to be developed in front of an audience new to the field. In particular, the reader will not find here a rigorous description of the Green's function formalism, which is necessary to include electron-electron interactions in transport modeling. Also absent is a general discussion and study of many-body effects, which would be mandatory to understand physical phenomena such as the fractional quantum Hall effect, metal-based mesoscopic devices, carbon nanotubes operating in the 1D form of a Luttinger liquid, and many others. Justice has not been done to the electron spin and its possible applications. This book could thus be given a second title: how far can we go using only independent electrons and the Pauli exclusion principle (see also Figure 1.4)? Surprising though it may seem, a good deal of nanostructure physics can still be grasped that way, but the reader will not find in this book a wealth of phenomena associated with electron-electron interactions. Readers not discouraged by this introductory text should take their study as the next step, to be achieved through more specialized treatises and articles. Thus, if after studying the various chapters the student decides to read further and deeper, the main objective of this book will have been fulfilled. In the same spirit, we shall skip some difficult demonstrations which would be required for a rigorous derivation of some important solid-state physics results4. However, even if difficult theoretical techniques have been deliberately banished from the text, “the language of physics is mathematics”, and none of the chapters escapes this rule.
Figure 1.4.The quantum society and Pauli’s exclusion principle
Most exercises proposed at the end of each chapter are easy, and their purpose is to provide the reader with a means of checking that they have correctly assimilated the chapter content and concepts. However, some of them require more time, and have been inserted to complement points not detailed in the main text.
Not all the sections were dealt with during the original oral course. I have put indicators at the beginning of each section:
This section is a reminder. Thus it can be skipped if the reader is already familiar with the corresponding field.
This section is essential to the book (and, quite accessorily, it may be helpful to prepare an exam). Some reminders belong to this category.
This section is not a reminder, but is not considered as essential to understand the other parts.
This section can be skipped at first reading.
[ASH 76] ASHCROFT N.W. and MERMIN N.D., Solid State Physics, Wiley, New York, 1976.
[BEE 91] BEENAKKER C.W.J. and VAN HOUTEN H., Quantum Transport in Semiconductor Nanostructures, Solid State Physics 44, Academic Press, 1991.
[COH 77] COHEN-TANNOUDJI C., DIU B. and LALOË F., Quantum Mechanics, Wiley, New York, 1977.
[DAT 95] DATTA S., Electronic Transport in Mesoscopic Systems, Cambridge University Press, 1995.
[FER 97] FERRY D.K. and GOODNICK S.M., Transport in Nanostructures, Cambridge University Press, 1997.
[KEL 95] KELLY M.J., Low-dimensional Semiconductors, Oxford University Press, 1995.
[NAT 94] NATORI K., “Ballistic metal-oxide-semiconductor field effect transistor”, Journal of Applied Physics, vol. 76, no. 8, 1994, p. 4879-4890.
[RID 82] RIDLEY B.K., Quantum Processes in Semiconductors, Clarendon Press, Oxford, 1982.
[WID 06] WIDIEZ J., POIROUX T., VINET M., MOUIS M., DELEONIBUS S., “Experimental comparison between sub-0.1µm ultrathin SOI single- and double-gate MOSFETs: performance and mobility”, IEEE Transactions on Nanotechnology, vol. 5, no. 6, 2006, p. 643-648.
1 Of course with a substantial amount of hard work and mathematics, and adding some thermodynamics. Note also that if quantum mechanical predictions can be verified with an astonishingly high precision, their interpretation was (and is) the source of thousands of scientific articles and books.
2 Classical electrons accelerated along orbits radiate electromagnetic waves and thus lose energy. Thus, bound electrons would collapse onto the nuclei.
3 As you may have already noticed, the familiar way of addressing the reader began in the very first lines of this introduction.
4 Whenever this occurs, the unsatisfied reader will always be left with the possibility of consulting the more advanced textbooks or specialized articles mentioned in the bibliography.
The following is only a summary which includes the basic quantum-mechanical (QM) equations required for understanding the book. It is by no means a rigorous introduction to the topics, and if you want to go further, a wise thing to do would be to immerse yourself, e.g., in the introductory textbook by R.P. Feynman [FEY 65], and then in the book by Cohen-Tannoudji et al. [COH 77] for a while1. Besides, several formulations can be used to describe quantum mechanics, and here we shall not really make the effort of differentiating them from one another. A concise description of those different formulations can be found in [STY 02].
In classical mechanics the elementary constituents of matter are massive point particles whose movement is controlled by electromagnetic or gravitational forces. At any instant we can precisely define the particle position and, provided that at a time t we are given the position and velocity of all the system particles, we can calculate everything at any other time, and obtain well-defined trajectories (with a powerful enough computer if the particles are numerous, etc.), even if the system remains isolated. Thus, the whole picture is in principle perfectly deterministic. In quantum mechanics the situation is far more subtle. Experimentally, it appears that if we let a system evolve isolated for a while, the maximum information concerning this system that is physically accessible to human knowledge does not allow us to predict in a deterministic and unique way the result which will be obtained once we act on this system to measure some of its properties.
Figure 2.1.Quantum-mechanical interference experiment illustrating the dual wave-particle nature of the electron
The celebrated double-slit interference experiment is probably one of the most striking and meaningful illustrations of the quantum nature of matter. It rightly figures in almost every introductory chapter on quantum mechanics, and we shall respect this well-justified habit. Interference experiments such as that illustrated by Figure 2.1 reveal that it is no longer possible to consider an entity such as an electron or a proton as a particle, and that it is not possible to consider it as a pure wave either [FEY 65]. “Identically prepared” electrons propagating through double slits exhibit interference patterns like waves [JON 61], but if we put a screen behind the plane of those slits we always obtain localized spots, as for particles [TON 89]. It is the statistical collection of a large number of such individual events which forms the interference pattern. Thus, in quantum mechanics (and in the real world) we have to assign a dual nature to electrons, whose behavior can be modeled only as a combination of both a particle and a wave. Suppress one slit and we lose the interference pattern: the wave really passes through both slits. Try to detect the electron at one of the two slits and we also lose the pattern, because the particle-like detection at one slit instantaneously reduces the extended propagating wave.
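The statistical build-up of the pattern from individual detection events can be sketched numerically. The following Python fragment uses a toy geometry (all slit parameters are folded into a single assumed fringe-spacing factor, not taken from any real experiment): each simulated electron lands at one random position drawn from the wave intensity, and only the accumulated histogram reveals the fringes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Screen coordinate (arbitrary units); the two slits act as coherent point
# sources whose path difference gives a toy phase 2*pi*5*x.
x = np.linspace(-1, 1, 200)
phase = 2 * np.pi * 5 * x
intensity = np.abs(1 + np.exp(1j * phase)) ** 2   # |psi1 + psi2|^2
prob = intensity / intensity.sum()                # detection probability per point

# Each electron produces ONE localized spot; the wave pattern emerges
# only from the statistics of many such particle-like events.
hits = rng.choice(x, size=50_000, p=prob)
counts, _ = np.histogram(hits, bins=25, range=(-1, 1))
```

Plotting `counts` against position shows the fringes emerging from localized detections; dropping one of the two source terms (closing one slit) leaves a featureless distribution in this toy model.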
In some textbooks it is stated that quantum mechanics does not allow us, even in principle, to calculate any trajectory, and that it is a probabilistic theory in essence. This is not a correct statement, because one interpretation, known as the de Broglie-Bohm theory, gives a perfectly deterministic picture of quantum mechanics (at least for massive particles). In such an interpretation both a wave and a particle co-exist. The wave guides the particle, and in Bohm's version the only guiding rule states that the particle momentum is equal to ħ, the reduced Planck constant, times the phase gradient of the complex wave obeying the Schrödinger equation [HOL 93]. In such a picture we can calculate well-defined trajectories (which are quite weird compared to classical ones, due to the action of the guiding wave). The unknown parameters, or “hidden variables”, which make experiments exhibit a statistical aspect are nothing but the initial particle coordinates with respect to the wave. Thus, it is not possible to conclude from quantum mechanics that the basic facts of nature are indeterministic in principle. However, since this interpretation does not provide any new prediction with respect to the usual quantum rules, and exhibits the drawback of being manifestly non-local, it has not attracted the favor of most physicists2.
The “orthodox” interpretation of the quantum-mechanical formalism is that in between measurements we cannot precisely define a thing such as a particle; the experimental indeterminacy obtained when repeating the same experiment a large number of times with identically prepared systems results from the indeterminacy of nature itself, and not from a difference in system preparation which would be unknown to the observer. This was quite an incredible statement when quantum mechanics emerged, but it has now become a common “philosophical” view among scientists. In this book we shall not enter into those considerations any longer. We shall just use the quantum-mechanical rules, which up to now have always been experimentally validated with a numerical precision unprecedented by any other physical theory.
Here we reproduce the postulates as they are expressed in most quantum mechanics textbooks (see, e.g., [COH 77]). Maybe some of you are already acquainted with them, but for others it is perhaps not completely useless to give a reminder. If you have already followed a good course in quantum mechanics, just skip this part; you will learn nothing from it. If you are more inexperienced and require further explanation, consult any quantum mechanics textbook.
First postulate: At a given time t, the state of a physical system is described by an abstract state vector | ψ (t)〉, also called a ket, which belongs to the state space.
In practice, we shall essentially appeal to the “poor man's quantum mechanics”, most often contenting ourselves with identifying those states with the wave functions obtained by solving the Schrödinger equation inside our nanostructures. Before a position measurement, the wave nature of the electron prevents us from assigning a uniquely defined space-time position to the particle. These wave functions are mathematical devices associated with a given electron, which assign a complex number to each point of space. As with any wave, they can propagate or lead to stationary phenomena. The scalar product of two kets |φ〉 and |ψ〉 is defined as
〈φ|ψ〉 = ∫ φ*(r)ψ(r) d³r (2.1)
where φ* is the complex conjugate of φ (the state space is a complex Hilbert space: it is formed by complex functions defined on real space, not by simple real vectors). The notation 〈φ|, proposed by Dirac and called a bra, allows us to manipulate scalar products such as that of equation (2.1) easily. The wave nature of an electron forbids its precise localization as long as it is not subject to a position measurement, during which it reveals its particle nature. Therefore, it is clear that in a quantum system, classical measurable quantities which depend on position, or on position and velocity (such as, for instance, the energy), cannot be assigned a precise and unique value unless they are specifically measured. Their description thus requires an operation which can act on the whole wave field, and this leads us to the formulation of the second postulate.
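As a numerical aside (a sketch only, using hypothetical Gaussian wave functions on a 1D grid), the scalar product of equation (2.1) can be approximated by discretizing the integral:

```python
import numpy as np

# 1D grid; the volume element d^3r reduces to dx in this one-dimensional sketch
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

phi = np.pi ** -0.25 * np.exp(-x**2 / 2)                      # normalized Gaussian
psi = np.pi ** -0.25 * np.exp(-x**2 / 2) * np.exp(0.5j * x)   # same, with a phase factor

inner = np.sum(np.conj(phi) * psi) * dx    # discretized <phi|psi>
norm = np.sum(np.abs(psi) ** 2) * dx       # <psi|psi>, equal to 1 for a normalized state
```

Note that the integrand uses the complex conjugate of the first function, exactly as in equation (2.1); for these Gaussians the overlap 〈φ|ψ〉 is a complex number of modulus smaller than one.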
Second postulate: Any measurable physical quantity (such as, e.g., a position, or a momentum, or an energy) is described by an operator M which acts on the state vector | ψ (t)〉. This operator is called an observable.
We can apply the operator B to a state vector, and then another operator A. The operator corresponding to these two successive actions is written AB, but this product is not always commutative: AB is not necessarily equal to BA, and when the two operator products differ we say that A and B do not commute. A famous example of non-commuting operators is the pair formed by position and momentum. The non-commutativity of some operators is indeed at the heart of many strange consequences of quantum mechanics. The quantity AB − BA is called the commutator of A and B, and is denoted [A, B].
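A small numerical sketch (with assumed units ħ = 1 and a crude finite-difference grid, purely illustrative) makes the non-commutativity of position and momentum concrete:

```python
import numpy as np

n, dx = 400, 0.05
x = np.diag(np.arange(n) * dx)     # position operator: multiplication by x

# momentum p = -i d/dx, discretized by central differences (hbar = 1)
p = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) * (-1j / (2 * dx))

comm = x @ p - p @ x               # the commutator [x, p]

# applied to a smooth function, [x, p] approximates i*I (i.e. i*hbar)
grid = np.arange(n) * dx
psi = np.exp(-((grid - 10.0) ** 2))
lhs = (comm @ psi)[50:-50]         # stay away from the grid edges
rhs = 1j * psi[50:-50]
```

Here `lhs` agrees with `rhs` up to discretization error: the grid version of the canonical relation [x, p] = iħ.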
Third postulate: The result of a measurement of a physical quantity is always an eigenvalue of the corresponding observable M.
Consider for instance the position operator. Its eigenvalues are formed by the ensemble of all three-dimensional real vectors r. The corresponding eigenstates are Dirac peaks centered at r, which are written under the form |r〉. For the next postulate we shall limit ourselves to the case of a non-degenerate spectrum (i.e. we assume that to each eigenvalue corresponds one and only one eigenstate).
Fourth postulate: If the spectrum of the observable is discrete, and if the state vector is normalized, the probability P(mn) of obtaining the eigenvalue mn as a measurement result is equal to P(mn) = |〈un|ψ〉|², where |un〉 is the normalized eigenvector of M associated with the eigenvalue mn. If the spectrum is continuous, the probability dP(μ) of obtaining a result between μ and μ+dμ is equal to dP(μ) = |〈uμ|ψ〉|²dμ, where |uμ〉 is the eigenvector associated with the eigenvalue μ.
An example of a continuous spectrum is the position, and an example of a discrete spectrum is the energy inside a quantum well. From this postulate we can give a probabilistic interpretation of the wave function (initially proposed by M. Born): if we make a position measurement, from the fourth postulate the probability of finding the particle in a volume d³r around the position r is given by |ψ(r)|²d³r. Thus, from the fourth postulate |ψ(r)|² is nothing but the probability density of finding the particle at the position r. However, be careful: in the “orthodox” interpretation this is not the probability of the particle being at r, and we cannot say that we do not know its position before the measurement simply because we are missing some information. Before a measurement the entity “electron” exists, but your particle is literally “nowhere”. Otherwise we could not obtain any wave interference effect. No matter how strange nature may seem, it is really the “measurement process” which turns an entity such as an electron into a corpuscle. In between measurements (or energy exchanges with the macroscopic external world) the only relevant physical entity prescribing the system evolution is the electron wave and nothing else.
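The fourth postulate for a discrete spectrum can also be sketched numerically (the basis and the state below are arbitrary illustrative choices, not tied to any particular physical system):

```python
import numpy as np

rng = np.random.default_rng(1)

# a random orthonormal eigenbasis {|u_n>} of C^4: the columns of a unitary matrix
u, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

psi = np.array([0.5, 0.5j, 0.5, -0.5])   # a normalized state vector
amps = u.conj().T @ psi                  # the amplitudes <u_n|psi>
probs = np.abs(amps) ** 2                # Born probabilities P(m_n) = |<u_n|psi>|^2
```

Whatever orthonormal basis is chosen, `probs` is a genuine probability distribution: each entry lies in [0, 1] and the entries sum to one, as required for mutually exclusive measurement outcomes.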
Fifth postulate (also called the measurement postulate): If the measurement of the physical quantity M gives the result mn, the state vector of the system immediately after the measurement is the normalized projection Pn|ψ〉/(〈ψ|Pn|ψ〉)^(1/2) of |ψ〉 onto the sub-eigenspace associated with mn (Pn is the projection operator).
If the spectrum is non-degenerate this means that just after a measurement the state vector is necessarily equal to the normalized eigenstate corresponding to the obtained eigenvalue mn. This postulate has caused more ink to flow than all the newspaper issues devoted to Princess Diana and Madonna put together. Nevertheless, in this book we shall not even try to discuss the subtle issues which are attached to it.
Sixth (and final) postulate: The time evolution of the state vector obeys Schrödinger’s equation
(2.2) iħ (d/dt)|ψ(t)〉 = H(t)|ψ(t)〉
in which the Hamiltonian H(t) is the operator associated with the energy of the system.
An operator A is Hermitian or self-adjoint if it verifies the property
(2.3) 〈φ|Aψ〉 = 〈Aφ|ψ〉 for any two states |φ〉 and |ψ〉
As a matter of fact, in quantum mechanics all observables are both linear and Hermitian. Linearity ensures the validity of the superposition principle of our wave functions, which derives from the form of the Schrödinger equation. Hermiticity is required because if we measure something, we always obtain a real number, not a complex one, even if the scalar products are calculated using complex waves. Now from equation (2.3) and the fifth postulate, it is easy to demonstrate that this is achieved with Hermitian operators, because we have
(2.4) mn〈un|un〉 = 〈un|Mun〉 = 〈Mun|un〉 = mn*〈un|un〉
Since the eigenvalue is equal to its complex conjugate, it must be real.
It is worth noting that the eigenstates of an observable always form an orthogonal basis of the state space, even if it is of infinite dimension (in fact this should be taken as the definition of an observable). Thus, it is possible to develop any state vector as a linear combination of the observable eigenstates |un〉:
(2.5) |ψ〉 = Σn cn|un〉, with cn = 〈un|ψ〉
A useful relation can be easily derived from that property. The unity operator I (i.e. the operator which leaves any state vector unchanged) can be expressed as
(2.6) I = Σn |un〉〈un|
where the sum runs over all eigenstates (this is immediately demonstrated by making this operator act on a state vector, since this gives its expansion in terms of the eigenfunctions).
Assume that M is an observable and that the state vector is |ψ〉. By expressing the ket |ψ〉 in the eigenstate basis |un〉 the action of M upon |ψ〉 can be written as
(2.7) M|ψ〉 = Σn cnmn|un〉
Using the fourth postulate and equation (2.7), the quantity 〈ψ |Mψ〉 can thus be transformed as
(2.8) 〈ψ|M|ψ〉 = Σn mn|cn|² = Σn mnP(mn)
Thus, 〈ψ|M|ψ〉 is nothing but the expectation value of M (i.e. the average value which would be approached after carrying out many measurements with identically prepared systems, all described by the state |ψ〉).
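These properties of Hermitian observables (real eigenvalues, the closure relation (2.6), and the expectation value (2.8)) can be checked numerically on a small toy example. The following sketch is not from the book: it uses NumPy and a randomly generated 4×4 Hermitian matrix as a stand-in for an observable, with an arbitrary normalized state vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4x4 Hermitian "observable" (a stand-in, not a physical Hamiltonian)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
M = (A + A.conj().T) / 2                  # M is Hermitian: M = M^dagger

eigvals, U = np.linalg.eigh(M)            # columns of U are eigenvectors |u_n>

# Hermiticity forces real measurement results (equation (2.4))
assert np.allclose(eigvals.imag, 0.0)

# Closure relation (2.6): sum_n |u_n><u_n| = identity
closure = sum(np.outer(U[:, n], U[:, n].conj()) for n in range(4))
assert np.allclose(closure, np.eye(4))

# Expectation value (2.8): <psi|M|psi> = sum_n m_n |<u_n|psi>|^2
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
c = U.conj().T @ psi                      # coefficients c_n = <u_n|psi>
lhs = (psi.conj() @ M @ psi).real
rhs = np.sum(eigvals * np.abs(c)**2)
assert np.allclose(lhs, rhs)
```

The same pattern works for any finite-dimensional observable; only the matrix and the state vector change.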
The relation between momentum and wavevector proposed by de Broglie is one of the key ideas which paved the way for the advent of a rigorous version of quantum mechanics. His proposal was to associate both a wave and a particle to describe an entity such as an electron, and to state that a plane wave function of the form exp(i(k·r−ωt)) carries a momentum p = ħk. A heuristic way3 to find again the expression of the momentum operator
(2.9) p = −iħ∇
is to apply the operator p to a plane wave exp(ik·r), and to state that we must find the eigenvalue ħk. It immediately appears that the momentum operator must have the form above. An important relation links the momentum operator to the position operator, which we can also easily find again using a plane wave. Apply the operator xpx to a plane wave, and then apply pxx. The difference between the two results is not equal to zero, and we easily find the non-commutation relation
(2.10) [x, px] = xpx − pxx = iħ
which can be shown to lead in turn to the famous Heisenberg’s uncertainty relation (see section 10.1 for a demonstration based on simple mathematics):
(2.11) Δx·Δpx ≥ ħ/2
Equation (2.11) means that it is not possible to measure both the momentum and the position with arbitrary precision. This can also be viewed another way: from the de Broglie relation the momentum is proportional to the wavevector, so that the momentum-space wave function is, within the constant proportionality factor ħ, the Fourier transform of the position-space wave function. If the reader has followed a course in signal processing they will already know that the more something is bounded (e.g. in time), the more its Fourier transform spreads (e.g. in frequency), and reciprocally. Thus, we cannot restrict one without extending the uncertainty of the other. Momentum and position are not the only non-commuting observables. For instance, two orthogonal components of an orbital angular momentum do not commute. From our comment on the Fourier transform we can also understand that time and energy, whose unit appears as a frequency multiplied by the constant action ħ in the Schrödinger equation, cannot be measured simultaneously with arbitrary precision. Even if time is not an operator, we can also write
(2.12) ΔE·Δt ≥ ħ/2
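The Fourier reciprocity invoked above can be illustrated numerically. The sketch below is an illustration, not taken from the book: it uses NumPy in units where ħ = 1 (so momentum and wavevector coincide), builds Gaussian wavepackets of various widths, and checks that the product Δx·Δk stays at the minimum-uncertainty value 1/2.

```python
import numpy as np

# Numerical illustration of the Fourier reciprocity behind (2.11):
# squeezing a wavepacket in x widens it in k (units where hbar = 1).
x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)   # angular wavevector grid

def widths(sigma):
    """Return (Delta x, Delta k) for a Gaussian of position spread sigma."""
    psi = np.exp(-x**2 / (4 * sigma**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
    px = np.abs(psi)**2 * dx                   # position probability density
    phi = np.fft.fft(psi)
    pk = np.abs(phi)**2 / np.sum(np.abs(phi)**2)
    return np.sqrt(np.sum(px * x**2)), np.sqrt(np.sum(pk * k**2))

# A minimum-uncertainty Gaussian saturates Delta x * Delta k = 1/2
for sigma in (0.5, 1.0, 2.0):
    Dx, Dk = widths(sigma)
    assert abs(Dx * Dk - 0.5) < 1e-2
```

Narrowing the packet (smaller sigma) visibly widens its spectrum, exactly the trade-off expressed by equation (2.11).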
Assume that the potential energy does not depend on time. The Hamiltonian is H = p²/2m + V(r), and along with equation (2.9) the Schrödinger equation can be written as
(2.13) iħ ∂Ψ(r,t)/∂t = −(ħ²/2m)∇²Ψ(r,t) + V(r)Ψ(r,t)
Here we are going to show that it is possible to separate the space and time dependence. If we look for solutions of the form
(2.14) Ψ(r,t) = φ(r)f(t)
after a few manipulations we can obtain from equation (2.13) that
(2.15) iħ (1/f(t)) df(t)/dt = (1/φ(r))[−(ħ²/2m)∇²φ(r) + V(r)φ(r)]
The left-hand side is a function of t only, and the right-hand side is a function of r only. Thus, to be equal for any value of t and r these two quantities must be a constant, which we shall note ħω. Then we can integrate the left-hand side to obtain
(2.16) f(t) = C e^(−iωt)
and the right-hand side leads to
(2.17) −(ħ²/2m)∇²φ(r) + V(r)φ(r) = ħω φ(r)
which can also be written under the form
(2.18) Hφ(r) = Eφ(r)
defining E ≡ ħω. We can incorporate the factor C into the function φ, because if we do so φ will still be a solution of equation (2.15), and we find that
(2.19) Ψ(r,t) = φ(r) e^(−iωt)
is a solution to the stationary Schrödinger equation. This type of solution is called a stationary solution, because with a form such as equation (2.19) it is clear that the probability density does not depend on time and is just a function of position. Since the function φ satisfies equation (2.18), it is an eigenstate of the Hamiltonian operator, and E = ħω is an energy eigenvalue.
Equation (2.18) is nothing but the eigenvalue equation of the Hamiltonian operator. From the fifth postulate, after an energy measurement on such a system we can only obtain one of the eigenvalues of H. This is the famous energy quantization phenomenon, which is an essential ingredient of semiconductor nanostructures and devices such as semiconductor quantum wells. Be careful: the fact that we can only obtain one energy eigenvalue does not mean that before the measurement the electron is in the corresponding eigenstate, even though the scalar product between the actual state vector and the eigenstate must be different from zero (see the fifth postulate). The stationary eigenstates do form a basis onto which any other state can be expanded, but if the actual state vector is a linear combination of several stationary states its evolution becomes time-dependent, because if we add several complex functions such as equation (2.19) the time no longer appears only in a global phase factor.
In the general case the wave function depends on time and thus the probability density to find the particle somewhere also depends on time. This means that there is a probability density flow and to calculate an average electron current we must be able to express this flow as a function of the wave function. Since probability should be conserved we expect to find a relation such as the charge conservation equation established from Maxwell’s equations. In this section we limit ourselves to the case of a scalar potential.
To find this current, write the wave function in the form Ψ = R exp(iS/ħ), where R = (ΨΨ*)^(1/2) is the modulus of the wave function and S its phase multiplied by ħ. Introduce this form into the Schrödinger equation and separate the real and imaginary parts. This healthy calculation exercise will lead us to two new equations, and the one corresponding to the imaginary part of the Schrödinger equation is
(2.20) ∂P/∂t + ∇·(R²∇S/m) = 0
where P = R² = ΨΨ* is the probability density. This should remind us of the charge conservation equation obtained from Maxwell’s equations of electromagnetism [JAC 98]:
(2.21) ∂ρ/∂t + ∇·J = 0
(in equation (2.21) ρ is the charge density and J the current density). By comparing equations (2.20) and (2.21) we should easily be convinced that equation (2.20) expresses nothing but the probability conservation, and from equation (2.20) the probability current density is obviously given by
(2.22) J = R²∇S/m
A more convenient and common way to write equation (2.22) is
(2.23) J = (ħ/2mi)(Ψ*∇Ψ − Ψ∇Ψ*)
We can easily check that equations (2.22) and (2.23) are identical by replacing Ψ by Rexp(iS/ħ) in equation (2.23) and developing the resulting expression. If we calculate the probability current carried by a plane wave exp(i(kx−ωt)) by applying equation (2.23), we immediately find that it is equal to ħk/m. Fortunately enough this is also the velocity expected from the de Broglie relationship p = ħk. Note that with a plane wave the probability density does not change with time, but nevertheless we do have a probability flow: electrons can be found everywhere but only move one way.
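Equation (2.23) is easy to evaluate numerically. The following sketch is illustrative, not from the book: it uses natural units ħ = m = 1, an arbitrary wavevector, and a NumPy finite-difference derivative, and recovers the plane-wave current ħk/m quoted above.

```python
import numpy as np

hbar, m, k = 1.0, 1.0, 2.0        # natural units, arbitrary wavevector
x = np.linspace(0.0, 10.0, 2001)
psi = np.exp(1j * k * x)          # plane wave exp(ikx)

# Probability current (2.23): J = (hbar/2mi)(psi* dpsi/dx - psi dpsi*/dx)
dpsi = np.gradient(psi, x)
J = (hbar / (2j * m)) * (psi.conj() * dpsi - psi * dpsi.conj())

# Away from the grid edges J equals hbar*k/m, the de Broglie velocity
assert np.allclose(J.real[10:-10], hbar * k / m, rtol=1e-3)
assert np.max(np.abs(J.imag)) < 1e-12
```

The same few lines also apply to any numerically computed wave function, e.g. to check current conservation across a barrier.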
With a scalar potential vanishing everywhere the Schrödinger equation becomes
(2.24) iħ ∂Ψ(r,t)/∂t = −(ħ²/2m)∇²Ψ(r,t)
If we seek solutions of the form φ(x,y,z) = φx(x)φy(y)φz(z) we can separate the space variables, and for each space coordinate we only have to deal with an ordinary second-order linear differential equation. Any (positive) energy value is allowed and the eigenstates are plane waves of the form
(2.25) Ψ(r,t) = A e^(i(k·r−ωt))
where the wavevector k and pulsation ω are related to the energy E by the relationship
(2.26) E = ħω = ħ²k²/2m
A plane wave spreading over the whole space cannot be normalized, so it is clear that a physically meaningful wave must be a linear combination of plane waves forming a normalizable wavepacket (note that if the plane waves are confined to a given volume V they can be normalized just by dividing them by V^(1/2)). The velocity of a plane wave is given by
(2.27) v = ħk/m
A localized electron wave function must be represented by a wavepacket, i.e. by a linear combination of plane waves of the form
(2.28) Ψ(r,t) = ∫ g(k) e^(i(k·r−ω(k)t)) d³k
The spreading in k values is required by the uncertainty principle to obtain a finite spreading of the wave function in real space. To simplify the notation we restrict the following analysis to one dimension, but it is straightforwardly generalizable to three. First remember that ω is a function of k. If the wavevector spreading remains limited around a mean value k0, so that k = k0 + Δk, we can use the expansion [SMI 69]
(2.29) ω(k) ≈ ω0 + (dω/dk)|k0 Δk
and obtain from equation (2.28)
(2.30) Ψ(x,t) ≈ e^(i(k0x−ω0t)) ∫ g(k) e^(i(k−k0)(x−(dω/dk)|k0 t)) dk
If we define the function h(Δk)=g(k0+Δk) it takes appreciable values just for small Δk values, and changing the integration variable we can express the wave function as
(2.31) Ψ(x,t) = e^(i(k0x−ω0t)) ∫ h(Δk) e^(iΔk(x−(dω/dk)|k0 t)) dΔk
For t = 0, equation (2.31) reads
(2.32) Ψ(x,0) = e^(ik0x) ∫ h(Δk) e^(iΔkx) dΔk = e^(ik0x) f(x)
f(x) is the inverse Fourier transform of h, whose variable is Δk, and since h takes appreciable values only over a small interval Δkmax, f(x) is restricted to an interval of order xmax ≈ 1/Δkmax. Thus, we see that the wave function is normalizable. We can rewrite equation (2.31) as
(2.33) Ψ(x,t) = e^(i(k0x−ω0t)) f(x − (dω/dk)|k0 t)
From this equation the phase velocity is still equal to ω0/k0, as for a single plane wave, but the velocity which characterizes the wavepacket envelope f is obviously given by
(2.34) vg = (dω/dk)|k0 = (1/ħ)(dE/dk)|k0
and is called the group velocity. It is the average velocity which would be found by measuring the velocity of electrons prepared in a state described by the wave packet considered above.
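The group velocity result can be tested by propagating a wavepacket numerically. The sketch below is not from the book: it performs exact free-particle evolution in k-space with NumPy FFTs, in units ħ = m = 1 with an arbitrary mean wavevector k0, then measures the displacement of the packet center and compares it with vg = ħk0/m from equation (2.34).

```python
import numpy as np

hbar, m, k0 = 1.0, 1.0, 5.0
x = np.linspace(-40, 120, 8192)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)

# Gaussian wavepacket centred at x = 0 with mean wavevector k0
psi0 = np.exp(-x**2 / 8) * np.exp(1j * k0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

def mean_x(psi):
    p = np.abs(psi)**2
    return np.sum(x * p) / np.sum(p)

# Exact free evolution: each plane wave picks up exp(-i w(k) t), w = hbar k^2/2m
def evolve(psi, t):
    return np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * hbar * k**2 / (2 * m) * t))

t = 10.0
v_measured = (mean_x(evolve(psi0, t)) - mean_x(psi0)) / t
v_group = hbar * k0 / m              # group velocity (2.34) at k0
assert abs(v_measured - v_group) < 1e-3
```

The envelope also spreads during the propagation, but its center moves at exactly the group velocity, as the derivation above predicts.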
This section is a reminder. Those interested in rigorous derivations, and not afraid of indices, Fourier transforms or group theory, can consult the book by Ashcroft and Mermin [ASH 76]. All semiconductors of interest for our book are formed by the crystalline assembly of atoms, forming a three-dimensional periodic lattice and thus a periodic, three-dimensional potential landscape seen by the electrons of the materials under consideration. As we will explain in more detail in section 2.6, the lattice periodicity renders possible the existence of allowed and forbidden energy bands, electrons being free to move in the former. To make the whole picture clearer, hereafter we restrict ourselves to the 1D case. If V(x) is the potential energy seen by the electrons, with lattice periodicity a, we have to solve Schrödinger’s equation in the stationary case:
(2.35) −(ħ²/2m) d²Ψ(x)/dx² + V(x)Ψ(x) = EΨ(x)
where Ψ(x) is the electron wave function and E is the energy eigenvalue. If we remember that |Ψ(x)|2 represents the probability of finding the electron at abscissa x if we make a position measurement, it is quite reasonable to assume that for extended states, if there is any, the probability density should follow the lattice periodicity (in fact this can be rigorously demonstrated, see [ASH 76]). Thus, we can write
(2.36) |Ψ(x+a)|² = |Ψ(x)|²
From equation (2.36) it immediately follows that Ψ(x) and Ψ(x+a) differ only by a phase factor, hence
(2.37) Ψ(x+a) = e^(iΦ) Ψ(x)
where Φ is a real number. If we now define the real number k as k ≡ Φ/a, we easily obtain
(2.38) Ψ(x+a) = e^(ika) Ψ(x)
Defining the function
(2.39) uk(x) = e^(−ikx) Ψ(x)
so that we can re-express the wave function as Ψ(x) = e^(ikx)uk(x), we straightforwardly arrive at
(2.40) uk(x+a) = uk(x)
by using equations (2.38) and (2.39). Therefore, uk has the lattice periodicity. As a matter of fact, this is generalizable to three dimensions and is known as the Bloch theorem, which we state below.
The stationary electron wave functions in a periodic crystal are Bloch waves of the form Ψ(r) = e^(ik·r)uk(r), where uk(r) has the lattice periodicity and k is a real wavevector.
An equivalent formulation is the statement that a 3D Bloch wave verifies the property
(2.41) Ψ(r+R) = e^(ik·R) Ψ(r)
where R is any lattice vector4. Note that equation (2.41) is nothing but the 3D extension of equation (2.38).
If the lattice were perfectly periodic, the electrons could really propagate freely! Their wave function is a plane wave modulated by a Bloch function. All k values are allowed if the medium is infinite, and subject to some boundary conditions if the sample is of finite size. However, if the reader has already followed an introductory course in quantum mechanics they will have certainly studied the 1D Kronig-Penney model (see, e.g., [COH 77]) and will know that not all energy values are allowed: the propagating states are restricted to allowed energy bands separated by forbidden bands (the existence of those bands and the way in which they are formed is discussed in more detail in section 2.6). In fact, this is a general feature: whatever the three-dimensional periodic lattice, the periodic potential leads to the formation of a band structure, which defines continuous and allowed energy intervals with extended states described by Bloch waves, separated by forbidden energy gaps. This band structure is fully defined by the relationship which exists between energy E and wavevector k.
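For the 1D Kronig-Penney model mentioned above, the allowed bands can be extracted with a few lines of code. The sketch below is illustrative, not from the book: it uses the standard delta-barrier form of the model, in which the allowed energies satisfy |cos(qa) + P sin(qa)/(qa)| ≤ 1 with q = (2mE)^(1/2)/ħ, and the barrier strength P = 3 and units ħ = m = a = 1 are arbitrary choices.

```python
import numpy as np

# Delta-barrier Kronig-Penney model: allowed energies satisfy
# |cos(qa) + P sin(qa)/(qa)| <= 1, with q = sqrt(2mE)/hbar.
hbar, m, a, P = 1.0, 1.0, 1.0, 3.0       # P = 3 is an arbitrary strength

E = np.linspace(1e-6, 80.0, 20000)
q = np.sqrt(2 * m * E) / hbar
f = np.cos(q * a) + P * np.sin(q * a) / (q * a)
allowed = np.abs(f) <= 1.0

# Contiguous allowed intervals are the energy bands
edges = np.flatnonzero(np.diff(allowed.astype(int)))
bands = [(E[i + 1], E[j]) for i, j in zip(edges[::2], edges[1::2])]

assert len(bands) >= 3                   # several bands below E = 80
for (_, top), (bottom, _) in zip(bands, bands[1:]):
    assert top < bottom                  # separated by forbidden gaps
```

Increasing P narrows the allowed bands and widens the gaps, in line with the qualitative discussion above.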
Figure 2.2.Band structure of gallium arsenide
In practice, once we know the periodic lattice, it is a matter of (sometimes quite hard) calculation to recover the band structure of the considered materials (a way to do this in a quite approximate but simple way is presented in section 2.7). Of course in the case of commercial semiconductors such as silicon (Si) and gallium arsenide (GaAs) this was done a long time ago and in great detail. Thus, if we want to understand how these semiconductors behave we just have to examine the E versus k curves that are available in the literature, such as the one above (Figure 2.2). It must be noted that the energy value does not depend only on the magnitude of the wavevector, but also on its orientation. In general, the E vs k curves are given for the main crystallographic orientations in the first Brillouin zone (the latter is defined in section 10.2).
A quite interesting feature is that at a local extremum in the E versus k curve in a band (i.e. at the bottom or at the top of an allowed energy band, see Figure 2.2) the first derivative is equal to zero and we can of course develop E versus k up to second order:
(2.42) E(k) ≈ E(k0) + (ħ²/2m*)(k−k0)²
where k0 is the extremum position in reciprocal space, and where we consider for the sake of simplicity that the extremum is isotropic. Equation (2.42) defines the effective mass, and is indeed extremely useful: take as the origin of energy the bottom (or the top) of the band EC. From equation (2.42) we see that close to this threshold we obtain the electron kinetic energy in exactly the same form as for free electrons in vacuum, but with a change in apparent mass. In addition, suppose that our electron has a definite momentum. It can be readily demonstrated that its velocity will be given as in the vacuum case by the usual group velocity expression equation (2.34).
These are quite amazing results, because we discovered that we can really get rid of all this complicated part, the Bloch function, which is very difficult to calculate from the knowledge of the lattice and must be computed numerically. However, there is still more: it turns out that all we have to do, if we want to study electron dynamics in a band and their response to an external field or (long-range) potential, is to consider that the electrons behave just like electrons in vacuum, but with a different mass (i.e. we just have to consider the wave function envelope). Since the E vs k relation depends on orientation, this mass may also depend on angle (this is the case for Si). However, we can use this extraordinary (and convenient) result that the periodic lattice is fully taken into account by just a mass renormalization, which allows us to drop the Bloch function term of the electron wave function and keep just the free wave envelope (this is further demonstrated in section 2.8). Nevertheless, do not forget that if the electron kinetic energy increases further (e.g. by applying a high electric field), the electrons can reach a region far from the bottom of the band, in which the second-order term is not necessarily the prevailing one. However, even in such a case we can use an equation which keeps just the wave function envelope, even though it may become slightly more complicated than simply using an effective mass. What you must retain is that in a semiconductor, at the bottom (or at the top) of an energy band, the energy is proportional to k² and we then have an effective mass along a given axis which is given by
(2.43) 1/m* = (1/ħ²) d²E/dk²
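Equation (2.43) can be applied numerically to any dispersion relation. The sketch below is illustrative, not from the book: it uses a simple 1D tight-binding band E(k) = 2t(1 − cos ka) with arbitrary parameters, in units ħ = 1, evaluates the curvature at the band bottom by finite differences, and recovers the analytic effective mass ħ²/(2ta²).

```python
import numpy as np

hbar, a, t = 1.0, 1.0, 0.5        # illustrative tight-binding parameters

def band(k):
    """Simple 1D tight-binding dispersion, minimum at k = 0."""
    return 2 * t * (1 - np.cos(k * a))

# Effective mass (2.43): 1/m* = (1/hbar^2) d^2E/dk^2 at the band bottom
dk = 1e-3
curvature = (band(dk) - 2 * band(0.0) + band(-dk)) / dk**2
m_eff = hbar**2 / curvature

# Analytic value for this band: m* = hbar^2 / (2 t a^2)
assert abs(m_eff - hbar**2 / (2 * t * a**2)) < 1e-6
```

The same finite-difference recipe works at a band top, where the curvature and hence the effective mass come out negative.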
Even if the effective mass approximation is not valid, from the Bloch theorem we can describe an electron wave function with a function possessing the periodicity of the lattice, i.e. with short-range, atomic-scale variations, modulated by a plane wave with wavevector k. In the semi-classical approximation we consider that any perturbing potential (e.g. due to an externally applied electric field) is long-range with respect to the Bloch wave variations and to the spread of the electron wavepacket. We can thus assume that the wavepacket extends over a small distance, over which the potential is a well-defined constant (see Figure 2.3), and that the packet itself can be considered long-range with respect to the atomic lattice.
Figure 2.3.Relative size of the elements considered in the semi-classical approximation
As a consequence, if we are concerned with an electrostatic potential U(r), energy conservation requires that the sum E(k) − eU(r), which includes both the electron kinetic energy and the electron electrostatic energy, remains constant (the potential energy included in E(k) is of course a constant). The derivative of this sum with respect to time must be equal to zero, which can be straightforwardly turned into
(2.44) v·(ħ dk/dt − e∇U) = 0
From equation (2.44) we immediately obtain
(2.45) ħ dk/dt = −eε
where is the electric field. Addition of the magnetic field-induced Lorentz force leads to the more complete equation5
(2.46) ħ dk/dt = −e(ε + v×B)
Equation (2.34), together with equation (2.46), form the semi-classical equations of motion. In these equations the dynamical aspects due to the forces exerted on the electron by the periodic lattice are fully taken into account by the knowledge of the dispersion relation.
In this section we shall restrict the discussion to one-dimensional systems in order to simplify the concepts. We consider a lattice cell of size a. As explained in section 10.2, in a band the allowed energies are a periodic function of the wavevector (see also Figure 2.4), and k is defined modulo 2π/a. Therefore a unit cell of the reciprocal lattice, defined as the interval [−π/a, π/a], contains all the k values necessary to recover the full dispersion relationship and all the eigenfunctions. It is convenient (and usual) to consider only the first Brillouin zone, since any value of k can be reduced to a wavevector value comprised inside it. All allowed states being described by a wavevector of the first Brillouin zone, we can enumerate all of them just by counting the number of allowed wavevectors in this zone, and below we will see that this consideration still applies if we submit our electrons to a force. From a dispersion relation such as that in Figure 2.4 a number of interesting points can be deduced and are worth mentioning.
Figure 2.4.An electron crossing the first Brillouin zone under the action of an electric field is equivalent to an electron re-entering into the opposite side of the same zone, because two wavevectors which differ by a reciprocal lattice vector describe the same wave function
First, from equation (2.43) we see that at the band bottom the effective mass is positive, but at the top of a band it is negative (see Figure 2.4). Therefore, in the latter case, the electron sees a force which exhibits the same sign as that exerted by the external electric field (and which is of course due to the combined action of the field and the lattice)! Put just one electron in a band, and assume that there is no scattering. Under the action of a negative electric field, from equation (2.45) the wavevector value increases linearly with time, and when k crosses the first Brillouin zone boundary, due to the periodicity of the dispersion relationship, it is exactly equivalent to consider that the wavevector goes on increasing after having passed the boundary (left part of Figure 2.4), or that the electron crossing the first Brillouin zone boundary re-enters through the opposite side of the zone (see the right part of Figure 2.4). Hence, the movement is described by a sawtooth oscillation of the wavevector, and from equation (2.34) the velocity follows periodic oscillations known as Bloch oscillations (Figure 2.5a). In such a picture an electron cannot be indefinitely accelerated by the field, but is decelerated whenever it enters a part of the zone where the effective mass is negative6.
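The sawtooth wavevector motion and the resulting Bloch oscillations can be reproduced from the semi-classical equations. The sketch below is illustrative, not from the book: it applies a constant force F = −eε to a 1D tight-binding band E(k) = −2t cos(ka), with arbitrary parameters in units ħ = 1, integrates equation (2.45) while folding k back into the first Brillouin zone, and checks that the velocity (2.34) oscillates with the Bloch period and averages to zero.

```python
import numpy as np

hbar, a, t_hop, F = 1.0, 1.0, 0.5, 0.2   # F is the constant force -e*eps

def v_group(k):
    # (2.34) applied to the tight-binding band E(k) = -2 t cos(ka)
    return (2 * t_hop * a / hbar) * np.sin(k * a)

# Semi-classical motion (2.45): hbar dk/dt = F, k folded into [-pi/a, pi/a)
T_bloch = 2 * np.pi * hbar / (abs(F) * a)        # Bloch period
time = np.linspace(0.0, 3 * T_bloch, 3001)       # exactly three periods
k = (F * time / hbar + np.pi / a) % (2 * np.pi / a) - np.pi / a
v = v_group(k)

# The velocity oscillates with period T_bloch and averages to zero:
# the field does not accelerate the electron indefinitely.
assert abs(v[0] - v[1000]) < 1e-9                # one Bloch period apart
assert abs(np.mean(v)) < 1e-6
```

In a real crystal, as noted below, inelastic collisions interrupt this motion long before a full Bloch period, which is why Bloch oscillations are not observed in ordinary semiconductor devices.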
Figure 2.5.(a) Bloch oscillations of the velocity and (b) a filled band cannot conduct electricity because all electron velocities compensate one another
Now assume that a band is totally filled. All states can be described by wavevectors inside the first Brillouin zone, and in the absence of scattering, all electrons follow the same process as before. However, since there are as many electrons going one way as the opposite, and since the electric field cannot lead to any overall change (see Figure 2.5b) in the state occupation, the electrical current is equal to zero. A filled band does not conduct electricity. Thus, in practice, in a semiconductor the existence of an electrical current is achieved by partially filling an empty conduction band, or by partially emptying an initially full valence band.
In the almost empty band case, there is no conceptual difficulty because the electrons lie in the lowest energy states and have a positive effective mass, so that their velocity increases in the direction opposite to that of the field, as usual. In addition, in practical devices, they are in fact never accelerated up to the top of a band by the electric field, as they suffer from inelastic collisions which prevent the observation of Bloch oscillations. However, in the almost filled band case, it would seem that the state of affairs is more difficult, as we now have to consider the movement of many electrons with a negative mass, which in addition acquire a velocity in the electric field direction. However, a great simplification is obtained by appealing to the concept of holes.
Figure 2.6.An electron vacancy in an almost full band as in (a) can be replaced by a negative electron and a positively charged particle, both following the same movement as the vacancy. The added electron and the other electrons form a filled band which does not conduct, and the positive particle, whose velocity increases with time in the direction of the field, can be replaced by a positively charged “hole” moving as in the band of figure (b); this hole carries the same current as the almost filled band, exhibits a positive effective mass and follows the equation ħdk/dt=+eε
Have a look at Figure 2.6. All states are filled but one, and under the action of a negative electric field the wavevector of all electrons continuously increases, until it crosses the first Brillouin zone boundary and re-enters at the opposite side through an instantaneous jump. The “absent electron” wavevector obviously follows the same evolution. This situation is thus completely equivalent to a picture in which we replace this “electron vacancy” by a pair formed by a fictitious electron of charge −e and a fictitious, positively charged particle following the same movement as the vacancy. Charge and current are obviously conserved, and the real electrons plus the fictitious electron form a filled band, not carrying any current, so we can just forget them. We are thus left with the positive particle, which can advantageously be replaced by a positively charged “hole” with an effective mass mh and a wavevector opposite to that of the electron. The hole velocity is the same as that of the fictitiously introduced electron, and it acquires a velocity in the direction of the electric field. In an almost filled band the electrons fill the lower energy states and therefore the holes are located at the bottom of a hole band. The hole dispersion relation is usually plotted upside down so as to keep the same appearance as the corresponding electron band. We can thus model electrical transport just by considering positively charged holes, with a positive effective mass equal to mh = −me, and follow our classical intuition7.
A semiconductor heterostructure is a stack of different semiconducting materials, whose lattices are well matched so as to form acceptable interfaces. These stacks are most often obtained by an advanced growth technique known as Molecular Beam Epitaxy (MBE), in which the successive atomic layers are deposited one by one in an ultra-high vacuum chamber. In such a structure there is a misalignment between the energy bands of the different materials put together, and in the conduction band (or valence band for the holes) we can obtain something very close to the potential wells that the reader may have studied in an introductory quantum mechanics course, and which lead to quantized energy values corresponding to a kinetic energy along the confining direction (Figure 2.7).
Generally, if we add a particular potential U which is long-range with respect to the periodic lattice, since we dropped the Bloch term, we now have to solve Schrödinger’s equation but with an effective mass:
(2.47) [−(ħ²/2m*)∇² + EC + U(r)]φ(r) = Eφ(r)
Figure 2.7.Conduction band energy diagram of a finite quantum well
This is the approach that we will take in many cases. It is known as the Effective Mass Approximation (EMA). It is valid as long as the variations of the potential U remain long-range compared with those of the periodic lattice, but it is very often extended to cases where the potential variation is abrupt, because its use enormously simplifies the physical discussion. This relation is demonstrated in section 2.8, and also discussed in section 2.9. We shall often drop the energy EC by taking it as the origin. Note that the effective mass is in general not the same in the two materials which form the heterostructure, a point which renders realistic calculations somewhat more subtle than the demonstrations presented in this book, but which does not radically alter the conclusions of our physical discussions.
Take two materials with a nice interface between them (e.g. GaAs and InGaAs) and make a sandwich with the material with the lowest bandgap in between (a “heterostructure” is formed). For the conduction band we obtain something similar to Figure 2.7. To simplify we assume that the effective mass is the same in both materials, and then we have to solve the Schrödinger equation, which can be straightforwardly put in the form below:
(2.48) d²Ψ/dx² + (2m/ħ²)(E − V(x))Ψ = 0, with V(x) = 0 for |x| < L/2 and V(x) = V0 for |x| > L/2
Here we note that for a 1D system the solutions of the Schrödinger equation are non-degenerate (i.e. there is only one independent wave function for one energy level). Suppose that the potential V(x) is even (V(x) = V(−x)), and that Ψ(x) is a solution. Replacing x by −x in the Schrödinger equation immediately shows that Ψ(−x) is also a solution with the same energy. Since the levels are non-degenerate and the wave functions are normalized, we thus have
(2.49) Ψ(−x) = e^(iφ) Ψ(x)
where φ is an arbitrary phase factor. Applying the transformation x → −x twice gives
(2.50) Ψ(x) = e^(2iφ) Ψ(x), hence e^(iφ) = ±1
Thus, we find that either Ψ(x) = Ψ(−x) or Ψ(x) = −Ψ(−x). If the 1D potential is even then the solutions are either even or odd. Let us apply this result to our potential.
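This even/odd alternation of the bound states can be verified numerically. The sketch below is illustrative, not from the book: it builds a finite-difference Hamiltonian for a symmetric finite well with arbitrary well parameters, in units ħ = m = 1, diagonalizes it with NumPy, and checks the parity of the four lowest bound states.

```python
import numpy as np

hbar, m = 1.0, 1.0
L, V0 = 2.0, 20.0                    # illustrative well width and depth
x = np.linspace(-8, 8, 1001)         # symmetric grid including x = 0
dx = x[1] - x[0]
V = np.where(np.abs(x) <= L / 2, 0.0, V0)

# Finite-difference Hamiltonian: -(hbar^2/2m) d^2/dx^2 + V(x)
main = hbar**2 / (m * dx**2) + V
off = -hbar**2 / (2 * m * dx**2) * np.ones(x.size - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies, states = np.linalg.eigh(H)
bound = np.flatnonzero(energies < V0)[:4]    # four lowest bound states

# Parity <psi|P|psi>: +1 for even states, -1 for odd states
parities = []
for n in bound:
    p = states[:, n]
    parities.append(float(np.sum(p * p[::-1]) / np.sum(p * p)))

# Ground state even, then odd, even, odd, as derived above
assert [q > 0 for q in parities] == [True, False, True, False]
```

The same discretized Hamiltonian also gives the quantized energies themselves, which can be compared with the implicit equations derived below for the analytic levels.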
If we are interested in the bound states with energy E lower than V0, defining
(2.51) k = (2mE)^(1/2)/ħ
and
(2.52) κ = (2m(V0−E))^(1/2)/ħ
the previous equations (equation (2.48)) simplify to
(2.53) d²Ψ/dx² + k²Ψ = 0 for |x| < L/2, and d²Ψ/dx² − κ²Ψ = 0 for |x| > L/2
These are simple second order differential equations, whose solutions are
(2.54) Ψ(x) = A sin(kx) + B cos(kx) for |x| < L/2, Ψ(x) = C e^(κx) + C′ e^(−κx) for x < −L/2, Ψ(x) = D e^(−κx) + D′ e^(κx) for x > L/2
In the last two expressions we only keep the vanishing exponential terms, because the physical solutions cannot grow exponentially when going to infinity. Then, we impose the continuity of Ψ and of dΨ/dx at x = ±L/2. These two properties can be demonstrated directly from the Schrödinger equation, as long as the potential energy remains finite (the demonstration can be found in [COH 77]; it is not reproduced here because it is not essential for our discussion). We obtain for the even solutions
(2.55) B cos(kL/2) = D e^(−κL/2), −kB sin(kL/2) = −κD e^(−κL/2)
To obtain solutions other than zero for B and D the determinant of this system must be equal to zero:
(2.56) κ cos(kL/2) e^(−κL/2) − k sin(kL/2) e^(−κL/2) = 0
Therefore, equation (2.55) leads to
(2.57) k tan(kL/2) = κ
for the even solutions, and we would obtain from a similar reasoning that
(2.58) −k cot(kL/2) = κ
for the odd solutions. These implicit equations give the wavevector and thus the quantized energy levels as a function of the well width L. In general we consider a given well, thus L is fixed, and in textbooks it is often proposed to find graphical solutions to these equations (see [COH 77]). However, if we want to plot analytically the variation of these levels as a function of L
