Atomic Structure Prediction of Nanostructures, Clusters and Surfaces

Cristian V. Ciobanu
Description

This work fills the gap for a comprehensive reference conveying the developments in global optimization of atomic structures using genetic algorithms. Over the last few decades, such algorithms based on mimicking the processes of natural evolution have made their way from computer science disciplines to solid-state physics and chemistry, where they have demonstrated their versatility and predictive power for many materials. Following an introduction and historical perspective, the text moves on to provide an in-depth description of the algorithm before describing its applications to crystal structure prediction, atomic clusters, surface and interface reconstructions, and quasi one-dimensional nanostructures. The final chapters provide a brief account of other methods for atomic structure optimization and perspectives on the future of the field.

Page count: 379

Publication year: 2013



Contents

Cover

Related Titles

Title Page

Copyright

Preface

Chapter 1: The Challenge of Predicting Atomic Structure

1.1 Evolution: Reality and Algorithms

1.2 Brief Historical Perspective

1.3 Scope and Organization of This Book

References

Chapter 2: The Genetic Algorithm in Real-Space Representation

2.1 Structure Determination Problems

2.2 General Procedure

2.3 Selection of Parent Structures

2.4 Crossover Operations

2.5 Mutations

2.6 Updating the Genetic Pool: Survival of the Fittest

2.7 Stopping Criteria and Subsequent Analysis

References

Chapter 3: Crystal Structure Prediction

3.1 Complexity of the Energy Landscape

3.2 Improving the Efficiency of GA

3.3 Interaction Models

3.4 Creating the Generation-Zero Structures

3.5 Assessing Structural Diversity of the Pool

3.6 Variable Composition

3.7 Examples

References

Chapter 4: Optimization of Atomic Clusters

4.1 Alloys, Oxides, and Other Cluster Materials

4.2 Optimization of Substrate-Supported Clusters via GA

4.3 GA Solution to the Thomson Problem

References

Chapter 5: Atomic Structure of Surfaces, Interfaces, and Nanowires

5.1 Reconstruction of Semiconductor Surfaces as a Problem of Global Optimization

5.2 Genetic Algorithm for Interface Structures

5.3 Nanowire and Nanotube Structures via GA Optimization

References

Chapter 6: Other Methodologies for Investigating Atomic Structure

6.1 Parallel Tempering Monte Carlo Annealing

6.2 Basin Hopping Monte Carlo

6.3 Optimization via Minima Hopping

6.4 The Metadynamics Approach

6.5 Comparative Studies between GA and Other Structural Optimization Techniques

References

Chapter 7: Perspectives and Outlook

7.1 Expansion Through the Community

7.2 Future Algorithm Developments

7.3 Problems to Tackle – Discovery Versus Design

Index

Related Titles

Werner, W. S. M. (ed.)

Characterization of Surfaces and Nanostructures

Academic and Industrial Applications

2008

ISBN: 978-3-527-31760-8

Eftekhari, A. (ed.)

Nanostructured Materials in Electrochemistry

2008

ISBN: 978-3-527-31876-6

Vedmedenko, E.

Competing Interactions and Patterns in Nanoworld

2007

ISBN: 978-3-527-40484-1

Wilkening, G., Koenders, L.

Nanoscale Calibration Standards and Methods

Dimensional and Related Measurements in the Micro- and Nanometer Range

2005

ISBN: 978-3-527-40502-2

Reich, S., Thomsen, C., Maultzsch, J.

Carbon Nanotubes

Basic Concepts and Physical Properties

2004

ISBN: 978-3-527-40386-8

All books published by Wiley-VCH are carefully produced. Nevertheless, authors, editors, and publisher do not warrant the information contained in these books, including this book, to be free of errors. Readers are advised to keep in mind that statements, data, illustrations, procedural details or other items may inadvertently be inaccurate.

Library of Congress Card No.: applied for

British Library Cataloguing-in-Publication Data

A catalogue record for this book is available from the British Library.

Bibliographic information published by the Deutsche Nationalbibliothek

The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at <http://dnb.d-nb.de>.

©2013 Wiley-VCH Verlag GmbH & Co. KGaA, Boschstr. 12, 69469 Weinheim, Germany

All rights reserved (including those of translation into other languages). No part of this book may be reproduced in any form – by photoprinting, microfilm, or any other means – nor transmitted or translated into a machine language without written permission from the publishers. Registered names, trademarks, etc. used in this book, even when not specifically marked as such, are not to be considered unprotected by law.

Print ISBN: 978-3-527-40902-0

ePDF ISBN: 978-3-527-65505-2

ePub ISBN: 978-3-527-65504-5

mobi ISBN: 978-3-527-65503-8

oBook ISBN: 978-3-527-65502-1

Cover Design Adam-Design, Weinheim

Typesetting Thomson Digital, Noida, India

Preface

Knowledge of the atomic structure of a materials system is the key to understanding most of its properties, as well as the physical phenomena that can occur in that system. Even when the bulk crystal structure of a material is known or understood, that knowledge does not readily imply that we know the structure of atomic clusters or nanowires made of that particular material. We might venture a good guess for clusters with very large numbers of atoms, or in the case of very thick nanowires (whiskers), but most of the new and interesting phenomena happen not in the regimes of large dimensions but at the nanoscale. At the nanoscale, the structure and properties of materials often have little to do with the structure of the bulk crystalline material! This realization has provided strong motivation for the development of methodologies aimed at finding the atomic configuration of nanostructures. If we analyze the field of atomic structure determination more deeply, we recognize, for example, that our old college textbooks provided only limited understanding of the correlation between the composition of an alloy or compound and its equilibrium structure: this situation was described a few decades ago as a "continuing scandal in the physical sciences" (John Maddox, 1988). This challenge also sparked significant and long-standing efforts to predict the crystal structure from its composition. The two main impediments to crystal structure prediction and to the determination of the atomic configuration of nanostructures have long been the lack of realistic models of atomic interactions and the lack of an efficient optimization scheme. Both of these impediments have by now been remedied for a respectable number of materials systems, and the time seems ripe to describe atomic structure prediction in a book. We do that here, focusing mainly on one method: the genetic algorithm.

Genetic algorithms "mimic" the processes of natural evolution to create progressively better-fit atomic structures and eventually find, or predict, their equilibrium configurations. While one may argue that natural evolution is overly simplified in this case, practice has shown that it is precisely the simplification of dealing with real-space configurations (as opposed to their representation as genotypes, akin to actual genes) that is responsible for obtaining solutions within reasonable time and with reasonable computational resources.

The main purpose of this book is to offer a beginning practitioner (often a graduate student) a detailed background in the way genetic algorithms for atomic structure prediction work. As noted in the book, we use the terms "genetic" and "evolutionary" interchangeably, although a few workers in the field tend to reserve the term "genetic" for the algorithm in the binary representation of the coordinate space. Confusion is unlikely, since nowadays the binary representation is hardly used anymore. When this book was commissioned by Wiley-VCH, the focus was only on nanostructures, clusters, and surfaces, hence the cover title. However, as this project progressed, so did the state of the field – culminating with the genetic algorithm solution of one of the most important problems in solid-state physics, that is, the prediction of crystal structure solely from knowledge of its composition. We have included this important development in the book, so the reader obtains a timely and up-to-date view of the field.

While our aim is to provide a clear understanding of how the genetic algorithm works and of its strengths and successes, this book is by no means a review of all genetic algorithm works published so far. Given the rapid developments in the field, such a review would be both too long and incomplete. We have structured this book as a primer that intertwines basic general descriptions of the algorithms (e.g., the operations, the fitness functions, boundary conditions, where and how the algorithm can be applied, etc.) with full-detail applications to specific systems including crystals (3D), surfaces (2D), nanowires and nanotubes (1D), and certainly clusters (0D). We believe that the reader will benefit from this approach, as well as from the fact that all the examples and case studies have been scrutinized in the peer-review process (as technical journal publications) and contain virtually all working details of the approach and of the analysis. As such, we hope that the book provides significant understanding of the method, so that any reader interested in applying the algorithm to his or her own problem can do so with relative ease and confidence – whether or not that problem has already been covered in the book.

Throughout the years, we have been fortunate to work with a number of faculty colleagues, postdoctoral associates, and graduate students from Ames Laboratory, Iowa State University, and Colorado School of Mines, as well as from many other institutions. The work that is described in most detail (i.e., the full-detail applications) in this book has been performed alongside them, and we gratefully recognize them here: R.M. Briggs, T.-L. Chan, F.-C. Chuang, T. Davies, D.M. Deaven, G.H. Gilmer, B. Harmon, M. Ji, B. Liu, N. Lu, D.E. Lytle, D.P. Mehta, J.R. Morris, M.C. Nguyen, C. Predescu, J.-L. Rodriguez-Lopez, V.B. Shenoy, K. Umemoto, R.M. Wentzcovitch, S. Wu, J. Zhang, and X. Zhao.

We thank them for their work and insights, and we look forward to our continued collaborations. Last but not least, we warmly thank the editor of Physical Sciences Books at Wiley-VCH, Nina Stadthaus, who has guided this project with unmatched professionalism, patience, and dedication.

Golden, CO

Ames, IA

Ames, IA

September 2012

Cristian V. Ciobanu, Cai-Zhuang Wang, Kai-Ming Ho

1

The Challenge of Predicting Atomic Structure

The atomic structure is the most important piece of information needed when studying the properties of crystals, surfaces, interfaces, or nanostructures. If the bulk crystal structure of a material or compound is known, this may, under certain conditions, help in determining the structures of surfaces, clusters, or nanowires; however, such knowledge does not at all imply that we automatically know the structure of a surface or of a nanoparticle made of that material. Surfaces and nanostructures often adopt very intriguing atomic configurations, especially when their size is small (e.g., a small number of atoms for clusters, a thin diameter for nanowires). By now, the structure of many crystals is known and usually taken for granted, sometimes to the point of considering a surface or a nanostructure as a simple truncation of the bulk material. As we will see in other chapters, this is rarely the case at the nanoscale: the structure of small atomic clusters has little (usually nothing) to do with that of the bulk crystalline material!

Coming back to the issue of bulk crystal structure, on fundamental grounds we have to recognize that retrieving the crystal structure of a given material or compound solely from its composition is not an easy or straightforward task: what determines molybdenum, say, to "choose" a body-centered cubic structure at normal conditions of temperature and pressure, while palladium has a face-centered cubic (fcc) structure and ruthenium adopts a hexagonal close-packed one? Why does NaCl, the common salt, adopt a structure in which the fcc sublattices of Na and Cl are displaced relative to one another by half the side of the conventional cube, while CsCl adopts a different structure even though Cs is in the same group of the periodic table as Na? Granted, one can easily give a partial answer to this question on the grounds that the Cs atom, although of the same valence as Na, has a larger ionic radius and therefore tends to have more Cl atoms around it than Na has. Still, what made NaCl adopt its specific structure (Figure 1.1) in the first place? Why are the sodium atoms in the NaCl crystal arranged in an fcc structure – could they not have chosen a different arrangement? Some decades ago, John Maddox [1] phrased the problem of determining the crystal structure from its composition as a provocation that was also meant as a statement of fact (or perhaps the other way around):

“One of the continuing scandals in the physical sciences is that it remains impossible to predict the structure of even the simplest crystalline solids from a knowledge of their composition.”

Figure 1.1 The crystal structures of NaCl (a) and CsCl (b).
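As an illustration (standard crystallographic data, not taken from this book), the two structures in Figure 1.1 can be written compactly as fractional coordinates of their conventional cubic cells, and a quick check confirms that each Na atom has Cl neighbors at half the cube edge:

```python
# Conventional cubic cells as fractional coordinates (cube edge = 1).
# NaCl (rocksalt): two interpenetrating fcc sublattices displaced by half
# the cube edge; CsCl: simple cubic Cs with Cl at the body center.
nacl = {
    "Na": [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0), (0.5, 0.0, 0.5), (0.0, 0.5, 0.5)],
    "Cl": [(0.5, 0.0, 0.0), (0.0, 0.5, 0.0), (0.0, 0.0, 0.5), (0.5, 0.5, 0.5)],
}
cscl = {
    "Cs": [(0.0, 0.0, 0.0)],
    "Cl": [(0.5, 0.5, 0.5)],
}

# Nearest Na-Cl separation within the cell, in units of the cube edge
d_min = min(
    sum((a - b) ** 2 for a, b in zip(na, cl)) ** 0.5
    for na in nacl["Na"]
    for cl in nacl["Cl"]
)  # d_min = 0.5, i.e., half the conventional cube edge
```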

The scientific community has responded very well to that provocation: now, two and a half decades later, the development of global search algorithms, coupled with the increase in available computing power, allows us to make crystal structure predictions from first-principles calculations, starting only with the desired composition. In this chapter, we follow the main milestones that have led to such progress and then describe the scope and organization of this book.

1.1 Evolution: Reality and Algorithms

What eventually facilitated the best answer to Maddox's challenge are evolutionary algorithms, or genetic algorithms (GAs), for structure prediction. We use these terms interchangeably, although only part of the scientific community does so. Methods to simulate materials thermodynamics already existed (such as simulated annealing [2]), and those methods are highly meaningful from a fundamental standpoint. Not only do they simulate the thermodynamics of the material but, in principle, they may double as ground-state structure prediction methods if the system is run at high enough temperature and then slowly cooled down. However versatile this approach may seem at first sight, in its early days it did not always lead to ground-state structures, due to the tendency to get stuck in metastable local minima. As it turns out, having the materials system go over energetic barriers in the potential energy surface (PES) is not an easy task: in fact, it is extremely time consuming and inefficient. Even if sufficiently slow cooling rates can be achieved for certain systems and the ground state is reached, those rates do not constitute universal knowledge that can be transferred to other materials.
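To make this limitation concrete, here is a minimal simulated-annealing sketch on a toy one-dimensional PES (an asymmetric double well); the function name, potential, and cooling schedule are all illustrative choices, not from the text. Whether the walker ends in the global well near x ≈ +1 or freezes in the metastable well at x < 0 depends entirely on how slowly it is cooled:

```python
import math
import random

def simulated_anneal(energy, x0, t_start=2.0, t_end=1e-3,
                     steps=5000, step_size=0.3, seed=0):
    """Metropolis simulated annealing on a 1D potential energy surface.

    The temperature is lowered geometrically from t_start to t_end;
    reaching the global minimum depends on the cooling rate, which is
    exactly the limitation discussed in the text."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    for k in range(steps):
        t = t_start * (t_end / t_start) ** (k / (steps - 1))
        x_new = x + rng.uniform(-step_size, step_size)
        e_new = energy(x_new)
        # Metropolis criterion: accept downhill always, uphill with exp(-dE/T)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            x, e = x_new, e_new
    return x, e

# Asymmetric double well: metastable minimum near x = -0.93 (E ~ +0.48),
# global minimum near x = +1.06 (E ~ -0.52)
pes = lambda x: (x * x - 1.0) ** 2 - 0.5 * x

x_min, e_min = simulated_anneal(pes, x0=-1.0)
```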

What has significantly changed this situation are methods that can cross energy barriers with some ease, which have been continuously developed since the early 1990s. Among those, the GA approaches have by now been developed to the point that they make reliable predictions of cluster or crystal structures. In a genetic algorithm, we usually start with a pool of structures (the genetic pool) that may or may not have anything to do with the real material structure; only the chemical identity of the atoms that compose those structures is chosen as desired.

GAs are simple and generic methods that evolve the structures in this pool according to a certain energetic criterion (or cost function). The evolution proceeds via a set of genetic operations: new structures are created from the old ones, their cost functions are evaluated, and the new structures are considered for inclusion in the pool depending on the values of their cost functions. If the energy (cost function) of a new structure is sufficiently low, that structure is included in the genetic pool at the expense of an older, less energetically favorable one. The GA procedure, which will be described in more detail throughout this book, is thus guaranteed never to worsen the pool: at the very worst, every newly created structure has higher energy than the existing ones, in which case the genetic pool is simply not updated. In practice this does not happen when the initial pool is made of random structures: after a sufficient number of applications, the genetic operations generate structures that are better than the random ones, simply by virtue of keeping the pool updated with low-energy structures.
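The pool-update logic just described can be sketched generically; the toy cost function and all parameter values below are illustrative stand-ins (a real application would use an interatomic potential and real-space structures):

```python
import random

def ga_pool_update(energy, make_random, crossover, mutate,
                   pool_size=8, generations=300, p_mut=0.3, seed=1):
    """Generic GA pool evolution as described in the text: a new candidate
    enters the pool only if it beats the current worst member, so the pool
    can never get worse."""
    rng = random.Random(seed)
    pool = sorted((make_random(rng) for _ in range(pool_size)), key=energy)
    for _ in range(generations):
        p1, p2 = rng.sample(pool, 2)          # select two parent structures
        child = crossover(p1, p2, rng)        # combine their "traits"
        if rng.random() < p_mut:
            child = mutate(child, rng)        # occasional random change
        if energy(child) < energy(pool[-1]):  # survival of the fittest
            pool[-1] = child
            pool.sort(key=energy)
    return pool                               # sorted, best structure first

# Toy stand-in for a structure-optimization problem: a "structure" is a
# coordinate vector and the "energy" is its squared norm (minimum at 0).
dim = 5
energy = lambda x: sum(c * c for c in x)
make_random = lambda rng: [rng.uniform(-5.0, 5.0) for _ in range(dim)]
crossover = lambda a, b, rng: [ac if rng.random() < 0.5 else bc
                               for ac, bc in zip(a, b)]
mutate = lambda x, rng: [c + rng.gauss(0.0, 0.5) for c in x]

best = ga_pool_update(energy, make_random, crossover, mutate)[0]
```

Even with uniform parent selection, the elitist replacement rule alone drives the best pool member steadily downhill.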

In a way, the analogy with reality is very clear: traits (atomic configurations) that are beneficial to the species are propagated in time through the genetic operations, while those that are not lead to specimens unfit to survive, which eventually die (such structures are pushed out of the genetic pool). Although the principle of GA is strikingly close to that of actual Darwinian evolution, we note that in GA for structure optimization the parent structures can pass on not only traits they were born with but also traits acquired during their evolution: this is called Lamarckian evolution. In fact, it is this Lamarckian evolution, coupled with the relative simplicity of atomic and molecular systems compared to actual genomes, that allows structures to evolve sufficiently fast. In the evolution of species there is no actual end to the process; there is continuous adaptation to the environmental conditions of life. However, when using GAs for atomic structure problems, we use them as a way to find a solution, usually a ground-state structure for given conditions; that is, we seek the lowest-energy structure for a given composition and halt the procedure when the best structure in the genetic pool stops improving over a prescribed number of generations.

1.2 Brief Historical Perspective

The use of GAs as a structure optimization method can be traced back to the work of Hartke [3], who used it for small silicon clusters; another early use of the method was for small molecular clusters [4]. It is interesting to note that, in order to keep the algorithm close to real-life evolution, the structures in those early works were encoded as strings of 0s and 1s, interpreted as the genomes of the individual structures. New structures would then be created by gene splicing, in which the genetic operators (crossover, mutation) acted on the binary strings bit by bit. As a matter of historical perspective, we note that the encoding in the work of Hartke was not done by independently discretizing space; rather, a prescribed set of geometric parameters associated with the cluster was encoded as a binary string. Although other works have benefited from GA searches in this binary encoding (called the genotype representation) [5–7], two problems can already be recognized. First, the encoding of space requires a careful definition of certain geometric parameters. Second, because in this encoding the GA does not fine-tune structures into local or global minima, a separate minimization step in real space has to be added.

As a response to the first of these two problems, Zeiri [8] described the structures in a genetic pool not as binary strings but as collections of position vectors of the atoms. This description removed the artifacts associated with encoding and decoding, which stem from user-based decisions about which parameters to encode. Removing the requirement for encoding was an important conceptual step in the development of GA for cluster structure determination. Also in 1995, the next significant steps in the development of the algorithm came from Deaven and Ho [9], who (i) replaced the gene-splicing operations by real-space crossover operations, in which spatial domains of two parent structures are combined to form a new structure, and (ii) introduced a local optimization (relaxation) step after each new structure is generated. The introduction of a local relaxation for every structure considered for inclusion into the genetic pool changes the PES of the system in a way that makes the algorithm deal only with local minima on the PES and nothing else. This is a significant simplification of the structure determination problem and is the key to the success of GA. There are still energy barriers between the various local minima, but the cut-and-paste crossover operation of Deaven and Ho [9] is not hampered by these barriers: the local minimization of the coordinates can often alleviate most of the stresses created at the boundary between the two parental domains joined together. Because new characteristics are acquired during the relaxation step, the GA in the Deaven–Ho development is more akin to Lamarckian evolution than to Darwinian evolution.
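A simplified sketch of these two ingredients, with hypothetical function names and parameters rather than the authors' actual implementation, might look as follows. Ranking atoms by their projection onto a random direction and splicing equal halves is a common simplification of the Deaven–Ho cut, and the crude numerical steepest descent stands in for the local relaxation (real codes use analytic forces, tight binding, or DFT):

```python
import math
import random

def cut_and_splice(parent_a, parent_b, rng):
    """Real-space crossover in the spirit of Deaven and Ho: rank the atoms
    of each parent by their projection onto a random direction through the
    center of mass, then join the 'lower' half of one parent with the
    'upper' half of the other (the atom count is preserved)."""
    n = len(parent_a)
    v = [rng.gauss(0.0, 1.0) for _ in range(3)]          # random direction
    norm = math.sqrt(sum(c * c for c in v))
    v = [c / norm for c in v]

    def ranked(atoms):
        com = [sum(a[i] for a in atoms) / len(atoms) for i in range(3)]
        return sorted(atoms,
                      key=lambda a: sum((a[i] - com[i]) * v[i] for i in range(3)))

    return ranked(parent_a)[: n // 2] + ranked(parent_b)[n // 2:]

def lj_energy(atoms):
    """Lennard-Jones energy in reduced units (epsilon = sigma = 1)."""
    e = 0.0
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            r2 = sum((atoms[i][k] - atoms[j][k]) ** 2 for k in range(3))
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 * inv6 - inv6)
    return e

def relax(atoms, steps=200, lr=1e-3, h=1e-4):
    """Crude numerical steepest descent standing in for the local
    relaxation applied to every newly created structure."""
    atoms = [list(a) for a in atoms]
    for _ in range(steps):
        for i in range(len(atoms)):
            for k in range(3):
                atoms[i][k] += h
                e_plus = lj_energy(atoms)
                atoms[i][k] -= 2.0 * h
                e_minus = lj_energy(atoms)
                atoms[i][k] += h                  # restore position
                atoms[i][k] -= lr * (e_plus - e_minus) / (2.0 * h)
    return atoms
```

The essential point is the order of operations: splice first, then relax, so every candidate entering the pool sits at (or near) a local minimum of the PES.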

After a few initial tests on carbon clusters [9], point charges [10], and Lennard-Jones systems [11], the GA in real-space representation with cut-and-paste crossover operations became mature and versatile enough to tackle systems of tens to hundreds of atoms. As such, the use of GA spread with its application to finding the structure of clusters made of very different materials, such as silicon [12], germanium [13], metals (Al, Au, Ag, Ni, Zn, and others [14–20]), metallic alloys (Ni–Al, Cu–Au, and others [21–25]), oxides (MgO, TiO2, SiO2, and others [26–28]), water molecules [29, 30], and many others. During this time of rapidly expanding applications, it became apparent that the method is very efficient and powerful. It also became clear that, short of the dimensionality curse (by which any global structural optimization method fails if the number of atoms is increased sufficiently), the quality of the results, or lack thereof, is determined not by the GA procedure itself but by the interatomic potential model used to evaluate the energies or cost functions. The results of GA searches based on empirical potentials are checked at the end of the GA procedures, when possible, against density functional theory (DFT) or tight-binding calculations. Often these checks reveal a significantly different rank ordering of the structures at the DFT level than at the empirical potential level. For this reason, GA global optimum searches are more and more often coupled with more accurate tight-binding models or with DFT calculations (see, for example, Ref. [12]).

With real-space GA well established as a method of efficiently locating global minima, the community changed focus to systems other than clusters. In 2004, Ciobanu and Predescu cast the reconstruction of silicon surfaces as a problem of global optimization of the positions of the atoms at the surface and designed a parallel tempering annealing method to find the global minimum for high-index silicon surfaces [31]. Clearly, if a Monte Carlo-based global optimizer was successful, a GA should also be able to find the optimal reconstruction of a surface. Indeed, shortly thereafter a GA for finding surface reconstructions was designed, and results were published for a variety of surface orientations of silicon [32–36]. The main idea of the GA for surfaces was the realization that the set of atoms subjected to GA operations can and should be different from the total set of atoms present in the system: the structures are selected based on their surface energy, whose correct calculation requires many surface layers to relax. This way, the optimization is carried out over many fewer atoms than are needed to compute surface energies; in passing, we note that the same requirement holds for interfaces, for which GA can also identify optimal structures [37, 38]. Furthermore, there is another key conceptual aspect of the algorithm development that was not present in the case of clusters: the number of atoms subjected to GA operations can be variable and does not have to be forced to remain constant [32]. This is because the surface energy is defined per unit area, and thus the number of atoms subjected to GA is not a predefined physical characteristic of the system (the way it is for clusters).

In parallel with the developments for surface reconstructions, the GA was applied to finding the structure of ultrathin nanowires [39–42]. In the case of metals, the GA required minimal modifications with respect to its implementation for atomic clusters; those modifications consisted, essentially, of the use of periodic boundary conditions along one spatial direction [39, 40]. In the case of silicon nanowires (SiNWs), remarkable progress had been achieved in their preparation and characterization, but atomic-level knowledge of the structure was still required for a complete understanding of the device properties of these wires. GA was envisioned as a tool to gain insight into the structure, but the role of passivation and the parameters associated with it were not readily clear, and transferable potential models had yet to be tested against DFT data. Eventually, around 2006, these difficulties were shown to be manageable, and the GA registered success in predicting the structure of passivated Si nanowires [41, 42].

In 2006, Abraham and Probert refreshed the interest of the scientific community in using global optimizers for predicting crystal structures with a very creative article [43]. These authors showed that the cut-and-paste crossover of Deaven and Ho [9] can be used with periodic boundary conditions in all three spatial dimensions, provided it is executed on the fractional coordinates of the atoms rather than on the Cartesian ones. Combined with the variable-number feature [32], the real-space GA for crystal structure prediction of Abraham and Probert was successful in finding global minimum energy configurations as well as polymorphs. The algorithm required no prior assumptions regarding the size, shape, or symmetry of the simulation cell, and no knowledge about the atomic configuration in the cell. Results on large Lennard-Jones systems with fcc- and hcp-commensurate cells showed robust convergence to the bulk structure from a random initial assignment and an ability to discriminate between competing low-energy configurations. Abraham and Probert also coupled their GA with ab initio calculations and showed the spontaneous emergence of both lonsdaleite and graphite-like structures in their GA search [43]. This, in effect, was a very clear answer to Maddox's challenge [1].
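A minimal sketch of a periodic crossover on fractional coordinates (hypothetical function names; a sketch of the idea, not the authors' code) takes a wrapped slab of one parent's cell and the complement from the other, with the child's atom count free to vary as in the variable-number GA:

```python
def periodic_slice_crossover(frac_a, frac_b, rng, axis=2):
    """Cut-and-paste crossover under 3D periodic boundary conditions,
    performed on fractional coordinates: atoms of parent A inside a
    wrapped slab [cut, cut + 0.5) along one fractional axis are combined
    with the atoms of parent B outside it.  The child's atom count is
    not fixed."""
    cut = rng.random()

    def in_slab(f):
        # the modulo wraps the slab periodically across the cell boundary
        return (f[axis] - cut) % 1.0 < 0.5

    return ([f for f in frac_a if in_slab(f)] +
            [f for f in frac_b if not in_slab(f)])

def to_cartesian(cell, frac):
    """Convert fractional to Cartesian coordinates; cell holds the three
    lattice vectors as rows."""
    return tuple(sum(frac[i] * cell[i][k] for i in range(3)) for k in range(3))
```

Working in fractional coordinates is what makes the cut compatible with an arbitrary (and evolving) cell shape: the slab is defined relative to the lattice vectors, not to a fixed Cartesian frame.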

In June 2006, Oganov and Glass [44] published a very similar GA development for crystal structure prediction, followed by Trimarchi and Zunger's development in 2007 [45]. The main advancements over Abraham and Probert concern the efficiency of the algorithm. Since Oganov and Glass [44] couple their GA mostly to DFT calculations (as do Trimarchi and Zunger [45]), it becomes critical to have clear criteria by which new structures can be discarded without performing a DFT relaxation on them; another factor that beneficially affects the time to success is seeding the initial pool of structures with known polymorphs. The work of Oganov and Glass culminated in the USPEX (Universal Structure Predictor: Evolutionary Xtallography) code, which is at present available to users worldwide [46]. This code has been interfaced with widely popular ab initio packages such as VASP [47] and SIESTA [48], and with GULP for atomistic calculations [49]. A plethora of applications of the GA for crystal structure prediction followed [50–65], effectively articulating a welcome and clear answer to Maddox's challenge.

1.3 Scope and Organization of This Book

The purpose of this book is to provide a beginner, usually a graduate student, with a solid background in genetic algorithms for solving very diverse problems of atomic structure prediction. We use the terms "genetic" and "evolutionary" interchangeably, but the reader will notice the former term more often. We believe that this should not create confusion, since nowadays the genotype representation of the method (i.e., the one in which atomic structures have to be binary encoded) does not offer solutions anywhere close to those of the real-space representation in terms of robustness, efficiency, or achievable problem size with given resources. Other than mentioning it in historical context (such as that of the previous section) and in brief comparisons, the genotype representation is virtually absent from this book.

We describe in detail the inner workings of the real-space GA method in Chapters 2–5 for various materials systems and structural problems, guided by the developments described above. Specifically, we discuss the general method (Chapter 2) while considering all possible system dimensionalities from 0D to 3D. The specific applications to crystals and clusters are described in Chapters 3 and 4, respectively. Chapter 5, the longest in the book, contains applications to surfaces, interfaces, nanowires, and nanotubes. A short account of a few other popular global structural optimization methods is given in Chapter 6. Even though codes are available [46], they do not tackle all problems of structure optimization; many problems remain to be solved, as described in Chapter 7.

While we do aim to provide a clear understanding of the inner workings and of the strengths and successes of GA in real-space representation, this book is not a review of all GA works published so far. Rather, it is meant as a primer that contains partially [9, 12, 13, 31] or fully described examples [10, 32–35, 37, 41, 63–69] from the work done over the years by our own groups at Ames Laboratory/Iowa State University and Colorado School of Mines, or done through collaborations with colleagues from other institutions. The references in every chapter are selected judiciously so as to give the reader a good picture of the current state of the field, but the list is certainly not exhaustive – the field grows too rapidly to even attempt complete citation lists. The book benefits from the fact that all the examples and case studies have been scrutinized in the peer-review process and contain virtually all working details of the approach and analysis – which should provide insight into the method for any reader interested in designing her or his own GA methodology for problems that may or may not be similar to the examples given.

References

1. Maddox, J. (1988) Nature, 335, 201.

2. Kirkpatrick, S. (1983) Science, 220, 671.

3. Hartke, B. (1993) J. Phys. Chem., 97, 9973.

4. Xiao, Y. and Williams, D.E. (1993) Chem. Phys. Lett., 215, 17.

5. Hartke, B. (1996) Chem. Phys. Lett., 258, 144.

6. Hartke, B. (2003) Phys. Chem. Chem. Phys., 5, 275.

7. Hartke, B., Flad, H.-J., and Dolg, M. (2001) Phys. Chem. Chem. Phys., 3, 5121.

8. Zeiri, Y. (1995) Phys. Rev. E, 51, 2769.

9. Deaven, D.M. and Ho, K.M. (1995) Phys. Rev. Lett., 75, 288.

10. Morris, J.R., Deaven, D.M., and Ho, K.M. (1996) Phys. Rev. B, 53, R1740.

11. Deaven, D.M., Tit, N., Morris, J.R., and Ho, K.M. (1996) Chem. Phys. Lett., 256, 195.

12. Ho, K.M., Shvartsburg, A.A., Pan, B., Lu, Z.Y., Wang, C.Z., Wacker, J.G., Fye, J.L., and Jarrold, M.F. (1998) Nature, 392, 582.

13. Qin, W., Lu, W.C., Zhao, L.Z., Zang, Q.J., Chen, G.J., Wang, C.Z., and Ho, K.M. (2009) J. Chem. Phys., 131, 124507.

14. Lloyd, L.D., Johnston, R.L., Roberts, C. et al. (2002) ChemPhysChem, 3, 408.

15. Chuang, F.C., Wang, C.Z., and Ho, K.M. (2006) Phys. Rev. B, 73, 125431.

16. Michaelian, K., Rendon, N., and Garzon, I.L. (1999) Phys. Rev. B, 60, 2000.

17. Zhang, W.X., Liu, L., and Li, Y.F. (1999) Acta Phys. Sin., 48, 642.

18. Wang, J.L., Wang, G.H., and Zhao, J.J. (2003) Phys. Rev. A, 68, 013201.

19. Sun, H.Q., Luo, Y.H., Zhao, J.J. et al. (1999) Phys. Status Solidi B, 215, 1127.

20. Wang, B.L., Zhao, J.J., Chen, X.S. et al. (2005) Phys. Rev. A, 71, 033201.

21. Hsu, P.J. and Lai, S.K. (2006) J. Chem. Phys., 124, 044711.

22. Wang, J.L., Wang, G.H., Chen, X.S. et al. (2002) Phys. Rev. B, 66, 014419.

23. Bailey, M.S., Wilson, N.T., Roberts, C. et al. (2003) Eur. Phys. J. D, 25, 41.

24. Diaz-Ortiz, A., Aguilera-Granja, F., Michaelian, K. et al. (2005) Physica B, 370, 200.

25. Darby, S., Mortimer-Jones, T.V., Johnston, R.L. et al. (2002) J. Chem. Phys., 116, 1536.

26. Hamad, S., Catlow, C.R.A., Woodley, S.M. et al. (2005) J. Phys. Chem. B, 109, 15741.

27. Wang, C., Liu, L., and Li, Y.F. (1999) Acta Phys.-Chim. Sin., 15, 143–149.

28. Roberts, C. and Johnston, R.L. (2001) Phys. Chem. Chem. Phys., 3, 5024.

29. Hartke, B., Schutz, M., and Werner, H.J. (1998) Chem. Phys., 239, 561.

30. Guimaraes, F.F., Belchior, J.C., Johnston, R.L. et al. (2002) J. Chem. Phys., 116, 8327.

31. Ciobanu, C.V. and Predescu, C. (2004) Phys. Rev. B, 70, 085321.

32. Chuang, F.C., Ciobanu, C.V., Shenoy, V.B., Wang, C.Z., and Ho, K.M. (2004) Surf. Sci., 573, L375.

33. Chuang, F.C., Ciobanu, C.V., Predescu, C., Wang, C.Z., and Ho, K.M. (2005) Surf. Sci., 578, 183.

34. Chuang, F.C., Ciobanu, C.V., Wang, C.Z., and Ho, K.M. (2005) J. Appl. Phys., 98, 073507.

35. Ciobanu, C.V., Chuang, F.C., and Lytle, D.E. (2007) Appl. Phys. Lett., 91, 171909.

36. Ciobanu, C.V., Jariwala, B.N., Davies, T.E.B., and Agarwal, S. (2009) Comput. Mater. Sci., 45, 150.

37. Zhang, J., Wang, C.Z., and Ho, K.M. (2009) Phys. Rev. B, 80, 174102.

38. Chua, A.L.S., Benedeck, N.A., Chen, L., Finnis, M.W., and Sutton, A.P. (2010) Nat. Mater., 9, 418.

39. Wang, B., Yin, S., Wang, G.H., Buldum, A., and Zhao, J.J. (2001) Phys. Rev. Lett., 86, 2046.

40. Wang, B.L., Wang, G.H., and Zhao, J.J. (2002) Phys. Rev. B, 65, 235406.

41. Chan, T.L., Ciobanu, C.V., Chuang, F.C., Lu, N., Wang, C.Z., and Ho, K.M. (2006) Nano Lett., 6, 277.

42. Lu, N., Ciobanu, C.V., Chan, T.L., Chuang, F.C., Wang, C.Z., and Ho, K.M. (2007) J. Phys. Chem. C, 111, 7933.

43. Abraham, N.L. and Probert, M.I.J. (2006) Phys. Rev. B, 73, 224104.

44. Oganov, A.R. and Glass, C.W. (2006) J. Chem. Phys., 124, 244704.

45. Trimarchi, G. and Zunger, A. (2007) Phys. Rev. B, 75, 104113.

46. http://han.ess.sunysb.edu/~USPEX/.

47. Kresse, G. and Furthmüller, J. (1996) Phys. Rev. B, 54, 11169.

48. Soler, J.M., Artacho, E., Gale, J.D., Garcia, A., Junquera, J., Ordejon, P., and Sanchez-Portal, D. (2002) J. Phys.: Condens. Matter, 14, 2745.

49. Gale, J.D. (2005) Z. Kristallogr., 220, 552.

50. Oganov, A.R., Glass, C.W., and Ono, S. (2006) Earth Planet. Sci. Lett., 241, 95.

51. Oganov, A.R., Ono, S., Ma, Y.M., Glass, C.W., and Garcia, A. (2008) Earth Planet. Sci. Lett., 273, 38.

52. Gao, G.Y., Oganov, A.R., Bergara, A., Martinez-Canales, M., Cui, T., Iitaka, T., Ma, Y.M., and Zou, G.T. (2008) Phys. Rev. Lett., 101, 107002.

53. Oganov, A.R., Chen, J.H., Gatti, C., Ma, Y.Z., Ma, Y.M., Glass, C.W., Liu, Z.X., Yu, T., Kurakevych, O.O., and Solozhenko, V.L. (2009) Nature, 457, 863.

54. Ma, Y.M., Oganov, A.R., Li, Z.W., Xie, Y., and Kotakoski, J. (2009) Phys. Rev. Lett., 102, 065501.

55. Li, Q., Ma, Y.M., Oganov, A.R., Wang, H.B., Wang, H., Xu, Y., Cui, T., Mao, H.K., and Zou, G.T. (2009) Phys. Rev. Lett., 102, 175506.

56. Oganov, A.R., Ma, Y.M., Xu, Y., Errea, I., Bergara, A., and Lyakov, A.O. (2010) Proc. Natl. Acad. Sci. USA, 107, 7646.

57. Xie, Y., Oganov, A.R., and Ma, Y.M. (2010) Phys. Rev. Lett., 104, 177005.

58. Zhou, X.F., Dong, X., Zhao, Z.S., Oganov, A.R., Tian, Y.J., and Wang, H.T. (2012) Appl. Phys. Lett., 100, 061905.

59. d'Avezac, M. and Zunger, A. (2007) J. Phys.: Condens. Matter, 19, 402201.

60. Trimarchi, G. and Zunger, A. (2008) J. Phys.: Condens. Matter, 20, 295212.

61. Trimarchi, G., Freeman, A.J., and Zunger, A. (2009) Phys. Rev. B, 80, 092101.

62. d'Avezac, M., Luo, J.W., Chanier, T., and Zunger, A. (2012) Phys. Rev. Lett., 108, 027401.

63. Ji, M., Umemoto, K., Wang, C.Z., Ho, K.M., and Wentzcovitch, R.M. (2011) Phys. Rev. B, 84, 220105(R).

64. Wu, S., Umemoto, K., Ji, M., Wang, C.Z., Ho, K.M., and Wentzcovitch, R.M. (2011) Phys. Rev. B, 83, 184102.

65. Nguyen, M.C., Zhao, X., Ji, M., Wang, C.Z., Harmon, B., and Ho, K.M. (2012) J. Appl. Phys., 111, 07E338.

66. Chuang, F.-C., Liu, B., Wang, C.Z., Chan, T.L., and Ho, K.M. (2005) Surf. Sci., 598, L339.

67. Briggs, R.M. and Ciobanu, C.V. (2007) Phys. Rev. B, 75, 195415.

68. Davies, T.E.B., Mehta, D.P., Rodriguez-Lopez, J.L., Gilmer, G.H., and Ciobanu, C.V. (2009) Mater. Manuf. Process., 24, 265.

69. Ji, M., Wang, C.-Z., and Ho, K.-M. (2010) Phys. Chem. Chem. Phys., 12, 11617.

2

The Genetic Algorithm in Real-Space Representation

This chapter, like the remainder of the book, focuses strictly on real-space-representation solutions of atomic structure problems. As mentioned in Chapter 1, in this representation the solution is sought directly in terms of the spatial coordinates of the atoms in a given structure, without passing through a binary representation. We will address, with some generality, the operational way in which atomic structure problems are solved using genetic algorithms (GAs). It is important to understand in detail the options that we have, or that we can devise, for implementing the main GA operations, i.e., selection, crossover, and mutation. The reason is the practical observation that there is not, for example, a general best way of evolving the genetic pool or a general best way of performing a crossover. Many of the available options do not influence convergence dramatically if they are exercised wisely. There is certainly a general procedure for carrying out genetic algorithm optimization, but its components have to be modified according to the optimization problem; as an obvious example, one needs to change the quantity to optimize when dealing with surfaces, as opposed to clusters. Moreover, there are many sets of sensible options in setting up GA operations, and we describe one of them in this chapter while pointing out, at times, ways in which practitioners can modify such a set. Although the overall complexity of a problem is determined to a large extent by the number of degrees of freedom of the atomic system considered, structural problems can still vary widely in their level of difficulty (assessed, for example, by the computational time taken to reach a solution using the same computing resources); the level of difficulty can be affected by boundary conditions or by the interactions prescribed for the atoms.
Interestingly, the level of difficulty can also depend directly on how “smart” the crossover operations are or on how the genetic pool is advanced from one generation to the next. This is to say, a given optimization problem can prove lengthy to solve if we have a “poor” crossover or a “poor” updating of the genetic pool. But how do we know, a priori, what a poor or smart crossover is for a given problem? Strictly speaking, we do not! However, there are certain guidelines for assessing the suitability or performance of a set of genetic operations. After a brief sampling of several important atomic structure problems (Section 2.1), this chapter addresses, in turn, the stages of genetic algorithms and the typical operations (Sections 2.2–2.6) in the context of various structural problems that currently arise in nanoscience, physical chemistry, or condensed matter physics.
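To make the discussion concrete, the general loop that Sections 2.2–2.6 dissect (initialize a pool, select parents, cross them, occasionally mutate, locally relax the offspring, and update the pool) can be sketched in a few lines of Python. The sketch below is ours, for illustration only; the structure representation and the `fitness`, `crossover`, `mutate`, and `relax` callables are placeholders for the problem-specific choices discussed throughout this chapter.

```python
import random

def genetic_algorithm(init_pool, fitness, crossover, mutate, relax,
                      pool_size=20, generations=100, p_mut=0.1):
    """Minimal GA skeleton: evolve a pool of locally relaxed structures."""
    # every candidate is locally relaxed before entering the pool
    pool = sorted((relax(s) for s in init_pool), key=fitness)[:pool_size]
    for _ in range(generations):
        # selection: pick two parents from the fitter half of the pool
        p1, p2 = random.sample(pool[: pool_size // 2], 2)
        child = crossover(p1, p2)
        if random.random() < p_mut:      # occasional mutation
            child = mutate(child)
        child = relax(child)             # local relaxation of the offspring
        # pool update: replace the worst member if the child improves on it
        if fitness(child) < fitness(pool[-1]):
            pool[-1] = child
            pool.sort(key=fitness)
    return pool[0]                       # best structure found
```

Each commented line corresponds to a design choice (how to select parents, how to update the pool, how often to mutate) whose variants are examined in the sections that follow.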

2.1 Structure Determination Problems

Let us state precisely, for a few important cases, the atomic structure problems that should be tackled by global optimization methods, and in particular by genetic algorithms. Such statements, which are at the heart of this book, are important not merely for the sake of completeness, but for understanding the details of the evolutionary procedure that should be adopted toward their solutions.

2.1.1 Cluster Structure

For a number N of atoms of a given atomic species, determine the structure that this collection of atoms adopts.

The determination of the structure is based on changing (optimizing) the atomic coordinates so that the total formation energy per atom adopts the lowest possible value. The formation energy per atom is defined as

(2.1) E_f = (E − Nμ)/N

where E is the total energy of the bound system (cluster) and Nμ is the product of the number of atoms N and the chemical potential μ of an atom in the given reference state. Since optimizations are carried out at 0 K, E is actually the total potential energy of the system. The number of degrees of freedom to optimize should be 3N, since there are three (Cartesian) atomic coordinates for each atom in the cluster; however, since clusters that differ only by a spatial rotation or translation should not be considered as having different structures, the number of degrees of freedom is actually 3N − 6. Note that for the optimization of an N-atom cluster, it would be sufficient to compare the total energy of various N-atom structures, as opposed to the formation energy per atom. Still, using the formation energy per atom instead of the total energy allows us to compare the stabilities of clusters with different numbers of atoms, and thus decide whether there is a strongly preferred size. For example, the optimization of clusters of carbon atoms shows the size N = 60 to be significantly more stable than clusters whose numbers of atoms differ only a little: the C60 cluster adopts the well-known buckyball structure, in which hexagons (six-atom rings) and pentagons (five-atom rings) adjoin to form a near-spherical structure in which every carbon atom is threefold coordinated (Figure 2.1).
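The bookkeeping implied by this definition is simple enough to express directly in code. The Python helpers below (our naming, for illustration, assuming the formation energy per atom of Eq. (2.1) takes the form (E − Nμ)/N) compute that quantity and the number of independent degrees of freedom of an N-atom cluster.

```python
def formation_energy_per_atom(e_total, n_atoms, mu):
    """Formation energy per atom: (E - N*mu) / N, cf. Eq. (2.1)."""
    return (e_total - n_atoms * mu) / n_atoms

def cluster_dof(n_atoms):
    """Independent degrees of freedom of an N-atom cluster:
    3N Cartesian coordinates minus 3 rigid translations
    and 3 rigid rotations."""
    return 3 * n_atoms - 6
```

For a 60-atom cluster, for instance, `cluster_dof(60)` gives 174 coordinates to optimize, which conveys the scale of the search space even for modest cluster sizes.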

Figure 2.1 Buckyball, the ground-state structure of a 60-atom carbon cluster. This is the first report of successful real-space genetic algorithm optimization applied to clusters [1].

The buckyball, discovered experimentally by Curl, Kroto, and Smalley – a discovery recognized with the 1996 Nobel Prize in Chemistry – is a particularly stable form of carbon that appears in hot carbon plasmas [2]. Carbon is significantly richer in polymorphic crystal structures than other group IV elements such as silicon or germanium. Optimization of carbon clusters with lower numbers of atoms shows that, in order of increasing N, they tend to form single-ring structures and then multiring configurations whose shapes can be flat, bowl-like, or cap-like (Figure 2.2) [3]. These structures were obtained by local relaxations starting from different initial configurations, in order to access more than one possible isomer; energies were computed at the level of the reactive Brenner potential [4]. Some of the reported ground-state structures shown in Figure 2.2 can certainly be considered under debate, as shown by comparisons with, for example, Refs [5–7]. As an immediate example of the debate, structures with N between 21 and 30 take different configurations when the optimization method is aimed at the global minimum structure: using the same reactive potential [4], Wang et al. show that their quasi-dynamics-based global method leads to cage-like structures (Figure 2.3) for N in this range.

Figure 2.2 Structures of small carbon clusters for different values of N, obtained from local relaxations of different starting configurations at each value of N. (Adapted from Ref. [3], with permission from American Physical Society.)

Figure 2.3 Structures of carbon clusters with N between 21 and 30, obtained by a quasi-dynamics global search method. (From Ref. [7], with permission from American Physical Society.)

Without aiming to solve or to exacerbate controversies about the ground states of carbon clusters, we point out that there are two major reasons why cluster structures (or, in fact, any atomic structures) can come under debate. The first has to do with the type of interatomic interaction used to calculate cluster energies: classical (empirical) potentials, tight-binding (TB) models, or pseudopotentials developed for density functional theory (DFT) calculations, each with different versions of varying applicability or transferability. The second has to do with the optimization procedure. Even when the same atomic interaction models are used, the ground-state and metastable structures can still differ depending on how robust the search for the global minimum structure is. This is the case, for example, with the comparison mentioned above between the work of Kosimov et al. [3] and Wang et al. [7], that is, the comparison of Figures 2.2 and 2.3 for N between 21 and 30.

Robust searches with well-transferable atomic interactions have also been published for silicon clusters [8] and germanium clusters [9]. Such searches reveal similarities and differences between structures of group IV elements with the same numbers of atoms and offer insight into the growth patterns of these clusters [10]. Initial theoretical predictions for clusters in the intermediate size range were diverse and contradictory, and none of them seemed particularly consistent with experimental data. However, clusters obtained via genetic algorithms coupled with DFT calculations of the binding energy are consistent with experiments in terms of the ionic mobilities determined for these global minimum structures (Figure 2.4). As seen in Figure 2.4, these clusters are built upon a Si9 trigonal prism motif. At larger sizes, the global search predicts that near-spherical cage geometries become more stable than the trigonal prisms.

Figure 2.4 Global minima of Sin clusters obtained via a genetic algorithm coupled with DFT calculations, with the binding energy per atom indicated. The two geometries shown for Si16 are nearly degenerate. (From Ref. [8], with permission from Nature Publishing Group.)

In comparison, germanium clusters can form plate-like structures that consist of four Ge9 or Ge10 subunits and a Ge4 core that acts as a linker connecting the four subunits (Figure 2.5). The optimized geometries of Ge clusters already reveal significant differences in the growth patterns of C, Si, and Ge, which could be exploited in practice, since size variations in the regime of 1–100 atoms correspond to significant differences in electronic and optical properties.

Figure 2.5 Plate-like Ge clusters obtained via a genetic algorithm combined with DFT calculations. Shades distinguish between the linker (core) and the corners of the plate-like structures. (Adapted from Ref. [9], with permission from American Institute of Physics.)

The field of cluster structure determination and size-dependent properties (including stability and optical, electronic, and mechanical properties) has grown tremendously, both in algorithm development and in applications, and now covers a wide spectrum of materials: metallic clusters, metallic alloy clusters, ceramic clusters, molecular clusters, and others. We leave the more in-depth discussion of these types of clusters for Chapter 4.

2.1.2 Crystal Structure Prediction

Determine the structure of a crystalline material made of a known set of atomic species, with a given composition.

In this problem, it is not sufficient to optimize the atomic positions (up to an arbitrary translation); we also need to optimize the shape and size of the simulation box that models the entire crystal through the use of periodic boundary conditions. Since the directional angles (α, β, γ) and the edge lengths (a, b, c) of the simulation box (see Figure 2.6) have to be optimized, and since spatial rotations are no longer symmetry operations, the number of degrees of freedom is not 3N − 6 as in the case of clusters, but