The Second Edition demonstrates how computational chemistry continues to shed new light on organic chemistry
The Second Edition of author Steven Bachrach’s highly acclaimed Computational Organic Chemistry reflects the tremendous advances in computational methods since the publication of the First Edition, explaining how these advances have shaped our current understanding of organic chemistry. Readers familiar with the First Edition will discover new and revised material in all chapters, including new case studies and examples. There’s also a new chapter dedicated to computational enzymology that demonstrates how principles of quantum mechanics applied to organic reactions can be extended to biological systems.
Computational Organic Chemistry covers a broad range of problems and challenges in organic chemistry where computational chemistry has played a significant role in developing new theories or where it has provided additional evidence to support experimentally derived insights. Readers do not have to be experts in quantum mechanics. The first chapter of the book introduces all of the major theoretical concepts and definitions of quantum mechanics, followed by a chapter dedicated to computed spectral properties and structure identification. Next, the book covers fundamental concepts of organic chemistry, pericyclic reactions, diradicals and carbenes, organic reactions of anions, solution-phase organic chemistry, and organic reaction dynamics.
The final chapter offers new computational approaches to understand enzymes. The book features interviews with preeminent computational chemists, underscoring the role of collaboration in developing new science. Three of these interviews are new to this edition.
Readers interested in exploring individual topics in greater depth should turn to the book’s ancillary website www.comporgchem.com, which offers updates and supporting information. Plus, every cited article that is available in electronic form is listed with a link to the article.
Number of pages: 1160
Year of publication: 2014
Cover
Title Page
Copyright
Dedication
Preface
References
Acknowledgments
Chapter 1: Quantum Mechanics for Organic Chemistry
1.1 Approximations to the Schrödinger Equation—The Hartree–Fock Method
1.2 Electron Correlation—Post-Hartree–Fock Methods
1.3 Density Functional Theory (DFT)
1.4 Computational Approaches to Solvation
1.5 Hybrid QM/MM Methods
1.6 Potential Energy Surfaces
1.7 Population Analysis
1.8 Interview: Stefan Grimme
References
Chapter 2: Computed Spectral Properties and Structure Identification
2.1 Computed Bond Lengths and Angles
2.2 IR Spectroscopy
2.3 Nuclear Magnetic Resonance
2.4 Optical Rotation, Optical Rotatory Dispersion, Electronic Circular Dichroism, and Vibrational Circular Dichroism
2.5 Interview: Jonathan Goodman
References
Chapter 3: Fundamentals of Organic Chemistry
3.1 Bond Dissociation Enthalpy
3.2 Acidity
3.3 Isomerism and Problems With DFT
3.4 Ring Strain Energy
3.5 Aromaticity
3.6 Interview: Professor Paul von Ragué Schleyer
References
Chapter 4: Pericyclic Reactions
4.1 The Diels–Alder Reaction
4.2 The Cope Rearrangement
4.3 The Bergman Cyclization
4.4 Bispericyclic Reactions
4.5 Pseudopericyclic Reactions
4.6 Torquoselectivity
4.7 Interview: Professor Weston Thatcher Borden
References
Chapter 5: Diradicals and Carbenes
5.1 Methylene
5.2 Phenylnitrene and Phenylcarbene
5.3 Tetramethyleneethane
5.4 Oxyallyl Diradical
5.5 Benzynes
5.6 Tunneling of Carbenes
5.7 Interview: Professor Henry “Fritz” Schaefer
5.8 Interview: Professor Peter R. Schreiner
References
Chapter 6: Organic Reactions of Anions
6.1 Substitution Reactions
6.2 Asymmetric Induction Via 1,2-Addition to Carbonyl Compounds
6.3 Asymmetric Organocatalysis of Aldol Reactions
6.4 Interview: Professor Kendall N. Houk
References
Chapter 7: Solution-Phase Organic Chemistry
7.1 Aqueous Diels–Alder Reactions
7.2 Glucose
7.3 Nucleic Acids
7.4 Amino Acids
7.5 Interview: Professor Christopher J. Cramer
References
Chapter 8: Organic Reaction Dynamics
8.1 A Brief Introduction To Molecular Dynamics Trajectory Computations
8.2 Statistical Kinetic Theories
8.3 Examples of Organic Reactions With Non-Statistical Dynamics
8.4 Conclusions
8.5 Interview: Professor Daniel Singleton
References
Chapter 9: Computational Approaches to Understanding Enzymes
9.1 Models for Enzymatic Activity
9.2 Strategy for Computational Enzymology
9.3 De Novo Design of Enzymes
References
Index
Steven M. Bachrach
Copyright © 2014 by John Wiley & Sons, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data:
Bachrach, Steven M., 1959-
Computational organic chemistry / by Steven M. Bachrach, Department of Chemistry, Trinity University, San Antonio, TX. – Second edition.
pages cm
Includes bibliographical references and index.
ISBN 978-1-118-29192-4 (cloth)
1. Chemistry, Organic–Mathematics. 2. Chemistry, Organic–Mathematical models. I. Title.
QD255.5.M35B33 2014
547.001′51–dc23
2013029960
To Carmen and Dustin
In 1929, Dirac famously proclaimed that "The fundamental laws necessary for the mathematical treatment of a large part of physics and the whole of chemistry (emphasis added) are thus completely known, and the difficulty lies only in the fact that application of these laws leads to equations that are too complex to be solved."1
This book is a testament to just how difficult it is to adequately account for the properties and reactivities of real chemical systems using quantum mechanics (QM).
Though QM was born in the mid-1920s, it took many years before rigorous solutions for molecular systems appeared. Hylleras2 and others3, 4 developed nearly exact solutions to the single-electron diatomic molecule in the 1930s and 1940s. Reasonable solutions for multielectron, multiatom molecules did not appear until 1960, with Kolos'5, 6 computation of H2 and Boys'7 study of CH2. The watershed year was perhaps 1970, with the publication by Bender and Schaefer8 on the bent form of triplet CH2 (a topic of Chapter 5) and the release by Pople's9 group of Gaussian-70, the first full-featured quantum chemistry computer package to be used by a broad range of theorists and nontheorists alike. So, in this sense, computational quantum chemistry is really only some five decades old.
The application of QM to organic chemistry dates back to Hückel's π-electron model of the 1930s.10–12 Approximate quantum mechanical treatments for organic molecules continued throughout the 1950s and 1960s. Application of ab initio approaches, such as Hartree–Fock theory, began in earnest in the 1970s and really flourished in the mid-1980s, with the development of computer codes that allowed for automated optimization of ground and transition states and incorporation of electron correlation using configuration interaction or perturbation techniques.
In 2006, I began writing the first edition of this book, acting on the notion that the field of computational organic chemistry was sufficiently mature to deserve a critical review of its successes and failures in treating organic chemistry problems. The book was published the next year and met with a fine reception.
As I anticipated, immediately upon publication of the book, it was out of date. Computational chemistry, like all science disciplines, is a constantly changing field. New studies are published, new theories are proposed, and old ideas are replaced with new interpretations. I attempted to address the need for the book to remain current in some manner by creating a complementary blog at http://www.comporgchem.com/blog. The blog posts describe the results of new papers and how these results touch on the themes presented in the monograph. Besides providing an avenue for me to continue to keep my readers posted on current developments, the blog allowed for feedback from the readers. On a few occasions, a blog post and the article described engendered quite a conversation!
Encouraged by the success of the book, Jonathan Rose of Wiley approached me about updating the book with a second edition. Drawing principally on the blog posts I had written since 2007, I knew that the groundwork for writing an updated version of the book had already been done. So I agreed, and what you have in your hands is my perspective on the accomplishments of computational organic chemistry through early 2013.
The structure of the book remains largely intact from the first edition, with a few important modifications. Throughout this book, I aim to demonstrate the major impact that computational methods have had upon the current understanding of organic chemistry. I present a survey of organic problems where computational chemistry has played a significant role in developing new theories or where it provided important supporting evidence of experimentally derived insights. I expand the scope to include computational enzymology to point interested readers toward how the principles of QM applied to organic reactions can be extended to biological systems too. I also highlight some areas where computational methods have exhibited serious weaknesses.
Any such survey must involve judicious selection and editing of the materials to be presented and omitted. In order to rein in the scope of the book, I opted to feature only computations performed at the ab initio level. (Note that I consider density functional theory to be a member of this category.) This decision omits some very important work, certainly from a historical perspective if nothing else, performed using semiempirical methods. For example, Michael Dewar's influence on the development of the theoretical underpinnings of organic chemistry13 is certainly underplayed in this book, since results from MOPAC and its descendants are largely not discussed. However, taking a view with an eye toward the future, the principal advantage of the semiempirical methods over ab initio methods is ever diminishing. Semiempirical calculations are much faster than ab initio calculations and allow much larger molecules to be treated. As computer hardware improves and algorithms become more efficient, ab initio computations become practical for ever-larger molecules, a trend that has certainly played out since the publication of the first edition of this book.
The book is designed for a broad spectrum of users: practitioners of computational chemistry who are interested in gaining a broad survey or an entrée into a new area of organic chemistry, synthetic and physical organic chemists who might be interested in running some computations of their own and would like to learn of success stories to emulate and pitfalls to avoid, and graduate students interested in just what can be accomplished by computational approaches to real chemical problems.
It is important to recognize that the reader does not have to be an expert in quantum chemistry to make use of this book. A familiarity with the general principles of quantum mechanics obtained in a typical undergraduate physical chemistry course will suffice. The first chapter of this book introduces all of the major theoretical concepts and definitions along with the acronyms that so plague our discipline. Sufficient mathematical rigor is presented to expose those who are interested to some of the subtleties of the methodologies. This chapter is not intended to be of sufficient detail for one to become expert in the theories. Rather, it will allow the reader to become comfortable with the language and terminology at a level sufficient to understand the results of computations and to understand the inherent shortcomings associated with particular methods that may pose potential problems. Upon completing Chapter 1, the reader should be able to follow with relative ease a computational paper in any of the leading journals. Readers with an interest in delving further into the theories and their mathematics are referred to three outstanding texts: Essentials of Computational Chemistry by Cramer,14 Introduction to Computational Chemistry by Jensen,15 and Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory by Szabo and Ostlund.16 In a way, this book serves as the applied accompaniment to these books.
How is the second edition different from the first edition? Chapter 1 presents an overview of computational methods. In this second edition, I have combined the descriptions of solvent computations and molecular dynamics computations into this chapter. I have added a discussion of QM/molecular mechanics (MM) computations and the topology of potential energy surfaces. The discussion of density functional theory is more extensive, including discussion of double hybrids and dispersion corrections. Chapter 2 of the second edition is almost entirely new. It includes case studies of computed spectra, especially computed NMR spectra, used for structure determination. This is an area that has truly exploded in the last few years, with computed spectra becoming an important tool in the structural chemists' arsenal. Chapter 3 discusses some fundamental concepts of organic chemistry; for the concepts of bond dissociation energy, acidity, and aromaticity, I have included some new examples, such as π-stacking of aromatic rings. I also added a section on isomerism, which exposes some major problems with families of density functionals, including the most commonly used functional, B3LYP.
Chapter 4 presents pericyclic reactions. I have updated some of the examples from the last edition, but the main change is the addition of bispericyclic reactions, a topic that is important for understanding many of the examples of dynamic effects presented in Chapter 8. Chapter 5 deals with diradicals and carbenes. This chapter contains one of the major additions to the book: a detailed presentation of tunneling in carbenes. The understanding that tunneling occurs in some carbenes was made possible by quantum computations, and this led directly to the brand new concept of tunneling control.
The chemistry of anions is the topic of Chapter 6. This chapter is an update from the material in the first edition, incorporating new examples, primarily in the area of organocatalysis. Chapter 7, presenting solvent effects, is also updated to include some new examples. The recognition of the role of dynamic effects, situations where standard transition state theory fails, is a major triumph of computational organic chemistry. Chapter 8 extends the scope of reactions that are subject to dynamic effects from that presented in the first edition. In addition, some new types of dynamic effects are discussed, including the roundabout pathway in an SN2 reaction and the roaming mechanism.
A major addition to the second edition is Chapter 9, which discusses computational enzymology. This chapter extends the coverage of quantum chemistry to a sister of organic chemistry—biochemistry. Since computational biochemistry truly deserves its own entire book, this chapter presents a flavor of how computational quantum chemical techniques can be applied to biochemical systems. This chapter presents a few examples of how QM/MM has been applied to understand the nature of enzyme catalysis. This chapter concludes with a discussion of de novo design of enzymes, which is a research area that is just becoming feasible, and one that will surely continue to develop and excite a broad range of chemists for years to come.
Science is an inherently human endeavor, performed and consumed by humans. To reinforce the human element, I interviewed a number of preeminent computational chemists. I distilled these interviews into short set pieces, wherein each individual's philosophy of science and history of involvement in the projects described in this book are put forth, largely in their own words. I interviewed six scientists for the first edition—Professors Wes Borden, Chris Cramer, Ken Houk, Henry "Fritz" Schaefer, Paul Schleyer, and Dan Singleton. I have reprinted these interviews in this second edition. There was a decided USA-centric focus to these interviews, and so for the second edition, I have interviewed three European scientists: Professors Stefan Grimme, Jonathan Goodman, and Peter Schreiner. I am especially grateful to these nine people for the time they gave me and their gracious support of this project. Each interview ran well over an hour and was truly a fun experience for me! This group of nine scientists is only a small fraction of the chemists who have been and are active participants within our discipline, and my apologies in advance to all those whom I did not interview for this book.
A theme I probed in all of the interviews was the role of collaboration in developing new science. As I wrote this book, it became clear to me that many important breakthroughs and significant scientific advances occurred through collaboration, particularly between a computational chemist and an experimental chemist. Collaboration is an underlying theme throughout the book, and perhaps signals the major role that computational chemistry can play; in close interplay with experiment, computations can draw out important insights, help interpret results, and propose critical experiments to be carried out next.
I intend to continue to use the book's ancillary Web site www.comporgchem.com to deliver supporting information to the reader. Every cited article that is available in some electronic form is listed along with the direct link to that article. Please keep in mind that the reader will be responsible for gaining ultimate access to the articles by open access, subscription, or other payment option. The citations are listed on the Web site by chapter, in the same order they appear in the book. Almost all molecular geometries displayed in the book were produced using the GaussView17 molecular visualization tool. This required obtaining the full three-dimensional structure, from the article, the supplementary material, or through my reoptimization of that structure. These coordinates are made available for reuse through the Web site. Furthermore, I intend to continue to post (www.comporgchem.com/blog) updates to the book on the blog, especially focusing on new articles that touch on or complement the topics covered in this book. I hope that readers will become a part of this community and not just read the posts but also add their own comments, leading to what I hope will be a useful and entertaining dialogue. I encourage you to voice your opinions and comments. I wish to thank particular members of the computational chemistry community who have commented on the blog posts; comments from Henry Rzepa, Steven Wheeler, Eugene Kwan, and Jan Jensen helped inform my writing of this edition. I thank Jan for creating the Computational Chemistry Highlights (http://www.compchemhighlights.org/) blog, which is an overlay of the computational chemistry literature, and for incorporating my posts into this blog.
1. Dirac, P. "Quantum mechanics of many-electron systems," Proc. R. Soc. A 1929, 123, 714–733.
2. Hylleras, E. A. "Über die Elektronenterme des Wasserstoffmoleküls," Z. Physik 1931, 739–763.
3. Barber, W. G.; Hasse, H. R. "The two centre problem in wave mechanics," Proc. Camb. Phil. Soc. 1935, 31, 564–581.
4. Jaffé, G. "Zur Theorie des Wasserstoffmolekülions," Z. Physik 1934, 87, 535–544.
5. Kolos, W.; Roothaan, C. C. J. "Accurate electronic wave functions for the hydrogen molecule," Rev. Mod. Phys. 1960, 32, 219–232.
6. Kolos, W.; Wolniewicz, L. "Improved theoretical ground-state energy of the hydrogen molecule," J. Chem. Phys. 1968, 49, 404–410.
7. Foster, J. M.; Boys, S. F. "Quantum variational calculations for a range of CH2 configurations," Rev. Mod. Phys. 1960, 32, 305–307.
8. Bender, C. F.; Schaefer, H. F., III "New theoretical evidence for the nonlinearity of the triplet ground state of methylene," J. Am. Chem. Soc. 1970, 92, 4984–4985.
9. Hehre, W. J.; Lathan, W. A.; Ditchfield, R.; Newton, M. D.; Pople, J. A. Quantum Chemistry Program Exchange, Program No. 237, 1970.
10. Hückel, E. "Quantum-theoretical contributions to the benzene problem. I. The electron configuration of benzene and related compounds," Z. Physik 1931, 70, 204–288.
11. Hückel, E. "Quantum theoretical contributions to the problem of aromatic and non-saturated compounds. III," Z. Physik 1932, 76, 628–648.
12. Hückel, E. "The theory of unsaturated and aromatic compounds," Z. Elektrochem. Angew. Phys. Chem. 1937, 43, 752–788.
13. Dewar, M. J. S. A Semiempirical Life; ACS Publications: Washington, DC, 1990.
14. Cramer, C. J. Essentials of Computational Chemistry: Theories and Models; John Wiley & Sons: New York, 2002.
15. Jensen, F. Introduction to Computational Chemistry; John Wiley & Sons: Chichester, England, 1999.
16. Szabo, A.; Ostlund, N. S. Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory; Dover: Mineola, NY, 1996.
17. Dennington II, R.; Keith, T.; Millam, J.; Eppinnett, K.; Hovell, W. L.; Gilliland, R. GaussView; Semichem, Inc.: Shawnee Mission, KS, USA, 2003.
This book is the outcome of countless interactions with colleagues across the world, whether in person, on the phone, through Skype, or by email. These conversations directly or indirectly influenced my thinking and contributed in a meaningful way to this book, and especially to this second edition. In particular, I wish to thank these colleagues and friends, listed here in alphabetical order: John Baldwin, David Birney, Wes Borden, Chris Cramer, Dieter Cremer, Bill Doering, Tom Cundari, Cliff Dykstra, Jack Gilbert, Tom Gilbert, Jonathan Goodman, Stephen Gray, Stefan Grimme, Scott Gronert, Bill Hase, Ken Houk, Eric Jacobsen, Steven Kass, Elfi Kraka, Jan Martin, Nancy Mills, Mani Paranjothy, Henry Rzepa, Fritz Schaefer, Paul Schleyer, Peter Schreiner, Matt Siebert, Dan Singleton, Andrew Streitwieser, Dean Tantillo, Don Truhlar, Adam Urbach, Steven Wheeler, and Angela Wilson. I profoundly thank all of them for their contributions, assistance, and encouragement. I want to particularly acknowledge Henry Rzepa for his extraordinary enthusiasm for, and comments on, my blog. The library staff at Trinity University, led by Diane Graves, was extremely helpful in providing access to the necessary literature.
The cover image was prepared by my sister Lisa Bachrach. The image is based on a molecular complex designed by Iwamoto and co-workers (Angew. Chem. Int. Ed., 2011, 50, 8342–8344).
I wish to acknowledge Jonathan Rose at Wiley for his enthusiastic support for the second edition and all of the staff at Wiley for their production assistance.
Finally, I wish to thank my wife Carmen for all of her years of support, guidance, and love.
Computational chemistry, as explored in this book, will be restricted to quantum mechanical descriptions of the molecules of interest. This should not be taken as a slight upon alternate approaches. Rather, the aim of this book is to demonstrate the power of high-level quantum computations in offering insight toward understanding the nature of organic molecules—their structures, properties, and reactions—and to show their successes and point out the potential pitfalls. Furthermore, this book will address the applications of traditional ab initio and density functional theory (DFT) methods to organic chemistry, with little mention of semiempirical methods. Again, this is not to slight the very important contributions made from the application of complete neglect of differential overlap (CNDO) and its progenitors. However, with the ever-improving speed of computers and algorithms, ever-larger molecules are amenable to ab initio treatment, making the semiempirical and other approximate methods for treatment of the quantum mechanics (QM) of molecular systems simply less necessary. This book is therefore designed to encourage the broader use of the more exact treatments of the physics of organic molecules by demonstrating the range of molecules and reactions already successfully treated by quantum chemical computation. We will highlight some of the most important contributions that this discipline has presented to the broader chemical community toward the understanding of organic chemistry.
We begin with a brief and mathematically light-handed treatment of the fundamentals of QM necessary to describe organic molecules. This presentation is meant to acquaint those unfamiliar with the field of computational chemistry with a general understanding of the major methods, concepts, and acronyms. Sufficient depth will be provided so that one can understand why certain methods work well while others may fail when applied to various chemical problems, allowing the casual reader to be able to understand most of any applied computational chemistry paper in the literature. Those seeking more depth and details, particularly more derivations and a fuller mathematical treatment, should consult any of the three outstanding texts: Essentials of Computational Chemistry by Cramer,1 Introduction to Computational Chemistry by Jensen,2 and Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory by Szabo and Ostlund.3
Quantum chemistry requires the solution of the time-independent Schrödinger equation,

$$\hat{H}\,\Psi(\mathbf{R},\mathbf{r}) = E\,\Psi(\mathbf{R},\mathbf{r}) \qquad (1.1)$$

where $\hat{H}$ is the Hamiltonian operator, $\Psi$ is the wavefunction for all of the nuclei and electrons, and $E$ is the energy associated with this wavefunction. The Hamiltonian contains all the operators that describe the kinetic and potential energies of the molecule at hand. The wavefunction is a function of the nuclear positions R and the electron positions r. For molecular systems of interest to organic chemists, the Schrödinger equation cannot be solved exactly, and so a number of approximations are required to make the mathematics tractable.
Dirac4 achieved the combination of QM and relativity. Relativistic corrections are necessary when particles approach the speed of light. Electrons near heavy nuclei will achieve such velocities, and for these atoms, relativistic quantum treatments are necessary for accurate description of the electron density. However, for typical organic molecules, which contain only first- and second-row elements, a relativistic treatment is unnecessary. Solving the Dirac relativistic equation is much more difficult than for nonrelativistic computations. A common approximation is to utilize an effective field for the nuclei associated with heavy atoms, which corrects for the relativistic effect. This approximation is beyond the scope of this book, especially since it is unnecessary for the vast majority of organic chemistry.
The complete nonrelativistic Hamiltonian for a molecule consisting of n electrons and N nuclei is

$$\hat{H} = -\sum_{i}^{n}\frac{\hbar^{2}}{2m_{e}}\nabla_{i}^{2} \;-\; \sum_{I}^{N}\frac{\hbar^{2}}{2m_{I}}\nabla_{I}^{2} \;-\; \sum_{i}^{n}\sum_{I}^{N}\frac{Z_{I}e^{2}}{r_{iI}} \;+\; \sum_{i}^{n}\sum_{j>i}^{n}\frac{e^{2}}{r_{ij}} \;+\; \sum_{I}^{N}\sum_{J>I}^{N}\frac{Z_{I}Z_{J}e^{2}}{r_{IJ}} \qquad (1.2)$$

where the lowercase letters index the electrons, the uppercase letters index the nuclei, $\hbar$ is Planck's constant divided by 2π, $m_{e}$ is the electron mass, $m_{I}$ is the mass of nucleus I, $Z_{I}$ is the charge of nucleus I, and r is the distance between the objects specified by its subscripts. For simplicity, we define shorthand operators for the five terms of Eq. (1.2),

$$\hat{H} = \hat{T}_{e}(\mathbf{r}) + \hat{T}_{N}(\mathbf{R}) + \hat{V}_{eN}(\mathbf{R},\mathbf{r}) + \hat{V}_{ee}(\mathbf{r}) + \hat{V}_{NN}(\mathbf{R}) \qquad (1.3)$$
The total molecular wavefunction Ψ(R,r) depends on both the positions of all of the nuclei and the positions of all of the electrons. Since electrons are much lighter than nuclei, and therefore move much more rapidly, electrons can essentially instantaneously respond to any changes in the relative positions of the nuclei. This allows for the separation of the nuclear variables from the electron variables,

$$\Psi(\mathbf{R},\mathbf{r}) = \Phi(\mathbf{R})\,\psi(\mathbf{r}) \qquad (1.4)$$
This separation of the total wavefunction into an electronic wavefunction ψ(r) and a nuclear wavefunction Φ(R) means that the positions of the nuclei can be fixed, leaving it only necessary to solve for the electronic part. This approximation was proposed by Born and Oppenheimer5 and is valid for the vast majority of organic molecules.
The potential energy surface (PES) is created by determining the electronic energy of a molecule while varying the positions of its nuclei. It is important to recognize that the concept of the PES relies upon the validity of the Born–Oppenheimer approximation so that we can talk about transition states and local minima, which are critical points on the PES. Without it, we would have to resort to discussions of probability densities of the nuclear–electron wavefunction.
The Hamiltonian obtained after applying the Born–Oppenheimer approximation and neglecting relativity is

$$\hat{H} = -\frac{1}{2}\sum_{i}\nabla_{i}^{2} \;-\; \sum_{i}\sum_{I}\frac{Z_{I}}{r_{iI}} \;+\; \sum_{i}\sum_{j>i}\frac{1}{r_{ij}} \;+\; V_{\mathrm{nuc}} \qquad (1.5)$$
where Vnuc is the nuclear–nuclear repulsion energy. Eq. (1.5) is expressed in atomic units, which is why it appears so uncluttered. It is this Hamiltonian that is utilized in computational organic chemistry. The next task is to solve the Schrödinger equation (1.1) with the Hamiltonian expressed in Eq. (1.5).
The wavefunction ψ(r) depends on the coordinates of all of the electrons in the molecule. Hartree6 proposed the idea, reminiscent of the separation of variables used by Born and Oppenheimer, that the electronic wavefunction can be separated into a product of functions that depend only on one electron,

$$\psi(\mathbf{r}) = \phi_{1}(r_{1})\,\phi_{2}(r_{2})\cdots\phi_{n}(r_{n}) \qquad (1.6)$$
This wavefunction would solve the Schrödinger equation exactly if it weren't for the electron–electron repulsion term of the Hamiltonian in Eq. (1.5). Hartree next rewrote this term as an expression that describes the repulsion an electron feels from the average position of the other electrons. In other words, the exact electron–electron repulsion is replaced with an effective field $V_{i}^{\mathrm{eff}}$ produced by the average positions of the remaining electrons. With this assumption, the separable functions φi satisfy the Hartree equations

$$\left(-\frac{1}{2}\nabla_{i}^{2} - \sum_{I}\frac{Z_{I}}{r_{iI}} + V_{i}^{\mathrm{eff}}\right)\phi_{i} = \varepsilon_{i}\,\phi_{i} \qquad (1.7)$$
(Note that Eq. (1.7) defines a set of equations, one for each electron.) Solving for the set of functions φi is nontrivial because $V_{i}^{\mathrm{eff}}$ itself depends on all of the functions φi. An iterative scheme is needed to solve the Hartree equations. First, a set of functions (φ1, φ2, …, φn) is assumed. These are used to produce the set of effective potential operators $V_{i}^{\mathrm{eff}}$, and the Hartree equations are solved to produce a set of improved functions φi. These new functions produce an updated effective potential, which in turn yields a new set of functions φi. This process is continued until the functions φi no longer change, resulting in a self-consistent field (SCF).
Replacing the full electron–electron repulsion term in the Hamiltonian with $V^{\mathrm{eff}}$ is a serious approximation. It neglects entirely the ability of the electrons to rapidly (essentially instantaneously) respond to the position of other electrons. In a later section, we address how one accounts for this instantaneous electron–electron repulsion.
Fock7, 8 recognized that the separable wavefunction employed by Hartree (Eq. (1.6)) does not satisfy the Pauli exclusion principle.9 Instead, Fock suggested using the Slater determinant

$$\psi = \frac{1}{\sqrt{n!}}\begin{vmatrix}\phi_{1}(1) & \phi_{2}(1) & \cdots & \phi_{n}(1)\\ \phi_{1}(2) & \phi_{2}(2) & \cdots & \phi_{n}(2)\\ \vdots & \vdots & \ddots & \vdots\\ \phi_{1}(n) & \phi_{2}(n) & \cdots & \phi_{n}(n)\end{vmatrix} \qquad (1.8)$$
which is antisymmetric and satisfies the Pauli exclusion principle. Again, an effective potential is employed, and an iterative scheme provides the solution to the Hartree–Fock (HF) equations.
The solutions to the HF model, φi, are known as the molecular orbitals (MOs). These orbitals generally span the entire molecule, just as the atomic orbitals (AOs) span the space about an atom. Since organic chemists consider the atomic properties of atoms (or collections of atoms as functional groups) to persist to some extent when embedded within a molecule, it seems reasonable to construct the MOs as an expansion of the AOs,

$$\phi_{i} = \sum_{\mu=1}^{k} c_{i\mu}\,\chi_{\mu} \qquad (1.9)$$

where the index μ spans all of the AOs χ of every atom in the molecule (a total of k AOs), and ciμ is the expansion coefficient of AO χμ in MO φi. Eq. (1.9) defines the linear combination of atomic orbitals (LCAO) approximation.
Combining the LCAO approximation for the MOs with the HF method led Roothaan10 to develop a procedure to obtain the SCF solutions. We will discuss here only the simplest case, in which all MOs are doubly occupied with one electron that is spin up and one that is spin down, also known as a closed-shell wavefunction. The open-shell case is a simple extension of these ideas. The procedure rests upon transforming the set of equations listed in Eq. (1.7) into matrix form

$$\mathbf{F}\mathbf{C} = \mathbf{S}\mathbf{C}\boldsymbol{\epsilon} \qquad (1.10)$$

where S is the overlap matrix, C is the k × k matrix of the coefficients ciμ, and ϵ is the k × k matrix of the orbital energies. Each column of C is the expansion of φi in terms of the AOs χμ. The Fock matrix F is defined for the μν element as

$$F_{\mu\nu} = H_{\mu\nu}^{\mathrm{core}} + \sum_{\lambda\sigma} D_{\lambda\sigma}\left[(\mu\nu\,|\,\lambda\sigma) - \tfrac{1}{2}(\mu\lambda\,|\,\nu\sigma)\right] \qquad (1.11)$$

where $H_{\mu\nu}^{\mathrm{core}}$ is the core Hamiltonian, corresponding to the kinetic energy of the electron and the potential energy due to the electron–nuclear attraction, and the last two terms describe the Coulomb and exchange energies, respectively. It is also useful to define the density matrix (more properly, the first-order reduced density matrix),

$$D_{\lambda\sigma} = 2\sum_{i}^{\mathrm{occupied}} c_{i\lambda}\,c_{i\sigma} \qquad (1.12)$$
The expression in Eq. (1.12) is for a closed-shell wavefunction, but it can be defined for a more general wavefunction by analogy.
The matrix approach is advantageous because a simple algorithm can be established for solving Eq. (1.10). First, a matrix X is found which transforms the normalized AOs χμ into the orthonormal set χμ′,

$$\chi_{\mu}' = \sum_{\nu} X_{\nu\mu}\,\chi_{\nu} \qquad (1.13)$$

which is mathematically equivalent to

$$\mathbf{X}^{\dagger}\mathbf{S}\mathbf{X} = \mathbf{1} \qquad (1.14)$$

where X† is the adjoint of the matrix X. The coefficient matrix C can be transformed into a new matrix C′,

$$\mathbf{C}' = \mathbf{X}^{-1}\mathbf{C} \qquad (1.15)$$

Substituting C = XC′ into Eq. (1.10) and multiplying by X† gives

$$(\mathbf{X}^{\dagger}\mathbf{F}\mathbf{X})\mathbf{C}' = (\mathbf{X}^{\dagger}\mathbf{S}\mathbf{X})\mathbf{C}'\boldsymbol{\epsilon} = \mathbf{C}'\boldsymbol{\epsilon} \qquad (1.16)$$

By defining the transformed Fock matrix

$$\mathbf{F}' = \mathbf{X}^{\dagger}\mathbf{F}\mathbf{X} \qquad (1.17)$$

we obtain the Roothaan expression

$$\mathbf{F}'\mathbf{C}' = \mathbf{C}'\boldsymbol{\epsilon} \qquad (1.18)$$
The Hartree–Fock–Roothaan algorithm is implemented by the following steps.

1. Specify the nuclear positions, the types of nuclei, and the number of electrons.
2. Choose a basis set. The basis set is the mathematical description of the AOs. Basis sets are described in Section 1.1.8.
3. Calculate all of the integrals necessary to describe the core Hamiltonian, the Coulomb and exchange terms, and the overlap matrix.
4. Diagonalize the overlap matrix S to obtain the transformation matrix X.
5. Make a guess at the coefficient matrix C and obtain the density matrix D.
6. Calculate the Fock matrix and then the transformed Fock matrix F′.
7. Diagonalize F′ to obtain C′ and ε.
8. Obtain the new coefficient matrix with the expression C = XC′ and the corresponding new density matrix.
9. Decide if the procedure has converged. There are typically two criteria for convergence, one based on the energy and the other on the orbital coefficients. The energy convergence criterion is met when the difference in the energies of the last two iterations is less than some preset value. Convergence of the coefficients is obtained when the standard deviation of the density matrix elements in successive iterations is also below some preset value. If convergence has not been met, return to step 6 and repeat until the convergence criteria are satisfied.
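To make the flow of these steps concrete, here is a minimal closed-shell SCF sketch in Python with NumPy. It is illustrative only and is not drawn from any particular program: the overlap matrix S, core Hamiltonian Hcore, and two-electron integrals eri are assumed to have been computed elsewhere over the chosen basis functions, the initial guess is simply the bare core Hamiltonian (step 5 with D = 0), convergence is checked on the energy alone, and the nuclear repulsion Vnuc is not included.

```python
import numpy as np

def scf_rhf(S, Hcore, eri, n_electrons, max_iter=50, e_tol=1e-8):
    """Closed-shell SCF loop following steps 4-9 above (illustrative sketch).

    S     -- overlap matrix, shape (k, k)
    Hcore -- core Hamiltonian, shape (k, k)
    eri   -- two-electron integrals (mu nu | lambda sigma), shape (k, k, k, k)
    Returns the electronic energy (Vnuc not included), MO coefficients, and MO energies.
    """
    n_occ = n_electrons // 2

    # Step 4: symmetric orthogonalization, X = S^(-1/2)
    s_val, s_vec = np.linalg.eigh(S)
    X = s_vec @ np.diag(s_val**-0.5) @ s_vec.T

    # Step 5: crude initial guess -- an empty density matrix, so the first
    # Fock matrix is just the core Hamiltonian
    D = np.zeros_like(S)
    E_old = 0.0

    for iteration in range(max_iter):
        # Step 6: build the Fock matrix from the current density (Eq. (1.11))
        J = np.einsum('mnls,ls->mn', eri, D)      # Coulomb term
        K = np.einsum('mlns,ls->mn', eri, D)      # exchange term
        F = Hcore + J - 0.5 * K

        # Step 9 (energy part): electronic energy for the current density
        E = 0.5 * np.sum(D * (Hcore + F))
        if iteration > 0 and abs(E - E_old) < e_tol:
            return E, C, eps
        E_old = E

        # Step 7: transform and diagonalize the Fock matrix (Eqs. (1.17)-(1.18))
        Fp = X.T @ F @ X
        eps, Cp = np.linalg.eigh(Fp)

        # Step 8: back-transform the coefficients and rebuild the density (Eq. (1.12))
        C = X @ Cp
        D = 2.0 * C[:, :n_occ] @ C[:, :n_occ].T

    raise RuntimeError("SCF did not converge")
```

Real programs also add the nuclear repulsion, use better initial guesses and convergence acceleration, and monitor the density matrix as well as the energy, as described in step 9.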
One last point concerns the nature of the MOs that are produced in this procedure. These orbitals are such that the energy matrix ε will be diagonal, with the diagonal elements being interpreted as the MO energies. These MOs are referred to as the canonical orbitals. One must be aware that all that makes them unique is that these orbitals will produce the diagonal matrix ε. Any new set of orbitals φi′ produced from the canonical set by a unitary transformation

$$\phi_{i}' = \sum_{j} U_{ij}\,\phi_{j}, \qquad \mathbf{U}^{\dagger}\mathbf{U} = \mathbf{1} \qquad (1.19)$$

will satisfy the HF equations and give the exact same energy and electron distribution as that with the canonical set. No one set of orbitals is really any better or worse than another, as long as the set of MOs satisfies Eq. (1.19).
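A short numerical check of this statement is easy to write. In the sketch below (purely illustrative; the occupied coefficients are random numbers, not solutions of the HF equations), rotating the occupied orbitals by an arbitrary unitary matrix leaves the density matrix of Eq. (1.12), and therefore the electron distribution and the energy computed from it, unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_occ = 8, 3                                   # basis functions, doubly occupied MOs

C_occ = rng.standard_normal((k, n_occ))           # stand-in occupied MO coefficients
U, _ = np.linalg.qr(rng.standard_normal((n_occ, n_occ)))   # a random unitary (orthogonal) matrix

D_canonical = 2.0 * C_occ @ C_occ.T               # density matrix, Eq. (1.12)
D_rotated = 2.0 * (C_occ @ U) @ (C_occ @ U).T     # density after the rotation of Eq. (1.19)

print(np.allclose(D_canonical, D_rotated))        # True: the density is invariant
```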
The preceding development of the HF theory assumed a closed-shell wavefunction. The wavefunction for an individual electron describes its spatial extent along with its spin. The electron can be either spin up (α) or spin down (β). For the closed-shell wavefunction, each pair of electrons shares the same spatial orbital but each has a different spin—one is up and the other is down. This type of wavefunction is also called a (spin)-restricted wavefunction since the paired electrons are restricted to the same spatial orbital, leading to the restricted Hartree–Fock (RHF) method.
This restriction is not demanded. It is a simple way to satisfy the Pauli exclusion principle,9 but it is not the only means for doing so. In an unrestricted wavefunction, the spin-up electron and its spin-down partner do not have the same spatial description. The Hartree–Fock–Roothaan procedure is slightly modified to handle this case by creating a set of equations for the α electrons and another set for the β electrons, and then an algorithm similar to that described above is implemented.
The downside to the (spin)-unrestricted Hartree–Fock (UHF) method is that the unrestricted wavefunction usually will not be an eigenfunction of the $\hat{S}^{2}$ operator. Since the Hamiltonian and $\hat{S}^{2}$ operators commute, the true wavefunction must be an eigenfunction of both of these operators. The UHF wavefunction is typically contaminated with higher spin states; for singlet states, the most important contaminant is the triplet state. A procedure called spin projection can be used to remove much of this contamination. However, geometry optimization is difficult to perform with spin projection. Therefore, great care is needed when an unrestricted wavefunction is utilized, as it must be when the molecule of interest is inherently open shell, as in radicals.
The variational principle asserts that any wavefunction constructed as a linear combination of orthonormal functions will have an energy greater than or equal to the lowest energy ($E_{0}$) of the system. Thus,

$$E = \frac{\langle\Phi|\hat{H}|\Phi\rangle}{\langle\Phi|\Phi\rangle} \geq E_{0}$$

if

$$\Phi = \sum_{i} c_{i}\,\varphi_{i}$$

If the set of functions φi is infinite, then the wavefunction will produce the lowest energy for that particular Hamiltonian. Unfortunately, expanding a wavefunction using an infinite set of functions is impractical. The variational principle saves the day by providing a simple way to judge the quality of various truncated expansions—the lower the energy, the better the wavefunction! The variational principle is not an approximation to the treatment of the Schrödinger equation; rather, it provides a means for judging the effect of certain types of approximate treatments.
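This behavior is easy to see with a toy model. In the sketch below (illustrative only), a random symmetric matrix plays the role of the Hamiltonian in an orthonormal basis: the energy of any normalized trial vector lies above the lowest eigenvalue, and the best energy obtainable within the first m basis functions can only go down as m grows.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((20, 20))
H = 0.5 * (H + H.T)                        # a model "Hamiltonian" in an orthonormal basis
E0 = np.linalg.eigvalsh(H).min()           # the exact lowest energy of the model

# Any normalized trial vector gives an energy >= E0.
v = rng.standard_normal(20)
v /= np.linalg.norm(v)
print(v @ H @ v >= E0)                     # True

# Truncated expansions: the best energy within the first m functions
# decreases monotonically toward E0 as the expansion grows.
best = [np.linalg.eigvalsh(H[:m, :m]).min() for m in range(1, 21)]
print(all(b <= a + 1e-12 for a, b in zip(best, best[1:])))   # True
print(np.isclose(best[-1], E0))            # True: the full expansion reaches E0
```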
In order to solve for the energy and wavefunction within the Hartree–Fock–Roothaan procedure, the AOs must be specified. If the set of AOs is infinite, then the variational principle tells us that we will obtain the lowest possible energy within the HF–SCF method. This is called the HF limit, EHF. This is not the actual energy of the molecule; recall that the HF method neglects instantaneous electron–electron interactions, otherwise known as electron correlation.
Since an infinite set of AOs is impractical, a choice must be made on how to truncate the expansion. This choice of AOs defines the basis set.
A natural starting point is to use functions from the exact solution of the Schrödinger equation for the hydrogen atom. These orbitals have the form

$$\chi^{\mathrm{STO}}(\mathbf{r}) = N\,e^{-\zeta|\mathbf{r}-\mathbf{R}|}$$

where R is the position vector of the nucleus upon which the function is centered and N is the normalization constant. Functions of this type are called Slater-type orbitals (STOs). The value of ζ for every STO for a given element is determined by minimizing the atomic energy with respect to ζ. These values are used for every atom of that element, regardless of the molecular environment.
At this point, it is worth shifting nomenclature and discussing the expansion in terms of basis functions instead of AOs. The construction of MOs in terms of some set of functions is entirely a mathematical “trick,” and we choose to place these functions at a nucleus since that is the region of greatest electron density. We are not using “AOs” in the sense of a solution to the atomic Schrödinger equation, but just mathematical functions placed at nuclei for convenience. To make this more explicit, we will refer to the expansion of basis functions to form the MOs.
Conceptually, the STO basis is straightforward as it mimics the exact solution for the single electron atom. The exact orbitals for carbon, for example, are not hydrogenic orbitals, but are similar to the hydrogenic orbitals. Unfortunately, with STOs, many of the integrals that need to be evaluated to construct the Fock matrix can only be solved using an infinite series. Truncation of this infinite series results in errors, which can be significant.
Following on a suggestion of Boys,11 Pople decided to use a combination of Gaussian functions to mimic the STO. The advantage of the Gaussian-type orbital (GTO),

$$\chi^{\mathrm{GTO}}(\mathbf{r}) = N\,e^{-\alpha|\mathbf{r}-\mathbf{R}|^{2}}$$
is that with these functions, the integrals required to build the Fock matrix can be evaluated exactly. The trade-off is that GTOs do differ in shape from the STOs, particularly at the nucleus where the STO has a cusp while the GTO is continually differentiable (Figure 1.1). Therefore, multiple GTOs are necessary to adequately mimic each STO, increasing the computational size. Nonetheless, basis sets comprising GTOs are the ones that are most commonly used.
Figure 1.1 Plot of the radial component of Slater-type and Gaussian-type orbitals.
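The need for several Gaussians per Slater function is easy to verify numerically. The sketch below (illustrative; the exponents, coefficients, and fitting grid are arbitrary choices, not a published contraction) least-squares fits a sum of three Gaussians to a 1s-type Slater function with ζ = 1. The fit is quite good overall, but the error is largest near the nucleus, where the Slater function has a cusp that no finite sum of smooth Gaussians can reproduce, as Figure 1.1 illustrates.

```python
import numpy as np
from scipy.optimize import curve_fit

def sto(r, zeta=1.0):
    # Radial part of a 1s Slater-type function
    return np.exp(-zeta * r)

def contracted_gto(r, *params):
    # params = (d1, a1, d2, a2, ...): contraction coefficients d and exponents a
    d = np.asarray(params[0::2])
    a = np.asarray(params[1::2])
    return np.sum(d[:, None] * np.exp(-a[:, None] * r[None, :]**2), axis=0)

r = np.linspace(0.01, 6.0, 400)
target = sto(r)

# Fit three Gaussians to the STO (the idea behind an STO-3G-like contraction).
p0 = [0.5, 0.1, 0.5, 1.0, 0.5, 10.0]               # arbitrary starting guess
popt, _ = curve_fit(contracted_gto, r, target, p0=p0, maxfev=20000)
fit = contracted_gto(r, *popt)

print("max deviation with 3 Gaussians:", np.max(np.abs(fit - target)))
print("near r = 0: STO =", target[0], " fit =", fit[0])   # the cusp region
```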
A number of factors define the basis set for a quantum chemical computation. First, how many basis functions should be used? The minimum basis set has one basis function for every formally occupied or partially occupied orbital in the atom. So, for example, the minimum basis set for carbon, with electron occupation 1s²2s²2p², has two s-type functions and px, py, and pz functions, for a total of five basis functions. This minimum basis set is referred to as a single zeta (SZ) basis set. The use of the term zeta here reflects that each basis function mimics a single STO, which is defined by its exponent, ζ.
The minimum basis set is usually inadequate, failing to allow the core electrons to get close enough to the nucleus and the valence electrons to delocalize. An obvious solution is to double the size of the basis set, creating a double zeta (DZ) basis. So for carbon, the DZ basis set has four s basis functions and two p basis functions (recognizing that the term p basis functions refers here to the full set—px, py, and pz functions), for a total of 10 basis functions. Further improvement can be made by choosing a triple zeta (TZ) or even larger basis set.
Since most of chemistry focuses on the action of the valence electrons, Pople12, 13 developed the split-valence basis sets, SZ in the core and DZ in the valence region. A double-zeta split-valence basis set for carbon has three s basis functions and two p basis functions for a total of nine functions, a triple-zeta split-valence basis set has four s basis functions and three p functions for a total of 13 functions, and so on.
For a vast majority of basis sets, including the split-valence sets, the basis functions are not made up of a single Gaussian function. Rather, a group of Gaussian functions are contracted together to form a single basis function. This is perhaps most easily understood with an explicit example: the popular split-valence 6-31G basis. The name specifies the contraction scheme employed in creating the basis set. The dash separates the core (on the left) from the valence (on the right). In this case, each core basis function is comprised of six Gaussian functions. The valence space is split into two basis functions, frequently referred to as the inner and outer functions. The inner basis function is composed of three contracted Gaussian functions, while each outer basis function is a single Gaussian function. Thus, for carbon, the core region is a single s basis function made up of six s-GTOs. The carbon valence space has two s and two p basis functions. The inner basis functions are made up of three Gaussians, and the outer basis functions are each composed of a single Gaussian function. Therefore, the carbon 6-31G basis set has nine basis functions made up of 22 Gaussian functions (Table 1.1).
Table 1.1 Composition of the Carbon 6-31G and 6-31+G(d) Basis Sets

                        6-31G                        6-31+G(d)
                 Basis function   GTOs        Basis function   GTOs
Core             s                   6        s                   6
Valence          s(inner)            3        s(inner)            3
                 s(outer)            1        s(outer)            1
                 px(inner)           3        px(inner)           3
                 px(outer)           1        px(outer)           1
                 py(inner)           3        py(inner)           3
                 py(outer)           1        py(outer)           1
                 pz(inner)           3        pz(inner)           3
                 pz(outer)           1        pz(outer)           1
Diffuse                                       s(diffuse)          1
                                              px(diffuse)         1
                                              py(diffuse)         1
                                              pz(diffuse)         1
Polarization                                  dxx                 1
                                              dyy                 1
                                              dzz                 1
                                              dxy                 1
                                              dxz                 1
                                              dyz                 1
Total            9 functions        22        19 functions        32
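As a quick bookkeeping check of Table 1.1, the short sketch below tallies the basis functions and primitive Gaussians for carbon in the 6-31G and 6-31+G(d) sets from the contraction pattern described in the text (the counts come from the preceding paragraphs; the code is only an illustration).

```python
# Each entry: (number of basis functions of this type, primitive Gaussians per function)
carbon_6_31G = [
    (1, 6),            # core s: one contracted function built from six Gaussians
    (1, 3), (1, 1),    # valence s: inner (three GTOs) and outer (one GTO)
    (3, 3), (3, 1),    # valence px, py, pz: inner and outer
]
extra_6_31plusG_d = [
    (1, 1), (3, 1),    # diffuse s and diffuse px, py, pz
    (6, 1),            # six Cartesian d polarization functions
]

def tally(shells):
    n_functions = sum(n for n, _ in shells)
    n_primitives = sum(n * g for n, g in shells)
    return n_functions, n_primitives

print("6-31G:     ", tally(carbon_6_31G))                        # (9, 22)
print("6-31+G(d): ", tally(carbon_6_31G + extra_6_31plusG_d))    # (19, 32)
```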
Even large multizeta basis sets will not provide sufficient mathematical flexibility to adequately describe the electron distribution in molecules. An example of this deficiency is the inability to describe bent bonds of small rings. Extending the basis set by including a set of functions that mimic the AOs with angular momentum one greater than in the valence space greatly improves the basis flexibility. These added basis functions are called polarization functions. For carbon, adding polarization functions means adding a set of d GTOs while for hydrogen, polarization functions are a set of p functions. The designation of a polarized basis set is varied. One convention indicates the addition of polarization functions with the label “+P”; DZ+P indicates a DZ basis set with one set of polarization functions. For the split-valence sets, addition of a set of polarization functions to all atoms but hydrogen is designated by an asterisk, that is, 6-31G*, and adding the set of p functions to hydrogen as well is indicated by double asterisks, that is, 6-31G**. Since adding multiple sets of polarization functions has become broadly implemented, the use of asterisks has been deprecated in favor of explicit indication of the number of polarization functions within parentheses, that is, 6-311G(2df,2p) means that two sets of d functions and a set of f functions are added to nonhydrogen atoms and two sets of p functions are added to the hydrogen atoms.
For anions or molecules with many adjacent lone pairs, the basis set must be augmented with diffuse functions to allow the electron density to expand into a larger volume. For split-valence basis sets, this is designated by “+,” as in 6-31+G(d). The diffuse functions added are a full set of additional functions of the same type as are present in the valence space. So, for carbon, the diffuse functions would be an added s basis function and a set of p basis functions. The composition of the 6-31+G(d) carbon basis set is detailed in Table 1.1.
The split-valence basis sets developed by Pople, though widely used, have additional limitations made for computational expediency that compromise the flexibility of the basis set. The correlation-consistent basis sets developed by Dunning14–16 are popular alternatives. The split-valence basis sets were constructed by minimizing the energy of the atom at the HF level with respect to the contraction coefficients and exponents. The correlation-consistent basis sets were constructed to extract the maximum electron correlation energy for each atom. We will define the electron correlation energy in the next section. The correlation-consistent basis sets are designated as “cc-pVNZ,” to be read as correlation-consistent polarized split-valence N-zeta, where N designates the degree to which the valence space is split. As N
