Pragmatic and Adaptable Textbook Meets the Needs of Students and Instructors from Diverse Fields
Numerical analysis is a core subject in data science and an essential tool for applied mathematicians, engineers, and physical and biological scientists. This updated and expanded edition of Numerical Analysis for Applied Science follows the tradition of its precursor by providing a modern, flexible approach to the theory and practical applications of the field. As before, the authors emphasize the motivation, construction, and practical considerations before presenting rigorous theoretical analysis. This approach allows instructors to adapt the textbook to a spectrum of uses, ranging from one-semester, methods-oriented courses to multi-semester theoretical courses.
The book includes an expanded first chapter reviewing useful tools from analysis and linear algebra. Subsequent chapters include clearly structured expositions covering the motivation, practical considerations, and theory for each class of methods. The book includes over 250 problems exploring practical and theoretical questions and 32 pseudocodes to help students implement the methods.
Numerical Analysis for Applied Science, Second Edition provides an excellent foundation for graduate and advanced undergraduate courses in numerical methods and numerical analysis. It is also an accessible introduction to the subject for students pursuing independent study in applied mathematics, engineering, and the physical and life sciences and a valuable reference for professionals in these areas.
Cover
Preface
Preface to the First Edition
Preface to the Second Edition
Chapter 1: Some Useful Tools
1.1 Introduction
1.2 Bounded Sets
1.3 Normed Vector Spaces
1.4 Eigenvalues and Matrix Norms
1.5 Results from Calculus
1.6 Problems
Chapter 2: Approximation of Functions
2.1 Introduction
2.2 Polynomial Interpolation
2.3 Piecewise Polynomial Interpolation
2.4 Hermite Interpolation
2.5 Interpolation in Two Dimensions
2.6 Splines
2.7 Least‐squares Methods
2.8 Trigonometric Interpolation
2.9 Problems
Chapter 3: Direct Methods for Linear Systems
3.1 Introduction
3.2 The Condition Number of a Linear System
3.3 Gauss Elimination
3.4 Variants of Gauss Elimination
3.5 Band Matrices
3.6 Iterative Improvement
3.7 Problems
Chapter 4: Solution of Nonlinear Equations
4.1 Introduction
4.2 Bisection
4.3 Successive Substitution in One Variable
4.4 Newton's Method in One Variable
4.5 The Secant Method
4.6 Successive Substitution: Several Variables
4.7 Newton's Method: Several Variables
4.8 Problems
Chapter 5: Iterative Methods for Linear Systems
5.1 Introduction
5.2 Conceptual Foundations
5.3 Matrix‐Splitting Techniques
5.4 Successive Overrelaxation
5.5 Multigrid Methods
5.6 The Conjugate‐Gradient Method
5.7 Problems
Chapter 6: Eigenvalue Problems
6.1 More About Eigenvalues
6.2 Power Methods
6.3 The QR Decomposition
6.4 The QR Algorithm for Eigenvalues
6.5 Singular Value Decomposition
6.6 Problems
Chapter 7: Numerical Integration
7.1 Introduction
7.2 Newton–Cotes Formulas
7.3 Romberg and Adaptive Quadrature
7.4 Gauss Quadrature
7.5 Problems
Chapter 8: Ordinary Differential Equations
8.1 Introduction
8.2 One‐Step Methods
8.3 Multistep Methods: Consistency and Stability
8.4 Multistep Methods: Convergence
8.5 Problems
Chapter 9: Difference Methods for PDEs
9.1 Introduction
9.2 The Poisson Equation
9.3 The Advection Equation
9.4 Other Time‐Dependent Equations
9.5 Problems
Chapter 10: Introduction to Finite Elements
10.1 Introduction and Background
10.2 A Steady‐State Problem
10.3 A Transient Problem
10.4 Problems
Appendix A Divided Differences
Appendix B Local Minima
Appendix C Chebyshev Polynomials
References
Index
End User License Agreement
Chapter 2
Table 2.1 Progress of an FFT of length 8, illustrating bit reversal.
Chapter 4
Table 4.1 Successive substitution iterates for ….
Table 4.2 Iterates generated by Newton's method for ….
Chapter 7
Table 7.1 The first four Newton–Cotes formulas.
Table 7.2 Error estimates for the first four Newton–Cotes formulas.
Table 7.3 Gauss–Legendre quadrature rules for …, with weights … and Gauss points ….
Chapter 8
Table 8.1 The explicit Euler method for … with …, using three different values of...
Table 8.2 Truncation errors for …-step Adams–Bashforth and Adams–Moulton methods....
Chapter 9
Table 9.1 Properties of difference approximations for the advection equation wit...
Table 9.2 Properties of difference approximations for the heat equation.
Chapter 1
Figure 1.1 The set … and two of its upper bounds.
Figure 1.2 The ball … of radius … about the point ….
Figure 1.3 A set …, showing an interior point … and two points … that are not i...
Figure 1.4 A set …, along with two limit points … and … and a point … that is n...
Figure 1.5 The unit spheres …, …, and … in ….
Figure 1.6 Geometric interpretation of … as a measure of the largest excursion ...
Figure 1.7 The distance … between two vectors ….
Figure 1.8 Geometric action of the matrix ….
Figure 1.9 Graphic example of the mean value theorem. At the point …, the value...
Figure 1.10 An open set …, with the points …, …, and … referred to in Theorem 1...
Chapter 2
Figure 2.1 A grid on … with known points … on the graph of a function ….
Figure 2.2 Basis functions … and interpolant … for a sample problem involving q...
Figure 2.3 Geometric meaning of ….
Figure 2.4 The Runge phenomenon for standard polynomial interpolation of … on a...
Figure 2.5 Roots of … guaranteed by the Rolle theorem.
Figure 2.6 Piecewise polynomial interpolants of degree at most (a) …, (b) …, an...
Figure 2.7 Conflicting definitions of … in the interval …, arising from a failu...
Figure 2.8 Piecewise linear basis functions … and ….
Figure 2.9 The four piecewise Lagrange cubic functions …, …, …, and … that take...
Figure 2.10 The graph of … on …, together with the graph of its piecewise linea...
Figure 2.11 A function in the space … of piecewise constant functions on ….
Figure 2.12 Piecewise Hermite cubic basis functions associated with the nodes ….
Figure 2.13 Example of a piecewise Hermite cubic that interpolates prescribed n...
Figure 2.14 The rectangle …, along with the points … formed by grid-line inters...
Figure 2.15 Graph of a piecewise Lagrange bilinear function.
Figure 2.16 Graph of a nodal basis function … for piecewise Lagrange bilinear i...
Figure 2.17 A typical element for piecewise Hermite bicubic interpolation.
Figure 2.18 (a) A bounded, simply connected set; (b) an unbounded set; (c) a se...
Figure 2.19 (a) An open set …; (b) the set … containing … and all of its limit ...
Figure 2.20 (a) A polygonal set and a triangulation of it; (b) a nonpolygonal s...
Figure 2.21 Graph of a piecewise planar function … in two dimensions.
Figure 2.22 Graph of a typical basis function … for piecewise planar interpolat...
Figure 2.23 An element … with the triangular subset used to compute ….
Figure 2.24 Triangular element containing … and the point … guaranteed by the T...
Figure 2.25 Resolution of the unit vector … into directions defined by the edge...
Figure 2.26 Natural cubic spline passing through seven preassigned points ….
Figure 2.27 A convergence plot for the vector … of moments for a complete splin...
Figure 2.28 Location of the zero … of … lying closest to ….
Figure 2.29 Schematic diagram of the Pythagorean theorem for functions in ….
Figure 2.30 A vector … and its best approximation … in a subspace …, represente...
Figure 2.31 The line … giving a least-squares fit to data given in the text.
Figure 2.32 The functions … and …, sampled on a four-interval grid on … to illu...
Chapter 3
Figure 3.1 Schematic illustration of a row interchange in partial pivoting.
Figure 3.2 Schematic illustration of row and column interchanges in total pivot...
Figure 3.3 Entries of the LU factorization determined during the …th pass throu...
Figure 3.4 Entries determined during the …th pass through the outer loop in the...
Chapter 4
Figure 4.1 A point where ….
Figure 4.2 A convex set (a) and a nonconvex set (b) in ….
Figure 4.3 The existence of at least one zero in … for a continuous function f ...
Figure 4.4 Example of a sequence of iterates … generated by the bisection metho...
Figure 4.5 A function f that has zeros in … but for which ….
Figure 4.6 The graphs of x and …, showing that … has a zero in the interval ….
Figure 4.7 Schematic illustration of successive substitution for two smooth fun...
Figure 4.8 Schematic illustration of successive substitution for two smooth fun...
Figure 4.9 Correspondence between the steepness of the graph of … and the incre...
Figure 4.10 A convergence plot for the problem illustrated in Figure 4.6, confi...
Figure 4.11 How to find the zero of a straight line.
Figure 4.12 Progress of the iterations in Newton's method.
Figure 4.13 Convergence plot for Newton's method applied to …, with initial gue...
Figure 4.14 Divergent sequence of iterates generated by Newton's method when a ...
Figure 4.15 Graph of a function f on an interval … in which iterates generated ...
Figure 4.16 Progress of iterates generated by the secant method.
Figure 4.17 Level sets … (dashed curve) and … (solid curve) for the system (4.3...
Figure 4.18 The first few iterates generated by Newton's method for the system ...
Figure 4.19 Conceptual picture of a continuation method.
Figure 4.20 Relationship between the regions … and … in ….
Chapter 5
Figure 5.1 A simple grid on the unit square, showing how the standard finite-d...
Figure 5.2 Directed graph for the finite-difference approximation (5.4) to the ...
Figure 5.3 Typical plot of the spectral radius of the iteration matrix for SOR ...
Figure 5.4 Red-black numbering method for the nodes in the grid shown in Figure...
Figure 5.5 Graphs of the relations … and … defined in Eq. (5.32), shown for the...
Figure 5.6 Graphs of the relations … and …, shown for the case … and …. The arr...
Figure 5.7 Plots of the …th entries of the eigenvectors corresponding to wavenu...
Figure 5.8 Eigenvalues … versus wavenumber … for the damped Jacobi method with ….
Figure 5.9 (a) Graph of the initial guess … for the discrete approximation to t...
Figure 5.10 Segment of a uniform fine grid showing the indices used in defining...
Figure 5.11 Schematic diagram of a V-cycle using a sequence of … nested grids.
Figure 5.12 Schematic diagram of a full multigrid algorithm, illustrating the s...
Figure 5.13 Level sets of a function … whose minimization corresponds to the so...
Figure 5.14 The one-dimensional ray consisting of all vectors in … having the f...
Figure 5.15 Iterative behavior of the method of steepest descent when the graph...
Figure 5.16 Zero structures of block-tridiagonal matrices arising from a differ...
Figure 5.17 Graph of the Chebyshev polynomial … on the interval ….
Figure 5.18 Graph of the scaled, shifted Chebyshev polynomial … on the interval...
Chapter 6
Figure 6.1 Union of the Gerschgorin disks for the matrix …, showing the locati...
Figure 6.2 Eigenvalues … and … that have nearly the same magnitude but lie far ...
Figure 6.3 (a) Effect of a rotation through angle … on a vector …. (b) Effect o...
Chapter 7
Figure 7.1 Illustration of the trapezoid rule over an interval ….
Figure 7.2 Schematic illustration of a composite formula for the Simpson rule.
Figure 7.3 Graph of the function ….
Figure 7.4 Graph of a function … that exhibits highly localized oscillatory beh...
Figure 7.5 A uniform grid on an interval …, with locations of the evaluation po...
Chapter 8
Figure 8.1 Extrapolation from … to … in the explicit Euler method.
Figure 8.2 The strip …, showing a typical point ….
Figure 8.3 The region ….
Figure 8.4 Quadratic polynomial … interpolating … over the interval ….
Figure 8.5 The interval … for a particular value of ….
Chapter 9
Figure 9.1 Solution surface … associated with Eq. (9.2), along with a smooth p...
Figure 9.2 Geometry of the advection problem, showing the characteristic curves...
Figure 9.3 Downstream propagation of the initial square wave of contaminant con...
Figure 9.4 A rectangular grid ….
Figure 9.5 Uniform grid … on the square …, showing points in the sets … (•) and...
Figure 9.6 Stencil of the five-point approximation … for the differential opera...
Figure 9.7 Finite-difference grid on the unit square …, showing the line of gho...
Figure 9.8 Finite-difference grid near the boundary of a nonrectangular domain,...
Figure 9.9 Exact domain of dependence for the advection equation at a point ….
Figure 9.10 A grid in the …-plane, showing the exact domain of dependence of th...
Figure 9.11 Locus of the values of the amplification factor … for the differenc...
Figure 9.12 Initial Dirac distribution and subsequent solution to the heat equa...
Figure 9.13 Stencil for the classic explicit approximation to the heat equation...
Figure 9.14 Stencil for the fully implicit approximation to the heat equation.
Figure 9.15 Stencil for the Crank–Nicolson approximation to the heat equation.
Figure 9.16 Stencil for the leapfrog and DuFort–Frankel approximations to the h...
Figure 9.17 Initial grid function and subsequent solution to the approximation ...
Figure 9.18 Numerical solution to the advection-diffusion equation generated by...
Figure 9.19 Numerical solution to the advection-diffusion equation generated us...
Figure 9.20 Translation along characteristic curves of left-running and right-r...
Chapter 10
Figure 10.1 Stationary elastic string, held between fixed ends and undergoing d...
Figure 10.2 Basis functions … for the space … of piecewise linear functions on ...
Figure 10.3 The projection … of … onto a subspace ….
Figure 10.4 Analog in Euclidean space of the projection corresponding to the Ga...
Figure 10.5 Template element, using the local coordinate ….
Second Edition
Myron B. Allen III
University of Wyoming, Laramie, USA
Eli L. Isaacson†
University of Wyoming, Laramie, USA
This edition first published 2019
© 2019 John Wiley & Sons, Inc.
Edition History
John Wiley & Sons, Inc. (1e, 1998).
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
The right of Myron B. Allen III and Eli L. Isaacson to be identified as the authors of this work has been asserted in accordance with law.
Registered Office(s)
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
Editorial Office
111 River Street, Hoboken, NJ 07030, USA
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Wiley also publishes its books in a variety of electronic formats and by print-on-demand. Some content that appears in standard print versions of this book may not be available in other formats.
Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
Library of Congress Cataloging-in-Publication Data
9781119245469 (hardback), 9781119245667 (ePDF), 9781119245650 (ePUB).
Names: Allen, Myron B., 1954- author. | Isaacson, Eli L., author.
Title: Numerical analysis for applied science / Myron B. Allen III (University of Wyoming, Laramie, USA), Eli L. Isaacson (University of Wyoming, Laramie, USA).
Description: Second edition. | Hoboken, NJ : Wiley, [2019] | Series: Pure and applied mathematics | Includes index. |
Identifiers: LCCN 2018046975 (print) | LCCN 2018055540 (ebook) | ISBN 9781119245667 (Adobe PDF) | ISBN 9781119245650 (ePub) | ISBN 9781119245469 (hardcover)
Subjects: LCSH: Numerical analysis.
Classification: LCC QA297 (ebook) | LCC QA297 .A53 2019 (print) | DDC 518-dc23
LC record available at https://lccn.loc.gov/2018046975
Cover design: Wiley
Cover image: Courtesy of Myron B. Allen III
Once in a while you get shown the light
In the strangest of places if you look at it right.
Robert Hunter
We intend this book to serve as a first graduate‐level text for applied mathematicians, scientists, and engineers. We hope that these students have had some exposure to numerics, but the book is self‐contained enough to accommodate students with no numerical background. Students should know a computer programming language, though.
In writing the text, we have tried to adhere to three principles:
The book should cover a significant range of numerical methods now used in applications, especially in scientific computation involving differential equations.
The book should be appropriate for mathematics students interested in the theory behind the methods.
The book should also appeal to students who care less for rigorous theory than for the heuristics and practical aspects of the methods.
The first principle is a matter of taste. Our omissions may appall some readers; they include polynomial root finders, linear and nonlinear programming, digital filtering, and most topics in statistics. On the other hand, we have included topics that receive short shrift in many other texts at this level. Examples include:
Multidimensional interpolation, including interpolation on triangles.
Quasi‐Newton methods in several variables.
A brief introduction to multigrid methods.
Conjugate‐gradient methods, including error estimates.
Rigorous treatment of the QR method for eigenvalues.
An introduction to adaptive methods for numerical integration and ordinary differential equations.
A thorough treatment of multistep schemes for ordinary differential equations (ODEs).
Consistency, stability, and convergence of finite‐difference schemes for partial differential equations (PDEs).
An introduction to finite‐element methods, including basic convergence arguments and methods for time‐dependent problems.
All of these topics are prominent in scientific applications.
The second and third principles conflict. Our strategy for addressing this conflict is threefold. First, most sections of the book have a “pyramid” structure. We begin with the motivation and construction of the methods, then discuss practical considerations associated with their implementation, then present rigorous mathematical details. Thus, students in a “methods” course can concentrate on motivation, construction, and practical considerations, perhaps grazing from the mathematical details according to the instructor's tastes. Students in an “analysis” course should delve into the mathematical details as well as the practical considerations.
Second, we have included Chapter 1, “Some Useful Tools,” which reviews essential notions from undergraduate analysis and linear algebra. Mathematics students should regard this chapter as a review; engineering and applied science students may profit by reading it thoroughly.
Third, at the end of each chapter are both theoretical and computational exercises. Engineers and applied scientists will probably concentrate on the computational exercises. Mathematicians should work a variety of both theoretical and computational problems. Numerical analysis without computation is a sterile enterprise.
The book's format allows instructors to use it in either of two modes. For a “methods” course, one can cover a significant set of topics in a single semester by covering the motivation, construction, and practical considerations. At the University of Wyoming, we teach such a course for graduate engineers and geophysicists. For an “analysis” course, one can construct a two‐ or three‐semester sequence that involves proofs, computer exercises, and projects requiring written papers. At Wyoming, we offer a two‐semester course along these lines for students in applied mathematics.
Most instructors will want to skip topics. The following remarks may help avoid infelicitous gaps:
We typically start our courses with Chapter 2. Sections 2.2 and 2.3 (on polynomial interpolation) and 2.7 (on least squares) seem essential.
Even if one has an aversion to direct methods for linear systems, it is worthwhile to discuss Sections 3.1 and 3.3. Also, the introduction to matrix norms and condition numbers in Sections 1.4 and 3.6 is central to much of numerical analysis.
While Sections 4.1–4.4 contain the traditional core material on nonlinear equations, our experience suggests that engineering students profit from some coverage of the multidimensional methods discussed in Sections 4.6 and 4.7.
Even in a proof‐oriented course, one might reasonably leave some of the theory in Sections 5.3 and 5.4 for independent reading. Section 5.6, The Conjugate‐Gradient Method, is independent of earlier sections in that chapter.
Taste permitting, one can skip Chapter 6, Eigenvalue Problems, completely.
One should cover Section 7.1 and at least some of Section 7.2, Newton–Cotes Formulas, in preparation for Chapter 8. Engineers use Gauss quadrature so often, and the basic theory is so elegant, that we seldom skip Section 7.4.
We rarely cover Chapter 8 (on ODEs) completely. Still, in preparation for Chapter 9, one should cover at least the most basic material – through Euler methods – from Sections 8.1 and 8.2.
While many first courses in numerics omit the treatment of PDEs, at least some coverage of Chapter 9 seems crucial for virtually all of the students who take our courses.
Chapter 10, on finite-element methods, emphasizes analysis at the expense of coding, since the latter seems to lie at the heart of most semester-length engineering courses on the subject. It is hard to get this far in a one-semester "methods" course.
We owe tremendous gratitude to many people, including former teachers and many remarkable colleagues too numerous to list. We thank the students and colleagues who graciously endured our drafts and uncovered an embarrassing number of errors. Especially helpful were the efforts of Marian Anghel, Damian Betebenner, Bryan Bornholdt, Derek Mitchum, Patrick O'Leary, Eun‐Jae Park, Gamini Wickramage, and the amazingly keen‐eyed Li Wu. (Errors undoubtedly remain; they are our fault.) The first author wishes to thank the College of Engineering and Mathematics at the University of Vermont, at which he wrote early drafts during a sabbatical year. Finally, we thank our wives, Adele Aldrich and Lynne Ipiña, to whom we dedicate the book. Their patience greatly exceeds that required to watch a book being written.
Midnight on a carousel ride,
Reaching for the gold ring down inside.
Never could reach
It just slips away when I try.
Robert Hunter
In producing this second edition of Numerical Analysis for Applied Science, I pursued two goals. First, I incorporated many suggestions and corrections made by people who have used the book since it first appeared in print. I owe my sincerest thanks to colleagues who have shared these improvements over the years. Professor Scott Fulton of Clarkson University and Professor Aleksey Telyakovskiy of the University of Nevada at Reno deserve special thanks for their extraordinary generosity in this respect.
Second, I have incorporated new topics or expanded treatments of existing topics, to reflect some of the evolving applications of numerical analysis during the past two decades. Among the new contents are the following:
A description of the symmetric successive overrelaxation method in Chapter 5, to facilitate an expanded discussion of preconditioners later in the chapter.
A separate section in Chapter 5 on multigrid methods for solving linear systems, including more detail than the first edition's brief discussion.
A revised section in Chapter 5 on the conjugate-gradient method, including a more detailed discussion of preconditioners.
A short discussion in Chapter 5 of the method of steepest descent.
More details on the power and QR methods for computing eigenvalues in Chapter 6.
An introduction in Chapter 6 to the singular value decomposition and its application to principal components analysis.
Revised and expanded discussion in Chapter 9 of the approximation of elliptic PDEs by finite-difference methods, including the treatment of irregular boundaries in two dimensions.
Revised approximation error estimates for the finite-element method in Chapter 10.
An additional section in Chapter 10 on the condition number of the stiffness matrix, a major motivation for many advances in numerical linear algebra over the past three decades.
Seven new pseudocodes, bringing the total to 32.
More than twice as many problems as appeared in the first edition.
Also, I moved a section on eigenvalues and matrix norms to Chapter 1 and a section on the condition number to Section 3.2, to make it easier to skip most of Chapter 3 in favor of iterative methods for linear systems. However, I recommend not skipping Sections 3.1 or 3.2. Finally, I removed a short section on Broyden's method, which appeared in Chapter 4 of the first edition. I hope these changes make the book more useful to the next generation of numerical analysts and modelers.
I owe many thanks to Professor David Isaacson and to staff members at John Wiley & Sons for helping to settle some of the details associated with the contract for this edition. Ezhilan Vikraman and Kathleen Pagliaro were especially helpful with these matters.
I wish I were writing "we" instead of "I." My coauthor, Professor Eli Isaacson, passed away in May 2017. Eli was a gifted mathematician, a superb colleague, and an insightful teacher. I learned a great deal about mathematics from him, and he and I shared many delightful conversations about how people learn mathematics and how we should teach the subject. It was a privilege to know him.
Myron B. Allen III
Laramie, Wyoming
August 2018
One aim of this book is to make a significant body of mathematics accessible to people in various disciplines, including engineering, geophysics, computer science, the physical sciences, and applied mathematics. People who have had substantial mathematical training enjoy a head start in this enterprise, since they are more likely to be familiar with ideas that, too often, receive little emphasis outside departments of mathematics. The purpose of this preliminary chapter is to level the playing field by reviewing mathematical notations and concepts used throughout the book. We assume that the reader is familiar with concepts from elementary calculus, such as limits, continuity, differentiation, and integration. In three sections (2.8, 7.3, and 9.3) we refer to concepts associated with Fourier series.
Virtually every entity in mathematics is a set. If $x$ is an element of the set $S$, we write $x \in S$ and say that $x$ belongs to $S$. If every element of a set $S$ also belongs to the set $T$, we say that $S$ is a subset of $T$ and write $S \subset T$. Using this concept, we say that $S = T$ provided $S \subset T$ and $T \subset S$. There are several ways to specify the elements of a set. One way is simply to list them:
$$S = \{s_1, s_2, \ldots, s_n\}.$$
Another is to give a rule for selecting elements from a previously defined set. For example,
$$\{n \in \mathbb{N} \mid n \leq 6\}$$
denotes the set of all elements of $\mathbb{N}$ that are less than or equal to 6. If the defining statement fails for all candidate elements $x$, then the set so specified is the empty set, denoted as $\emptyset$.
The notation $x = y$ should be familiar enough, but two related notions are worth mentioning. By $x \leftarrow y$, we mean "assign the value held by the variable $y$ to the variable $x$." Distinguishing between $=$ and $\leftarrow$ can seem pedantic until one recalls such apparent nonsense as "$x = x + 1$" that occurs in Fortran and other programming languages. Also, we use $x := y$ to indicate that $x$ is defined to have the value $y$.
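To make the distinction concrete, here is a minimal sketch in Python; the language is our choice for illustration only, since the book's algorithms appear as pseudocode:

```python
# In mathematics, "x = x + 1" is a false statement; in most programming
# languages it is an assignment: evaluate the right-hand side first,
# then store the result in x (the notation x <- x + 1 above).
x = 3
x = x + 1      # assignment: x now holds 4
print(x == 4)  # True; == tests equality, a different operation from assignment
```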
If $S$ and $T$ are sets, then $S \cup T$ is their union, which is the set containing all elements of $S$ and all elements of $T$. The intersection $S \cap T$ is the set of all elements that belong to both $S$ and $T$. If $S_\alpha$ is a set for each $\alpha$ belonging to some index set $A$, then
$$\bigcup_{\alpha \in A} S_\alpha \quad \text{and} \quad \bigcap_{\alpha \in A} S_\alpha$$
denote, respectively, the set containing all elements that belong to at least one of the sets $S_\alpha$ and the set containing just those elements that belong to every $S_\alpha$. The set difference $S \setminus T$ is the set of all elements of $S$ that do not belong to $T$. If $S_1, S_2, \ldots, S_n$ are sets, then their Cartesian product $S_1 \times S_2 \times \cdots \times S_n$ is the set of all ordered $n$-tuples $(s_1, s_2, \ldots, s_n)$, where each $s_i \in S_i$. Two such $n$-tuples $(s_1, s_2, \ldots, s_n)$ and $(t_1, t_2, \ldots, t_n)$ are equal precisely when $s_i = t_i$ for each $i$.
Among the most commonly occurring sets in this book are $\mathbb{R}$, the set of all real numbers; $\mathbb{C}$, the set of all complex numbers $x + iy$, where $x, y \in \mathbb{R}$ and $i^2 = -1$; and
$$\mathbb{R}^n = \underbrace{\mathbb{R} \times \mathbb{R} \times \cdots \times \mathbb{R}}_{n \text{ factors}},$$
the set of all $n$-tuples of real numbers. We often write these $n$-tuples as column vectors:
$$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}.$$
$\mathbb{R}$ itself has several important types of subsets, including open intervals,
$$(a, b) = \{x \in \mathbb{R} \mid a < x < b\},$$
closed intervals,
$$[a, b] = \{x \in \mathbb{R} \mid a \leq x \leq b\},$$
and the half-open intervals
$$(a, b] = \{x \in \mathbb{R} \mid a < x \leq b\}, \qquad [a, b) = \{x \in \mathbb{R} \mid a \leq x < b\}.$$
To extend this notation, we sometimes use the symbol $\infty$ in a slightly abusive fashion:
$$(a, \infty) = \{x \in \mathbb{R} \mid x > a\}, \qquad (-\infty, b] = \{x \in \mathbb{R} \mid x \leq b\},$$
and so forth.
In specifying functions, we write $f \colon S \to T$. This graceful notation indicates that $f(x)$ is defined for every element $x$ belonging to $S$, the domain of $f$, and that each such value $f(x)$ belongs to the set $T$, called the codomain of $f$. The codomain of $f$ contains as a subset the set $f(S)$ of all images $f(x)$ of points $x$ belonging to the domain $S$. We call $f(S)$ the range of $f$.
The notation $x \mapsto f(x)$ indicates that the function $f$ maps the point $x$ to the value $f(x)$, the domain and codomain of $f$ being understood from context. Sometimes we write simply $f(x)$ when the function $f$ itself as well as its domain and codomain are understood from context.
Throughout this book we assume that readers are familiar with the basics of calculus and linear algebra. However, it may be useful to review a few notions from these subjects. We devote the rest of this chapter to a summary of facts about bounded sets and normed vector spaces and some frequently used results from calculus.
In numerical analysis, sets of real numbers arise in many contexts. Examples include sequences of approximate values for some quantity, ranges of values for the errors in such approximations, and so forth. It is often important to estimate where these sets lie on the real number line – for example, to guarantee that the possible values for a numerical error lie in a small region around the origin. We say that a set $S \subset \mathbb{R}$ is bounded above if there exists a number $M \in \mathbb{R}$ such that $x \leq M$ for every $x \in S$. In this case, $M$ is an upper bound for $S$. Similarly, $S$ is bounded below if, for some $m \in \mathbb{R}$, $x \geq m$ for every $x \in S$. In this case, $m$ is a lower bound for $S$. A bounded set is one that is bounded both above and below. A set $S$ is bounded if and only if there exists a number $K$ such that $|x| \leq K$ for every $x \in S$.
By extension, if $f$ is a function whose range is bounded above, bounded below, or bounded, then we say that $f$ is bounded above, bounded below, or bounded, respectively.
Most upper and lower bounds give imprecise information. For example, 17 is an upper bound for the set $S$ shown in Figure 1.1, but, as the figure illustrates, the upper bound 2 is sharper. We call $M$ a least upper bound or supremum for $S$ if $M$ is an upper bound for $S$ and $M \leq M'$ whenever $M'$ is an upper bound for $S$. In this case, we write $M = \sup S$. Similar reasoning applies to lower bounds: any negative number is a lower bound for $S$, but so is the more informative number 0. We call $m$ a greatest lower bound or infimum for $S$ if $m$ is a lower bound for $S$ and $m \geq m'$ whenever $m'$ is also a lower bound for $S$. We write $m = \inf S$. The notations $\sup$ and $\inf$ have obvious extensions. For example, if $C$ denotes the unit circle in $\mathbb{R}^2$ and $f$ is a real-valued function defined on $C$, then
$$\sup_{\mathbf{x} \in C} f(\mathbf{x}) = \sup \{ f(\mathbf{x}) \mid \mathbf{x} \in C \}. \tag{1.1}$$
Shortly we discuss conditions under which this quantity exists.
Figure 1.1 The set $S$ and two of its upper bounds.
Not every set has a supremum or an infimum. For example, the set
$$\mathbb{Z} = \{\ldots, -2, -1, 0, 1, 2, \ldots\}$$
of all integers has neither a supremum nor an infimum. The set
$$\mathbb{N} = \{1, 2, 3, \ldots\}$$
of natural numbers has infimum 1 but no supremum. One should take care to distinguish between $\sup$ and $\inf$ and the notions of maximum and minimum. By a maximum of a set $S$, we mean an element $x \in S$ for which $x \geq y$ whenever $y \in S$, and we write $x = \max S$. Thus, $\max [0, 2] = 2$, but $\max [0, 2)$ does not exist. Similarly, an element $x \in S$ is a minimum of $S$ if $x \leq y$ for every $y \in S$. Thus $\min \mathbb{N} = 1$, while $\min \mathbb{Z}$ does not exist. These examples illustrate the fact that $\sup$ and $\inf$ are more general notions than $\max$ and $\min$: $\sup S = \max S$ when $\max S$ exists, but $\sup S$ may exist even when $\max S$ does not. A corresponding statement holds for $\inf$ and $\min$.
The following principle, which one can take as a defining characteristic of $\mathbb{R}$, confirms the fundamental importance of $\sup$ and $\inf$:
LEAST-UPPER-BOUND PRINCIPLE. If a nonempty subset of $\mathbb{R}$ is bounded above, then it has a least upper bound.
Spivak [46, Chapter 8] gives an accessible introduction to this principle. Similarly, every nonempty subset of $\mathbb{R}$ that is bounded below has a greatest lower bound. For example, $\inf \mathbb{N} = 1$. The set $\mathbb{N}$, however, is not bounded above, and it has no least upper bound. The least-upper-bound principle ensures that $\sup_{\mathbf{x} \in C} f(\mathbf{x})$, defined in Eq. (1.1), exists whenever the set of real numbers
$$\{ f(\mathbf{x}) \mid \mathbf{x} \in C \}$$
is bounded above. However, without knowing more about $f$, we cannot guarantee the existence of a point $\mathbf{x} \in C$ where $f$ attains the value $\sup_{\mathbf{x} \in C} f(\mathbf{x})$.
Which subsets of $\mathbb{R}^n$ are bounded? Here we generally have no linear order analogous to the relation $\leq$ on $\mathbb{R}$ on which to base a definition of boundedness. Instead, we rely on the idea of distance, which is familiar from geometry.
The Euclidean length of $\mathbf{x} = (x_1, x_2, \ldots, x_n)^\top \in \mathbb{R}^n$ is
$$\|\mathbf{x}\|_2 = \left( x_1^2 + x_2^2 + \cdots + x_n^2 \right)^{1/2}.$$
The Euclidean distance between two points $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$ is the Euclidean length of their difference, $\|\mathbf{x} - \mathbf{y}\|_2$.
Given a point $\mathbf{y} \in \mathbb{R}^n$ and a positive real number $r$, we call the set of all points in $\mathbb{R}^n$ whose Euclidean distance from $\mathbf{y}$ is less than $r$ the ball of radius $r$ about $\mathbf{y}$. We denote this set as $B_r(\mathbf{y})$. Figure 1.2 depicts such a set in $\mathbb{R}^2$. A set $S \subset \mathbb{R}^n$ is bounded if it is a subset of $B_r(\mathbf{0})$ for some $r > 0$. Observe that, if $\mathbf{x} \in B_r(\mathbf{0})$, then $\|\mathbf{x}\|_2 < r$. One easily checks that a subset of $\mathbb{R}$ is bounded in this sense if and only if it is bounded above and below.
Figure 1.2 The ball $B_r(\mathbf{y})$ of radius $r$ about the point $\mathbf{y}$.
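The Euclidean length, distance, and ball-membership test translate directly into code. The following is a minimal sketch in Python with NumPy; the function names are ours, chosen for illustration:

```python
import numpy as np

def euclidean_length(x):
    """Return ||x||_2 = (x_1^2 + ... + x_n^2)^(1/2)."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.sum(x**2))

def in_ball(x, y, r):
    """Return True if x lies in B_r(y), the open ball of radius r about y."""
    return euclidean_length(np.asarray(x) - np.asarray(y)) < r

x = [3.0, 4.0]
print(euclidean_length(x))           # 5.0
print(in_ball(x, [0.0, 0.0], 6.0))   # True: ||x - 0||_2 = 5 < 6
print(in_ball(x, [0.0, 0.0], 5.0))   # False: the ball is open
```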
Other structural aspects of $\mathbb{R}^n$ also prove useful. Let $S \subset \mathbb{R}^n$. A point $\mathbf{x} \in S$ is an interior point of $S$ if there is some ball $B_r(\mathbf{x})$ such that $B_r(\mathbf{x}) \subset S$. In Figure 1.3, the point $\mathbf{x}$ is an interior point of $S$, but the points $\mathbf{y}$ and $\mathbf{z}$ are not. A point $\mathbf{x}$ (not necessarily belonging to $S$) is a limit point of $S$ if every ball $B_r(\mathbf{x})$ contains at least one element of $S$ distinct from $\mathbf{x}$. In Figure 1.4, $\mathbf{x}$ and $\mathbf{y}$ are limit points of $S$, but $\mathbf{z}$ is not. If every element of $S$ is an interior point, then we call $S$ an open set. If $S$ contains all of its limit points, then we say that $S$ is a closed set. The definitions are by no means mutually exclusive: $\mathbb{R}^n$ and $\emptyset$ are both open and closed.
Figure 1.3 A set $S$, showing an interior point $\mathbf{x}$ and two points $\mathbf{y}, \mathbf{z}$ that are not interior points.
Figure 1.4 A set $S$, along with two limit points $\mathbf{x}$ and $\mathbf{y}$ and a point $\mathbf{z}$ that is not a limit point of $S$.
Finally, a subset of $\mathbb{R}^n$ that is both closed and bounded is compact.1 Thus subsets of $\mathbb{R}$ such as
$$[a, b] \quad \text{and} \quad \{1, 2, \ldots, k\}$$
are compact, while the sets
$$(a, b), \quad \mathbb{N}, \quad \text{and} \quad \mathbb{R}$$
are not. Compact sets in $\mathbb{R}^n$ have several interesting properties, one of which is especially useful in numerical analysis.
If $K \subset \mathbb{R}^n$ is nonempty and compact and $f \colon K \to \mathbb{R}$ is a continuous function, then there are points $\mathbf{x}_*, \mathbf{x}^* \in K$ for which $f(\mathbf{x}_*)$ and $f(\mathbf{x}^*)$ are the minimum and maximum, respectively, of the set $f(K)$.
For a proof, see [40, Chapter 4].
This theorem partially settles an issue raised earlier: If $f$ is a continuous, real-valued function defined on the unit circle $C$, then there is at least one point $\mathbf{x} \in C$ where $f$ takes the value $\sup_{\mathbf{x} \in C} f(\mathbf{x})$ defined in Eq. (1.1). By considering the function $-f$, one can also show that $f$ takes the value $\inf_{\mathbf{x} \in C} f(\mathbf{x})$ at some point in $C$. Both of these statements hold just as well for any nonempty compact set in $\mathbb{R}^n$. We use this generalization in the next section.
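A crude numerical illustration of this fact: sample a continuous function densely on the compact unit circle and take the largest and smallest sampled values. The particular $f$ below is an arbitrary choice of ours, and sampling only approximates the true extremes, so this is a sketch rather than a rigorous computation:

```python
import numpy as np

def f(x, y):
    """An arbitrary continuous function on R^2 (our example, not the book's)."""
    return x**2 - x*y + 2.0*y

# Parametrize the unit circle C by angle and sample f densely; because C is
# compact and f is continuous, max and min of f on C exist, and the sampled
# extremes approximate them.
theta = np.linspace(0.0, 2.0*np.pi, 100001)
values = f(np.cos(theta), np.sin(theta))
print(values.max(), values.min())
```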
Vector spaces are ubiquitous.
A set $V$ is a vector space over $\mathbb{R}$ if there are two operations, addition ($+$) and scalar multiplication, that obey the following rules for any $\mathbf{x}, \mathbf{y}, \mathbf{z} \in V$ and $\alpha, \beta \in \mathbb{R}$:
1. $\mathbf{x} + \mathbf{y} \in V$ and $\alpha \mathbf{x} \in V$; in other words, $V$ is closed algebraically under addition and scalar multiplication.
2. $\mathbf{x} + \mathbf{y} = \mathbf{y} + \mathbf{x}$.
3. $(\mathbf{x} + \mathbf{y}) + \mathbf{z} = \mathbf{x} + (\mathbf{y} + \mathbf{z})$.
4. There is a unique vector $\mathbf{0} \in V$ such that $\mathbf{x} + \mathbf{0} = \mathbf{x}$ for all $\mathbf{x} \in V$.
5. For any $\mathbf{x} \in V$, there is a unique vector $-\mathbf{x} \in V$ such that $\mathbf{x} + (-\mathbf{x}) = \mathbf{0}$.
6. $\alpha (\mathbf{x} + \mathbf{y}) = \alpha \mathbf{x} + \alpha \mathbf{y}$.
7. $(\alpha + \beta) \mathbf{x} = \alpha \mathbf{x} + \beta \mathbf{x}$.
8. $(\alpha \beta) \mathbf{x} = \alpha (\beta \mathbf{x})$.
9. $1 \, \mathbf{x} = \mathbf{x}$.
We refer to $\mathbb{R}$ as the field of scalars. The elements of $V$ are vectors. A set $W$ is a subspace of $V$ if every element of $W$ belongs to $V$ and $W$ is a vector space under the operations that it inherits from $V$. Analogous definitions hold for vector spaces over the field $\mathbb{C}$ of complex numbers.
We denote the scalar multiple $\alpha \mathbf{x}$ by juxtaposing the scalar $\alpha$ and the vector $\mathbf{x}$. In most cases of interest in this book, the algebraic properties of addition and scalar multiplication are obvious from the definitions of the two operations, and the main issue is whether $V$ is closed algebraically under these two operations.
Among the common examples of vector spaces are the finite-dimensional Euclidean spaces $\mathbb{R}^n$, with their familiar rules of addition and scalar multiplication:
$$\mathbf{x} + \mathbf{y} = \begin{pmatrix} x_1 + y_1 \\ \vdots \\ x_n + y_n \end{pmatrix}, \qquad \alpha \mathbf{x} = \begin{pmatrix} \alpha x_1 \\ \vdots \\ \alpha x_n \end{pmatrix}.$$
In this vector space, the zero vector is $\mathbf{0} = (0, 0, \ldots, 0)^\top$, the array that has $0$ as each of its entries. The real line $\mathbb{R} = \mathbb{R}^1$ is perhaps the simplest Euclidean space.
Various sets of functions constitute another important class of vector spaces. For example, if $I \subset \mathbb{R}$ is an interval, then $C^k(I)$ signifies the vector space of all functions $f \colon I \to \mathbb{R}$ for which $f$ and its derivatives through order $k$ are continuous. By extension of this notation, $C^\infty(I)$ denotes the vector space of functions that have continuous derivatives of all orders on $I$. On all of these spaces we define addition and scalar multiplication pointwise:
$$(f + g)(x) = f(x) + g(x), \qquad (\alpha f)(x) = \alpha f(x).$$
Here, the zero vector is the function $0$ that assigns the number $0$ to all arguments $x \in I$. A slightly more general function space is $L^2(I)$. Although the rigorous definition of this space involves some technicalities, for our purposes it suffices to think of $L^2(I)$ as the set of all functions $f \colon I \to \mathbb{R}$ for which
$$\int_I f^2(x) \, dx$$
exists and is finite. Readers who are curious about the technicalities may consult [40, Chapter 11].
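For a concrete, entirely illustrative membership test, one can approximate the integral numerically. The example function is our choice: $f(x) = x^{-1/4}$ is unbounded near $0$ yet belongs to $L^2((0,1))$, since $\int_0^1 x^{-1/2} \, dx = 2$ is finite.

```python
from math import sqrt
from scipy.integrate import quad

f = lambda x: x**(-0.25)

# Integrate f^2 over I = (0, 1); quad evaluates only at interior points,
# so the integrable singularity at 0 causes no difficulty.
integral, _ = quad(lambda x: f(x)**2, 0.0, 1.0)
print(integral)        # approximately 2.0: finite, so f is in L^2((0, 1))
print(sqrt(integral))  # the L^2 norm of f on (0, 1)
```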
A third class of vector spaces consists of the sets $\mathbb{R}^{m \times n}$ of real $m \times n$ matrices. Our notational convention is to use a sans-serif capital letter, such as $\mathsf{A}$, to signify the matrix whose entry in row $i$, column $j$ is the number denoted by the corresponding lowercase symbol $a_{ij}$. If $\mathsf{A}$ and $\mathsf{B}$ are two such matrices, then
$$(\mathsf{A} + \mathsf{B})_{ij} = a_{ij} + b_{ij}, \qquad (\alpha \mathsf{A})_{ij} = \alpha a_{ij}.$$
The additive identity in $\mathbb{R}^{m \times n}$ is the matrix $\mathsf{0}$ all of whose entries are $0$.
Finally, the set $\{\mathbf{0}\}$ is trivially a vector space.
One can use addition and scalar multiplication to construct subspaces.
If $V$ is a real vector space, a linear combination of the vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k \in V$ is a vector of the form $\alpha_1 \mathbf{v}_1 + \alpha_2 \mathbf{v}_2 + \cdots + \alpha_k \mathbf{v}_k$, where $\alpha_1, \alpha_2, \ldots, \alpha_k \in \mathbb{R}$. If $S \subset V$, the span of $S$, denoted $\operatorname{span} S$, is the set of all linear combinations of vectors belonging to $S$. If $\operatorname{span} S = V$, then $S$ spans $V$.
Problem 1.2 asks for proof that $\operatorname{span} S$ is a subspace of $V$ whenever $S \subset V$.
If $V$ is a vector space, then a set $S \subset V$ is linearly independent if no vector $\mathbf{v} \in S$ belongs to $\operatorname{span}(S \setminus \{\mathbf{v}\})$, that is, no vector in $S$ is a linear combination of the other vectors in $S$. Otherwise, $S$ is linearly dependent.
One can regard a linearly independent set as containing minimal information needed to determine its span.
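For finitely many vectors in $\mathbb{R}^n$, linear independence can be checked numerically by comparing a matrix rank with the number of vectors. This sketch (our construction, using NumPy) illustrates the idea:

```python
import numpy as np

def linearly_independent(*vectors):
    """Stack the vectors as columns; they are independent exactly when the
    rank of the resulting matrix equals the number of vectors."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
print(linearly_independent(v1, v2))            # True
print(linearly_independent(v1, v2, v1 + v2))   # False: v1 + v2 lies in span{v1, v2}
```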
A subset $B$ of a vector space $V$ is a basis for $V$ if $B$ is linearly independent and $\operatorname{span} B = V$.
A basic theorem of linear algebra asserts that, whenever two finite sets $B_1$ and $B_2$ are bases for a vector space $V$, $B_1$ and $B_2$ have the same number of elements (see Ref. [48, Chapter 2]). We call this number the dimension of $V$. For example, $\mathbb{R}^n$ has the standard basis $\{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n\}$, where
$$\mathbf{e}_j = (0, \ldots, 0, \underbrace{1}_{j\text{th entry}}, 0, \ldots, 0)^\top.$$
If $V$ has a basis containing finitely many vectors, then we say that $V$ is finite-dimensional. If not, then $V$ is infinite-dimensional.
Given matrices $\mathsf{A} \in \mathbb{R}^{m \times n}$ and $\mathsf{B} \in \mathbb{R}^{n \times p}$, one can compute their matrix product $\mathsf{A}\mathsf{B} \in \mathbb{R}^{m \times p}$,
where
$$(\mathsf{A}\mathsf{B})_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}.$$
If we identify vectors in $\mathbb{R}^n$ with matrices in $\mathbb{R}^{n \times 1}$, then the product of an $m \times n$ real matrix $\mathsf{A}$ with a vector $\mathbf{x} \in \mathbb{R}^n$ is a vector in $\mathbb{R}^m$:
$$\mathsf{A}\mathbf{x} = \begin{pmatrix} (\mathsf{A}\mathbf{x})_1 \\ \vdots \\ (\mathsf{A}\mathbf{x})_m \end{pmatrix}, \qquad \text{where } (\mathsf{A}\mathbf{x})_i = \sum_{j=1}^{n} a_{ij} x_j.$$
In this way, any real $m \times n$ matrix acts as a mapping $\mathbb{R}^n \to \mathbb{R}^m$. It is easy to check that this mapping is a linear operator or linear transformation, that is, that it satisfies the following properties: For any $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$ and any $\alpha \in \mathbb{R}$,
1. $\mathsf{A}(\mathbf{x} + \mathbf{y}) = \mathsf{A}\mathbf{x} + \mathsf{A}\mathbf{y}$ (additivity).
2. $\mathsf{A}(\alpha \mathbf{x}) = \alpha \mathsf{A}\mathbf{x}$ (homogeneity).
In this context, the identity matrix $\mathsf{I} \in \mathbb{R}^{n \times n}$, whose entries are $1$ on the main diagonal and $0$ elsewhere, acts as the identity operator: $\mathsf{I}\mathbf{x} = \mathbf{x}$ for every $\mathbf{x} \in \mathbb{R}^n$.
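The entrywise formulas above translate directly into code. The following sketch (ours, in Python with NumPy) implements the matrix–vector product from its defining sum and verifies additivity and homogeneity numerically on random data:

```python
import numpy as np

def matvec(A, x):
    """Compute (A x)_i = sum_j a_ij * x_j entry by entry."""
    m, n = A.shape
    return np.array([sum(A[i, j] * x[j] for j in range(n)) for i in range(m)])

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
y = rng.standard_normal(4)
alpha = 2.5

print(np.allclose(matvec(A, x), A @ x))                            # matches the built-in product
print(np.allclose(matvec(A, x + y), matvec(A, x) + matvec(A, y)))  # additivity
print(np.allclose(matvec(A, alpha * x), alpha * matvec(A, x)))     # homogeneity
```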
