A new edition of this classic work, comprehensively revised to present exciting new developments in this important subject
The study of numerical methods for solving ordinary differential equations is constantly developing and regenerating, and this third edition of a popular classic volume, written by one of the world’s leading experts in the field, presents an account of the subject which reflects both its historical and well-established place in computational science and its vital role as a cornerstone of modern applied mathematics.
In addition to serving as a broad and comprehensive study of numerical methods for initial value problems, this book places a special emphasis on Runge–Kutta methods, by the mathematician who transformed the subject into its modern form with his classic 1963 and 1972 papers. A second feature is general linear methods, which have matured from being a framework for a unified theory of a wide range of diverse numerical schemes into a source of new and practical algorithms in their own right. As the founder of general linear method research, John Butcher has been a leading contributor to its development; his special role is reflected in the text. The book is written in the lucid style characteristic of the author, and combines enlightening explanations with rigorous and precise analysis. In addition to these anticipated features, the book breaks new ground by including the latest results on the highly efficient G-symplectic methods, which compete strongly with the well-known symplectic Runge–Kutta methods for long-term integration of conservative mechanical systems.
This third edition of Numerical Methods for Ordinary Differential Equations will serve as a key text for senior undergraduate and graduate courses in numerical analysis, and is an essential resource for research workers in applied mathematics, physics and engineering.
Page count: 647
Year of publication: 2016
Cover
Title Page
Copyright
Foreword
Preface to the first edition
Introductory remarks
Concluding remarks
Internet commentary
Acknowledgements
Preface to the second edition
Reintroductory remarks
How to read this book
Internet support pages
Reacknowledgements
Preface to the third edition
A new edition
Attributions and personal names
Algorithms
Notice concerning MATLAB® in this book
Acknowledgements
Chapter 1: Differential and Difference Equations
10 Differential Equation Problems
Exercises 10
11 Differential Equation Theory
Exercises 11
12 Further Evolutionary Problems
Exercises 12
13 Difference Equation Problems
Exercises 13
14 Difference Equation Theory
Exercises 14
15 Location of Polynomial Zeros
Exercises 15
Concluding remarks
Chapter 2: Numerical Differential Equation Methods
20 The Euler Method
Exercises 20
21 Analysis of the Euler Method
Exercises 21
22 Generalizations of the Euler Method
Exercises 22
23 Runge–Kutta Methods
Exercises 23
24 Linear Multistep Methods
Exercises 24
25 Taylor Series Methods
Exercises 25
26 Multivalue Multistage Methods
Exercises 26
27 Introduction to Implementation
Exercises 27
Concluding remarks
Chapter 3: Runge–Kutta Methods
30 Preliminaries
Exercises 30
31 Order Conditions
Exercises 31
32 Low Order Explicit Methods
Exercises 32
33 Runge–Kutta Methods with Error Estimates
Exercises 33
34 Implicit Runge–Kutta Methods
Exercises 34
35 Stability of Implicit Runge–Kutta Methods
Exercises 35
36 Implementable Implicit Runge–Kutta Methods
Exercises 36
37 Implementation Issues
Exercises 37
38 Algebraic Properties of Runge–Kutta Methods
Exercises 38
39 Symplectic Runge–Kutta Methods
Exercises 39
Concluding remarks
Chapter 4: Linear Multistep Methods
40 Preliminaries
Exercises 40
41 The Order of Linear Multistep Methods
Exercises 41
42 Errors and Error Growth
Exercises 42
43 Stability Characteristics
Exercises 43
44 Order and Stability Barriers
Exercises 44
45 One-leg Methods and G-stability
Exercises 45
46 Implementation Issues
Exercises 46
Concluding remarks
Chapter 5: General Linear Methods
50 Representing Methods in General Linear Form
Exercises 50
51 Consistency, Stability and Convergence
Exercises 51
52 The Stability of General Linear Methods
Exercises 52
53 The Order of General Linear Methods
Exercises 53
54 Methods with Runge–Kutta stability
Exercises 54
55 Methods with Inherent Runge–Kutta Stability
Exercises 55
56 G-symplectic methods
Exercises 56
Concluding remarks
References
Index
End User License Agreement
Chapter 1: Differential and Difference Equations
Figure 103(i) Simple pendulum
Figure 104(i) Solution and most negative eigenvalue for the Robertson problem
Figure 105(i) Van der Pol problem with
Figure 105(ii) Van der Pol problem with
Figure 106(i) Phase diagram for Lotka–Volterra solution with , together with seven alternative orbits
Figure 120(i) A solution to the restricted three–body problem
Figure 120(ii) A second solution to the restricted three–body problem
Figure 120(iii) A figure-eight orbit for three equal masses
Figure 121(i) Solution to delay differential equation (121b)
Figure 121(ii) Solution to neutral delay differential equation (121c)
Figure 122(i) Solution to problem (122c) with pointing out of the page
Figure 123(i) Illustration of symplectic behaviour for two problems: (left) and (right). The underlying image depicts the North Island brown kiwi, Apteryx mantelli.
Chapter 2: Numerical Differential Equation Methods
Figure 200(i) An example of the Euler method
Figure 201(i) Local truncation error (dotted line) and global truncation error (full line) for problem (201a).
Figure 202(i) Constant stepsize (○) and variable stepsize (•) for orbit with eccentricity . and
Figure 203(i) Norm error against for the ‘mildly stiff’ problem (203a)
Figure 203(ii) Norm error against tolerance T for the ‘mildly stiff’ problem (203a) with variable stepsize
Figure 203(iii) Stepsize h against x for the ‘mildly stiff’ problem (203a) with variable stepsize for
Figure 204(i) Norm error against for the ‘mildly stiff’ problem (203a) using the method (204a)
Figure 214(i) Error versus stepsize for problem (214a) at two alternative output points
Figure 214(ii) Error versus stepsize for problem (214c) at two alternative output points
Figure 216(i) Stability region: Euler method (left); implicit Euler method (right)
Figure 216(ii) Order star: Euler method (left); implicit Euler method (right)
Figure 216(iii) Order arrows: Euler method (left); implicit Euler method (right)
Figure 218(i) Schema showing effects of rounding error
Figure 218(ii) Errors for naive (○) and sophisticated (•) forms of the Euler method
Figure 218(iii) Accumulation of rounding errors in low accuracy calculations with sophisticated Euler, showing y (dashed line) and (solid line); also, for comparison, crude Euler (dotted line)
Figure 223(i) Errors in problem (223a) using Taylor series with orders
Figure 224(i) Classification of general method types
Figure 234(i) Some illustrative rooted trees
Figure 234(ii) Calculation details for and t!, where
Figure 238(i) Stability regions for some explicit Runge–Kutta methods
Figure 238(ii) Attempts to draw stability region for RK5. Left: using (238c); centre: plotting points only; right: using (238d).
Figure 238(iii) Stability regions for some implicit Runge–Kutta methods
Figure 239(i) Orbital calculations for various Runge–Kutta methods
Figure 239(ii) Runge–Kutta methods with cost corrections
Figure 247(i) Orbital calculations for various PEC methods
Figure 247(ii) Orbital calculations for various PECE methods
Figure 252(i) Taylor series calculations
Figure 252(ii) Taylor series calculations with cost correction
Figure 255(i) Predictor–corrector multiderivative methods for problem (252a)
Figure 255(ii) Rosenbrock method given by (254a) for problem (203a)
Figure 265(i) Comparison of Runge–Kutta with pseudo Runge–Kutta method using the Kepler problem with
Figure 273(i) Third order ARK method computations for the Kepler problem
Figure 274(i) Errors and number of rejections for (274a)
Chapter 3: Runge–Kutta Methods
Figure 316(i) Exact solution of the problems (316e) and (316f) on , .
Figure 319(i) Growth of global errors from local errors referred to the computed solution
Figure 319(ii) Growth of global errors from local errors referred to the exact solution
Figure 321(i) The condition relating (left-hand tree) to (right-hand tree). The underlying tree is a pohutukawa (Metrosideros excelsa), also known as the ‘New Zealand Christmas tree’ because its bright red flowers bloom at Christmastime.
Figure 321(ii) The condition relating (left-hand tree) to (middle tree) and (right-hand tree). The underlying tree is a kauri (Agathis australis). Although the immature tree shown is only a few metres tall, the most famous kauri tree, Tane Mahuta (Lord of the Forest), has a height of 40 m and a diameter, 1.5 m above ground level, of 5.21 m.
Figure 332(i) Two alternative stepsize control mechanisms based on Richardson (dashed line) and built-in (solid line) error estimates
Figure 335(i) Stability regions of embedded Verner method with orders 5 and 6
Figure 342(i) Schema representing Theorem 342C. Inset: Corollary 342D
Figure 350(i) Stability region for the method (350b)
Figure 354(i) Order star for the (1, 3) Padé approximation to exp
Figure 354(ii) Relation between order arrows and order stars
Figure 364(i) Error constant for
Figure 395(i) Solutions of the Hamiltonian problem . Left: Euler method (grey) and implicit Euler method (white). Right: exact solution (grey) and implicit mid-point method (white). The underlying image depicts the takahe Porphyrio hochstetteri, rediscovered in 1948 after many years of presumed extinction.
Figure 395(ii) Deviation of the numerical Hamiltonian from its initial value over two periods of the simple pendulum with initial value using the mid-point rule method. The dashed line is for and the full line is for
Figure 395(iii) Deviation of the numerical Hamiltonian from its initial value over two periods of the simple pendulum with initial value using the order 4 Gauss method. The dashed line is for and the full line is for
Figure 395(iv) Deviation of the numerical Hamiltonian from its initial value for the simple pendulum with initial value , over an extended time interval, using the order 4 Gauss method with for time steps
Figure 395(v) Experiments for problem (122c). The computed value of is shown after , steps
Chapter 4: Linear Multistep Methods
Figure 421(i) Development of accumulated errors in a single step
Figure 430(i) Stability region for the second order Adams–Bashforth method
Figure 432(i) Stability region for the third order Adams–Bashforth method
Figure 432(ii) Stability region for the fourth order Adams–Bashforth method
Figure 432(iii) Stability region for the third order Adams–Moulton method
Figure 432(iv) Stability region for the second order backward difference method
Figure 432(v) Stability region for the third order backward difference method
Figure 434(i) Stability regions for Adams–Moulton methods (solid lines) and PEC methods (dashed lines)
Figure 434(ii) Stability regions for PECE methods with (solid lines) and methods (dashed lines). In each case p is attached to the curves.
Figure 442(i) Order star for the second order BDF method
Figure 442(ii) Order star for the third order BDF method
Figure 443(i) Order arrows for order 2 BDF method
Figure 443(ii) Order arrows for order 3 BDF method
Chapter 5: General Linear Methods
Figure 511(i) A commutative diagram for covariance
Figure 522(i) Unmodified (left) and modified (right) order arrows for the approximation [4, 2]
Figure 522(ii) Homotopy from an order 3 to an order 4 approximation
Figure 522(iii) Illustrating the impossibility of A-stable methods with
Figure 531(i) Representation of local truncation error
Figure 531(ii) Representation of global truncation error
Figure 560(i) Variation in for , with ;
Figure 561(i) Variation in the Hamiltonian in attempts to solve the simple pendulum problem using method P with and time steps, with initial value (upper figure) and (lower figure)
Figure 561(ii) Variation in the Hamiltonian in attempts to solve the simple pendulum problem using method N with and time steps, with initial value (upper figure) and (lower figure)
Figure 561(iii) Variation in the Hamiltonian in solutions of the simple pendulum problem using method (561c) with and time steps, with initial value (upper figure) and (lower figure)
Figure 561(iv) Variation in the Hamiltonian in solutions of the simple pendulum problem with and time steps, with initial value using method (561c) (upper figure) and the Gauss method (lower figure)
Figure 565(i) Simulation of the simple pendulum, using G4123 with and
Figure 565(ii) Simulation of the simple pendulum, using Gauss with and
Figure 565(iii) Simulation of the simple pendulum, using G4124 with and
Figure 566(i) The underlying one-step method related to idealized starting methods and related mappings
Figure 566(ii) Deviation from cohesiveness for the simple pendulum using G4124 for five million steps
Figure 566(iii) Alternative view of Figure 566(ii)
Figure 566(iv) Deviation from cohesiveness for the Hénon–Heiles problem using G4124 for five million steps
Figure 567(i) Relation between various mappings and the underlying one-step method
Figure 568(ii) Variation in the Hamiltonian in solutions of the Hénon–Heiles problem for using G6245. The variation in the Hamiltonian is bounded by and there is no apparent drift over steps.
Chapter 1: Differential and Difference Equations
Table 103(I) Period of simple pendulum for various amplitudes
Table 106(I) Approximations to the period T, given by (106d) for
Table 134(I) The first few terms in the solutions of some difference equations
Chapter 2: Numerical Differential Equation Methods
Table 201(I) Euler method: problem (201a)
Table 201(II) Euler method: problem (201b) with
Table 201(III) Euler method: problem (201b) with
Table 201(IV) Euler method: problem (201b) with
Table 204(I) Comparison of explicit and implicit Euler methods: problem (201a)
Table 214(I) An example of enhanced order for problem (214a)
Table 214(II) An example of reduced order for problem (214c)
Table 218(I) Ten steps of sophisticated Euler to three significant decimals
Table 221(I) Errors in the numerical solution of the orbital problem (201b) with zero eccentricity through a half period using the Runge–Kutta method (221a)
Table 222(I) Errors in the numerical solution of the orbital problem (201b) with zero eccentricity through a half period using (222a)
Table 234(I) The rooted trees up to order 4
Table 244(I) Coefficients and error constants for Adams–Bashforth methods
Table 244(II) Coefficients and error constants for Adams–Moulton methods
Table 253(I) Coefficients defined by (253c)
Table 253(II) Coefficients defined by (253d)
Table 273(I) Global error and numbers of steps for varying tolerance with the Kepler problem
Chapter 3: Runge–Kutta Methods
Table 300(I) Unrooted and rooted trees to order 6
Table 301(I) Trees, notations for trees and various functions on trees
Table 305(I) Various enumerations of rooted and unrooted trees up to order 10
Table 305(II) The bijection relating a monotonically labelled fourth order tree t and
Table 305(III) The bijection relating a fully labelled third order tree t and
Table 306(I) Rooted trees as graphs and directed graphs
Table 308(I) Members of and their symmetries
Table 310(I) Relation between terms in y derivatives and rooted trees
Table 310(II) Elementary differentials for orders 1 to 5
Table 312(I) Relation between elementary weights and rooted trees
Table 312(II) Elementary weights for orders 1 to 5
Table 314(I) Trees to order 4 with corresponding differential equations
Table 321(I) Order conditions corresponding to some pairs of related trees
Table 321(II) Sets of three related trees illustrating
Table 344(I) Methods in the Radau and Lobatto families
Table 352(I) Padé approximations for
Table 352(II) Diagonal members of the Padé Table for
Table 352(III) First sub-diagonal members of the Padé Table for
Table 352(IV) Second sub-diagonal members of the Padé Table for
Table 352(V) Error constants for diagonal and first two sub-diagonals
Table 363(I) Laguerre polynomials for degrees
Table 363(II) Zeros of Laguerre polynomials for degrees
Table 372(I) Minimal value of stepsize ratio and maximal value of error /T for step acceptance
Table 383(I) Formulae for up to trees of order 5
Table 383(II) Formulae for up to trees of order 5
Table 388(I) Effective order conditions
Table 388(II) Group elements associated with a special effective order 4 method
Chapter 4: Linear Multistep Methods
Table 412(I) Coefficients of the backward difference methods up to order 7
Table 461(I) Coefficients, , for Nordsieck methods
Chapter 5: General Linear Methods
Table 521(I) Some generalized Padé approximations
Table 541(I) Types of DIMSIM and related methods
Table 543(I) Calculation of stages and stage derivatives for the method (505a)
Table 543(II) Output and input values for (505a) evaluated at fifth order trees
Table 560(I) Calculations to verify order for P
J. C. Butcher
This edition first published 2016
© 2016, John Wiley & Sons, Ltd
First Edition published in 2003
Second Edition published in 2008
Registered office
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom
For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.
The right of the author to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. It is sold on the understanding that the publisher is not engaged in rendering professional services and neither the publisher nor the author shall be liable for damages arising herefrom. If professional advice or other expert assistance is required, the services of a competent professional should be sought.
Library of Congress Cataloging-in-Publication data applied for
ISBN: 9781119121503
A catalogue record for this book is available from the British Library.
This book is devoted to a subject – the numerical solution of ordinary differential equations – where practical relevance meets mathematical beauty in a unique way. Its writing is masterly, as befits its author, someone whose past and current contributions to the field are second to none in history.
The numerical integration of differential equations plays a crucial role in all applications of mathematics. Virtually all the scientific laws that govern the physical world can be expressed as differential equations; therefore making explicit the implications and consequences of these laws requires finding the solutions to the equations. This necessity, coupled with the unfortunate fact that there is no general rule for solving a given differential equation analytically, has led over the years to the introduction by the best mathematical minds of a raft of techniques applicable only to particular equations or oriented to specific features of the solutions sought. While some of those efforts have significantly spurred the development of mathematics in the last 300 years (e.g. they have given us the theory of special functions, Lie groups and topology), numerical methods are the only master key for solving differential equations.
The subject matter of this volume is not only of exceptional relevance due to its importance in practical applications; it also constitutes a rich and elegant branch of mathematics with exceptionally distinguished roots. As is well known, the simplest numerical algorithm to solve (ordinary) differential equations was suggested by Euler in the mid 18th century. It is less well known that, for Euler, what we now call Euler’s method was just a stepping stone in his insightful presentation of the Taylor series method of arbitrary order. Euler also carefully discussed the use of variable order and variable step lengths and implementation details. The next milestone of the subject, the introduction of multistep algorithms, was reached in the mid 19th century by Adams, the scientist best known for having co-discovered the existence of the planet Neptune using only mathematics. Another important class of numerical integrators was introduced by Runge and systematized by Kutta around the year 1900. Thus, 100 years ago, the sciences had a pressing need to solve differential equations, so the mathematicians put forward many useful algorithms to solve them … and yet there was a big gap: carrying out the required computations was typically unfeasible when pencil and paper or mechanical machines were the only ways of performing arithmetic operations. It is no exaggeration to state that the need to implement in practice the integration algorithms of Adams, Runge and Kutta led to the conception and construction of (digital, electronic) computers; after all, one of the first computers was named ENIAC (Electronic Numerical Integrator And Computer). Since then, computers have of course revolutionized not only the mathematical solution of differential equations but almost everything else.
It was only natural that when the use of computers became widespread, mathematicians asked themselves whether the venerable integrators introduced by Adams, Runge and Kutta were the best conceivable. As it turned out, in the multistep field, the beautiful mathematics of Dahlquist showed that for nonstiff problems it is not really feasible to do much better than Adams had suggested. By the addition of extra free parameters, it is possible to concoct more sophisticated integrators, but these are doomed to be unstable. In the Runge-Kutta realm the situation is just the opposite: the many degrees of freedom in the world of Runge-Kutta integrators have shown themselves capable of providing a good integrator for each situation.
The construction and analysis of specific Runge-Kutta schemes is a daunting job if one approaches it as Runge and Kutta did; these schemes are highly nonlinear with a remarkable Matryoshka doll structure, where the vector field has to be evaluated at an expression that involves the vector field evaluated at an expression that involves the vector field … Mathematics owes much to the author of this book for a simple, elegant and general alternative methodology based on the use of trees and other algebraic ideas. It is thanks to J. C. Butcher’s techniques that many authors have been able in recent decades to develop Runge-Kutta methods tailored to different needs and to implement them in useful software. Many such techniques have found uses away from numerical mathematics in fields such as quantum field theory and noncommutative geometry. Connes (Fields Medallist 1982) and Kreimer, writing on renormalization theory, state: ‘We regard Butcher’s work on the classification of numerical integration methods as an impressive example that concrete problem-oriented work can lead to far-reaching conceptual results.’
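The nested structure can be seen already in a single step of the classical fourth order Runge-Kutta method. The following minimal sketch (an illustration added here, not code from the book) shows the vector field f being evaluated at points that themselves depend on earlier evaluations of f:

```python
import math

def rk4_step(f, x, y, h):
    # One step of the classical fourth order Runge-Kutta method.
    # Each stage evaluates f at a point built from earlier evaluations
    # of f -- the "Matryoshka doll" nesting described above.
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Apply it to y' = y, y(0) = 1, whose exact solution is exp(x).
f = lambda x, y: y
x, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(f, x, y, h)
    x += h
error = abs(y - math.exp(1.0))
```

Substituting k1 into k2, k2 into k3 and so on shows why a direct Taylor expansion, as Runge and Kutta carried it out, becomes unmanageable at higher orders, and why the tree-based calculus is such a simplification.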
The author wrote an earlier text restricted to Runge-Kutta and general linear methods. The present, more comprehensive volume is now in its third, significantly updated edition; I am sure it will be as well received as the two preceding editions.
J. M. Sanz-Serna
Universidad Carlos III de Madrid
This book represents an attempt to modernize and expand my previous volume, The Numerical Analysis of Ordinary Differential Equations: Runge–Kutta and General Linear Methods. It is more modern in that it considers several topics that had not yet emerged as important research areas when the former book was written. It is expanded in that it contains a comprehensive treatment of linear multistep methods. This achieves a better balance than the earlier volume, which made a special feature of Runge–Kutta methods.
In order to accommodate the additional topics, some sacrifices have been made. The background work which introduced the earlier book is here reduced to an introductory chapter dealing only with differential and difference equations. Several topics that seem to be still necessary as background reading are now introduced in survey form where they are actually needed. Some of the theoretical ideas are now explained in a less formal manner. It is hoped that mathematical rigour has not been seriously jeopardized by the use of this more relaxed style; if so, then there should be a corresponding gain in accessibility. It is believed that no theoretical detail has been glossed over to the extent that an interested reader would have any serious difficulty in filling in the gaps.
It is hoped that lowering the level of difficulty in the exposition will widen the range of readers who might be able to find this book interesting and useful. With the same idea in mind, exercises have been introduced at the end of each section.
Following the chapter on differential and difference equations, Chapter 2 is presented as a study of the Euler method. However, it aims for much more than this in that it also reviews many other methods and classes of methods as generalizations of the Euler method. This chapter can be used as a broad-ranging introduction to the entire subject of numerical methods for ordinary differential equations.
Chapter 3 contains a detailed analysis of Runge–Kutta methods. It includes studies of the order, stability and convergence of Runge–Kutta methods and also considers in detail the design of efficient explicit methods for non-stiff problems. For implicit methods for stiff problems, inexpensive implementation costs must be added to accuracy and stability as a basic requirement. Recent work on each of these questions is surveyed and discussed.
Linear multistep methods, including the combination of two methods as predictor–corrector pairs, are considered in Chapter 4. The theory interrelating stability, consistency and convergence is presented together with an analysis of order conditions. This leads to a proof of the (first) ‘Dahlquist barrier’. The methods in this class which are generally considered to be the most important for the practical solution of non-stiff problems are the Adams–Bashforth and Adams–Moulton formulae. These are discussed in detail, including their combined use as predictor–corrector pairs. The application of linear multistep methods to stiff problems is also of great practical importance and the treatment will include an analysis of the backward difference formulae.
In Chapter 5 the wider class of general linear methods is introduced and analysed. Questions analogous to those arising in the classical Runge–Kutta and linear multistep methods – that is, questions of consistency, stability, convergence and order – are considered and explored. Several sub-families of methods that have potential practical usefulness are examined in detail. This includes the so-called DIMSIM methods and a new type of method exhibiting what is known as inherent Runge–Kutta stability.
The remarks in the following paragraphs are intended to be read following Chapter 5.
Any account of this rapidly evolving subject is bound to be incomplete. Complete books are all alike; every incomplete book is incomplete in its own way.
It has not been possible to deal adequately with implementation questions. Numerical software for evolutionary problems entered its modern phase with the DIFSUB code of Gear (1971b). ‘Modern’ in this sense means that most of the ingredients of subsequent codes were present. Both stiff and non-stiff problems are catered for; provision is made for Jacobian calculation either by subroutine call or by difference approximation, the choice being up to the user. Most important, automatic selection of stepsize and order is made dynamically as the solution develops. Compared with this early implementation of linear multistep methods, the Radau code (Hairer and Wanner 1996) uses implicit Runge–Kutta methods for the solution of stiff problems.
In recent years, the emphasis in numerical methods for evolutionary problems has moved beyond the traditional areas of non-stiff and stiff problems. In particular, differential algebraic equations have become the subject of intense analysis as well as the development of reliable and efficient algorithms for problems of variable difficulty, as measured for example by the indices of the problems. Some basic references in this vibrant area are Brenan, Campbell and Petzold (1996) and Hairer, Lubich and Roche (1989). In particular, many codes are now designed for applications to stiff ordinary differential equations in which algebraic constraints also play a role. On the Runge–Kutta side, Radau is an example of this multipurpose approach. On the linear multistep side, Petzold’s DASSL code is closely related to Gear’s DIFSUB code but has the capability of solving differential algebraic equations, at least of low index.
Many problems derived from mechanical systems can be cast in a Hamiltonian formulation. To faithfully model the behaviour of such problems it is necessary to respect the symplectic structure. Early work on this by the late Feng Kang has led to worldwide activity in the study of this type of question. A basic reference on Hamiltonian problems is Sanz-Serna and Calvo (1994).
The emphasis on the preservation of qualitative features of a numerical solution has now grown well beyond the Hamiltonian situation and has become a mathematical discipline in its own right. We mention just two key references in this emerging subject of ‘geometric integration’. They are Iserles et al. (2000) and Hairer, Lubich and Wanner (2006).
Undoubtedly there will be comments and suggestions raised by readers of this volume. A web resource has been developed to form a commentary and information exchange for issues as they arise in the future. The entry point is
http://www.math.auckland.ac.nz/~butcher/book
I acknowledge with gratitude the support and assistance of many people in the preparation of this volume. The editorial and production staff at Wiley have encouraged and guided me through the publishing process. My wife, children, grandchildren and stepchildren have treated me gently and sympathetically.
During part of the time I have been working on this book, I have received a grant from the Marsden Fund. I am very grateful for this assistance both as an expression of confidence from my scientific colleagues in New Zealand and as practical support.
The weekly workshop in numerical analysis at The University of Auckland has been an important activity in the lives of many students, colleagues and myself. We sometimes refer to this workshop as the ‘Runge–Kutta Club’. Over the past five or more years especially, my participation in this workshop has greatly added to my understanding of numerical analysis through collaboration and vigorous discussions. As this book started to take shape they have provided a sounding board for many ideas, some of which were worked on and improved and some of which were ultimately discarded. Many individual colleagues, both in Auckland and overseas, have read and worked through drafts of the book at various stages of its development. Their comments have been invaluable to me and I express my heartfelt thanks.
Amongst my many supportive colleagues, I particularly want to name Christian Brouder, Robert Chan, Tina Chan, David Chen, Allison Heard, Shirley Huang, Arieh Iserles, Zdzisław Jackiewicz, Pierre Leone, Taketomo (Tom) Mitsui, Nicolette Moir, Steffen Schulz, Anjana Singh, Angela Tsai, Priscilla Tse and Will Wright.
The incremental changes incorporated into this edition are an acknowledgement of progress in several directions. The emphasis on structure-preserving algorithms has driven much of this recent progress, but not all of it. The classical linear multistep and Runge–Kutta methods have always been special cases of the large family of general linear methods, but this observation is of no consequence unless some good comes of it. In my opinion, there are only two good things that might be worth achieving. The first is that exceptionally good methods might come to light which would not have been found in any other way. The second is that a clearer insight and perhaps new overarching theoretical results might be expressed in the general linear setting. I believe that both these aims have been achieved, but other people might not agree. However, I hope it can be accepted that some of the new methods which arise naturally as general linear methods have at least some potential in practical computation. I hope also that looking at properties of traditional methods from within the general linear framework will provide additional insight into their computational properties.
Of the five chapters of this book, the first two are the most introductory in nature. Chapter 1 is a review of differential and difference equations with a systematic study of their basic properties balanced against an emphasis on interesting and prototypical problems. Chapter 2 provides a broad introduction to numerical methods for ordinary differential equations. This is motivated by the simplicity of the Euler method and a view that other standard methods are systematic generalizations of this basic method. If Runge–Kutta and linear multistep methods are generalizations of Euler then so are general linear methods, and it is natural to introduce a wide range of multivalue–multistage methods at this elementary level.
A reading of this book should start with these two introductory chapters. For a reader less experienced in this subject this is an obvious entry point, but the chapters also have a role for a reader who is ready to go straight into the later chapters. For such readers they will not take very long to read, but they do set the scene for an entry into the most technical parts of the book.
Chapter 3 is intended as a comprehensive study of Runge–Kutta methods. A full theory of order and stability is presented and at least the early parts of this chapter are prerequisites for Chapter 5 and to a lesser extent for Chapter 4. The use of B-series, or the coefficients that appear in these series, is becoming more and more a standard tool for a full understanding of modern developments in this subject.
Chapter 4 is a full study of linear multistep methods. It is based on Dahlquist’s classic work on consistency, stability and order and includes analysis of linear and nonlinear stability. In both Chapters 3 and 4 the use of order stars to resolve order and stability questions is complemented by the introduction of order arrows. It is probably a good idea to read through most of Chapter 4 before embarking on Chapter 5. This is not because general linear methods are intrinsically inaccessible, but because an appreciation of their overarching nature hinges on an appreciation of the special cases they include.
General linear methods, the subject of Chapter 5, treat well-known methods in a unified way, but it is hoped that they do more than this. There really seem to be new and useful methods buried amongst them which cannot be easily motivated in any other way. Thus, while this chapter needs to be put aside to be read as a culmination, it should not be put off too long. There is so much nice mathematics already associated with these methods, and the promise of more to come provides attraction enough. It is general linear methods, and the stability functions associated with them, that really put order arrows in their rightful place.
For additional information and supporting material see
http://www.math.auckland.ac.nz/~butcher/ODE-book-2008
I have many people to thank and to rethank in my efforts to produce an improved edition. My understanding of the stability and related properties of general linear methods has been sharpened by working with Adrian Hill and Laura Hewitt. Helmut Podhaisky has given me considerable help and advice, especially on aspects of general linear method implementation. My special thanks to Jane Lee for assistance with the final form of the manuscript. A number of people have made comments and provided corrections on the first edition or made constructive suggestions on early drafts of this new version. In addition to people acknowledged in some other way, I would like to mention the names of Ian Gladwell, Dawoomi Kim, Yoshio Komori, René Lamour, Dione O’Neale, Christian Perret, Higinio Ramos, Dave Simpson, Steve Stalos, Caren Tischendorf, Daniel Weiß, Frank Wrona and Jinsen Zhuang.
‘Numerical methods for ordinary differential equations’ is a mature and stable subject, and new ideas and techniques should not be expected too frequently. While this new edition is a consolidation of the early editions, it does attempt to take the subject further in some directions. Most notably, Section 56, dealing with G-symplectic methods, records some major advances in what is now known about these methods. At the time of the appearance of the second edition the author, and people with whom he was collaborating, did not fully appreciate the disastrous role that parasitism could play in extended time integrations. This shortcoming has now been overcome. In Butcher, Habib, Hill and Norton (2014), parasitism has been analysed and largely dealt with as a source of numerical difficulty. A recent result (Butcher 2015) has gone some way towards explaining why G-symplectic methods work as well as they do. However, D’Ambrosio and Hairer (2014) show that the suppression of unstable behaviour caused by parasitism cannot be relied on forever. This third edition attempts to present this new work in a manner that underlines the potential of G-symplectic methods as practical integrators for Hamiltonian and other problems. Although known G-symplectic methods are generally restricted to order 4, a new result (Butcher, Imran and Podhaisky 2016) shows how order 6 methods can be constructed. Limited numerical testing confirms the order and conservation properties of this new method and one of these tests is quoted as Figure 568(ii).
Chapter 3 played a central role in the previous editions and does so again. The aspects of this subject dealing with the composition group will be difficult for some readers, and this edition attempts to explain them in a fresh way. But the importance of algebraic analysis, or anything else of a theoretical nature, must, to a large extent, be in the applications. This theory is not merely a mathematical device; it leads to the construction of useful numerical methods and gives insight into the nature of these methods.
Many results in numerical analysis began as conjectures and were eventually proved, but not always by the individuals who formulated the conjectures. For example, the Ehle (Ehle 1973) and Daniel–Moore (Daniel and Moore 1970) conjectures were not settled until the invention of order stars (Wanner, Hairer and Nørsett 1978). In this volume I have tried to use the convention of naming these results after the first people to formulate them, because I believe this avoids confusion about which result is being referred to.
Some authors refer to the commonly used tableau of coefficients defining a specific Runge–Kutta method as a ‘Butcher tableau’. In this third edition I sometimes follow this terminology, but the single word ‘tableau’ is usually enough to make it clear what is being referred to in any particular case. There are many different products associated with rooted trees, the most important being used for the construction of forests as the product of trees. However, in this book, extensive use is made of the product formed by adjoining the roots of two trees and defining the root of the product to be the root of the first factor. This is sometimes referred to as the ‘Butcher product’ and this name will be used in this edition.
The late Germund Dahlquist once told me why he used the name ‘A-stability’ for this fundamental linear stability definition. It was simply to follow the lead of David M. Young who used ‘property A’ as the name of a fundamental condition in an entirely different area of numerical analysis. When Germund introduced the nonlinear definition ‘G-stability’, he was referring to the matrix G which appears in the formulation of the concept. Shortly afterwards I needed a name for nonlinear stability in the case of Runge–Kutta methods, and I chose B because it next follows A. The fact that B is one of my initials is no more significant than the fact that G is one of the initials of Germund Dahlquist.
The purpose of including formal algorithms in this book is to illuminate the numerical processes they represent. Hence, they should not be interpreted as computer codes. Nevertheless, with little or no change, they can be used as scripts or functions in a variety of languages including MATLAB®, Octave and Scilab.
MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book’s use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® software.
I am grateful to Ernst Hairer for correspondence which led to an appreciation of the nature of parasitism. For the method P, introduced in Subsection 560, applied to the simple pendulum, it is observed that, for small amplitudes, very little goes wrong but, for larger amplitudes, parasitism eventually produces disastrous effects. Ernst kindly sent me his own analysis of this phenomenon.
Many colleagues throughout the world have discussed interesting questions with me, and I am grateful for these stimulating interactions. There are too many to name individually but I do want to make a special mention of the colleagues and former students who have collaborated with me and learnt with me about G-symplectic and other types of general linear methods: Saghir Ahmad, Yousaf Habib, Adrian Hill, Gulshad Imran, Terrence Norton and Helmut Podhaisky.
Over the years, including the period when I worked on this new edition, I have received generous support from the Marsden Fund and this has given me opportunities I would not otherwise have had.
I have enjoyed the support of children, stepchildren and grandchildren in all aspects of my life. I express my special appreciation for the nest Jennifer has provided for me: a bower ‘full of sweet dreams, and health, and quiet breathing’.
As essential tools in scientific modelling, differential equations are familiar to every educated person. In this introductory discussion we do not attempt to restate what is already known, but rather to express commonly understood ideas in the style that will be used for the rest of this book.
The aim will always be to understand, as much as possible, what we expect to happen to a quantity that satisfies a differential equation. At the most obvious level, this means predicting the value this quantity will have at some future time. However, we are also interested in more general questions such as the adherence to possible conservation laws or perhaps stability of the long-term solution. Since we emphasize numerical methods, we often discuss problems with known solutions mainly to illustrate qualitative and numerical behaviour.
Even though we sometimes refer to ‘time’ as the independent variable, that is, as the variable on which the value of the ‘solution’ depends, there is no reason for insisting on this interpretation. However, we generally use x to denote the ‘independent’ or ‘time’ variable and y to denote the ‘dependent variable’. Hence, differential equations will typically be written in the form

y'(x) = f(x, y(x)),        (100a)

where f is a given function of x and of the solution value y(x). Sometimes, for convenience, we omit the x in y(x) and write the equation as y' = f(x, y).
The terminology used in (100a) is misleadingly simple, because y could be a vector-valued function. Thus, if we are working in R^N, and x is permitted to take on any real value, then the domain and range of the function f which defines a differential equation, and of the solution to this equation, are given by

f : R × R^N → R^N,    y : R → R^N.
Since we might be interested in time values that lie only in some interval [a, b], we sometimes consider problems in which f : [a, b] × R^N → R^N and y : [a, b] → R^N. When dealing with specific problems, it is often convenient to focus, not on the vector-valued functions f and y, but on individual components. Thus, instead of writing a differential equation system in the form of (100a), we can write coupled equations for the individual components:

y'_k(x) = f_k(x, y_1(x), y_2(x), ... , y_N(x)),    k = 1, 2, ... , N.
A differential equation for which f is a function not of x, but of y only, is said to be ‘autonomous’. Some equations arising in physical modelling are more naturally expressed in one form or the other, but we emphasize that it is always possible to write a non-autonomous equation in an equivalent autonomous form. All we need to do to change the formulation is to introduce an additional component y_{N+1} into the y vector, and ensure that this can always maintain the same value as x, by associating it with the differential equation y'_{N+1} = 1. Thus, the modified system is

y'_k(x) = f_k(y_{N+1}(x), y_1(x), y_2(x), ... , y_N(x)),    k = 1, 2, ... , N,
y'_{N+1}(x) = 1.        (100c)
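As a concrete illustration of this reformulation (a sketch of my own, not taken from the text; the function names are hypothetical), the extra component and its trivial differential equation can be added mechanically:

```python
import numpy as np

def autonomize(f):
    """Given f with y' = f(x, y), return F defining the equivalent
    autonomous system Y' = F(Y), where Y carries x as its last component."""
    def F(Y):
        y, x = Y[:-1], Y[-1]
        return np.append(f(x, y), 1.0)  # the added equation: x' = 1
    return F

# Example: the non-autonomous scalar equation y' = x*y
F = autonomize(lambda x, y: x * y)
Fval = F(np.array([2.0, 3.0]))  # state y = 2 with time component x = 3
print(Fval)  # [6. 1.]
```

Any solver written for autonomous systems Y' = F(Y) can then be applied unchanged to the original non-autonomous problem.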
A system of differential equations alone does not generally define a unique solution, and it is necessary to add to the formulation of the problem a number of additional conditions. These are either ‘boundary conditions’, if further information is given at two or more values of x, or ‘initial conditions’, if all components of y are specified at a single value of x.
If the value of y(x_0) is given, then the pair of equations

y'(x) = f(x, y(x)),    y(x_0) = y_0,

is known as an ‘initial value problem’. Our main interest in this book is with exactly this problem, where the aim is to obtain approximate values of y(x) for specific values of x, usually with x > x_0, corresponding to the prediction of the future states of a differential equation system.
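Although numerical methods are introduced only from Chapter 2 onwards, a minimal sketch of the simplest of them, the Euler method, shows what obtaining approximate values of y(x) involves in practice (this is an illustration of my own; the function names are hypothetical):

```python
import numpy as np

def euler(f, x0, y0, x_end, n):
    """Approximate the solution of y' = f(x, y), y(x0) = y0,
    at x_end using n steps of the Euler method."""
    h = (x_end - x0) / n
    x, y = x0, np.asarray(y0, dtype=float)
    for _ in range(n):
        y = y + h * f(x, y)  # one Euler step: follow the tangent line
        x += h
    return y

# Scalar test problem y' = y, y(0) = 1, whose exact solution at x = 1 is e
approx = euler(lambda x, y: y, 0.0, [1.0], 1.0, 1000)
print(approx[0])  # close to e = 2.71828...
```

The computed value converges to the exact solution as the number of steps grows, which is the sense in which a numerical method ‘solves’ an initial value problem.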
Note that for an N-dimensional system, the N individual components of an initial value vector need to be given specific values. Thus, we might write

y_1(x_0) = η_1,    y_2(x_0) = η_2,    ... ,    y_N(x_0) = η_N.
When the problem is formally converted to autonomous form (100c), the value of y_{N+1}(x_0) must be identical to x_0, otherwise the requirement that y_{N+1}(x) should always equal x would not be satisfied.
For many naturally occurring phenomena, the most appropriate form in which to express a differential equation is as a high order system. For example, an equation might be of the form

y^(n)(x) = φ(x, y(x), y'(x), ... , y^(n−1)(x)),

with initial values given for y(x_0), y'(x_0), ... , y^(n−1)(x_0). Especially important in the modelling of the motion of physical systems subject to forces are equation systems of the form

y''(x) = g(y(x)),        (100d)

where the equations, though second order, do have the advantages of being autonomous and of y' not occurring amongst the arguments of g.

To write (100d) in what will become our standard first order system form, we can introduce additional components y_{N+1}, y_{N+2}, ... , y_{2N} to represent the components of y'. The differential equation system (100d) can now be written as the first order system

y'_k = y_{N+k},    y'_{N+k} = g_k(y_1, y_2, ... , y_N),    k = 1, 2, ... , N.
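This reduction to first order form can be sketched in code (an illustration under my own naming, not from the text):

```python
import numpy as np

def second_to_first(g, N):
    """Rewrite the second order autonomous system y'' = g(y), y in R^N,
    as a first order system of dimension 2N with state [y, y']."""
    def F(x, Y):
        y, v = Y[:N], Y[N:]  # positions and the added velocity components
        return np.concatenate([v, g(y)])
    return F

# Harmonic oscillator y'' = -y (N = 1) as a two-dimensional first order system
F = second_to_first(lambda y: -y, 1)
Fval = F(0.0, np.array([1.0, 0.0]))
print(Fval)  # [ 0. -1.]
```

The same idea extends to the nth order equation above: each derivative up to order n − 1 becomes a component of the enlarged first order system.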
The problems discussed in this section are selected from the enormous range of possible scientific applications. The first example problem describes the motion of a single planet about a heavy sun. By this we mean that, although the sun exerts a gravitational attraction on the planet, we regard the corresponding attraction of the planet on the sun as negligible, and that the sun will be treated as being stationary. This approximation to the physical system can be interpreted in another way: even though both bodies are in motion about their centre of mass, the motion of the planet relative to the sun can be modelled using the simplification we have described. We also make a further assumption, that the motion of the planet is confined to a plane.
Let y_1 and y_2 denote rectangular coordinates centred at the sun, specifying at time x the position of the planet. Also let y_3 and y_4 denote the components of velocity in the y_1 and y_2 directions, respectively. If M denotes the mass of the sun, γ the gravitational constant and m the mass of the planet, then the attractive force on the planet will have magnitude

γMm / (y_1^2 + y_2^2).

Resolving this force in the coordinate directions, we find that the components of acceleration of the planet, due to this attraction, are −γM y_1 (y_1^2 + y_2^2)^(−3/2) and −γM y_2 (y_1^2 + y_2^2)^(−3/2), where the negative sign denotes the inward direction of the acceleration.
We can now write the equations of motion:

y'_1 = y_3,
y'_2 = y_4,
y'_3 = −γM y_1 (y_1^2 + y_2^2)^(−3/2),
y'_4 = −γM y_2 (y_1^2 + y_2^2)^(−3/2).

By adjusting the scales of the variables, the factor γM can be removed from the formulation, and we arrive at the equations

y'_1 = y_3,        (101a)
y'_2 = y_4,        (101b)
y'_3 = −y_1 (y_1^2 + y_2^2)^(−3/2),        (101c)
y'_4 = −y_2 (y_1^2 + y_2^2)^(−3/2).        (101d)
The solutions of this system are known to be conic sections, that is, ellipses, parabolas or hyperbolas, if we ignore the possibility that the trajectory is a straight line directed either towards or away from the sun. We investigate this further after we have shown that two ‘first integrals’, or invariants, of the solution exist.
The quantities

H = (1/2)(y_3^2 + y_4^2) − (y_1^2 + y_2^2)^(−1/2),
A = y_1 y_4 − y_2 y_3

are constant.
We verify that the values of dH/dx and dA/dx are zero if y satisfies (101a)–(101d). We have

dH/dx = y_3 y'_3 + y_4 y'_4 + (y_1 y'_1 + y_2 y'_2)(y_1^2 + y_2^2)^(−3/2)
      = −(y_1 y_3 + y_2 y_4)(y_1^2 + y_2^2)^(−3/2) + (y_1 y_3 + y_2 y_4)(y_1^2 + y_2^2)^(−3/2) = 0

and

dA/dx = y'_1 y_4 + y_1 y'_4 − y'_2 y_3 − y_2 y'_3
      = y_3 y_4 − y_1 y_2 (y_1^2 + y_2^2)^(−3/2) − y_4 y_3 + y_2 y_1 (y_1^2 + y_2^2)^(−3/2) = 0.

The quantities H and A are the ‘Hamiltonian’ and ‘angular momentum’, respectively. Note that H = T + V, where T = (1/2)(y_3^2 + y_4^2) is the kinetic energy and V = −(y_1^2 + y_2^2)^(−1/2) is the potential energy.
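As a numerical sanity check (a sketch of my own, not from the text), the derivative of each invariant along the vector field of (101a)–(101d) should vanish, and a finite difference approximation confirms this at an arbitrary state:

```python
import numpy as np

def f(y):
    # Kepler vector field (101a)-(101d)
    y1, y2, y3, y4 = y
    r3 = (y1**2 + y2**2) ** 1.5
    return np.array([y3, y4, -y1 / r3, -y2 / r3])

def H(y):
    # Hamiltonian: kinetic plus potential energy
    y1, y2, y3, y4 = y
    return 0.5 * (y3**2 + y4**2) - 1.0 / np.hypot(y1, y2)

def A(y):
    # Angular momentum
    y1, y2, y3, y4 = y
    return y1 * y4 - y2 * y3

def derivative_along_flow(phi, y, h=1e-6):
    # Central difference approximation to (d/dx) phi(y(x)) = grad(phi) . f(y)
    v = f(y)
    return (phi(y + h * v) - phi(y - h * v)) / (2 * h)

y = np.array([0.7, 0.2, -0.3, 1.1])
dH = derivative_along_flow(H, y)
dA = derivative_along_flow(A, y)
print(dH, dA)  # both are zero up to rounding error
```

The vanishing of both derivatives, to rounding accuracy, at a randomly chosen state is consistent with the first integral property verified algebraically above.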
A further property of this problem is its invariance under changes of scale of the variables:

x → α^(3/2) x,    y_1 → α y_1,    y_2 → α y_2,    y_3 → α^(−1/2) y_3,    y_4 → α^(−1/2) y_4,

for any α > 0. The Hamiltonian and angular momentum get scaled to

H → α^(−1) H,    A → α^(1/2) A.
A second type of transformation is based on a two-dimensional orthogonal transformation (that is, a rotation or a reflection or a composition of these) Q, where Q^T Q = I. The time variable x is invariant, and the position and velocity variables get transformed to

[y_1, y_2]^T → Q [y_1, y_2]^T,    [y_3, y_4]^T → Q [y_3, y_4]^T.
It is easy to see that A = 0 implies that the trajectory lies entirely in a subspace defined by y_1 sin θ = y_2 cos θ for some fixed angle θ. We move on from this simple case and assume that A ≠ 0. The sign of H is of crucial importance: if H ≥ 0 then it is possible to obtain arbitrarily high values of y_1^2 + y_2^2 without y_3^2 + y_4^2 vanishing. We exclude this case for the present discussion and assume that H < 0. Scale H so that it has the value −1/2 and at the same time A takes on a positive value. This value cannot exceed 1 because we can easily verify an identity involving the derivative of r = (y_1^2 + y_2^2)^(1/2). This identity is

(r r')^2 = (y_1 y_3 + y_2 y_4)^2 = −A^2 + 2r − r^2.
Since the left-hand side cannot be negative, the quadratic function in r on the right-hand side must have real roots. This implies that A ≤ 1. Write A = (1 − e^2)^(1/2), for 0 ≤ e < 1, where we see that e is the eccentricity of an ellipse on which the orbit lies. The minimum and maximum values of r are found to be 1 − e and 1 + e, respectively. Rotate axes so that r = 1 − e when x = 0. At this point we find that y_1 = 1 − e, y_2 = y_3 = 0 and y_4 = ((1 + e)/(1 − e))^(1/2). We will use these as initial values at x = 0.
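As an illustrative experiment (a sketch of my own, not from the text), integrating (101a)–(101d) from these initial values with the classical fourth order Runge–Kutta method shows r oscillating between the two extreme values 1 − e and 1 + e:

```python
import numpy as np

def kepler(y):
    # Vector field (101a)-(101d)
    y1, y2, y3, y4 = y
    r3 = (y1**2 + y2**2) ** 1.5
    return np.array([y3, y4, -y1 / r3, -y2 / r3])

def rk4_step(f, y, h):
    # One step of the classical fourth order Runge-Kutta method
    k1 = f(y)
    k2 = f(y + h / 2 * k1)
    k3 = f(y + h / 2 * k2)
    k4 = f(y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

e = 0.5
y = np.array([1 - e, 0.0, 0.0, np.sqrt((1 + e) / (1 - e))])  # perihelion start
radii = []
for _ in range(20000):          # about three orbital periods with h = 0.001
    y = rk4_step(kepler, y, 0.001)
    radii.append(np.hypot(y[0], y[1]))
rmin, rmax = min(radii), max(radii)
print(rmin, rmax)  # close to 1 - e = 0.5 and 1 + e = 1.5
```

The computed extreme radii agree with the values 1 − e and 1 + e predicted from the roots of the quadratic above.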
Change to polar coordinates by writing y_1 = r cos θ, y_2 = r sin θ. It is found that

r^2 θ' = A,    (r r')^2 = −A^2 + 2r − r^2.