This book provides the mathematical basis for the numerical investigation of equations from physics, the life sciences and engineering. Tools for analysis and algorithms are confronted with a large set of relevant examples that expose the difficulties and limitations of the most naïve approaches. These examples not only provide an opportunity to put mathematical statements into practice; modeling issues are also addressed in detail, from the mathematical perspective.
Page count: 599
Year of publication: 2016
Cover
Title
Copyright
Preface
1 Ordinary Differential Equations
1.1. Introduction to the theory of ordinary differential equations
1.2. Numerical simulation of ordinary differential equations, Euler schemes, notions of convergence, consistence and stability
1.3. Hamiltonian problems
2 Numerical Simulation of Stationary Partial Differential Equations: Elliptic Problems
2.1. Introduction
2.2. Finite difference approximations to elliptic equations
2.3. Finite volume approximation of elliptic equations
2.4. Finite element approximations of elliptic equations
2.5. Numerical comparison of FD, FV and FE methods
2.6. Spectral methods
2.7. Poisson-Boltzmann equation; minimization of a convex function, gradient descent algorithm
2.8. Neumann conditions: the optimization perspective
2.9. Charge distribution on a cord
2.10. Stokes problem
3 Numerical Simulations of Partial Differential Equations: Time-dependent Problems
3.1. Diffusion equations
3.2. From transport equations towards conservation laws
3.3. Wave equation
3.4. Nonlinear problems: conservation laws
Appendices
Appendix 1: Solving Linear Systems
A1.1. Condition number of a matrix
A1.2. Spectral radius
A1.3. Conjugate gradient
Appendix 2: Numerical Integration
Appendix 3: A Peetre–Tartar Equivalence Theorem
Appendix 4: Schauder’s Theorem
Appendix 5: Fundamental Solutions of the Laplacian in Dimension 1 and 2
A5.1. Dimension 1
A5.2. Dimension 2
A5.3. Higher dimensions
Bibliography
Index
End User License Agreement
Series Editor
Jacques Blum
Thierry Goudon
First published 2016 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
www.iste.co.uk
John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com
© ISTE Ltd 2016
The rights of Thierry Goudon to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2016949851
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-84821-988-5
Early he rose, far into the night he would wait,
To count, to cast up, and to calculate,
Casting up, counting, calculating still,
For new mistakes for ever met his view.
Jean de La Fontaine
(The Money-Hoarder and Monkey, Book XII, Fable 3).
This book was inspired by a collection of courses of varied natures and at different levels, all of which focused on different aspects of scientific computing. Therefore, it owes much to the motivation of students from the universities of Nice and Lille, and the Ecole Normale Supérieure. The writing style adopted in this book is based on my experience as a longtime member of the jury for the agrégation evaluations, particularly in the modeling examination. In fact, a substantial part of the examples on the implementation of numerical techniques was drawn directly from the texts made public by the evaluation’s jury (see http://agreg.org), and a part of this course was the foundation for a series of lectures given to students preparing for the Moroccan agrégation evaluations. However, some themes explored in this book go well beyond the scope of the evaluations. They include, for example, the rather advanced development of Hamiltonian problems, the fine-grained distinction between the finite-difference method and the finite-volume method, and the discussion of nonlinear hyperbolic problems. The latter topic partially follows a course given at the IFCAM (Indo-French Centre for Applied Mathematics) in Bangalore. A relatively sophisticated set of tools is developed on this topic: this heightened level can be explained by the fact that the questions it explores are of great practical importance. It provides a relevant introduction for those who might want to learn more, and it prepares them for reading more advanced and specialized works.
Numerical analysis and scientific computing courses are often considered a little scary. This fear is largely due to the fact that the subject can be difficult on several counts:
– Problems of interest are very strongly motivated by their potential applications (for example, in physics, biology, engineering, finance). Therefore, it is impossible to restrict the discussion strictly to the field of mathematics, and the intuitions motivating the math are strongly guided by the specificities of its applications. As a result, the subject requires a certain degree of scientific familiarity that goes beyond mere technical dexterity.
– Numerical analysis calls on a very broad technical background. This is not a subject that can be addressed with a small, previously delimited set of tools. Rather, we must draw from different areas of mathematics1, sometimes in rather unexpected ways, for example by using linear algebra to analyze the behavior of numerical approximations to differential equations. However, this somewhat roundabout way of finding answers is what makes the subject so exciting.
– Finally, it is often difficult to produce categorical statements and conclusions. For example, although it can be shown that several numerical schemes produce an approximate solution that “converges” towards the solution of the problem of interest (when the numerical parameters are sufficiently small), in practice, some methods are more suitable than others, according to qualitative criteria that are not always easy to formalize. Similarly, the choice of method may depend on the criteria that are considered most important for the target application context. Many questions do not have definite, clear-cut answers. The answer to the question “how should we approach this problem?” is often “it depends”: numerically simulating a physical phenomenon through calculations performed by a computer is a real, delicate and nuanced art. This art must be based on strong technical mastery of mathematical tools and deep understanding of the underlying physical phenomena they study.
The aim of this book is to fully address these challenges and, by design, to “mix everything up”. The book therefore includes many classical results from analysis and algebra, details of certain algorithms for solving equations, examples from science and technology, and numerical illustrations. Some “theoretical” tools are introduced through the study of an application example, even if it means repurposing them later in an entirely different field. Nevertheless, the book does follow a certain structure, organized into three chapters focused on the numerical solution of (ordinary and partial) differential equations. The first chapter addresses the solution of ordinary differential equations, with a very broad overview of the essential theoretical basis (the Cauchy–Lipschitz theorem, qualitative analysis, linear problems). This chapter details the analysis of classical schemes (explicit and implicit Euler methods) and distinguishes various concepts of stability, which are more or less relevant depending on the context. These concepts are illustrated by a series of examples, motivated mostly by the description of biological systems. A large section, with fairly substantial technical content, is devoted to the particular case of Hamiltonian systems. The second chapter deals with the numerical solution of elliptic boundary value problems, once again with a detailed exploration of the basic functional analysis tools. Although the discussion is mostly restricted to the one-dimensional framework and to the model problem on ]0, 1[ with homogeneous Dirichlet conditions, different families of discretizations are distinguished: finite differences, finite volumes, finite elements and spectral methods. Techniques related to optimization are also presented through the simulation of more involved problems, such as the Poisson–Boltzmann equation, the optimization of a charge distribution and the Stokes problem. The last chapter deals with time-dependent partial differential equations, again addressing only the one-dimensional case. Questions of stability and consistency are addressed, first for the heat equation and then for hyperbolic problems. The transport and wave equations can be considered “classics”. In contrast, the discussion of nonlinear equations, scalar equations or systems, with the simulation of the Euler equations of gas dynamics as a final target, leads to more advanced topics. The book does not contain exercises. However, readers are invited to carry out the simulations illustrated in the book on their own. This work of numerical experimentation will allow readers to develop an intuition for the mathematical phenomena and notions presented, by playing with the numerical and modeling parameters, and will lead to a complete understanding of the subject.
My colleagues and collaborators have had an important influence on building my personal mathematical outlook; they helped me discover points of view that I was not familiar with, and they have made me appreciate notions that I admit had remained beyond my understanding during my initial schooling and even during the early stages of my career. This is the place to thank them for their patience with me and for everything they have taught me. In particular, I am deeply indebted to Frédéric Poupaud and Michel Rascle, as well as Stella Krell and Magali Ribot in Nice, Caterina Calgaro and Emmanuel Creusé in Lille, Virginie Bonnaillie-Noël, Frédéric Coquel, Benoît Desjardins, Frédéric Lagoutière, and Pauline Lafitte in Paris. I also thank my colleagues on the agrégation jury, especially Florence Bachman, Guillaume Dujardin, Denis Favennec, Hervé Le Dret, Pascal Noble and Gregory Vial. A large number of developments were directly inspired by our passionate conversations. Finally, Claire Scheid, Franck Boyer and Sebastian Minjeaud were kind and patient enough to proofread some of the passages of the manuscript; their advice and suggestions have led to many improvements.
Thierry GOUDON
August 2016
1 The following quote is quite telling: […] in France, there was even some snobbishness surrounding pure mathematics: when a gifted student was identified, he was told: “Do your PhD in pure mathematics”. On the other hand, average students were advised to focus on applied mathematics, under the rationale that that was “all they were capable of doing”! But the opposite is true: it is impossible to do applied mathematics without first knowing how to do pure mathematics properly. J.A. Dieudonné, [SCH 90, p. 104].
The most important result from the theory of ordinary differential equations ensures the existence and uniqueness of solutions to equations of the form
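In a plausible reconstruction based on the solution properties listed below (the initial condition y(tInit) = yInit and the relation y′(t) = f(t, y(t))), the initial value problem [1.1] reads:
\[
y'(t) = f\bigl(t, y(t)\bigr), \qquad y(t_{\mathrm{Init}}) = y_{\mathrm{Init}}. \tag{1.1}
\]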
where
Here I is an open interval of ℝ containing tInit, and Ω is an open set of ℝD containing yInit. The variable t ∈ I is called the time variable, and the variable y is referred to as the state variable. When the function f depends only on the state variable, equation [1.1] is said to be autonomous. We say that a function t ↦ y(t) is a solution of [1.1] if
– y is defined on an interval J that contains tInit and is included in I;
– y(tInit) = yInit and for all t ∈ J, y(t) ∈ Ω;
– y is differentiable on J and for all t ∈ J, y′(t) = f(t, y(t)).
THEOREM 1.1 (Picard–Lindelöf1).– Assume that f is a continuous function on I × Ω and that for every (t⋆, y⋆) ∈ I × Ω, there exist ρ⋆ > 0 and L⋆ > 0, such that B(y⋆, ρ⋆) ⊂ Ω, [t⋆ − ρ⋆, t⋆ + ρ⋆] ⊂ I, and if y, z ∈ B(y⋆, ρ⋆) and |t − t⋆|≤ ρ⋆, then we have
Then for every (t⋆, y⋆) ∈ I × Ω, there exist r⋆ > 0 and h⋆ > 0, such that if |yInit − y⋆| ≤ r⋆ and |tInit − t⋆| ≤ r⋆, the problem [1.1] has a solution y :]tInit − h⋆, tInit + h⋆[→ Ω, which is a class C1 function.
This solution is unique in the sense that if z is a function of class C1 defined on ]tInit − h⋆, tInit + h⋆[ satisfying [1.1], then y(t) = z(t) for all t ∈]tInit − h⋆, tInit + h⋆[.
Finally, if f is a function of class Ck on I × Ω, then y is a function of class Ck+1.
This statement calls for a number of comments, which we present in detail here.
1) Theorem 1.1 assumes that the function f satisfies a certain regularity property with respect to the state variable; this property is stronger than mere continuity: f must be Lipschitz continuous in the state variable, at least locally.2 In particular, note that if f is a function of class C1 on I × Ω, then it satisfies the assumptions of theorem 1.1. This regularity hypothesis cannot be completely ruled out. However, we will see later that it can be relaxed slightly.
2) Theorem 1.1 only defines the solution in a neighborhood of the initial time tInit. Once the question of existence–uniqueness is settled, it can be worthwhile to take interest in the solution’s lifespan: is the solution only defined on a bounded interval, or does it exist for all times? We will see that the answer depends on estimates that can be established for the solution of [1.1].
The “classic proof” for the Picard–Lindelöf theorem is based on a fixed point argument that requires the following statement.
THEOREM 1.2 (Banach theorem).– Let E be a vector space with norm ∥·∥, for which it is assumed that E is complete. Let be a strict contraction mapping, that is to say, such that there exists 0 < k < 1, which satisfies the following inequality for all x, y ∈ E:
Then, has a unique fixed point in E.
PROOF.– Let us begin by establishing uniqueness, assuming existence: if x and y satisfy and , then , which implies that x = y because 0 < k < 1. In order to show existence, let us examine the sequence defined iteratively by , starting at any x0 ∈ E. We have
Since 0 < k < 1, the series converges. It follows that the sequence (xn)n∈ℕ is Cauchy in the complete space E. So, it has a limit x and by continuity of the mapping , we obtain .
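The estimate behind this convergence can be reconstructed along standard lines; writing Φ for the contraction (the symbol Φ is our choice, not the book’s notation), the iterates x_{n+1} = Φ(x_n) satisfy
\[
\|x_{n+1} - x_n\| = \|\Phi(x_n) - \Phi(x_{n-1})\| \le k\,\|x_n - x_{n-1}\| \le \cdots \le k^{n}\,\|x_1 - x_0\|,
\]
so the series \(\sum_{n} \|x_{n+1} - x_n\|\) is dominated by the geometric series \(\|x_1 - x_0\| \sum_n k^n\), which is the convergence invoked in the proof.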
PROOF OF THEOREM 1.1.– The proof of theorem 1.1 is based on a functional analysis argument: the subtle trick is to work with a vector space whose “points” are functions. In this case, it is important to distinguish between:
– the function t ↦ y(t), which is a point in the functional space (here C0([tInit, T[; ℝD), for example);
– and its value y(t) for a fixed t, which is a point in the state space ℝD.
We will justify theorem 1.1 in the case where f is globally Lipschitz with respect to the state variable: we assume that f is defined on ℝ × ℝD and that there exists an L > 0, such that for all x, y ∈ ℝD and any t ∈ ℝ, we have
This technical limitation is significant, but it enables us to focus only on the key elements of the proof. A proof of the general case can be found in [BEN 10] or [ARN 88, Chapter 4], and later we present a somewhat different approach, which starts from the perspective of numerical approximations. The starting point of the proof is to integrate [1.1] to obtain
such that the solution y of [1.1] is interpreted as a fixed point of the mapping
We will see that this point of view, which transforms a differential equation into an integral equation (the formulation [1.3]), is also useful for finding numerical approximations to the solutions of [1.1]. It is now necessary to construct a functional space and a norm in order for to be a contraction. Thus, the sequence defined by y0 given in C0(ℝ; ℝD) and yn+1 = (yn) will converge to a fixed point of , which will be the solution to [1.1] (Picard method). We focus only on times t ≥ tInit. We introduce the auxiliary function
and we set
with M > 0 that remains to be defined. We denote the subspace of functions z ∈ C0([tInit, ∞[; ℝD), such that ∥z∥ < ∞ as . With the norm ∥·∥, this space is complete. If , with z ∈ , we can write
and deduce that . Indeed, this function t ↦ y(t) is continuous and satisfies
for every t ≥ tInit. Finally, using integration by parts, we calculate
Thus, we have ∥y∥ < ∞. Similarly, we have
By choosing M > L, the mapping appears as a contraction in the complete space . The Banach theorem ensures the existence and uniqueness of a fixed point, and the relation then proves that t ↦ y(t) is continuous, and even of class C1 because f is continuous. We can easily adapt the proof in order to extend the resulting solution to times t ≤ tInit. Interestingly, by assuming that f is globally Lipschitz continuous with respect to the state variable (see [1.2]), it has been possible to show directly that the solution is defined for all times. This fact is important and deserves to be stated on its own.
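For completeness, here is one standard set of choices consistent with the argument above; the exact weight used in the text is not reproduced here, so the norm below is an assumption on our part, guided by the requirement M > L:
\[
|f(t, x) - f(t, y)| \le L\,|x - y| \quad \text{([1.2])}, \qquad
\Phi(z)(t) = y_{\mathrm{Init}} + \int_{t_{\mathrm{Init}}}^{t} f\bigl(s, z(s)\bigr)\,\mathrm{d}s \quad \text{([1.3])},
\]
\[
\|z\| = \sup_{t \ge t_{\mathrm{Init}}} e^{-M(t - t_{\mathrm{Init}})}\,|z(t)|,
\qquad
\|\Phi(z_1) - \Phi(z_2)\| \le \frac{L}{M}\,\|z_1 - z_2\|,
\]
so that any M > L makes Φ a strict contraction on the corresponding complete space, as claimed.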
THEOREM 1.3 (Picard–Lindelöf theorem, assuming global Lipschitz continuity).– Let f be a continuous function defined on ℝ × ℝD, which satisfies [1.2]. Then, for every yInit ∈ ℝD, the equation [1.1] has a unique solution y of class C1 defined on ℝ.
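To make the fixed-point construction concrete, here is a minimal numerical sketch (our own illustration, not taken from the book) of the Picard iteration y_{n+1} = Φ(y_n) on a discrete time grid, using the globally Lipschitz right-hand side f(t, y) = −y with yInit = 1, whose exact solution is e^{−t}:

# Minimal sketch of the Picard iteration y_{n+1}(t) = y_init + int_{t_init}^t f(s, y_n(s)) ds.
# The right-hand side f(t, y) = -y and all names below are illustrative choices, not the book's.
import numpy as np

def picard_iteration(f, t_init, y_init, t_final, n_points=200, n_iter=20):
    """Approximate the fixed point of Phi(y)(t) = y_init + integral of f(s, y(s)) from t_init to t."""
    t = np.linspace(t_init, t_final, n_points)
    y = np.full_like(t, y_init, dtype=float)  # y_0: the constant initial guess
    for _ in range(n_iter):
        integrand = f(t, y)
        # cumulative trapezoidal rule: integral from t_init up to each grid point
        increments = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)
        integral = np.concatenate(([0.0], np.cumsum(increments)))
        y = y_init + integral
    return t, y

if __name__ == "__main__":
    t, y = picard_iteration(lambda t, y: -y, t_init=0.0, y_init=1.0, t_final=2.0)
    print("max error vs exp(-t):", np.abs(y - np.exp(-t)).max())

Each pass recomputes the integral in [1.3] with the trapezoidal rule; the iterates converge rapidly to the discrete fixed point, mirroring the contraction argument above.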
Let us now return to the comments for theorem 1.1 on the regularity of the function f. First, in problem [1.1], if we interpret the equation as [1.3], the assumption that f is continuous in the time variable might be weakened; it suffices to assume integrability. For example, the proof for theorem 1.3 can be slightly modified in order to justify the existence and uniqueness of a fixed point of the mapping in a space of continuous functions on [tInit, ∞[ by assuming that there exists t ↦ L(t), a function locally integrable on [tInit, ∞[, such that for every t ≥ tInit, and all x, y ∈ ℝD, we have
This hypothesis generalizes [1.2] (which amounts to the case of L(t) = L). We therefore obtain a function y as a solution to [1.3], which is continuous but not necessarily C1, and the equation [1.1] is only satisfied in a generalized sense. Next, assume that the function f defined on ℝ × ℝD is only continuous and bounded on ℝ × ℝD (there exists C > 0, such that for every t, x, we have |f(t, x)| ≤ C). We introduce a regularizing sequence by defining, for ∈ ℕ \ {0},
where is a smooth function with compact support in ℝD, such that
We define
(For more details about these convolution regularization techniques, the reader may refer to [GOU 11, Section 4.4]). Thus, for every fixed , f is continuous and globally Lipschitz in the state variable, because by writing
we obtain
where we use Fubini’s theorem (for functions with positive values) and then a variable change z′ = (x − z − θ(x − y)), dz′ = D dz. By theorem 1.1 for every ∈ ℕ \ {0}, there exists a function t ↦ y(t) of class C1, which is defined on ℝ and is a solution to
However, f is uniformly bounded with respect to ; in particular, we have |f(t, x)| ≤ C. From this, we deduce that for every 0 < T < ∞, the set {t ∈ [−T, T] ↦ y(t), ∈ ℕ \ {0}} is equibounded and equicontinuous in C0([−T, T]). The Arzela–Ascoli theorem (see [GOU 11], theorem 7.49 and example 7.50) allows us to extract a subsequence (yk)k∈ℕ that converges uniformly on [−T, T], whereas limk→∞k = +∞. We can therefore see that
Since yk(s) tends to y(s) and f is continuous, the integrand tends to 0 when k → ∞, and, moreover, it is still dominated by 2Cρ(z) ∈ L1(ℝD). Lebesgue’s theorem implies that limk→∞fk(s, yk(s)) = f(s, y(s)). Letting k tend to +∞ in the integral relation
for every t ∈ [−T, T], with tInit < T < ∞, by the Lebesgue theorem, we obtain
and finally we can conclude that y is a solution of [1.1]. The continuity of f is therefore sufficient to show the existence of a solution to the equation [1.1]. This is the Cauchy–Peano theorem. However, the solution is not in general unique. A simple counterexample, in dimension D = 1, is given by the equation
It is clear that t ↦ y(t) = 0 is a solution on ℝ. But it is not the only one because t ↦ z(t) = t²/4 is also a solution. The assumptions of theorem 1.1 are not satisfied because is continuous but not Lipschitz around y = 0 (the derivative is which tends to +∞ when y → 0).
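A classic equation consistent with the two solutions mentioned above (y ≡ 0 and z(t) = t²/4) is, in our reconstruction,
\[
y'(t) = \sqrt{|y(t)|}, \qquad y(0) = 0,
\]
which admits both \(y(t) = 0\) and \(z(t) = t^2/4\) for \(t \ge 0\) (extended by 0 for \(t \le 0\)): indeed \(z'(t) = t/2 = \sqrt{z(t)}\) for \(t \ge 0\). Here \(f(y) = \sqrt{|y|}\) is continuous, but its derivative \(1/(2\sqrt{y})\) blows up as \(y \to 0^{+}\), so the local Lipschitz assumption of theorem 1.1 fails at y = 0.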
Thus, we note that on its own, the continuity of f in the state variable is not enough to ensure the truth of an existence–uniqueness statement as strong as theorem 1.1, as shown by the example . However, we can slightly relax the assumption of regularity with respect to the state variable stated in theorem 1.1 and justify the existence–uniqueness of solutions to the differential problem. Sometimes, more sophisticated statements like these are necessary. We will study one such example in detail. The following statement is crucial in the analysis of the equations for incompressible fluid mechanics [CHE 95].
DEFINITION 1.1.– Let μ : [0, ∞[→ [0, ∞[ be a continuous, strictly increasing function, such that μ(0) = 0. We denote by Cμ(Ω) the set of functions u that are continuous on Ω, and for which there exists a C > 0, such that
for all x, y ∈ Ω. Cμ(Ω) is a Banach space for the norm
Therefore, it is convenient to consider the right hand side of equation [1.1] as “a function in the state variable, with the time variable as a parameter”: for (almost) every t ∈ I, we assume that y ↦ f(t, y) is a function in Cμ(Ω; ℝD). We say that f ∈ L1(I; Cμ(Ω)) when there exists an L ∈ L1(I) with strictly positive values, such that for all x, y ∈ Ω
THEOREM 1.4 (Osgood theorem).– Let μ : [0, ∞[→ [0, ∞[ be a continuous, strictly increasing function, such that μ(0) = 0. Assume further that
Let I be an interval of ℝ that contains tInit and f ∈ L1(I; Cμ(Ω)). Then, for every yInit ∈ Ω, there is an interval containing tInit, such that the equation [1.1] has a unique solution t ↦ y(t) defined on . Equation [1.1] is understood in the sense of [1.3] being satisfied. (We say that y is a mild solution of [1.1]).
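A standard formulation of the hypothesis on f and of the integral condition [1.4], consistent with definition 1.1 and with the way [1.4] is used in the proof below (our reconstruction, not a quotation), reads:
\[
|f(t, x) - f(t, y)| \le L(t)\,\mu(|x - y|) \quad \text{for a.e. } t \in I \text{ and all } x, y \in \Omega,
\qquad
\int_{0}^{1} \frac{\mathrm{d}r}{\mu(r)} = +\infty. \tag{1.4}
\]
For instance, μ(r) = r recovers the Lipschitz framework, while μ(r) = r(1 + |ln r|), which grows faster than r near 0, also satisfies [1.4].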
PROOF.– In fact, this statement raises two questions of a somewhat different nature. We can easily adapt the proof of the Cauchy–Peano theorem to demonstrate the existence of a continuous solution to [1.3] under the simple assumption that f ∈ L1(I; C(Ω)). However, this does not yield uniqueness, which we have seen to fail under the mere continuity of f. Extending uniqueness to functions satisfying [1.4] is a result due to [OSG 98]. Another problem is to justify the convergence of the sequence of Picard iterations: given y0 ∈ C0([tInit, ∞[), then
converges to one (and therefore the) solution of [1.3]. This question was itself studied in [WIN 46].
Let us first prove uniqueness. Suppose that there are two solutions, y1 and y2, to [1.3], and define z = |y2 − y1|. Therefore, for t ≥ tInit, we obtain
We then focus on functions α : [tInit, ∞[→ [0, ∞], such that
Let us first assume that α0 > 0. We define
This function is continuous, non-negative, and satisfies Ω(tInit) = 0. In addition, since Ω is defined as the integral of a positive function, it is non-decreasing and therefore differentiable almost everywhere, with
since L(t) ≥ 0 and μ is non-decreasing. Because α0 + Ω(t) ≥ α0 > 0 and μ is strictly increasing, we have μ(α0 + Ω(t)) > μ(0) = 0, making the following relation true:
Recalling that Ω(tInit) = 0 and with the change of variables σ = α0 + Ω(s), dσ = Ω′(s) ds, it becomes
In particular, the function t ↦ z(t) = |y2(t) − y1(t)| satisfies this relation for any α0 > 0. Suppose that z is not equal to zero. In this case, we can find t⋆ > tInit, η > 0, such that z(t) > 0 on an interval of the form [t⋆ − η, t⋆] ⊂ [tInit, t⋆]. Accordingly, the function Ω, which is associated with α = z, is strictly positive at t⋆. We obtain
for every 0 < α0 < Ω(t⋆), recalling that L is integrable. Letting α0 tend to 0, we are led to a contradiction with [1.5].
We then examine the behavior of the sequence (yk)k∈ℕ defined by [1.6]. We first check that, when T is small enough, this sequence is well defined and remains bounded in C([tInit, T]): we set M > 0 and show that for every k ∈ ℕ and t ∈ [tInit, T],
Indeed, we have
If |yk(t) − yInit|≤ M, for a given M > 0, since μ is increasing, it follows that
is satisfied for all tInit ≤ t ≤ T. Let us now introduce T > 0, such that
which is permissible because . Given this preliminary result, we note that for all integers k, p and t ∈ [tInit, T],
Since μ is monotone, we can say that zk(t) = supp |yk+p(t) − yk(t)| satisfies
Applying Fatou’s lemma [GOU 11, lemma 3.27] to the sequence of positive functions μ(2M) − μ(zk(t)), we deduce that Z(t) = lim supk→∞ zk(t) satisfies
The same reasoning that made it possible to demonstrate the uniqueness of the solution to [1.3] applies here and allows us to conclude that Z(t) = 0 on [tInit, T]. Therefore, (yk)k∈ℕ is a Cauchy sequence in the complete space C([tInit, T]). This sequence has a limit y in the sense of uniform convergence on [tInit, T]. Letting k → ∞ in [1.6], we indeed show that y satisfies [1.3].
It is important to recall that theorem 1.1 only ensures the existence of a solution to [1.1] defined in a neighborhood of the initial time tInit. This is an inherently local result. Additional information is required to prove that the solution is globally defined. The case of a function f that is globally Lipschitz continuous with respect to the state variable is one of those special situations in which the solution of [1.1] is defined for all times. However, in dimension D = 1, the example of
shows that the solution can blow up in finite time, even if the right-hand side f is a function of class C1 on ℝ × ℝD. In this case, we in fact have
which is therefore not defined beyond3 T⋆ = tInit + 1/yInit.
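The example and the explicit formula can be reconstructed from the blow-up time T⋆ = tInit + 1/yInit given above; a consistent choice (ours) is
\[
y'(t) = y(t)^2, \qquad y(t_{\mathrm{Init}}) = y_{\mathrm{Init}} > 0,
\qquad
y(t) = \frac{y_{\mathrm{Init}}}{1 - y_{\mathrm{Init}}\,(t - t_{\mathrm{Init}})},
\]
and indeed y(t) → +∞ as t → T⋆ = tInit + 1/yInit.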
Defining a solution for [1.1], therefore, requires determining both an interval containing tInit and a function y defined on (with values in Ω). Given two solutions and for [1.1], we can say that extends if
We say that is a maximal solution if the solution can only be extended by itself.
PROPOSITION 1.1.– Let f satisfy the assumptions of theorem 1.1. For every tInit ∈ I and yInit ∈ Ω, there is a unique maximal solution for [1.1] and is an open interval of ℝ.
PROOF.– The set of solutions of [1.1] contains at least ({tInit}, yInit). Let us consider two solutions and . In particular, and are two intervals containing tInit. Let . Then, is also an interval that at least contains tInit, and we have . Let . By theorem 1.1, is an open set of ℝ containing tInit (there is a solution of the differential equation with initial values (t, y(t) = ỹ(t)), which is uniquely defined on an open interval ]t − η, t + η[, with η depending on (t, y(t)). Since y and ỹ are themselves solutions to this problem, we have y(s) = ỹ(s) for every s ∈]t− η, t+η[…).
However, is also closed in by the continuity of the functions ỹ and y: if (tn)n∈ℕ is a sequence of elements of , which tends to , then ỹ(t) = y(t) and (we can also note that is the inverse image of the closed set {0} by the continuous function y − ỹ). It follows that = , by the characterization of connected subsets of ℝ. We define as the union of all the intervals on which we can define a solution of [1.1]. For , there exists a unique solution s ↦ y(s) of [1.1] defined in a neighborhood of s = t; we thus define ỹ(t) = y(t). By construction, is the maximal solution of [1.1].
We introduce the set
As a result of theorem 1.1, is non-empty. We define
Let T, T′ ∈ , T < T′. By uniqueness of the solution of the differential equation associated with the point (tInit, yInit), it follows that ([tInit, tInit + T′], yT′) extends ([tInit, tInit + T], yT); in particular, is an interval. Let tInit < t < tInit + T⋆. Then, there exists a , such that t < T + tInit < T⋆ + tInit and yT is the unique solution of [1.1] defined on [tInit, tInit + T]. Therefore, we conclude that there exists a unique solution to [1.1] associated with (tInit, yInit) and defined on the interval [tInit, tInit + T⋆[, which is an open set of [tInit, +∞[⋂I. Finally, if T⋆ is a finite element of , then (T⋆, ỹ(T⋆)) ∈ I × Ω and theorem 1.1 extends the solution beyond T⋆. We conclude that is an open interval.
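The set and the supremum used in this construction can be written, in our own notation (a reconstruction consistent with how they are used above), as
\[
\mathcal{E} = \bigl\{\, T > 0 \;:\; \text{[1.1] has a solution } y_T \text{ defined on } [t_{\mathrm{Init}}, t_{\mathrm{Init}} + T] \,\bigr\},
\qquad
T^{\star} = \sup \mathcal{E}.
\]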
NOTE.– An important special case that allows us to estimate solutions is when there exists a “stationary point”. Indeed, if f(t, x0) = 0 for every t ∈ I, then the constant function t ↦ x0 is a solution to [1.1], with yInit = x0 being the initial data (for any initial time tInit). Uniqueness implies that no other solution of [1.1] can pass through x0.
Let be the maximal solution of [1.1]. We can write
If T⋆ is finite, we call the associated set of limit points of y(t) as t → T⋆ the “right end”, denoted ω⋆.
We then compare it with the set I on which the data of the problem [1.1] are defined.
THEOREM 1.5.– We have
as well as analogous definitions and results for the “left end”.
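The conclusion of theorem 1.5 can be read, consistently with the way it is used in the proof of corollary 1.1 below (“theorem 1.5 ensures that (T⋆, b) ∉ I × Ω”), as follows; this is our reconstruction rather than a quotation:
\[
T^{\star} < +\infty \ \Longrightarrow\ \bigl(\{T^{\star}\} \times \omega^{\star}\bigr) \cap \bigl(I \times \Omega\bigr) = \emptyset,
\]
that is, any limit point b ∈ ω⋆ satisfies (T⋆, b) ∉ I × Ω.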
PROOF.– Suppose that T⋆ < ∞ and ω⋆ ≠ Ø, that is, b ∈ ω⋆. By definition, we always have . More specifically, if we denote the maximal solution of [1.1] as (]T⋆, T⋆[, y), there exists a sequence (tn)n∈ℕ, such that
Suppose that T⋆ ∈ I and ω⋆ ∈ Ω. We can therefore find τ, r > 0, such that
– ;
– for every and r > 2Mτ;
– f is locally Lipschitz continuous in the state variable on .
We define . Then, given that , we can see that . By construction, we have . The local existence theorem ensures the existence of a function t ↦ ψ(t), defined in a neighborhood of s, which is a solution of ψ′(t) = f(t, ψ(t)), with ψ(s) = z. Moreover, the neighborhood can be chosen in such a way that for every t ∈ .
Furthermore, there exists an integer N, such that |tN − T⋆| ≤ τ/3 and . We can find a function t ↦ ψ(t) that satisfies ψ′(t) = f(t, ψ(t)) and ψ(tN) = y(tN). By writing
we can see that if |t − tN |≤ 2τ/3,
That is, . It follows that ψ is defined on an interval [tN − 2τ/3, tN + 2τ/3].
We have two solutions to the differential equation x′(t) = f(t, x(t)) which pass through y(tN) at the instant tN: the maximal solution y and the solution ψ that was just constructed. By uniqueness, they must be one and the same. However, T⋆ = (T⋆ − tN) + tN ≤ tN + τ/3 < tN + 2τ/3, so ψ would extend the maximal solution beyond T⋆, which is a contradiction.
The statement of theorem 1.5 can be a bit abstruse, but it is possible to deduce some more practical formulations.
COROLLARY 1.1.– If T⋆ < ∞, then
– either y′(t) = f(t, y(t)) is unbounded in a neighborhood of T⋆,
– or else t ↦ y(t) has a limit b ∈ ℝD when t → T⋆, and (T⋆, b) ∉ I × Ω.
PROOF.– Suppose that T⋆ is finite and that t ↦ y′(t) = f(t, y(t)) is bounded by M > 0. It follows that |y(t) − y(s)| ≤ M|t − s|. We can therefore infer that for every sequence (tn)n∈ℕ that converges to T⋆, (y(tn))n∈ℕ is a Cauchy sequence and thus has a limit. Moreover, this limit does not depend on the sequence considered. In other words, there exists a b ∈ ℝD, such that limt→T⋆ y(t) = b. Theorem 1.5 ensures that (T⋆, b) ∉ I × Ω. This result is subtle: there is no guarantee that y cannot be extended by continuity at T⋆, nor, by the same token, that f(t, y) cannot be extended by continuity at (T⋆, b).
COROLLARY 1.2 (blow-up criterion).– If f is defined on ℝ × ℝD and one of the ends T⋆ or T⋆, denoted as , is finite, then |y(t)| → +∞ as t tends to that end.
PROOF.– Let (tn)n∈ℕ be a sequence that converges to T⋆ < ∞, such that (y(tn))n∈ℕ remains bounded. We can thus extract a subsequence, such that limk→∞ y(tnk) = b ∈ ℝD. By theorem 1.5, (T⋆, b) is not in the domain of f. However, since the domain is the entire space ℝ × ℝD, we have a contradiction.
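The blow-up criterion can be visualized numerically; the following short sketch (our own illustration, not from the book) integrates y′ = y², y(0) = 1, whose exact solution 1/(1 − t) explodes at t = 1, with the explicit Euler scheme and stops once the iterate becomes very large:

# Numerical illustration of finite-time blow-up for y' = y^2, y(0) = 1 (exact solution 1/(1 - t)).
# Explicit Euler steps are taken until the iterate exceeds a large threshold.
def euler_until_blowup(f, t0, y0, dt=1e-4, threshold=1e8, t_max=2.0):
    """Explicit Euler: y_{n+1} = y_n + dt * f(t_n, y_n); stop at the threshold or at t_max."""
    t, y = t0, y0
    while abs(y) < threshold and t < t_max:
        y = y + dt * f(t, y)
        t = t + dt
    return t, y

if __name__ == "__main__":
    t_stop, y_stop = euler_until_blowup(lambda t, y: y**2, 0.0, 1.0)
    # The numerical solution leaves any bounded region near t = 1, the blow-up time of the exact solution.
    print(f"threshold reached near t = {t_stop:.3f}, y = {y_stop:.3e}")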
It is useful to have simple and representative examples in mind for the variety of situations that can occur (in one dimension):
– The function f(t, y) = y²