Numerical Analysis with Applications in Mechanics and Engineering

Petre Teodorescu
Description

NUMERICAL ANALYSIS WITH APPLICATIONS IN MECHANICS AND ENGINEERING

A much-needed guide on how to use numerical methods to solve practical engineering problems

Bridging the gap between mathematics and engineering, Numerical Analysis with Applications in Mechanics and Engineering arms readers with powerful tools for solving real-world problems in mechanics, physics, and civil and mechanical engineering. Unlike most books on numerical analysis, this outstanding work links theory and application, explains the mathematics in simple engineering terms, and clearly demonstrates how to use numerical methods to obtain solutions and interpret results.

Each chapter is devoted to a unique analytical methodology, including a detailed theoretical presentation and emphasis on practical computation. Ample numerical examples and applications round out the discussion, illustrating how to work out specific problems of mechanics, physics, or engineering. Readers will learn the core purpose of each technique, develop hands-on problem-solving skills, and get a complete picture of the studied phenomenon. Coverage includes:

  • How to deal with errors in numerical analysis
  • Approaches for solving problems in linear and nonlinear systems
  • Methods of interpolation and approximation of functions
  • Formulas and calculations for numerical differentiation and integration
  • Integration of ordinary and partial differential equations
  • Optimization methods and solutions for programming problems

Numerical Analysis with Applications in Mechanics and Engineering is a one-of-a-kind guide for engineers using mathematical models and methods, as well as for physicists and mathematicians interested in engineering problems.

Pages: 580

Publication year: 2013




Table of Contents

series

Title Page

Copyright

Preface

Chapter 1: Errors in Numerical Analysis

1.1 Input Data Errors

1.2 Approximation Errors

1.3 Round-Off Errors

1.4 Propagation of Errors

1.5 Applications

Further Reading

Chapter 2: Solution of Equations

2.1 The Bipartition (Bisection) Method

2.2 The Chord (Secant) Method

2.3 The Tangent Method (Newton)

2.4 The Contraction Method

2.5 The Newton–Kantorovich Method

2.6 Numerical Examples

2.7 Applications

Further Reading

Chapter 3: Solution of Algebraic Equations

3.1 Determination of Limits of the Roots of Polynomials

3.2 Separation of Roots

3.3 Lagrange's Method

3.4 The Lobachevski–Graeffe Method

3.5 The Bernoulli Method

3.6 The Birge–Vieta Method

3.7 Lin Methods

3.8 Numerical Examples

3.9 Applications

Further Reading

Chapter 4: Linear Algebra

4.1 Calculation of Determinants

4.2 Calculation of the Rank

4.3 Norm of a Matrix

4.4 Inversion of Matrices

4.5 Solution of Linear Algebraic Systems of Equations

4.6 Determination of Eigenvalues and Eigenvectors

4.7 QR Decomposition

4.8 The Singular Value Decomposition (SVD)

4.9 Use of the Least Squares Method in Solving Overdetermined Linear Systems

4.10 The Pseudo-Inverse of a Matrix

4.11 Solution of Underdetermined Linear Systems

4.12 Numerical Examples

4.13 Applications

Further Reading

Chapter 5: Solution of Systems of Nonlinear Equations

5.1 The Iteration Method (Jacobi)

5.2 Newton's Method

5.3 The Modified Newton Method

5.4 The Newton–Raphson Method

5.5 The Gradient Method

5.6 The Method of Entire Series

5.7 Numerical Example

5.8 Applications

Further Reading

Chapter 6: Interpolation and Approximation of Functions

6.1 Lagrange's Interpolation Polynomial

6.2 Taylor Polynomials

6.3 Finite Differences: Generalized Power

6.4 Newton's Interpolation Polynomials

6.5 Central Differences: Gauss's Formulae, Stirling's Formula, Bessel's Formula, Everett's Formulae

6.6 Divided Differences

6.7 Newton-Type Formula with Divided Differences

6.8 Inverse Interpolation

6.9 Determination of the Roots of an Equation by Inverse Interpolation

6.10 Interpolation by Spline Functions

6.11 Hermite's Interpolation

6.12 Chebyshev's Polynomials

6.13 Mini–Max Approximation of Functions

6.14 Almost Mini–Max Approximation of Functions

6.15 Approximation of Functions by Trigonometric Functions (Fourier)

6.16 Approximation of Functions by the Least Squares

6.17 Other Methods of Interpolation

6.18 Numerical Examples

6.19 Applications

Further Reading

Chapter 7: Numerical Differentiation and Integration

7.1 Introduction

7.2 Numerical Differentiation by Means of an Expansion into a Taylor Series

7.3 Numerical Differentiation by Means of Interpolation Polynomials

7.4 Introduction to Numerical Integration

7.5 The Newton–Cotes Quadrature Formulae

7.6 The Trapezoid Formula

7.7 Simpson's Formula

7.8 Euler's and Gregory's Formulae

7.9 Romberg's Formula

7.10 Chebyshev's Quadrature Formulae

7.11 Legendre's Polynomials

7.12 Gauss's Quadrature Formulae

7.13 Orthogonal Polynomials

7.14 Quadrature Formulae of Gauss Type Obtained by Orthogonal Polynomials

7.15 Other Quadrature Formulae

7.16 Calculation of Improper Integrals

7.17 Kantorovich's Method

7.18 The Monte Carlo Method for Calculation of Definite Integrals

7.19 Numerical Examples

7.20 Applications

Further Reading

Chapter 8: Integration of Ordinary Differential Equations and of Systems of Ordinary Differential Equations

8.1 State of the Problem

8.2 Euler's Method

8.3 Taylor Method

8.4 The Runge–Kutta Methods

8.5 Multistep Methods

8.6 Adams's Method

8.7 The Adams–Bashforth Methods

8.8 The Adams–Moulton Methods

8.9 Predictor–Corrector Methods

8.10 The Linear Equivalence Method (LEM)

8.11 Considerations about the Errors

8.12 Numerical Example

8.13 Applications

Further Reading

Chapter 9: Integration of Partial Differential Equations and of Systems of Partial Differential Equations

9.1 Introduction

9.2 Partial Differential Equations of First Order

9.3 Partial Differential Equations of Second Order

9.4 Partial Differential Equations of Second Order of Elliptic Type

9.5 Partial Differential Equations of Second Order of Parabolic Type

9.6 Partial Differential Equations of Second Order of Hyperbolic Type

9.7 Point Matching Method

9.8 Variational Methods

9.9 Numerical Examples

9.10 Applications

Further Reading

Chapter 10: Optimizations

10.1 Introduction

10.2 Minimization Along a Direction

10.3 Conjugate Directions

10.4 Powell's Algorithm

10.5 Methods of Gradient Type

10.6 Methods of Newton Type

10.7 Linear Programming: The Simplex Algorithm

10.8 Convex Programming

10.9 Numerical Methods for Problems of Convex Programming

10.10 Quadratic Programming

10.11 Dynamic Programming

10.12 Pontryagin's Maximum Principle

10.13 Problems of Extremum

10.14 Numerical Examples

10.15 Applications

Further Reading

Index

Copyright © 2013 by The Institute of Electrical and Electronics Engineers, Inc.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. All rights reserved

Published simultaneously in Canada

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Teodorescu, P. P.

Numerical Analysis with Applications in Mechanics and Engineering / Petre Teodorescu,

Nicolae-Doru Stanescu, Nicolae Pandrea.

pages cm

ISBN 978-1-118-07750-4 (cloth)

1. Numerical analysis. 2. Engineering mathematics. I. Stanescu, Nicolae-Doru. II. Pandrea, Nicolae. III. Title.

QA297.T456 2013

620.001'518- dc23

2012043659

Preface

In writing this book, it was the authors' wish to build a bridge between mathematics and the technical disciplines, which require strong mathematical tools in the area of numerical analysis. Unlike other books in this area, this interdisciplinary work links the applied side of numerical methods, where mathematical results are used without proof, to their theoretical side, where each statement is rigorously demonstrated.

Each chapter is followed by problems of mechanics, physics, or engineering. Each problem is first stated in its mechanical or technical form. Then the mathematical model is set up, emphasizing the physical magnitudes playing the part of unknown functions and the laws that lead to the mathematical problem. The solution is then obtained by applying the mathematical methods described in the corresponding theoretical presentation. Finally, a mechanical, physical, or technical interpretation of the solution is provided, giving a complete picture of the studied phenomenon.

The book is organized into 10 chapters. Each of them begins with a theoretical presentation, which is based on practical computation—the “know-how” of the mathematical method—and ends with a range of applications.

The book contains some personal results of the authors, which have been found to be beneficial to readers.

The authors are grateful to Mrs. Eng. Ariadna–Carmen Stan for her valuable help in the presentation of this book. The excellent cooperation from the team of John Wiley & Sons, Hoboken, USA, is gratefully acknowledged.

The prerequisites of this book are courses in elementary analysis and algebra, acquired by a student in a technical university. The book is addressed to a broad audience—to all those interested in using mathematical models and methods in various fields such as mechanics, physics, and civil and mechanical engineering; people involved in teaching, research, or design; as well as students.

PETRE TEODORESCU

NICOLAE-DORU STANESCU

NICOLAE PANDREA

Chapter 1: Errors in Numerical Analysis

In this chapter, we deal with the most frequently encountered errors in numerical analysis, that is, input data errors, approximation errors, round-off errors, and the propagation of errors.

1.1 Input Data Errors

Input data errors usually appear when the input data are obtained from measurements or experiments. In such a case, the errors in the estimation of the input data propagate, through the calculation algorithm, to the output data.

In what follows, we define the notion of stability with respect to errors.

Definition 1.1

A calculation process is stable to errors if, for any ε > 0, there exists δ(ε) > 0 such that, for any two sets x and x* of input data with ‖x − x*‖ < δ, the two output sets y and y*, corresponding to x and x*, respectively, verify the relation ‖y − y*‖ < ε.

Observation 1.1

The two norms, for the input and output quantities, respectively, which occur in Definition 1.1, depend on the process considered.

Intuitively, according to Definition 1.1, the calculation process is stable if, for small variations of the input data, we obtain small variations of the output data.

Hence, we must characterize stable calculation processes. Let us consider a calculation process characterized by a family of functions defined on a set of input data, with values in a set of output data. We consider such a vector function F of the vector variable x, F : D → ℝ^m, where D is a domain in ℝ^n (we assume n input data and m output data).

Definition 1.2

F is a Lipschitz function (has the Lipschitz property) if there exists a constant L > 0 such that ‖F(x) − F(y)‖ ≤ L‖x − y‖ for any x, y ∈ D (the first norm is in ℝ^m and the second one in ℝ^n).

It is easy to see that a calculation process characterized by Lipschitz functions is a stable one.

In addition, a function with the Lipschitz property is continuous (even uniformly continuous), but the converse does not hold; for example, the function is continuous but it is not Lipschitz. Indeed, let us suppose that it is Lipschitz, hence that there exists a positive constant such that

1.1

Let us choose and such that Expression (1.1) leads to

1.2

from which we get

1.3

From the choice of and it follows that

1.4

so that relations (1.3) and (1.4) lead to

1.5

which is absurd. Hence, the continuous function is not a Lipschitz one.
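The concrete counterexample named here did not survive extraction; a classic instance of a continuous but non-Lipschitz function, assumed purely for illustration, is f(x) = √x on [0, 1]. A short numerical check shows that its difference quotients at y = 0 are unbounded, so no Lipschitz constant L can exist:

```python
import math

def lipschitz_ratio(f, x, y):
    """Difference quotient |f(x) - f(y)| / |x - y|; bounded when f is Lipschitz."""
    return abs(f(x) - f(y)) / abs(x - y)

# For f(x) = sqrt(x), fix y = 0 and let x -> 0: the quotient equals
# 1/sqrt(x), which eventually exceeds any candidate Lipschitz constant L.
ratios = [lipschitz_ratio(math.sqrt, 10.0 ** (-2 * k), 0.0) for k in range(1, 6)]
# ratios grow as 10, 100, ..., 100000: no finite bound exists.
```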

1.2 Approximation Errors

Approximation errors must be accepted in the design of the algorithms because of various objective considerations.

Let us determine the limit of a sequence using a computer; the sequence is supposed to be convergent. Let the sequence be defined by the relation

1.6

We observe that the terms of the sequence are positive, except possibly the first one. The limit of this sequence, denoted by , is the positive root of the equation

1.7

If we wish to determine with two exact decimal digits, then we take an arbitrary value of for example, and calculate the successive terms of the sequence (Table 1.1).

Table 1.1 Calculation of with Two Exact Decimal Digits
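The recurrence (1.6) and the entries of Table 1.1 are not reproduced in this excerpt. As a purely hypothetical stand-in with the same structure, the sketch below iterates Heron's recurrence a(n+1) = (a(n) + 2/a(n))/2, whose limit √2 is the positive root of x² − 2 = 0, from an arbitrary starting value until two decimal digits are exact:

```python
def iterate_to_decimals(step, x0, digits=2, max_iter=1000):
    """Iterate x_{n+1} = step(x_n) until the first `digits` decimal
    digits stop changing between consecutive terms."""
    x = x0
    for _ in range(max_iter):
        x_new = step(x)
        if round(x_new, digits) == round(x, digits):
            return round(x_new, digits)
        x = x_new
    raise RuntimeError("sequence did not stabilize")

# Hypothetical stand-in for recurrence (1.6): Heron's method, whose
# limit sqrt(2) = 1.4142... is the positive root of x**2 - 2 = 0.
limit = iterate_to_decimals(lambda a: (a + 2.0 / a) / 2.0, x0=1.0)
# limit == 1.41, i.e. sqrt(2) with two exact decimal digits
```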

1.3 Round-Off Errors

Round-off errors are due to the mode of representation of the data in the computer. For instance, the number 0.8125 in base 10 is represented in base 2 in the form 0.1101, and the number 0.75 in the form 0.11. Let us suppose that we have a computer that works with three significant digits. The sum becomes

1.8

Such errors may also appear because of the choice of inadequate data types in the programs run on the computer.
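The point can be checked directly in any binary floating-point system. The sketch below (assuming, for illustration, that the three significant digits are decimal) verifies that 0.8125 and 0.75 are stored exactly, that 0.1 is not, and shows how a three-digit machine would round their sum:

```python
from decimal import Context, Decimal

# 0.8125 = 0.1101 and 0.75 = 0.11 in base 2: finite binary fractions,
# stored by a binary computer without any representation error.
exact_8125 = Decimal(0.8125) == Decimal("0.8125")   # True
exact_75 = Decimal(0.75) == Decimal("0.75")         # True
# 0.1 has an infinite binary expansion, so it is rounded on entry.
exact_01 = Decimal(0.1) == Decimal("0.1")           # False

def round_sig(x, digits=3):
    """Round x to `digits` significant digits (a toy 3-digit machine)."""
    return float(Context(prec=digits).plus(Decimal(x)))

# On the toy machine the exact sum 0.8125 + 0.75 = 1.5625 is stored as
# 1.56: an error introduced purely by the limited number of digits.
machine_sum = round_sig(0.8125 + 0.75)
```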

1.4 Propagation of Errors

Let us consider the number x and let x* be an approximation of it.

Definition 1.3
(i) We call absolute error the expression

e = x − x*.  (1.9)

(ii) We call relative error the expression

δ = e/x = (x − x*)/x.  (1.10)

1.4.1 Addition

Let be the numbers for which the relative errors are while their absolute errors read

The relative error of the sum is

1.11

and we may write the relation

1.12

that is, the modulus of the relative error of the sum lies between the smallest and the largest of the moduli of the relative errors of the component terms.

Thus, if the terms are positive and of the same order of magnitude,

1.13

then we must take the same number of significant digits for each term, the same number of significant digits occurring in the sum too.

If the numbers differ greatly from one another in magnitude, then the number of significant digits after the decimal point is given by the largest number (we suppose that ). For instance, if we have to add the numbers

1.14

both numbers having five significant digits, then we will round off to two digits (as ) and write

1.15

It is observed that addition may result in a compensation of the errors, in the sense that the absolute error of the sum is, in general, smaller than the sum of the absolute errors of the individual terms.

We consider that the absolute error has a Gauss distribution for each of the terms given by the distribution density

1.16

from which we obtain the distribution function

1.17

with the properties

1.18

The probability that is contained between and with is

1.19

Because is an even function, it follows that the mean value of a variable with a normal Gauss distribution is

1.20

while its mean square deviation reads

1.21

Usually, we choose as being the mean square root

1.22

1.4.2 Multiplication

Let us consider two numbers for which the relative errors are while the approximations are respectively. We have

1.23

Because and are small, we may consider hence

1.24

so that the relative error of the product of the two numbers reads

1.25

Similarly, for numbers of relative errors we have

1.26
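The rule that relative errors add under multiplication, up to a negligible second-order cross term, can be checked numerically; the factors and the relative errors below are hypothetical values chosen for the check:

```python
def rel_err(exact, approx):
    """Signed relative error of `approx` with respect to `exact`."""
    return (approx - exact) / exact

x1, x2 = 3.7, 12.4            # assumed exact values
d1, d2 = 2e-4, -5e-5          # assumed small relative errors
p_exact = x1 * x2
p_approx = (x1 * (1.0 + d1)) * (x2 * (1.0 + d2))

# Exactly, the product's relative error is d1 + d2 + d1*d2; the cross
# term d1*d2 is second order, which is what the first-order rule neglects.
d_product = rel_err(p_exact, p_approx)
```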

Let be a number that may be written in the form

1.27

The absolute error is

1.28

while the relative one is

1.29

where we have supposed that has significant digits.

If is the round-off of at significant digits, then

1.30

The error of the last significant digit, the , is

1.31

Let now be two numbers of relative errors and let be the relative error of the product We have

1.32

Moreover, takes its greatest value if and are negative; hence, we may write

1.33

where the error of the digit on the position is

1.34

On the other hand,

1.35

where or the most disadvantageous case being that described by

The function

1.36

defined for , , will attain its maximum on the frontier of the above domain, that is, for or It follows that

1.37

and hence

1.38

so that the error of the digit of the response will have at the most six units.

If then the most disadvantageous case is given by

1.39

when

1.40

that is, the digit of is given by an approximation of four units.

Let now be numbers; then

1.41

the most disadvantageous case being that in which numbers are equal to 1, while one number is equal to 10. In this case, we have

1.42

If all the numbers are equal, then the most disadvantageous situation appears for and hence it follows that

1.43

1.4.3 Inversion of a Number

Let be a number, its approximation, and its relative error. We may write

1.44

hence

1.45

so that the relative error remains the same.

In general,

1.46

1.4.4 Division of Two Numbers

We may imagine the division of by as the multiplication of by so that

1.47

hence, the relative errors are summed up.

1.4.5 Raising to a Negative Entire Power

We may write

1.48

so that the relative errors are summed up.

1.4.6 Taking the Root of Order n

We have, successively,

1.49

1.50

The maximum error for the digit is now obtained for entire, and is given by

1.51

1.4.7 Subtraction

Subtraction is the most disadvantageous operation if the result is small with respect to the minuend and the subtrahend.

Let us consider the subtraction 20.003 − 19.998, in which the first four digits of each number are known with precision; concerning the fifth digit, we can say that it is determined with a precision of 1 unit, that is, with an absolute error of 0.001. It follows that for 20.003 the relative error is

0.001/20.003 ≈ 5 × 10⁻⁵,  (1.52)

while for 19.998 the relative error is

0.001/19.998 ≈ 5 × 10⁻⁵.  (1.53)

The result of the subtraction operation is 20.003 − 19.998 = 0.005, while the last digit may be wrong by two units, so that the relative error of the difference is

0.002/0.005 = 0.4,  (1.54)

that is, a relative error that is approximately 8000 times greater than the relative error of either term.

Hence the rule: a small difference of two nearly equal quantities should, whenever possible, be calculated directly, without previously calculating the two quantities themselves.
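The figures of this example can be replayed directly; the sketch below reproduces the roughly 8000-fold amplification of the relative error caused by the cancellation:

```python
# Each operand of 20.003 - 19.998 is uncertain by one unit in its fifth
# significant digit, i.e. by 0.001 in absolute value.
minuend, subtrahend, unit = 20.003, 19.998, 0.001

rel_minuend = unit / minuend        # about 5e-5
rel_subtrahend = unit / subtrahend  # about 5e-5

diff = minuend - subtrahend         # 0.005, uncertain by up to 0.002
rel_diff = 2.0 * unit / diff        # 0.4

# The cancellation amplifies the relative error by a factor of ~8000.
amplification = rel_diff / rel_minuend
```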

1.4.8 Computation of Functions

Starting from Taylor's relation

1.55

where is a point situated between and it follows that the absolute error is

1.56

while the relative error reads

1.57

where defines the real interval of ends and
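As an illustration of this first-order estimate (the function and the numbers below are chosen for the example, not taken from the text), the absolute error of f at an approximate argument is roughly |f′(x̄)| times the absolute error of the argument:

```python
import math

def propagated_abs_error(f_prime, x_approx, abs_err):
    """First-order estimate |f(x) - f(x_approx)| ~ |f'(x_approx)| * abs_err."""
    return abs(f_prime(x_approx)) * abs_err

# Assumed data: the exact value x = 2.0 is known only as x_approx = 2.001,
# and we take f = exp (so f' = exp as well).
x, x_approx = 2.0, 2.001
predicted = propagated_abs_error(math.exp, x_approx, abs(x - x_approx))
actual = abs(math.exp(x) - math.exp(x_approx))
# `predicted` matches `actual` up to second-order terms in the error.
```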

1.5 Applications

Problem 1.1

Let us consider the sequence of integrals

1.58

(i) Determine a recurrence formula for

Solution:

To calculate we use integration by parts and have

1.59

(ii) Show that does exist.

Solution:

For we have

1.60

hence for any It follows that is a decreasing sequence of real numbers.

On the other hand,

1.61

so that is a positive sequence of real numbers.

We get

1.62

so that is convergent and, moreover,

1.63

(iii) Calculate

Solution:

To calculate the integral we have two methods.

Method 1.

1.64

1.65

1.66

1.67

1.68

1.69

1.70

1.71

1.72

1.73

1.74

1.75

1.76

1.77

It follows that

1.78

Method 2. In this case, we replace directly the calculated values, thus obtaining

1.79

1.80

1.81

1.82

1.83

1.84

1.85

1.86

1.87

1.88

1.89

1.90

1.91

1.92

We observe that, because of the propagation of errors, the second method cannot be used to calculate
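The integral (1.58) itself is not reproduced in this excerpt; a classic sequence with exactly this behavior, assumed here for illustration, is I(n) = ∫₀¹ xⁿ eˣ dx, for which integration by parts gives I(n) = e − n·I(n−1) with I(0) = e − 1, and 0 < I(n) < 1 for every n ≥ 1. Applying the recurrence forward (the text's Method 2) multiplies the initial round-off by n! and quickly leaves the admissible interval, while running it backward damps the error:

```python
import math

def forward(n):
    """Method 2: apply I_n = e - n*I_{n-1} directly; the initial
    round-off error is multiplied by n! and destroys the result."""
    val = math.e - 1.0
    for k in range(1, n + 1):
        val = math.e - k * val
    return val

def backward(n, extra=20):
    """Stable alternative: start from a crude guess well beyond n and run
    the recurrence in reverse, I_{k-1} = (e - I_k) / k, which divides the
    starting error by (n+1)(n+2)...(n+extra)."""
    val = 0.0
    for k in range(n + extra, n, -1):
        val = (math.e - val) / k
    return val

i25_forward = forward(25)    # wildly wrong: leaves the interval (0, 1)
i25_backward = backward(25)  # accurate: about 0.1008
```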

Problem 1.2

Let the sequences and be defined recursively:

1.93

1.94

(i) Calculate

Solution:

We have, successively,

1.95

1.96

1.97

1.98

1.99

1.100

1.101

(ii) Calculate for

Solution:

There result the values

1.102

1.103

1.104

1.105

1.106

1.107

1.108

(iii) Calculate for

Solution:

In this case, we obtain the values

1.109

1.110

1.111

1.112

1.113

1.114

1.115

We observe that the sequences and converge to for while the sequence is divergent for

Problem 1.3

If the independent random variables and have the density distributions and respectively, then the random variable has the density distribution

1.116

(i) Demonstrate that if the random variables and have a normal distribution with zero mean and standard deviations and then the random variable has a normal distribution.

Solution:

From equation (1.116) we have

1.117

We require the values , and real, such that

1.118

from which

1.119

with the solution

1.120

We make the change of variable

1.121

and expression (1.118) becomes

1.122

(ii) Calculate the mean and the standard deviation of the random variable of point (i).

Solution:

We calculate

1.123

1.124

(iii) Let be a random variable with a normal distribution, a zero mean, and standard deviation Calculate

1.125

and

1.126

Solution:

Through the change of variable

1.127

it follows that

1.128

Similarly, we have

1.129

On the other hand,

1.130

so that

1.131

(iv) Let fixed. Determine so that

1.132

Solution:

Proceeding as with point (iii), it follows that

1.133

so that we obtain the inequality

1.134

from which

1.135

(v) Calculate

1.136

and

1.137

Solution:

We again make the change of variable (1.127) and obtain

1.138

Point (ii) shows that

1.139

hence, it follows that

1.140

On the other hand, we have seen that and we may write

1.141

Immediately, it follows that

1.142

(vi) Let and be two random variables with a normal distribution, a zero mean, and standard deviation Determine the density distribution of the random variable as well as its mean and standard deviation.

Solution:

It is a particular case of points (i) and (ii); hence, we obtain

1.143

that is, a normal random variable of zero mean and standard deviation

(vii) Let and be numbers estimated with errors and respectively, considered to be random variables with normal distribution, zero mean, and standard deviation Calculate the probability that the error of the sum is less than a given value

Solution:

The requested probability is given by

1.144

Taking into account the previous results, we obtain

1.145

1.146

so that

1.147
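The analytic result of points (i) and (ii) can be spot-checked by simulation. The sketch below, with assumed standard deviations s1 and s2, verifies that the sum of two independent zero-mean normal variables has zero mean and standard deviation √(s1² + s2²):

```python
import math
import random

# Monte Carlo spot-check (with assumed sigmas s1, s2) that the sum of two
# independent zero-mean normal variables again behaves like a zero-mean
# normal variable with standard deviation sqrt(s1**2 + s2**2).
random.seed(1)
s1, s2, n = 0.3, 0.4, 200_000
samples = [random.gauss(0.0, s1) + random.gauss(0.0, s2) for _ in range(n)]

mean = sum(samples) / n
std = math.sqrt(sum((z - mean) ** 2 for z in samples) / n)
predicted_std = math.sqrt(s1 ** 2 + s2 ** 2)  # 0.5
```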

Further Reading

Acton FS (1990). Numerical Methods that Work. 4th ed. Washington: Mathematical Association of America.

Ackleh AS, Allen EJ, Kearfott RB, Seshaiyer P (2009). Classical and Modern Numerical Analysis: Theory, Methods and Practice. Boca Raton: CRC Press.

Atkinson KE (1989). An Introduction to Numerical Analysis. 2nd ed. New York: John Wiley & Sons, Inc.

Atkinson KE (2003). Elementary Numerical Analysis. 2nd ed. New York: John Wiley & Sons, Inc.

Bakhvalov N (1976). Méthodes Numériques. Moscou: Editions Mir (in French).

Berbente C, Mitran S, Zancu S (1997). Metode Numerice. Bucureşti: Editura Tehnică (in Romanian).

Burden RL, Faires L (2009). Numerical Analysis. 9th ed. Boston: Brooks/Cole.

Chapra SC (1996). Applied Numerical Methods with MATLAB for Engineers and Scientists. Boston: McGraw-Hill.

Cheney EW, Kincaid DR (1997). Numerical Mathematics and Computing. 6th ed. Belmont: Thomson.

Dahlquist G, Björck Å (1974). Numerical Methods. Englewood Cliffs: Prentice Hall.

Démidovitch B, Maron I (1973). Éléments de Calcul Numérique. Moscou: Editions Mir (in French).

Epperson JF (2007). An Introduction to Numerical Methods and Analysis. Hoboken: John Wiley & Sons, Inc.

Gautschi W (1997). Numerical Analysis: An Introduction. Boston: Birkhäuser.

Greenbaum A, Chartier TP (2012). Numerical Methods: Design, Analysis, and Computer Implementation of Algorithms. Princeton: Princeton University Press.

Hamming RW (1987). Numerical Methods for Scientists and Engineers. 2nd ed. New York: Dover Publications.

Hamming RW (2012). Introduction to Applied Numerical Analysis. New York: Dover Publications.

Heinbockel JH (2006). Numerical Methods for Scientific Computing. Victoria: Trafford Publishing.

Higham NJ (2002). Accuracy and Stability of Numerical Algorithms. 2nd ed. Philadelphia: SIAM.

Hildebrand FB (1987). Introduction to Numerical Analysis. 2nd ed. New York: Dover Publications.

Hoffman JD (1992). Numerical Methods for Engineers and Scientists. New York: McGraw-Hill.

Kharab A, Guenther RB (2011). An Introduction to Numerical Methods: A MATLAB Approach. 3rd ed. Boca Raton: CRC Press.

Krîlov AN (1957). Lecţii de Calcule prin Aproximaţii. Bucureşti: Editura Tehnică (in Romanian).

Kunz KS (1957). Numerical Analysis. New York: McGraw-Hill.

Levine L (1964). Methods for Solving Engineering Problems Using Analog Computers. New York: McGraw-Hill.

Marinescu G (1974). Analiză Numerică. Bucureşti: Editura Academiei Române (in Romanian).

Press WH, Teukolsky SA, Vetterling WT, Flannery BP (2007). Numerical Recipes: The Art of Scientific Computing. 3rd ed. Cambridge: Cambridge University Press.

Quarteroni A, Sacco R, Saleri F (2010). Numerical Mathematics. 2nd ed. Berlin: Springer-Verlag.

Ralston A, Rabinowitz P (2001). A First Course in Numerical Analysis. 2nd ed. New York: Dover Publications.

Ridgway Scott L (2011). Numerical Analysis. Princeton: Princeton University Press.

Sauer T (2011). Numerical Analysis. 2nd ed. London: Pearson.

Simionescu I, Dranga M, Moise V (1995). Metode Numerice în Tehnică. Aplicaţii în FORTRAN. Bucureşti: Editura Tehnică (in Romanian).

Stănescu ND (2007). Metode Numerice. Bucureşti: Editura Didactică şi Pedagogică (in Romanian).

Stoer J, Bulirsch R (2010). Introduction to Numerical Analysis. 3rd ed. New York: Springer-Verlag.

Chapter 2: Solution of Equations

We deal with several methods for the approximate solution of equations, namely the bipartition (bisection) method, the chord (secant) method, the tangent method (Newton), the contraction method, and the Newton–Kantorovich method. These are followed by applications.

2.1 The Bipartition (Bisection) Method

Let us consider the equation

2.1

where f : [a, b] → ℝ is continuous on [a, b], with a single root ξ, f(ξ) = 0, on the interval (a, b).

First, we verify whether f(a) = 0 or f(b) = 0; if this occurs, then the algorithm stops. Otherwise, we consider the middle of the interval, x = (a + b)/2. We verify whether x is a solution of equation (2.1); if f(x) = 0, the algorithm stops; if not, we calculate f(a)f(x). If f(a)f(x) < 0, then we consider the interval [a, x], on which we have the true solution; if not, we consider the interval [x, b]. Thus, the interval [a, b] is diminished to [a, x] or [x, b], its new length being equal to (b − a)/2. We thus obtain a new interval [a₁, b₁], where b₁ = x or a₁ = x, and we apply the procedure described above. The procedure stops when a certain criterion (e.g., the length of the interval is less than a given ε) is fulfilled.
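The procedure just described translates directly into code; the sketch below is a minimal Python version, applied to the hypothetical example f(x) = x³ − x − 2 = 0, which has a single root on [1, 2]:

```python
def bisect(f, a, b, eps=1e-10, max_iter=200):
    """Bisection method as described above: halve the bracket [a, b]
    while keeping a sign change, until it is shorter than eps."""
    fa, fb = f(a), f(b)
    if fa == 0.0:
        return a
    if fb == 0.0:
        return b
    if fa * fb > 0.0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x = 0.5 * (a + b)
        fx = f(x)
        if fx == 0.0 or b - a < eps:
            return x
        if fa * fx < 0.0:
            b = x            # the root lies in [a, x]
        else:
            a, fa = x, fx    # the root lies in [x, b]
    return 0.5 * (a + b)

# Hypothetical example: the unique root of x**3 - x - 2 = 0 on [1, 2].
root = bisect(lambda x: x ** 3 - x - 2.0, 1.0, 2.0)
```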

As we can see from this exposition, the bipartition method consists in the construction of three sequences (aₙ), (bₙ), and (xₙ), n ∈ ℕ, as follows:

2.2

The bipartition method is based on the following theorem.

Theorem 2.1

The sequences (aₙ), (bₙ), and (xₙ), n ∈ ℕ, given by formulae (2.2), are convergent, and their common limit is the value ξ of the unique real root of equation (2.1) on the interval [a, b].

Demonstration. Let us show that

2.3

for any .

To fix the ideas, we suppose that and . If , then

2.4

whereas if , we get

2.5

Hence, in general,

2.6

It is obvious that

2.7

From the definition of the sequence , , it follows that or . We may write

2.8

hence, the sequence , , is monotonically increasing. Analogously, we obtain the relation

Read on in the full edition!
