Fracture Mechanics 1

Ammar Grous

Description

This first book of a 3-volume set on Fracture Mechanics is mainly centered on the vast range of the laws of statistical distributions encountered in various scientific and technical fields. These laws are indispensable in understanding the probability behavior of components and mechanical structures that are exploited in the other volumes of this series, which are dedicated to reliability and quality control.
The author presents not only the laws of distribution of various models but also the tests of adequacy suited to confirm or reject the hypothesis of the law in question, namely the Pearson (χ²) test and the Kolmogorov–Smirnov (KS) test, along with many other relevant tests.
This book distinguishes itself from other works in the field through its originality in presenting an educational approach which aims at helping practitioners both in academia and industry. It is intended for technicians, engineers, designers, students, and teachers working in the fields of engineering and vocational education. The main objective of the author is to provide an assessment of indicators of quality and reliability to aid in decision-making. To this end, an intuitive and practical approach, based on mathematical rigor, is recommended.

Page count: 297

Publication year: 2013




Contents

Preface

Chapter 1. Elements of Analysis of Reliability and Quality Control

1.1. Introduction

1.2. Fundamental expression of the calculation of reliability

1.3. Continuous uniform distribution

1.4. Discrete uniform distribution (discrete U)

1.5. Triangular distribution

1.6. Beta distribution

1.7. Normal distribution

1.8. Log-normal distribution (Galton)

1.9. The Gumbel distribution

1.10. The Frechet distribution (E2 Max)

1.11. The Weibull distribution (with three parameters)

1.12. The Weibull distribution (with two parameters)

1.13. The Birnbaum–Saunders distribution

1.14. The Cauchy distribution

1.15. Rayleigh distribution

1.16. The Rice distribution (from the Rayleigh distribution)

1.17. The Tukey-lambda distribution

1.18. Student’s (t) distribution

1.19. Chi-square distribution law (χ²)

1.20. Exponential distribution

1.21. Double exponential distribution (Laplace)

1.22. Bernoulli distribution

1.23. Binomial distribution

1.24. Polynomial distribution

1.25. Geometrical distribution

1.26. Hypergeometric distribution (the Pascal distribution)

1.27. Poisson distribution

1.28. Gamma distribution

1.29. Inverse gamma distribution

1.30. Distribution function (inverse gamma distribution probability density)

1.31. Erlang distribution (characteristic of gamma distribution, Г)

1.32. Logistic distribution

1.33. Log-logistic distribution

1.34. Fisher distribution (F-distribution or Fisher–Snedecor)

1.35. Analysis of component lifespan (or survival)

1.36. Partial conclusion of Chapter 1

1.37. Bibliography

Chapter 2. Estimates, Testing Adjustments and Testing the Adequacy of Statistical Distributions

2.1. Introduction to assessment and statistical tests

2.2. Method of moments

2.3. Method of maximum likelihood

2.4. Moving least-squares method

2.5. Conformity tests: adjustment and adequacy tests

2.6. Accelerated testing method

2.7. Trend tests

2.8. Duane model power law

2.9. Chi-Square test for the correlation quantity

2.10. Chebyshev’s inequality

2.11. Estimation of parameters

2.12. Gaussian distribution: estimation and confidence interval

2.13. Kaplan-Meier estimator

2.14. Case study of an interpolation using the bi-dimensional spline function

2.15. Conclusion

2.16. Bibliography

Chapter 3. Modeling Uncertainty

3.1. Introduction to errors and uncertainty

3.2. Definition of uncertainties and errors as in the ISO norm

3.3. Definition of errors and uncertainty in metrology

3.4. Global error and its uncertainty

3.5. Definitions of simplified equations of measurement uncertainty

3.6. Principle of uncertainty calculations of type A and type B

3.7. Study of the basics with the help of the GUMic software package: quasi-linear model

3.8. Conclusion

3.9. Bibliography

Glossary

Index

First published 2013 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd

27-37 St George’s Road

London SW19 4EU

UK

www.iste.co.uk

John Wiley & Sons, Inc.

111 River Street

Hoboken, NJ 07030

USA

www.wiley.com

© ISTE Ltd 2013

The rights of Ammar Grous to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2012950202

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN: 978-1-84821-440-8

Preface

This book is intended for technicians, engineers, designers, students, and teachers working in the fields of engineering and vocational education. Our main objective is to provide an assessment of indicators of quality and reliability to aid in decision making. To this end, we recommend an intuitive and practical approach, based on mathematical rigor.

The first part of this series (Volume 1) sets out the fundamental basis of data analysis in both quality control and in studying the mechanical reliability of materials and structures. Results from the laboratory and the workshop are discussed in accordance with the technological procedures inherent to the subject matter. We also discuss and interpret the standardization of manufacturing processes as a causal link with geometric and dimensional specifications (GPS, or Geometrical Product Specification). This is, moreover, the educational novelty of this work relative to the many commendable publications we have consulted.

We discuss many laboratory examples, thereby covering a new industrial organization of work. We also use mechanical components from our own real mechanisms, which we designed and built in our production labs. Finite element modeling is thus applied to real machined parts, welded and then inspected in a dimensional metrology laboratory.

We also discuss mechanical component reliability. Since statistics are common to both this field and quality control, we will simply mention reliability indices in the context of using the structure for which we are performing the calculations.

Scientists from specialized schools and corporations often take an interest in the quality of measurement, and thus in the measurement of uncertainties. The so-called cutting-edge industries, such as aeronautics, automotive, and nuclear, to mention only a few, place increasing emphasis on accurate measurement. This text's educational content is noteworthy for the following:

1) the rigor of the probabilistic methods which support statistical–mathematical treatments of experimental or simulated data;
2) the presentation of varied lab models at the end of each chapter: these should help the student to better understand how to:
- define and justify a quality and reliability control target;
- identify the appropriate tools to quantify reliability with respect to capabilities;
- interpret quality (capability) and reliability (reliability indices) indicators;
- choose the adequacy test for the distribution (whether justified or used a priori);
- identify how trials can be accelerated and their limits;
- analyze the quality and reliability of materials and structures;
- apply sizing and tolerancing (GPS) to the design of structures and materials.
What about uncertainty calculations in applied reliability?

The fracture behavior of structures is often characterized (in linear mechanics) by a local variation of the material’s elastic properties. This inevitably leads to sizing calculations which seek to secure the structures derived from the materials. Much work has been, and still is, conducted in a wide range of disciplines from civil engineering to the different variants of mechanics. Here, we do not consider continuum mechanics, but rather probabilistic laws for cracking. Some laws have been systematically repeated to better approach reliability.

Less severe adequacy tests would confirm the crack propagation hypothesis. In fields where safety is a priority, such as medicine (surgery and biomechanics), aviation, and nuclear power plants, to mention but three, theorizing unverifiable concepts would be unacceptable. The relevant reliability calculations must therefore be as rigorous as possible.

Defining safety coefficients is an important (or even major) element of structure sizing. This definition is costly and does not really offer any guarantee on safety predictions (as opposed to security predictions). Today, the interpretation and philosophy of these coefficients is reinforced by increasingly accurate probabilistic calculations. Well-developed computer tools greatly reduce the time and effort of calculation. Thus, we will use software commonly found in various schools (Autodesk Inventor Pro and ANSYS for modeling and design; MathCAD, GUM, and COSMOS for quality control, metrology, and uncertainty calculations).

Much work has been done to rationalize the concept of applied reliability; however, no "unified method" between the mechanical and statistical interpretations of rupture has yet been defined. Among the many reasons for this lack of consensus are the unpredictable events which randomly create a fault, its propagation, and the ensuing damage. Many researchers have worked on various random probabilistic and deterministic methods. This has resulted in many simulation methods, the most common being Monte Carlo simulation.

In this book, we present some documented applied cases to help teachers succinctly present probabilistic problems (reliability and/or degradation). The intuitive approach plays an important part in our problem-solving methods, and it is one of the main contributions of this volume. Many commendable works and books have treated reliability, quality control, and uncertainty perfectly well, but as separate entities. Our task here is to verify measurements and ensure that the measurand is well understood. As Lord Kelvin said, "if you cannot measure it, you cannot improve it". Indeed, measuring identified quantities is an unavoidable part of laboratory life. Theoretical confirmation of physical phenomena must pass through the reliability of measurement and its effects on the functions attributed to the material and/or structure, among other things.

Mechanical models (rupture criteria) of continuum mechanics discussed in Chapter 1, Volume 2 make up a reference pool of work used here and there in our case studies, such as the Paris–Erdogan law, the Manson–Coffin law, S-N curves (Wöhler curves), Weibull law (solid mechanics), etc. We could probably (and justly) wonder in what way this chapter is appropriate in works dedicated to reliability. The reason is that these criteria are deliberately targeted. We used them here to avoid the reader having to “digress” into specialized books.

Establishing confidence in our results is critical. Measuring a characteristic does not simply mean finding the value of the characteristic. We must also give it an uncertainty so as to show the measurement’s quality. In this book, we will show educational laboratory examples of uncertainty (GUM: Guide to the Expression of Uncertainty in Measurement).

Why then publish another book dedicated to quality control, uncertainties, and reliability?

Firstly, why publish a book which covers two seemingly distinct topics (quality control and reliability, including uncertainties)? Because both fields rely on probabilities, statistics, and a similar method of describing their hypotheses. In quality control, the process is often already known or appears to be under control beforehand, hence the intervention of capability indices (MSP or SPC). Furthermore, the goal is sometimes competitiveness between manufactured products, with safety appearing only in secondary terms. Indeed, it is in terms of maintainability and durability that quality control joins reliability as a means to guarantee the functions attributed to a mechanism, a component, or even the entire system.

When considering the mechanical reliability of materials and structures, the reliability index is inherently a safety indicator. It is often very costly in terms of computation time and very serious in matters of consequence. The common aspect between both fields is still the probabilistic approach. Probabilities and statistical–mathematical tools are necessary to supply theoretical justifications for the computational methods. Again, this book intends to be pragmatic and leaves reasonable room for the intuitive approach of the hypotheses stated throughout.

Finally, we give a brief glossary to standardize the understanding of terms used in dimensional analysis (VIM: Vocabulaire International de Métrologie) and in structural mechanical reliability. This is the best way of reaching a good agreement on the international terminology used to designate a measurand, a reliability index, or even a succinct definition of the capability indicators widely used in quality control.

A. GROUS November 2012

Chapter 1

Elements of Analysis of Reliability and Quality Control

1.1. Introduction

The past few decades have been marked by a particularly intense evolution of the models of probability theory and of applied statistics. In the reliability of materials and structures, the fields of application studying the future or the replacement of traditional safety coefficients must weigh the empirical and economic sides of the calculation rules. Regarding the approach to fatigue by fracture mechanics, practice in the laboratory and in construction produces a broad range of problems for which the use of probabilistic and statistical methods proves to be fruitful and sometimes even indispensable. The development of probability theory and of applied statistics, the abundance of new results, and unforeseen catastrophes cause us to pay particular attention to security matters, but with an acute technical and economic sense of concern. From this comes the abundant introduction of reliability analysis methods.

Numerous research studies have turned toward probabilistic mechanics, which aims to predict the behavior of components and to establish decision-making support systems (expert systems). This text commits itself to using the criteria of fracture mechanics to predict, decipher, analyze, and model the reliability of mechanical components. The reliability of structural components uses an ensemble of mathematical and numerical links, which serve to estimate the probability that a component (structure) will reach a certain conventional state of failure. This is achieved using the probabilistic properties of the resistant elements of a structure as well as of the load that is applied to it.

Contrary to a prescriptive classical approach, risk is estimated while maintaining that, however conservative a regulation may be, it cannot be guaranteed to ensure complete safety. On the other hand, it is necessary for the user of reliability techniques to define what they consider the failure of a structure or of a component to entail. While in certain cases this effectively corresponds to the formation of a mechanism of failure, for many components and structures we will define as the failure criterion a certain degree of what we will call damage.

The important parameters, which include the resistance (R) of a structure or the stresses (S) applied to it, cannot be defined uniquely in terms of nominal or weighted values, but rather as random variables characterized by their means, their variances–covariances, and their laws of distribution. The estimation of the reliability of a real structure (or components) can generally only be approached through the intermediary of a model that is more or less simplified. The analysis of the model will be carried out with the help of mathematical algorithms, often given as approximation techniques, where rigorous calculation would lead to prohibitive calculation times.

To evaluate the weak probabilities of component failure, it is normal to begin with the numerical integration of probability densities over the domain of failure, i.e. using simulation techniques. The advantage of this way of going about things is that no explicit form of the failure domain is needed. The disadvantage is the slow speed of convergence, to which is added the high cost in calculation time. Numerous distribution laws have been used to model the reliability of components and structures. These distributions did not appear with reliability but with fundamental research in diverse fields of application. We will present a few of these. The principal laws of distribution used to model the reliability of components are:

Table 1.1. Main distributions of probability used in reliability and in quality control

– Discrete distributions with finite support: Bernoulli, discrete uniform, binomial, hypergeometric (Pascal distribution)

– Discrete distributions with countable support: geometric, Poisson, negative binomial, logarithmic

– Continuous distributions with compact support: continuous uniform, triangular, beta

– Continuous distributions with semi-infinite support: exponential, gamma (or Erlang), inverse gamma, chi-square (Pearson, χ²), Weibull (2 and 3 parameters), Rayleigh, log-normal (Galton), Fisher, Gibbs, Maxwell–Boltzmann, Fermi–Dirac, Bose–Einstein, negative binomial, etc.

– Continuous distributions with infinite support (density laws): normal (Gauss–Laplace), asymmetric normal, Student, uniform, stable, Gumbel (maxi-mini), Cauchy (Lorentz), Tukey-lambda, Birnbaum–Saunders (fatigue), double exponential (Laplace), logistic

We present below some educational pathways for choosing the distribution law correctly and better modeling the lifespan of a component or of a structure. A probabilistic model chosen correctly from experimental data will be in greater harmony with the theoretical justification imposed by a distribution law, but this is not always a straightforward thing to do. Witness the exaggeration surrounding the Gaussian distribution, which is used in almost all theories. Distribution models of lifespan are chosen according to:

– a physical/statistical argument which corresponds theoretically to a mechanism of failure inherent to a model of life distribution;
– a particular model previously used with success for the same thing or a mechanism with a similar fault;
– a model which ensures a decent theory/practice applicable to the data relating to the failure.

Regardless of the method chosen for a significant probabilistic model, it is important to justify the choice fully. For example, it would be inappropriate to model a mechanism whose rate of ruin (failure) is not constant with an exponential distribution, as this law is better suited to the sorts of faults that occur when damage is accidental. Galton's (log-normal) and Weibull's models are flexible and fit in well with fault trends even in cases of weak experimental data. This applies especially when they are projected via acceleration models and used in conditions very different from the test data. These two models are very useful in representing failure rates of very different magnitudes.

Physical acceleration means that operating a component at a higher stress produces the same faults, but at a faster rate. Faults can be due to fatigue, corrosion, diffusion, migration, etc. An acceleration factor is the constant ratio between times-to-failure at two stress levels. True acceleration occurs when varying the stress is equivalent to transforming the timescale to failure. The transformations used are often linear, which implies that the time-to-failure at the high stress level is multiplied by a constant, i.e. by the acceleration factor, to obtain the equivalent time-to-failure under the use stress. We will use the following notations:

1.1.2. Expression for linear acceleration relationships

The acceleration factor, Fa, relates the times-to-failure at the two stress levels in the following proportion:

τuse = Fa · τstress [1.1]

Each failure mode possesses its own true acceleration. Failure data must therefore be separated by failure mode according to the pertinence of their inherent true acceleration. If there is true acceleration, the data from units tested at different stress levels share the same failure mode in the sampled probability data. True acceleration requires that the stress drive the physical process causing the change or degradation that leads to failure. As a rule, different failure modes are affected differently by stress and will have different acceleration factors; it is improbable that a single acceleration factor would apply to more than one failure mechanism.
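Under linear acceleration, the relationship above reduces to a single multiplication. A minimal Python sketch (the 400 h test duration and the factor Fa = 25 are illustrative assumptions, not values from the text):

```python
# Linear (true) acceleration: the time-to-failure observed under high
# stress, multiplied by the acceleration factor Fa, gives the equivalent
# time-to-failure at use conditions.
def use_time(stress_time: float, fa: float) -> float:
    """Equivalent time-to-failure at use conditions under linear acceleration."""
    return fa * stress_time

# Hypothetical accelerated test: a unit fails after 400 h at high stress;
# the acceleration factor Fa = 25 is an assumed illustrative value.
t_use = use_time(400.0, 25.0)
print(t_use)  # 10000.0 equivalent hours at use conditions
```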

A consequence of the linear acceleration relations shown above is that the shape parameter of the key life distribution models (Weibull and log-normal) does not change for units operating under different stresses. Plotted on probability paper, the data from units at different stress levels align roughly in parallel. Some parametric models have successfully been used as population models for time-to-failure over a vast range of failure mechanisms. Sometimes, probabilistic arguments based on the physics of the failure modes tend to justify the choice of model.

1.2. Fundamental expression of the calculation of reliability

Introducing reliability as an essential tool in the quality of a component begins at the stage of its very design. It is undeniable that reliability imposes itself as a discipline that rigorously analyzes faults, as it is based on experimental data. As reliability is directly linked to quality, the distributional laws used are often the same or related to each other. We know instinctively that components are numerous and complex in a mechanism (structures). As a result, calculations of reliability become less easily recognizable, as restrictive hypotheses mask them.

The fact that a component is of good quality does not indicate that it is reliable. There is no correlation between the two, just as reliability is not a synonym of quality. When it comes to reliability, we make use essentially of the failure rate, the probability of failure, safety margins, or reliability indicators. When testing quality, we essentially use machine capabilities [Cm, Cmk] or process capabilities [Cp, Cpk]. In the following section, we present the common criteria for reliability, where F(τ) indicates the probability-of-failure (or breakdown) function of a component or of a structure assembled in parallel or in series. It represents the probability of having at least one fault before time (τ).

F(τ) = 1 − R(τ) [1.2]

where R(τ) indicates the reliability [0 ≤ R(τ) ≤ 1] associated with F(τ). R(τ) is also known as the survival function and represents the probability of functioning (service) without fault during the period [0, τ]. Reliability is defined as the complement of the CDF; hence the second term of the following equation:

R(τ) = 1 − F(τ) [1.3]

The failure rate Z(τ) is in fact the ratio of the probability density of failure to the reliability: f(τ)/R(τ). It expresses the probability that a component will fail during (τ, τ + Δτ) under the condition that it has not failed before τ. To put it another way, it is the frequency of appearance of failures of a component. The expression of the failure rate is:

Z(τ) = f(τ)/R(τ) [1.4]

Average lifetime θ (average time until failure, mean time between failures [MTBF]) is:

θ = ∫₀^∞ R(τ) dτ [1.5]

The essential property of the useful-life period, according to an exponential distribution, is a constant failure rate λ:

Z(τ) = λ = constant [1.6]

MTBF θ is expressed as:

θ = 1/λ [1.7]

Probability density function f(τ) represents the probability of failure (fault) of a component at time τ. It is, in fact, the derivative of the CDF F(τ).

f(τ) = dF(τ)/dτ [1.8]

The distribution function (probability of failure, fault) F(τ) is written as:

F(τ) = ∫₀^τ f(u) du [1.9]

Reliability (probability of survival) R(τ) is written as:

R(τ) = 1 − F(τ) = ∫τ^∞ f(u) du [1.10]

The failure rate (fault) is represented as follows:

Z(τ) = f(τ)/[1 − F(τ)] [1.11]

Estimate: if τf is the cumulative duration for which an assembly of components has worked and κ is the number of faults observed, we propose the following estimator:

λ̂ = κ/τf [1.12]
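The natural estimator of the failure rate (observed faults divided by cumulative test time), together with the exponential relations of this section, can be sketched as follows; the trial figures (4 faults over 20,000 component-hours) are hypothetical:

```python
import math

def lambda_hat(kappa: int, tau_f: float) -> float:
    """Point estimate of the failure rate: observed faults / cumulative operating time."""
    return kappa / tau_f

def reliability_exp(tau: float, lam: float) -> float:
    """Exponential survival function R(tau) = exp(-lam * tau)."""
    return math.exp(-lam * tau)

# Hypothetical trial: kappa = 4 faults over tau_f = 20,000 component-hours.
lam = lambda_hat(4, 20_000.0)            # 2e-4 faults per hour
mtbf = 1.0 / lam                         # theta = 1/lambda = 5,000 h
r_1000 = reliability_exp(1_000.0, lam)   # survival probability at 1,000 h
print(lam, mtbf, round(r_1000, 4))
```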

Test:

Confidence interval if the trial is censored at the threshold {1 − (α1 + α2)}:

Confidence interval if the trial is truncated at the threshold {1 − (α1 + α2)}:

Using the statistical tables of Karl Pearson's χ² distribution (see Appendices, Volumes 2 and 3), for the censored trial we obtain the following:

The most frequently used distribution laws in our case studies are:

– Continuous uniform distribution, U (τ, α, β)
– Discrete uniform distribution U (k, α, β, n)
– Triangular distribution
– Beta distribution B (τ, p, q)
– Normal distribution (Laplace–Gauss)
– Log-normal distribution (Galton)
– Gumbel distribution (Emil Julius)
– Random variable according to one of Gumbel's distributions (E1 Maximum and/or E1 Minimum)
– Weibull distribution (with two parameters also known as E2 Min)
– Weibull distribution (with three parameters also known as E3 Min)
– Birnbaum–Saunders distribution (fracture by fatigue)
– Rayleigh distribution (Lord Rayleigh)
– Rice distribution (signal treatment)
– Cauchy distribution (Lorentz)
– Tukey-lambda distribution
– Binomial distribution (Bernoulli’s schema)
– Polynomial distribution
– Geometrical distribution
– Hypergeometric distribution (Pascal’s)
– Exponential distribution
– Double exponential distribution (Laplace)
– Logistic distribution
– Log-logistic distribution
– Poisson distribution
– Gamma distribution (Erlang)
– Inverse gamma distribution
– Frechet distribution

1.3. Continuous uniform distribution

[1.13]

1.3.1. Distribution function of probabilities (density of probability)

f(τ, α, β) = 1/(β − α) for α ≤ τ ≤ β; 0 otherwise [1.14]

Graph and calculation of f(τ, α, β)

Figure 1.1. Graph showing the function of the distribution of the continuous U distribution

1.3.2. Distribution function

F(τ, α, β) = (τ − α)/(β − α) for α ≤ τ ≤ β [1.15]

Graph and calculation showing the distribution function F(τ, α, β):

Inverse distribution function of cumulative probabilities is given in Figure 1.3:

Figure 1.4. shows random numbers between 0 and τ [rnd (τ)] which follow a continuous U distribution.

Figure 1.2. Graph showing the distribution function of the continuous U distribution

Figure 1.3.Graph showing the inverse distribution function of continuous U distribution

Figure 1.4. Graph of random numbers [rnd(τ)] between 0 and τ following a continuous U distribution

Random numbers (m) which follow a continuous uniform distribution:
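The density, distribution function, and inverse distribution function of the continuous uniform law can be sketched in pure Python; the bounds α = 2, β = 8 are illustrative assumptions:

```python
import random

def unif_pdf(tau: float, a: float, b: float) -> float:
    """Density of the continuous uniform distribution U(a, b): 1/(b - a) on [a, b]."""
    return 1.0 / (b - a) if a <= tau <= b else 0.0

def unif_cdf(tau: float, a: float, b: float) -> float:
    """Distribution function F(tau) = (tau - a)/(b - a), clipped to [0, 1]."""
    if tau < a:
        return 0.0
    if tau > b:
        return 1.0
    return (tau - a) / (b - a)

def unif_inv(p: float, a: float, b: float) -> float:
    """Inverse distribution function (quantile): a + p*(b - a)."""
    return a + p * (b - a)

a, b = 2.0, 8.0
print(unif_pdf(5.0, a, b))   # 1/6
print(unif_cdf(5.0, a, b))   # 0.5
print(unif_inv(0.5, a, b))   # 5.0 (median)
# rnd-style draws following a continuous U distribution:
samples = [unif_inv(random.random(), a, b) for _ in range(5)]
```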

1.4. Discrete uniform distribution (discrete U)

This is also a probabilistic distribution, one which assigns an identical probability (equiprobability) to each value of a finite set of possible values. It presents itself according to this schema: a random variable (RV) which can take (n) possible equiprobable values k1, k2, k3,…, kn (i.e. with equal probabilities) follows a uniform distribution. The probability of any value ki is equal to (1/n). The classic example is the throw of a die, where the probability of each score is 1/6. There are cases where the values of an RV following the discrete U distribution are real numbers. We qualify this in deterministic terms by the application of the distribution function, formalized below:

Support: k ∈ [α, α + 1, …, β – 1, β]

[1.16]

[1.17]

From this expression, we get H(τ − τi), the distribution function in steps (Heaviside steps). It is, in fact, a deterministic distribution centered at τ0, well known in (mechanical) physics: it represents the Dirac mass at τ0. The distribution function f(τ, α, β) (density of probability) of a discrete U distribution on the interval [α, β] is written as:

f(τ, α, β) = 1/(β − α + 1) for τ ∈ {α, α + 1, …, β}; 0 otherwise [1.18]

The distribution function F(τ, α, β) of the discrete U distribution on the interval [α, β] (cumulative probabilities) is then written as:

F(τ, α, β) = (⌊τ⌋ − α + 1)/(β − α + 1) for α ≤ τ ≤ β [1.19]
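The die example can be checked with a short sketch of the discrete uniform pmf and its step CDF (pure Python; the function names are ours):

```python
import math

def dunif_pmf(k: int, a: int, b: int) -> float:
    """Probability mass of the discrete uniform on {a, a+1, ..., b}: 1/n with n = b - a + 1."""
    return 1.0 / (b - a + 1) if a <= k <= b else 0.0

def dunif_cdf(tau: float, a: int, b: int) -> float:
    """Step CDF: (floor(tau) - a + 1) / (b - a + 1) on the support, 0 below, 1 above."""
    if tau < a:
        return 0.0
    if tau >= b:
        return 1.0
    return (math.floor(tau) - a + 1) / (b - a + 1)

# Classic die throw: discrete U on {1, ..., 6}, each face with probability 1/6.
print(dunif_pmf(3, 1, 6))   # 1/6
print(dunif_cdf(3, 1, 6))   # 3/6 = 0.5
```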

Parameters of the behavior of discrete U distribution:

Density of probability of the discrete U function:

[1.20]

Distribution function of the discrete U function:

[1.21]

1.5. Triangular distribution

The triangular distribution may be connected to its mode from either the maximum or the minimum side. It has two versions: a discrete distribution and a continuous distribution.

1.5.1. Discrete triangular distribution version

The discrete triangular distribution with positive integer parameter a is defined for any integer τ between −a and +a by:

P(τ) = (a + 1 − |τ|)/(a + 1)² [1.22]

1.5.2. Continuous triangular law version

The continuous triangular distribution on the support [α, β] with mode γ is defined by the density below on [α, β]. In many fields, the triangular distribution is considered a simplified version of the beta distribution.

1.5.3. Links with uniform distribution

Density function (mass function):

f(τ) = 2(τ − α)/[(β − α)(γ − α)] for α ≤ τ ≤ γ; f(τ) = 2(β − τ)/[(β − α)(β − γ)] for γ ≤ τ ≤ β [1.23]

CDF: formula and graphs:

F(τ) = (τ − α)²/[(β − α)(γ − α)] for α ≤ τ ≤ γ; F(τ) = 1 − (β − τ)²/[(β − α)(β − γ)] for γ ≤ τ ≤ β [1.24]

Figure 1.5. Respective cumulative distribution functions of the triangular distribution
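The triangular density and CDF on [α, β] with mode γ can be sketched as follows; a minimal pure-Python version, with illustrative numeric bounds (0, 10, mode 4):

```python
def tri_pdf(tau: float, a: float, b: float, c: float) -> float:
    """Density of the continuous triangular distribution on [a, b] with mode c (a < c < b)."""
    if tau < a or tau > b:
        return 0.0
    if tau <= c:
        return 2.0 * (tau - a) / ((b - a) * (c - a))
    return 2.0 * (b - tau) / ((b - a) * (b - c))

def tri_cdf(tau: float, a: float, b: float, c: float) -> float:
    """Cumulative distribution function of the triangular distribution."""
    if tau <= a:
        return 0.0
    if tau >= b:
        return 1.0
    if tau <= c:
        return (tau - a) ** 2 / ((b - a) * (c - a))
    return 1.0 - (b - tau) ** 2 / ((b - a) * (b - c))

a, b, c = 0.0, 10.0, 4.0
print(tri_pdf(c, a, b, c))   # peak density 2/(b - a) = 0.2
print(tri_cdf(c, a, b, c))   # (c - a)/(b - a) = 0.4
```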

1.6. Beta distribution

In probability as well as in descriptive statistics, the beta distribution belongs to a family of continuous probability laws defined on the interval (0, 1]. With its two distinct parameters (p and q), it is an interesting special case of the Dirichlet distribution. The main characteristics of the distribution are:

The excess kurtosis of the beta distribution is thus written as:

1.6.1. Function of probability density

The general formula for the probability density function of the beta distribution is written as:

f(τ, p, q) = τ^(p−1) (1 − τ)^(q−1)/B(p, q), 0 < τ < 1 [1.25]

The MathCAD distribution function dbeta(τ, p, q) gives results for treating the random variable (τ) directly.

[1.26]

[1.27]

The density of beta distribution can take different forms according to p and q:

Table 1.2. Density of beta distribution in different forms according to p and q

For τ in (0, 1] and for the distinct parameters (p and q) of the beta distribution, we propose:

The resulting graph from this allows us to read (and to see) the following pattern:

Figure 1.6. Graph showing the beta distribution according to shape parameters (p, q)

1.6.2. Distribution function of cumulative probability

F(τ, p, q) = (1/B(p, q)) ∫₀^τ u^(p−1) (1 − u)^(q−1) du [1.28]

where B(p, q) is the beta function.

F(τ, p, q) = Bτ(p, q)/B(p, q) = Iτ(p, q) [1.29]

where Bτ(p, q) is the incomplete beta function and Iτ(p, q) is the regularized incomplete beta function. The MathCAD CDF function pbeta(τ, p, q) directly provides the results for the random variable (τ).

The beta distribution is little used in mechanical reliability. We have deliberately not followed conventional formulae. The reader may refer to specialized manuals in this field for more detailed information.

Figure 1.7. Graph showing the cumulative beta distribution according to shape parameters (p, q)

Figure 1.8. Graph showing the inverse cumulative beta distribution according to shape parameters (p, q)
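For the inverse cumulative curve of Figure 1.8, SciPy's ppf (percent-point function) plays the corresponding role. By analogy with dbeta/pbeta/rbeta, the MathCAD name would be qbeta, but that name is an assumption here; parameter values are illustrative:

```python
from scipy.stats import beta as beta_dist

p, q = 2.0, 5.0   # illustrative shape parameters

# ppf (percent-point function) is the inverse CDF plotted in Figure 1.8:
# it returns the tau for which F(tau) equals the given probability
tau_med = beta_dist.ppf(0.5, p, q)   # the median of beta(2, 5)
print(tau_med)
print(beta_dist.cdf(tau_med, p, q))  # 0.5, since ppf inverts the CDF
```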

A vector of (m) random numbers following the beta distribution is written rbeta(m, p, q), where r stands for "random" and m is a positive integer.
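A sketch of the same generation step with NumPy (the values of m, p, q are illustrative, not from the text):

```python
import numpy as np

m, p, q = 10_000, 2.0, 5.0   # illustrative values
rng = np.random.default_rng(0)

# Generator.beta(p, q, size=m) plays the role of rbeta(m, p, q)
sample = rng.beta(p, q, size=m)

print(sample.min() > 0.0 and sample.max() < 1.0)   # True: draws stay in (0, 1)
print(sample.mean())   # close to the theoretical mean p/(p + q) = 2/7
```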

1.6.3. Estimation of the parameters (p, q) of the beta distribution

From the empirical distribution, we propose:

Variance: the method of moments gives the following estimates:
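A minimal sketch of the moment estimation, assuming the standard moment equations for the beta distribution (mean = p/(p + q), variance = pq/((p + q)²(p + q + 1)); the closed forms used here are an assumption, and the sample is synthetic):

```python
import numpy as np

# Method-of-moments sketch for (p, q), assuming the standard beta moments:
#   mean = p/(p + q),  var = p*q / ((p + q)**2 * (p + q + 1))
def beta_moments(sample):
    m, v = sample.mean(), sample.var(ddof=1)
    common = m * (1.0 - m) / v - 1.0     # = p + q
    return m * common, (1.0 - m) * common

rng = np.random.default_rng(1)
sample = rng.beta(2.0, 5.0, size=50_000)   # synthetic data with known (p, q)
p_hat, q_hat = beta_moments(sample)
print(p_hat, q_hat)   # close to the true (2, 5)
```

Solving the two moment equations for p and q gives p + q = m(1 − m)/v − 1, from which both estimates follow by multiplying by m and (1 − m).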

1.6.4. Distributions associated with the beta distribution

1.7. Normal distribution

The normal distribution is the best-known type of distribution. Ever since de Moivre (1738), it has been used in reliability and in quality control as a limit of the binomial distribution. We also use it as a model for the distribution of measurement errors around what metrology conventionally calls a "true" value. It also plays an important role in the asymptotic behavior of other probability distributions.

In reliability, Gauss's distribution is widely used to represent the distribution of the lifespans of components toward the end of their useful life (component fatigue). The explanation lies in the ever-increasing failure rate Z(τ) in that period. The literature suggests only using this distribution in reliability if the mean lifespan is greater than at least three times the standard deviation.

1.7.1. Arithmetic mean

For a grouping of (n) values by class of the frequency distribution, we propose the following as the arithmetic mean:

with (ni) the size of the class with central value xi. Where the (n) values (xi) are not grouped into classes of a statistical series, the corresponding ungrouped expression is used; the same applies to the expressions for the variance:

By analogy with the above, where the (n) values (xi) are not grouped by class in a statistical series, we will use this expression, with N the total size of the population (V):

For estimation from sample data, (n) represents the size of the sample when the data are not grouped by class. The corresponding calculations are shown below. Where the sample size does not exceed 15 values, we recommend correcting the calculations using Table 1.3 (from the literature):

Table 1.3. Table of values of βm (see Volumes 2 and 3, Appendices, Table A.1 and formula)

For a sample size < 15 values, we will calculate σ* = σ × βm. The centered, reduced normal variable is:

u = (x − μ) / σ [1.30]

[1.31]

The probability density function is:

f(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²)), −∞ < x < +∞ [1.32]
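The small-sample correction σ* = σ × βm and the centered reduced variable can be sketched numerically. All values below are hypothetical: the data are invented, and the βm value stands in for the entry that would be read from Table 1.3:

```python
import numpy as np

# Hypothetical sample of n = 7 (< 15) measurements; data for illustration only
data = np.array([251.0, 248.5, 250.2, 249.1, 250.8, 247.9, 250.5])

# beta_m would be read from Table 1.3; this value is purely hypothetical
beta_m = 1.10

sigma = data.std(ddof=1)        # sample standard deviation
sigma_star = sigma * beta_m     # corrected value sigma* = sigma * beta_m

# Centered reduced variable built from the corrected scale
u = (data - data.mean()) / sigma_star
print(sigma_star > sigma)   # True: the correction inflates the dispersion
print(u.mean())             # ~0 by construction (centering removes the mean)
```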

1.7.2. Reliability

Since the normal variable is defined on [−∞, +∞], the use of this distribution may seem questionable: a negative lifespan is physically impossible.

R(t) = 1 − F(t) = 1 − Φ((t − μ) / σ) [1.33]

The probability density function of the centered, reduced distribution is:

φ(u) = (1/√(2π)) exp(−u²/2) [1.34]

The CDF (cumulative probability) takes form [1.35], which represents the probability of failure:

F(t) = Φ((t − μ) / σ) [1.35]

The curves representing these functions are shown on the following graphs:

Figure 1.9. Graphs showing the cumulative distribution functions
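These reliability quantities can be evaluated numerically with SciPy's normal distribution. The wear-out parameters below are illustrative, not from the text; note that they respect the "mean lifespan greater than three standard deviations" guideline mentioned earlier:

```python
from scipy.stats import norm

# Illustrative wear-out model (not from the text): mu = 1000 h, sigma = 100 h,
# which respects the mu > 3*sigma guideline for using the normal law
mu, sigma = 1000.0, 100.0
t = 900.0

u = (t - mu) / sigma        # centered reduced variable
F = norm.cdf(u)             # cumulative probability of failure at time t
R = norm.sf(u)              # reliability R(t) = 1 - F(t), via the survival function
print(F)   # ≈ 0.1587
print(R)   # ≈ 0.8413
```

Using the survival function `sf` rather than `1 - cdf` avoids loss of precision deep in the upper tail.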

1.7.3. Stabilization and normalization of variance error

[1.36]

[1.37]

If comparing the residual dispersion against each of the independent variables, or against the predicted values, reveals no perceptible pattern, the variance of the error terms is likely constant. Comparing the residual dispersion with x1 (Figure 1.10), the points spread increasingly from left to right, indicating that the error (ε) increases with x1.

Figure 1.10. Distribution of errors (on the variance) in the normal distribution

On a similar graph for x2, the points (shown as lozenges in the graph above) display no clear pattern. However, a pattern is again evident on the graph of residual dispersion (circular points on the graph) for the predicted values:

Figure 1.11. Distribution of errors in the normal distribution

To obtain a valid regression, we must neutralize the increasing spread of the error by transforming the dependent variable. The variance-stabilizing transformations are presented below, in order of increasing severity.

y′ = √y ;  y′ = ln(y) ;  y′ = 1/y [1.38]

Above, we have recalculated the residuals against the predicted values to show the effect of the stabilization more clearly (see the graph on which the points are represented by small blue squares).

Figure 1.12. Calculation of residuals against the variance of the error
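The stabilization step can be sketched on toy data. The assumptions here are ours: a simple multiplicative-error model (so the spread grows with x1) and the usual ladder of transformations √y → log y → 1/y, of which the log is the one that fits this model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy heteroscedastic data (illustrative): multiplicative error, so the spread
# of y about the trend 5*x1 grows with x1
x1 = np.linspace(1.0, 10.0, 200)
y = 5.0 * x1 * np.exp(rng.normal(0.0, 0.2, size=x1.size))

resid_raw = y - 5.0 * x1                  # residuals before transformation
resid_log = np.log(y) - np.log(5.0 * x1)  # residuals after the log transform

# Compare the spread of the residuals on the lower and upper halves of x1
half = x1.size // 2
print(resid_raw[:half].std(), resid_raw[half:].std())  # second clearly larger
print(resid_log[:half].std(), resid_log[half:].std())  # roughly equal
```

For a multiplicative error, log(y) = log(5) + log(x1) + noise with constant spread, which is why the log transform stabilizes the residual dispersion here.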

It is noticeable that certain transformations are quite strong: they can actually accentuate the error. For example, the graph has the task of presenting a curve