This second book of a three-volume set on fracture mechanics completes the first volume with an analysis of goodness-of-fit tests suited to validating the justified use of the laws governing the behavior of the materials and structures under study. This volume focuses on the vast range of statistical distributions encountered in reliability. Its aim is to run statistical measurements, to present a report on enhanced measures in mechanical reliability, and to evaluate the reliability of repairable or non-repairable systems. To achieve this, the author presents a theoretical and practice-based approach to the following themes: failure criteria; applied Bayesian probability; Markov chains; Monte Carlo simulation; as well as many other solved case studies. This book distinguishes itself from other works in the field through the originality of its educational approach, which aims at helping practitioners in both academia and industry. It is intended for technicians, engineers, designers, students, and teachers working in the fields of engineering and vocational education. The main objective of the author is to provide an assessment of indicators of quality and reliability to aid decision-making. To this end, an intuitive and practical approach, based on mathematical rigor, is recommended.
Number of pages: 393
Publication year: 2013
Contents
Preface
Glossary
Chapter 1 Fracture Mechanisms by Fatigue
1.1. Introduction
1.2. Principal physical mechanisms of cracking by fatigue
1.3. Modes of fracture
1.4. Fatigue of metals: analytical expressions used in reliability
1.5. Reliability models commonly used in fracture mechanics by fatigue
1.6. Main common laws retained by fracture mechanics
1.7. Stress intensity factors in fracture mechanics
1.8. Intrinsic parameters of the material (C and m)
1.9. Fracture mechanics elements used in reliability
1.10. Crack rate (life expectancy) and s.i.f. (Kσ)
1.11. Elements of stress (S) and resistance theory (R)
1.12. Conclusion
1.13. Bibliography
Chapter 2 Analysis Elements for Determining the Probability of Rupture by Simple Bounds
2.1. Introduction
2.2. Second-order bounds or Ditlevsen’s bounds
2.3. Hohenbichler’s method
2.4. Hypothesis test, through the example of a normal mean with unknown variance
2.5. Confidence interval for estimating a normal mean: unknown variance
2.6. Conclusion
2.7. Bibliography
Chapter 3 Analysis of the Reliability of Materials and Structures by the Bayesian Approach
3.1. Introduction to the Bayesian method used to evaluate reliability
3.2. Posterior distribution and conjugate models
3.3. Conditional probability or Bayes’ law
3.4. Prior and posterior distributions
3.5. Reliability analysis by moments methods, FORM/SORM
3.6. Control margins from the results of fracture mechanics
3.7. Bayesian model by exponential gamma distribution
3.8. Homogeneous Poisson process and rate of occurrence of failure
3.9. Estimating the maximum likelihood
3.10. Repair rate or ROCOF
3.11. Bayesian case study applied in fracture mechanics
3.12. Conclusion
3.13. Bibliography
Chapter 4 Elements of Analysis for the Reliability of Components by Markov Chains
4.1. Introduction
4.2. Applying Markov chains to a fatigue model
4.3. Case study with the help of Markov chains for a fatigue model
4.4. Conclusion
4.5. Bibliography
Chapter 5 Reliability Indices
5.1. Introduction
5.2. Design of material and structure reliability
5.3. First-order reliability method
5.4. Second-order reliability method
5.5. Cornell’s reliability index
5.6. Hasofer-Lind’s reliability index
5.7. Reliability of material and structure components
5.8. Reliability of systems in parallel and in series
5.9. Conclusion
5.10. Bibliography
Chapter 6 Fracture Criteria Reliability Methods through an Integral Damage Indicator
6.1. Introduction
6.2. Literature review of the integral damage indicator method
6.3. Literature review of the probabilistic approach of cracking law parameters in region II of the Paris law
6.4. Crack spreading by a classical fatigue model
6.5. Reliability calculations using the integral damage indicator method
6.6. Conclusion
6.7. Bibliography
Chapter 7 Monte Carlo Simulation
7.1. Introduction
7.2. Simulation of a singular variable of a Gaussian
7.3. Determining safety indices using Monte Carlo simulation
7.4. Applied mathematical techniques to generate random numbers by MC simulation on four principal statistical laws
7.5. Conclusion
7.6. Bibliography
Chapter 8 Case Studies
8.1. Introduction
8.2. Reliability indicators (λ) and MTBF
8.3. Parallel or redundant model
8.4. Reliability and structural redundancy: systems without distribution
8.5. Constant failure rate
8.6. Reliability applications in cases of redundant systems
8.7. Reliability and availability of repairable systems
8.8. Quality assurance in reliability
8.9. Birnbaum–Saunders distribution in crack spreading
8.11. Simulation methods in mechanical reliability of structures and materials: the Monte Carlo simulation method
8.12. Elements of safety via the couple: resistance and stress (R, S)
8.13. Reliability trials
8.14. Reliability application on speed reducers (gears)
8.15. Reliability case study in columns under stress of buckling
8.16. Least-squares fitting for nonlinear functions
8.17. Conclusion
8.18. Bibliography
Appendix
Index
First published 2013 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
www.iste.co.uk
John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com
© ISTE Ltd 2013
The rights of Ammar Grous to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2012949779
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN: 978-1-84821-441-5
Preface
This book is intended for technicians, engineers, designers, students, and teachers working in the fields of engineering and vocational education. Our main objective is to provide an assessment of indicators of quality and reliability to aid in decision-making. To this end, we recommend an intuitive and practical approach, based on mathematical rigor.
The first part of this book presents the fundamental basis of data analysis, both in quality control and in studying the mechanical reliability of materials and structures. Laboratory and workshop results are discussed in accordance with the technological procedures inherent to the subject matter. We also discuss and interpret the standardization of manufacturing processes as a causal link with geometric and dimensional specifications (GPS: Geometrical Product Specification). This is, moreover, the educational novelty of this work in comparison with other commendable publications consulted.
We discuss many laboratory examples, thereby covering a new, industrial organization of work. We also use mechanical components from our own real mechanisms, built and designed in our production labs. Finite element modeling is thus applied to real machined parts, inspected and welded in a dimensional metrology laboratory.
We also discuss mechanical component reliability. Since statistics are common to both this field and quality control, we will simply mention reliability indices in the context of using the structure for which we are performing the calculations.
Scientists from specialized schools and corporations often take an interest in the quality of measurement, and thus in the measurement of uncertainties. So-called cutting-edge endeavors such as the aeronautics, automotive, and nuclear industries, to mention but a few, place an increasing emphasis on measurement accuracy. This text's educational content stands out in the following respects.
The fracture behavior of structures is often characterized (in linear mechanics) by a local variation of the material's elastic properties. This inevitably leads to sizing calculations which seek to secure the structures built from these materials. Much work has been, and still is, conducted in a wide range of disciplines, from civil engineering to the various branches of mechanics. Here, we do not consider continuum mechanics, but rather probabilistic laws of cracking. Certain statistical distribution laws recur systematically, the better to approach reliability.
Less stringent goodness-of-fit tests would too easily confirm the crack propagation hypothesis. In fields where safety is a priority, such as medicine (surgery and biomechanics), aviation, and nuclear power plants, to mention but three, theorizing unverifiable concepts would be unacceptable. The relevant reliability calculations must therefore be as rigorous as possible.
Defining safety coefficients is an important (even major) element of structure sizing. Such definitions are costly, and do not really offer any guarantee on safety predictions (unlike security predictions). Today, the interpretation and philosophy of these coefficients are reinforced by increasingly accurate probabilistic calculations. Well-developed computer tools greatly reduce the time and effort of calculation. Thus, we will use software commonly found in various schools (Autodesk Inventor Pro and ANSYS for modeling and design; MathCAD, GUM, and COSMOS for quality control, metrology, and uncertainty calculations).
Much work has been done to rationalize the concept of applied reliability; however, no “unified method” between the mechanical and statistical interpretations of rupture has yet been defined. Some of the many factors for this non-consensus are unpredictable events which randomly create the fault, its propagation, and the ensuing damage. Many researchers have worked on various random probabilistic and deterministic methods. This resulted in many simulation methods, the most common being the Monte Carlo simulation.
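As a minimal sketch of the Monte Carlo idea mentioned above (not taken from the book), one can estimate a failure probability P(R < S) for a normally distributed resistance R and stress S; the means and standard deviations below are hypothetical illustration values.

```python
import math
import random

# Hedged Monte Carlo sketch: estimate the failure probability P(R < S)
# for hypothetical normal resistance R and stress S (values are illustrative).
random.seed(42)
mu_R, sd_R = 300.0, 20.0   # resistance (MPa), assumed values
mu_S, sd_S = 200.0, 30.0   # stress (MPa), assumed values

n = 200_000
failures = sum(random.gauss(mu_R, sd_R) < random.gauss(mu_S, sd_S) for _ in range(n))
p_f = failures / n

# Analytic check: R - S is normal, so p_f = Phi(-beta) with the
# Cornell-type reliability index beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2).
beta = (mu_R - mu_S) / math.sqrt(sd_R**2 + sd_S**2)
print(f"beta ≈ {beta:.2f}, estimated p_f ≈ {p_f:.4f}")
```

The same sampling scheme generalizes to non-normal variables, which is precisely where simulation becomes preferable to closed-form calculation.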
In this book, we present some documented applied cases to help teachers succinctly present probabilistic problems (reliability and/or degradation). The intuitive approach plays an important part in our problem-solving methods, and making this humble contribution is among the main goals of this book. Many commendable works and books have treated reliability, quality control, and uncertainty perfectly well, but as separate entities. Our task here is to verify measurements and ensure that the measurand is well taught. As Lord Kelvin said, "if you cannot measure it, you cannot improve it". Indeed, measuring identified quantities is an unavoidable part of laboratory life. Theoretical confirmation of physical phenomena must go through measurement reliability and its effects on the functions attributed to the material and/or structure, among other things.
Mechanical models (rupture criteria) of continuum mechanics discussed in Chapter 10 make up a reference pool of work used here and there in our case studies, such as the Paris–Erdogan law, the Manson–Coffin law, S-N curves (Wöhler curves), the Weibull law (solid mechanics), etc. We could probably (and justly) wonder in what way this chapter is appropriate in a work dedicated to reliability. The reason is that these criteria are deliberately targeted: we use them here to spare the reader having to "digress" into specialized books.
Establishing confidence in our results is critical. Measuring a characteristic does not simply mean finding the value of the characteristic. We must also give it an uncertainty so as to show the measurement’s quality. In this book, we will show educational laboratory examples of uncertainty (GUM: Guide to the Expression of Uncertainty in Measurement).
Firstly, why publish a book which covers two seemingly distinct topics (quality control, and reliability including uncertainties)? Because both fields rely on probabilities, statistics, and a similar method of describing their hypotheses. In quality control, the process is often already known, or appears to be under control beforehand; hence the intervention of capability indices (SPC: statistical process control). Furthermore, the goal is sometimes competitiveness between manufactured products, with safety appearing only in secondary terms. Indeed, it is in terms of maintainability and durability that quality control joins reliability as a means of guaranteeing the functions attributed to a mechanism, a component, or even an entire system.
When considering the mechanical reliability of materials and structures, the reliability index is inherently a safety indicator. It is often very costly in terms of computation time and very serious in matters of consequence. The common aspect between both fields is still the probabilistic approach. Probabilities and statistical–mathematical tools are necessary to supply theoretical justifications for the computational methods. Again, this book intends to be pragmatic and leaves reasonable room for the intuitive approach of the hypotheses stated here and there.
Finally, we provide a succinct glossary to smooth the understanding of dimensional analysis (VIM: International Vocabulary of Metrology) and structural mechanical reliability. This educational method allows us to "agree" on the international language used to define the measurand, the reliability index, or a succinct definition of the capability indicators widely used in quality control.
Component reliability (for both materials and structures) is absolutely essential to safety and performance.
Reliability is used in many branches of engineering, from civil engineering to mechanical and electrical engineering: it is thus manifold. It often aims at estimating functions at the various phases of the lifecycle of the components under study. Reliability users increasingly depend on reproducible software, though they struggle to determine whether the component is active or passive, the size of the experience feedback and its imperative validation, the phenomena which tend to decrease the likelihood of failure or the reliability index, etc.
This book uses various methods to estimate operational or target reliabilities. The apparent controversy between the frequentist and Bayesian probabilistic approaches is, in our humble opinion, irrelevant if we know how to set the problem a priori. Setting bounds on the likelihood of rupture (failure or even degradation) is worth doing. For our part, we prefer calculating rupture through the integral damage indicator, made explicit by Madsen's work.
Just as estimating reliability allows us to understand the past in order to better anticipate the future, we must show pragmatism in measuring the factors responsible for likely rupture. Since measurement is always inherently flawed and uncertain, we must include uncertainty calculations in our reliability methods. Without such calculations, our results would be open to doubt.
First, vocabulary: reliability has its own specific terminology (see glossary) which, as in metrology, affects the terms of the decision. Thus, we will abide by the EN 13306 standard (see Table A.45 of the Appendix of Volume 3). The definitions of reliability, durability, failure, and degradation can be found in the appendix and glossary.
Reliability data is necessary to:
Analysis and validation are done by analyzing the experience feedback with respect to critical failure criteria, such as failure modes, the mean time between failures (MTBF), probability of failure on demand (Ps) and its reliability index according to a “selected criterion”, the repair and/or material unavailability time, the confidence intervals, or even the sample size.
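The failure-rate and MTBF indicators mentioned above can be sketched numerically; the times between failures below are hypothetical values, not data from the book.

```python
# Illustrative sketch (hypothetical data): estimating MTBF and the failure
# rate lambda from a record of times between failures, in hours.
times_between_failures = [120.0, 95.0, 150.0, 110.0, 125.0]

mtbf = sum(times_between_failures) / len(times_between_failures)  # mean time between failures
failure_rate = 1.0 / mtbf  # under a constant-rate assumption, lambda = 1 / MTBF

print(f"MTBF = {mtbf:.1f} h, lambda = {failure_rate:.5f} per hour")
```

A confidence interval on the MTBF would normally accompany such a point estimate, using the sample size as discussed above.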
We note that reliability is usually taken into account from design, based on the specifications. It is calculated and compared to the allocated reliability (reliability demand). It includes all phases of life (design, manufacture, and development trials).
During exploitation (operation), the planned reliability is calculated and compared to a threshold (e.g. failure rate) through physical calculations, with the intention of extending it beyond the lifespan (cycle) planned at design.
Reliability is mostly measured, therefore making its metrology a serious business; hence, the calculation of its uncertainties including instrument and measurement equipment calibration.
Among the various difficulties bearing on the reliability function are, among others, the type of component (repairable or non-repairable, with active or passive redundancy) and even certain controversial methods or models (frequentist/Bayesian).
The component can be active:
The component can be active and passive:
Whether we are physicists, applied mathematician-statisticians, or engineers, "controversies" sometimes appear between schools of thought over the method or model used (e.g. frequentist/Bayesian). In this book, we will try to remain pragmatic and synthesize our opinions.
From a physicist’s perspective, the experimental conditions of data gathering are known, and their uncertainties well bounded. This so-called frequentist analysis is based only on objective data, because they were measured correctly. We know that measurements are costly and time-consuming. If the data from “our physicist’s” experiments are insufficient, if the process turns out to be non-repetitive, or if the number of parameters to estimate is high, the frequentist approach falsely introduces confirmation bias into the analysis. The paradox is that the calculations are correct, but they only answer a purely mathematical demand. In other words, the mathematics are correct but superficially grafted onto an inappropriate case; hence a rejection of the solution and the birth of controversy.
The engineering approach is attractive due to its “applied arts and crafts” aspect (i.e. learning). Its analysis includes the knowledge that we must apply an “a priori” law, which by definition must be biased. Without rejecting the Bayesian approach, this is where we favor the engineering approach, because it uses decision-making tools for which preferences are clearly expressed. At the end of this approach, the uncertainty function greatly helps in making the decision.
Finally, it is important to specify and frame the problem well: its context, hypotheses, available data, etc. Simulations (using software) are a helpful educational tool, but they should not be treated as replacements for real experiments. Relying on real data from experience feedback, with known collection conditions, is more suitable. Indeed, experiments and “real” data are a strategic necessity for preemptive validation.
In this book, we show (see Chapters 1 and 2) the qualitative analysis elements preceding quantitative, deterministic, and probabilistic analysis. The laws and tests shown in the first two chapters of Volume 1 are required reading for any probabilistic study of physical phenomena, and it falls to us to be pragmatic.
Regardless of the approach used, we must analyze the sensitivity of factors and always apply common sense. Among many other methods of analysis, reliability is a tool for understanding the past. For example, many failures, degradations, and ruptures or ruin (damage) cannot be explained by deterministic models alone: aging, degradation mechanisms, models and laws (see Chapter 1, “Fracture mechanisms by fatigue”), etc. Studying reliability allows us, through a sound knowledge of physical phenomena, to find the components and subcomponents to examine critically, the important variables (initial faults, stress intensity factor s.i.f., etc.) whose uncertainties should be reduced, and so on.
Reliability anticipates and prepares for the future in order to improve performance and safety by optimizing exploitation strategies.
However, reliability alone cannot replace an experimental understanding of physical phenomena.
A. GROUS
November 2012
Glossary
Hard materials also show good abrasion resistance; in other words, they are not easily worn down by friction. In practice they are harder to grind down.
“Acceptable risk” describes the structural and non-structural measures to be put in place to reduce probable damage to a reference level. A risk scale is often associated with dangers in order to classify them in order of seriousness.
Availability is a (dimensionless) attribute of dependability. It is the capacity of a system to properly deliver its service (quality) when the user needs it. Availability is a unitless measure; it corresponds to the ratio of uptime to the total execution time of the system.
Imaginary cause of something that occurs for no apparent or explicable reason (dictionary definition).
(Bayesian) probability of a consequence when the causal event will definitely occur. If we suppose that a fracture has reached the limit suggested by a pre-established hypothesis, the probability of cracking is a conditional probability.
Maintenance performed when a breakdown is detected, aimed at restoring a product to a state where it can fulfill its required function.
This denotes the ability of a material to withstand damage from the effects of the chemical reaction of oxygen with the metal. A ferrous metal which is resistant to corrosion does not rust.
Irreversible evolution of one or more characteristics of a product related to time, to the duration of use, or to an external cause: alteration of function, constant phenomenon, physical aging.
The property which enables users to justifiably place their faith in the service provided to them: reliability, availability, safety, maintainability, security.
When a material is heated, it expands slightly; this is called dilation. Conversely, if it shrinks due to cold, this is a contraction. The levels of dilation and contraction of a metal affect its weldability. The more the metal expands or contracts, the greater the risk of cracks or deformations appearing.
Integral function of the probability density (or cumulative probability function), calculated in order of ascending values of the random variable. It expresses the probability of the random variable assuming a value less than or equal to a given value.
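As a hedged illustration of this definition (not from the book), the distribution function of the exponential law often used in reliability is F(t) = P(T ≤ t) = 1 − e^(−λt); the failure rate λ below is a hypothetical value.

```python
import math

# Hedged sketch: cumulative distribution function of the exponential law,
# F(t) = P(T <= t) = 1 - exp(-lambda_ * t). lambda_ is a hypothetical rate.
lambda_ = 0.01  # per hour, assumed for illustration

def cdf(t):
    return 1.0 - math.exp(-lambda_ * t)

# F is non-decreasing and approaches 1 as t grows:
assert cdf(100.0) < cdf(200.0) < 1.0
print(round(cdf(100.0), 4))  # → 0.6321, probability of failure before 100 h
```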
Represents the ability of a metal to be deformed without breaking. It can be stretched, elongated, or subjected to torsion forces. Ductile materials are difficult to break because the cracks or defects created by a deformation do not easily propagate.
The ability of a product to perform its required function, in given conditions of use and maintenance, until a critical state is reached.
Ability of a material to return to its original form after a deformation.
Alteration or suspension of the ability of a system to perform its required function(s) to the levels of performance defined in the technical specifications.
Fault tolerance is implemented to detect and handle errors.
Logical diagram using a tree structure to represent the causes of failures and their combinations leading to a feared state (Bayes). Fault trees enable us to calculate the unavailability or the reliability of the system model.
A method for systematic risk analysis of the causes and effects of failures that might affect the components of a system. FMECA analyzes the seriousness of each type of failure. It enables us to evaluate the impact of such failures on the reliability and safety of the system.
Fragility (brittleness) describes the characteristic of a metal that breaks easily on impact or under deformation. It deforms little or not at all, and is easily broken.
The ability of a body to resist penetration by another body harder than it. It is also characterized by its scratch resistance.
Describes any event, unpredictable phenomenon, or human activity which would result in the loss of human lives, or damages to commodities or the environment.
The HAZ is the region of the base metal that was not melted during the welding process. Metallurgists usually define the HAZ as the area of base material whose microstructure and properties have been altered by welding or by heat.
One of the aspects of dependability. The maintainability of a system expresses its capacity for repair and evolution, with maintenance supposedly completed under certain conditions with prescribed procedures and means.
A characteristic which permits the metal to be molded. It is the relative resistance of a metal subjected to compression forces. The malleability of a material increases with increasing temperature.
Used to evaluate the dependability of systems in a quantitative manner, this technique is based on the hypothesis that failure and repair rates are constant and that the stochastic process modeling the system's behavior is Markovian (a memoryless process). When the space of potential states of the system is a discrete set, the Markovian process is called a Markov chain.
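A minimal sketch of this technique (with hypothetical per-step probabilities, not values from the book) is a two-state chain for a repairable component, iterated to its stationary distribution:

```python
# Illustrative two-state Markov chain: state 0 = working, state 1 = failed.
# p = per-step failure probability, r = per-step repair probability
# (both hypothetical illustration values).
p, r = 0.01, 0.10

P = [[1 - p, p],    # from working: stay working, or fail
     [r, 1 - r]]    # from failed: get repaired, or stay failed

state = [1.0, 0.0]  # start in the working state
for _ in range(10_000):  # iterate toward the stationary distribution
    state = [state[0] * P[0][0] + state[1] * P[1][0],
             state[0] * P[0][1] + state[1] * P[1][1]]

availability = state[0]        # long-run fraction of time in the working state
print(round(availability, 4))  # analytic value: r / (p + r) ≈ 0.9091
```

The constant-rate hypothesis is what makes the closed form r / (p + r) available; non-constant rates would require a more general model.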
A measuring instrument that reproduces or supplies, permanently during its use, quantities of one or more given kinds, each with an assigned value.
The science of measurements and its different applications, which encompasses all theoretical and practical aspects of measuring, regardless of the uncertainty of the measurement or the domain to which it relates.
The quantity intended to be measured.
Proximity between a measured value and the true value of a measurand.
Measurement accuracy is not a quantity and is not expressed numerically. A measurement is sometimes said to be more accurate if it offers a smaller measurement uncertainty.
Although linked to the concepts of correctness and fidelity, it is better not to use the term “measuring accuracy” for measuring correctness or the term measuring fidelity for measuring accuracy.
Measuring accuracy is occasionally associated with the proximity between the measured values attributed to the measurand.
Usually a device used for making measurements, on its own or possibly in conjunction with other devices.
This is measuring fidelity under a set of repeatability conditions.
This is the measuring fidelity according to a set of reproducibility conditions.
Non-negative parameter which characterizes the dispersion of values attributed to a measurand, arising from information used according to the method (e.g. A or B of the GUM).
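A hedged sketch of a GUM Type A evaluation (the readings below are hypothetical): the standard uncertainty of the mean of n repeated readings is the experimental standard deviation divided by √n.

```python
import math
import statistics

# Hedged GUM Type A sketch: standard uncertainty of the mean of n repeated
# readings is u = s / sqrt(n). The readings are hypothetical values in mm.
readings = [10.02, 10.05, 9.98, 10.01, 10.04]

n = len(readings)
mean = statistics.mean(readings)
s = statistics.stdev(readings)   # experimental standard deviation (n - 1 divisor)
u = s / math.sqrt(n)             # Type A standard uncertainty of the mean

print(f"mean = {mean:.3f} mm, u = {u:.4f} mm")
```

A Type B evaluation would instead derive u from other information (calibration certificates, instrument class), then combine both contributions.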
Method for identifying and evaluating hazards, their causes, their consequences and the seriousness of these consequences. The aim of this analysis is to determine the appropriate methods and corrective actions to eliminate or control dangerous situations or potential accidents.
To avoid loss of function; thus, it is a probabilistic notion, one of anticipation and prediction. Such maintenance is performed at predetermined intervals, in accordance with prescribed criteria, intended to reduce the probability of failure or degradation of the function of a product.
Statistical concept which can either express a degree of confidence or a measurement of uncertainty (subjective probability) or be taken as the limit of a relative frequency in an infinite series (statistical probability).
This is a function describing the relative likelihood of a random variable assuming a particular value. It assigns a probability to each value of a random variable.
Process in which the result varies even if the input data set remains identical (a protocol leads to different results).
The reliability of a system (work) is its aptitude to meet its design objectives over a specified period of time, in the environmental conditions to which it is subject. It is based on the probabilities used to evaluate it.
Reliability is one of the aspects of dependability. It corresponds to the continuity of service that the system must provide to its users, with the system being considered as irreparable. Any accidental failure is taken into account, regardless of its severity. Reliability measures the rate of failure, the inverse of the MTTF (mean time to failure).
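The link between failure rate and MTTF stated above can be sketched for the constant-rate (exponential) case; the MTTF value below is hypothetical.

```python
import math

# Minimal sketch: for a non-repairable component with constant failure rate,
# lambda = 1 / MTTF and the survival (reliability) function is R(t) = exp(-t / MTTF).
# The MTTF below is a hypothetical value.
mttf = 5000.0        # hours, assumed
lam = 1.0 / mttf     # failure rate per hour

def reliability(t):
    return math.exp(-lam * t)

print(round(reliability(mttf), 3))  # at t = MTTF, R = e^-1 ≈ 0.368
```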
Risk is “a more or less predictable potential danger”, or in other words a drawback which is more or less probable to which we are exposed. The scientific definition of risk involves an aspect of hazard and an aspect of loss, both expressed as probabilities.
Systems inevitably contain design errors, regardless of the amount of validation work done. The “zero error” criterion is not a realistic goal, in view of the development costs it would entail. Thus, for so-called critical systems, it is important to evaluate the risks for users using methods such as the following:
FMECA: failure mode, effects, and criticality analysis.
SEEA: software error effects analysis.
PHA: preliminary hazard analysis.
Procedure to determine the probability of a hazard occurring and its possible consequences.
This approach is mainly used in the oil, nuclear, and rail transport sectors. In practice, this procedure facilitates the monitoring of studies.
We distinguish between safety and security. Thus:
Safety guards against catastrophic failures, for which the consequences are unacceptable in relation to the risk.
Security relates to the prevention of unauthorized access to information.
Software inevitably contains design errors, no matter how strict the rules of design and validation. The ability of a software suite to provide acceptable service in spite of its residual errors defines its reliability.
The s.i.f. (ΔK) is a function of the stress, crack size, and crack shape. Stress intensity factors do not have variability; they have uncertainty and modeling errors. The crack shape may be unknown and approximated by a semicircle.
The ability of materials to resist shock without breaking or chipping.
Trend tests are used in reliability to obtain indicators of reliability, from data on failures, and determine fluctuations in reliability over time.
An event that should not occur or which should be improbable in view of the objectives in terms of dependability.
Real scalar value defined and adopted by convention, to which we can compare any other similar value to express the ratio between the two values as a number.
Property of a phenomenon, body, or substance, expressed quantitatively by a number and a reference.
CITAC
Cooperation on International Traceability in Analytical Chemistry
CSA
Canadian Standards Association
EA
European Cooperation for Accreditation
Eurachem
Focus for Analytical Chemistry in Europe
EUROLAB
European Federation of National Associations of Measurements, Testing and Analytical Laboratories
GUM
Guide to the expression of Uncertainty in Measurement (the reference document recognized by the CSA, EUROLAB, Eurachem, and EA)
IEC
International Electrotechnical Commission
ILAC
International Laboratory Accreditation Cooperation
ISO
International Organization for Standardization
S (or σ)
Standard deviation
SAS
Le Service d’Accréditation Suisse
(Swiss Accreditation Service)
U
Uncertainty
VIM
International Vocabulary of basic and general terms in Metrology
1 For statistical terminology refer to ISO 5725-1:1994 and ISO 5725-2:1994.
In the not so distant past, we often built not from precise calculations but by intuition. Carpenters did not question the resistance of the wood they used to build their ships. However, there is no doubt that prior calculation is necessary if we are to combine safety with economy in our engineering works and professional projects. That is not to say that we can be one hundred percent sure about calculations, because they are merely the product of transforming the figures we put in. The figures themselves can be marred by various errors, or fail to correspond with reality. Moreover, if we forget to calculate a particular part of the problem, no automatic mechanism signals this omission. We must therefore rely on calculations to obtain a satisfactory level of safety, bearing in mind the imprecision of the figures, the irregular behavior of constructions, and even the defects of theoretical hypotheses.
The main problem, then, is to study how the stability of a construction is modified by the random character of the variables that govern it. We will first point out the importance of the proper use of materials and their propensity to crack. This chapter presents several important points concerning the analysis of cracking factors. The examples are based on a law renowned for its “simplicity”, but which is representative of crack propagation in zone II (see Figure 1.18). For a deeper understanding of the behaviors involved in the various fracture mechanisms, the reader is referred to specialized works on continuum mechanics. Our choice of welded structures is explained by their practical importance in metallic works and installations (offshore, building, cars, and other devices assembled by welding). Welded structures are also particularly sensitive to failure (damage) by fatigue at the notch stemming from potential penetration lacunae (L) located at the root of the weld bead.
Fatigue, as a succession of mechanisms, constitutes a process (distortions, loading) that modifies the properties of a material. It causes cracks which, over time, lead toward fracture of the material and/or the structure. Although the stress range is smaller than the tensile strength, it nevertheless has a considerable influence on the reliability of the structure. The stages that unfold over time, from activation through slow propagation to final fracture, are used to predict the behavior of the structure, and are taken into account by most cracking models. This is, in fact, why we thought this chapter would be useful in a work dedicated to reliability and quality control. In fatigue, damage occurs in the zones where the alternating stress is most intense: various cavities, notches, weld blowholes, strong heterogeneity of the material, etc.
Moreover, microscopic examination of the fracture shows that the typical facies run parallel to the crack propagation, followed by a tear corresponding to the final fracture. The greater part of the life expectancy (slow to moderate crack propagation) corresponds to the activation of the crack; the remaining life is relatively short. The problem lies with activation: in this phase, the material sustains damage that cannot be detected by the naked eye. Since the structure is not constantly under the microscope, it is beneficial to predict these phenomena using reliability calculations; it is this link between cause and effect that justifies them. Cracking, characterized by the stress intensity factor (s.i.f., ΔK), starts at the foot of the weld bead [MAD 71, WAT 73, LAS 92]. The structure remains sensitive in terms of its resistance, presenting a high risk in fatigue.
This section is intended to support the calculation of reliability indexes. It has been demonstrated experimentally that the presence of a crack in a part (structure) considerably modifies its resistance [GRO 98, LAS 92]. We also know that a crack can become unstable during loading: it may propagate by increasing increments before a brutal fracture occurs. To evaluate the residual strength of a cracked component, fracture mechanics should be employed. Calculations on cracked solids treat the crack (sometimes microscopic) as a surface discontinuity with forced decohesion between neighboring atoms. Among the numerous studies on the topic, the work of Griffith [GRI 21], who pioneered the model of crack resistance, is the most important; his original paper concerned brittle materials (e.g. glass). Irwin et al., in 1948, applied Griffith’s work to solid components (structures). The following chart provides a short overview of the work on cracked components.
Figure 1.1. Simplified illustration of brittle fracture in mechanics
Estimates of fatigue life are based on rigorous calculations, hence the use of the finite element method (Figure 1.4). There are, of course, other analytical methods for simple cases (boundary integral equations), as well as experimental approaches (photoelasticity, extensometry). To determine the number of cycles to fracture and the crack growth rate da/dN, many laboratory tests have been carried out on smooth test pieces under periodic loading. The literature confirms that, in traction, each quarter-cycle of the test correlates with the maximum stress. Wöhler’s curve can then be used to relate the alternating stress to the number of cycles to fracture, from which the load ratio (R) gives the quotient of the minimum stress to the maximum stress.
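The cycle quantities mentioned above are easily computed. The following is a minimal sketch in Python; the function name and numerical values are illustrative, not from the original text:

```python
def cycle_parameters(sigma_min, sigma_max):
    """Characterize a constant-amplitude loading cycle (stresses in MPa):
    load ratio R = sigma_min / sigma_max, stress range, and mean stress."""
    R = sigma_min / sigma_max
    delta_sigma = sigma_max - sigma_min   # stress range
    sigma_mean = 0.5 * (sigma_max + sigma_min)
    return R, delta_sigma, sigma_mean

# Fully reversed loading: sigma_min = -100 MPa, sigma_max = +100 MPa
R, delta_sigma, sigma_mean = cycle_parameters(-100.0, 100.0)
```

Fully reversed loading corresponds to R = −1, and pulsating traction to R = 0.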
Resistance to fatigue is often modified by a host of factors, such as concentration of stress, temperature, loading, the topography of the surface (rugosity), and random phenomena (wind, ice, waves, etc.). This inevitably leads us to additive considerations of conventional and classic calculations for the resistance of materials. It now becomes even more necessary to consider the statistical aspect of the test results for fatigue. For example:
At first glance, the problem of the depth of the initial crack (a0) is simple. It becomes complicated when the variable representing the initial crack not only remains random but is also dependent on other parameters of the crack law:
[1.1] da/dN = C·(ΔK)^m
where:
da/dN is the crack growth rate per cycle (mm/cycle);
C and m are intrinsic parameters of the material (dimensionless);
ΔK is the s.i.f. range (MPa·√m, comparable to the tenacity).
[1.2] ΔK = ξ(a)·Δσ·√(πa)
where:
Δσ is the stress amplitude in the normal direction of the crack (MPa);
a (or a0/T) is the crack size (mm);
ξ(a) is the geometry-correction factor (form factor).
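Relations [1.1] and [1.2] can be combined to estimate a propagation life by numerical integration. The sketch below, in Python, uses illustrative values of C, m, Δσ and of the initial and critical crack sizes; none of these numbers are taken from the original text:

```python
import math

def paris_life(a0, ac, delta_sigma, C, m, xi=lambda a: 1.0, steps=10000):
    """Cycles to grow a crack from a0 to ac (in metres) by numerically
    integrating Paris' law [1.1], da/dN = C * (dK)^m, with the s.i.f.
    range [1.2], dK = xi(a) * delta_sigma * sqrt(pi * a), in MPa*sqrt(m)."""
    N = 0.0
    da = (ac - a0) / steps
    a = a0
    for _ in range(steps):
        dK = xi(a) * delta_sigma * math.sqrt(math.pi * a)
        N += da / (C * dK ** m)   # dN = da / (da/dN)
        a += da
    return N

# Assumed values for a structural steel: C = 3e-12, m = 3, stress range
# 100 MPa, crack growing from 1 mm to a critical size of 20 mm.
cycles = paris_life(a0=1e-3, ac=20e-3, delta_sigma=100.0, C=3e-12, m=3.0)
```

With ξ(a) = 1 and m ≠ 2, the integral also admits a closed form that can be used to check the numerical result.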
The questions we ask, the traditional objectives of fracture mechanics, can be summarized as follows:
It is tempting to stay in the comfortable domain of elasticity (E), because it is so well researched; this, of course, is not always possible. Whether a part deforms plastically or remains unchanged under a given loading, there are criteria to describe the onset of what is called plastic flow. The two best-known are the Tresca [TRE 81] and von Mises (1913) criteria. It is customary to denote by Re the elastic limit, beyond which plastic deformation appears. In the case of traction along a single axis (xx), the stress is written σxx and must remain inferior to Re. Once modified by the safety factor (s), the admissible limit becomes σxx ≤ Re/s. Applied to materials and structures, this relation must take into account the various features (forms, notches, and fillets) that are essentially the origin of stress concentrations, hence relation [1.2].
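The admissibility condition σxx ≤ Re/s, generalized to a multiaxial stress state through the Tresca and von Mises criteria, can be sketched as follows; the values of Re and s are assumptions for illustration only:

```python
import math

def von_mises(s1, s2, s3):
    """von Mises (1913) equivalent stress from the principal stresses (MPa)."""
    return math.sqrt(0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2))

def tresca(s1, s2, s3):
    """Tresca equivalent stress: largest principal-stress difference (MPa)."""
    return max(abs(s1 - s2), abs(s2 - s3), abs(s3 - s1))

def is_admissible(sigma_eq, Re, s=1.5):
    """Elastic admissibility with safety factor s: sigma_eq <= Re / s."""
    return sigma_eq <= Re / s

# Uniaxial traction sigma_xx = 200 MPa against Re = 355 MPa (assumed values):
sigma_eq = von_mises(200.0, 0.0, 0.0)   # reduces to 200 MPa in the uniaxial case
ok = is_admissible(sigma_eq, Re=355.0)  # checks 200 <= 355 / 1.5
```

In the uniaxial case both criteria coincide with |σxx|, so the check reduces exactly to the relation σxx ≤ Re/s given above.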
When the elastic limit is exceeded, plastic deformation occurs, which may be written σxx ≥ Re. Such cases are encountered in manufacturing processes: folding, stamping, laminating, or forging. The explanation lies in the nature of plastic deformation, which is a shearing process: the atomic planes slide, causing what is known as scission. The latter is maximal when the sliding angle (λ) is at 45° to the traction axis (xx).
Figure 1.2. Simplified illustration of fracture criteria for mechanical plasticity
For a material stressed under loading, with a known critical value (KIC or GIC) in mode I, the graph takes into account external factors such as the loading rate and temperature, to name but two; these factors are independent of the geometry of the solid component. In reality, for a cracked component, the tenacity KIC depends on the degree of biaxiality (see Figure 1.2 (top)), and even on the degree of triaxiality and the stress state at the crack front (cfr). This in turn depends on the capacity of the solid component to endure plastic deformation at the crack front.
This expresses the equation of an ellipse, hence the use of Mohr’s circle. The topic could be developed further, but it falls outside the scope of this work; the reader is referred to the manuals on strength of materials. To summarize, the literature proposes the following effective stresses:
For planar bidirectional stress:
Finite element modeling (see Figure 1.3) represents the equivalent stress field with a color chart. Metallic parts subjected to repeated or alternating efforts can break even if the maximum effort is inferior to the elastic limit. The life span of such parts depends on the level of effort applied (Wöhler or S–N curves).
Fatigue tests are carried out by subjecting a metallic test piece to traction/compression or alternating bending efforts. For most steels there is a critical effort below which fracture appears only after a very long time: this effort is the fatigue limit of the steel.
Fracture originates from a minuscule crack that progressively expands until a brutal fracture occurs. Metallic parts subjected to repeated efforts are therefore dimensioned so that the effort per square millimeter nowhere exceeds the fatigue limit. This requires parts of different sections to be connected by a fillet with a large radius of curvature, and the surface finish to be carefully controlled.
For each cycle of the cracking law, it is possible to say whether or not the structure has broken. This leads to separating the space into two distinct regions, as shown in the following finite element models (ANSYS software). Under the prevailing conditions of speed, deformation, and temperature, the plastic zone at the tip of the crack is sufficiently small to be handled with linear elastic theory.
Paris’ law covers the stage of slow crack propagation by fatigue (the crack activated and in its propagation phase). The crack is likely to propagate in three directions, linked to the applied efforts; three modes of deformation can be distinguished, as shown in Figure 1.3. According to the mode of crack propagation, three s.i.f. K can be defined. In the singular zone, the stress field shows a 1/√r singularity at the tip of the crack.
It is generally accepted that the crack propagates due to a combination of stresses, according to the three following modes:
Mode I or opening: The normal traction stress is applied to the plane of the crack. In mode I, KI corresponds to the s.i.f. in the mode of opening of the crack edges (this fracture is extremely dangerous).
Mode II or straight slip: The shearing stress works in parallel to the plane of the crack and is perpendicular to the front of the crack. In mode II, KII corresponds to the s.i.f. in the mode of shearing on the plane of the crack edges.
Mode III or screw slip: The shearing stress works in parallel to the plane of the crack and in parallel to the front of the crack. In mode III, KIII corresponds to the s.i.f. in the mode outside the plane of the crack edges.
Figure 1.3. Modes of deformation of a cracked body
Factor KI varies with the nominal stress σn applied to the part and with the half-length (a) of the crack. In the case of an infinite elastic medium, we use:
[1.3] KI = σn·√(πa)
For parts with finite dimensions, it has been demonstrated that:
[1.4] KI = ξ(a)·σn·√(πa)
where ξ(a) is a geometry-correction coefficient giving KI its corrected values, to be compared with KIC.
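Relations [1.3] and [1.4] lend themselves to a direct numerical check. The sketch below assumes an illustrative correction ξ = 1.12 (a common edge-crack value) and an assumed tenacity KIC; none of these numbers come from the original text:

```python
import math

def K_I(sigma_n, a, xi=1.0):
    """Stress intensity factor K_I = xi * sigma_n * sqrt(pi * a)
    (sigma_n in MPa, half-length a in metres, result in MPa*sqrt(m)).
    xi = 1 recovers the infinite-medium case [1.3]; xi != 1 gives [1.4]."""
    return xi * sigma_n * math.sqrt(math.pi * a)

def is_stable(K, K_IC):
    """Crack stability criterion: the crack remains stable while K_I < K_IC."""
    return K < K_IC

# sigma_n = 150 MPa, half-length a = 5 mm; xi = 1.12 is an assumed
# edge-crack correction and K_IC = 60 MPa*sqrt(m) an assumed tenacity.
K_inf = K_I(150.0, 0.005)             # infinite elastic medium [1.3]
K_cor = K_I(150.0, 0.005, xi=1.12)    # finite part, corrected geometry [1.4]
stable = is_stable(K_cor, 60.0)
```

Comparing KI with the tenacity KIC in this way is the elementary form of the stability check discussed in the following sections.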
Using Irwin’s theory of elasticity, we present, in deformation or in planar stress, displacements ui and stresses σij, in the singular zone, according to the mode considered.
Mode I is the opening mode of the crack, where the displacements of the crack edges are perpendicular to the plane of the crack. The following equations can be used:
[1.5]
Mode II is the mode of in-plane shearing, where the displacements of the crack edges are parallel to the direction of propagation. We use:
[1.6]
[1.7]
Fracture can be of mixed mode. In this case, the displacements are additive; the combination of modes I and II gives, for example:
[1.8]
The mathematical equations of displacements Ui and the stresses σij, in Irwin’s sense, are written as follows:
[1.9]
The stress equations, according to Irwin, are written as follows:
[1.10]
where:
r and θ are the radius and the angle, respectively, in polar coordinates;
ν and E are Poisson’s coefficient and Young’s modulus, respectively.
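As a numerical illustration of Irwin’s near-tip equations, the sketch below evaluates the mode I stress components from the classical plane-problem expressions σij = KI/√(2πr)·fij(θ). The component formulas used here are the standard textbook ones, assumed because the original equations appear only as figures:

```python
import math

def mode_I_stresses(K_I, r, theta):
    """Irwin near-tip stress field in mode I, in polar coordinates (r, theta):
    sigma_ij = K_I / sqrt(2*pi*r) * f_ij(theta), standard plane formulas."""
    A = K_I / math.sqrt(2 * math.pi * r)
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    s3 = math.sin(3 * theta / 2)
    sxx = A * c * (1 - s * s3)
    syy = A * c * (1 + s * s3)
    sxy = A * c * s * math.cos(3 * theta / 2)
    return sxx, syy, sxy

# On the crack plane ahead of the tip (theta = 0) the field reduces to
# sxx = syy = K_I / sqrt(2*pi*r) and sxy = 0: the 1/sqrt(r) singularity.
sxx, syy, sxy = mode_I_stresses(30.0, 0.001, 0.0)
```

Note that the factors KI do not depend on r and θ: the whole angular and radial dependence is carried by the fij(θ) terms and the 1/√(2πr) singularity.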
It is worth pointing out that in the case of anti-planar loading, the only non-zero displacement component is U3. The respective expressions of the displacements and stresses are then the following:
[1.11]
The s.i.f. (KI, KII, and KIII) are independent of r and θ; they are functions of the external efforts and of the crack geometry. Griffith’s theory was the first energetic approach to a cracked body. There are, moreover, other means of characterizing the singularity of the stress field in the neighborhood of the crack front (n.c.f.), such as studying the contour integral of Rice [RIC 68]. The above concepts are only valid for isotropic materials with elastic behavior. Factors KI, KII, and KIII characterize both the detail of the geometry and of the crack, and the nature of the stress. Preventing fracture by fatigue means mastering parameters such as:
