Reliability of Nuclear Power Plants (E-Book)

Description

Since the 1970s, the field of industrial reliability has evolved significantly, in part due to the design and early operation of the first generation nuclear power plants. Indeed, the needs of this sector have led to the development of specific and innovative reliability methods, which have since been taken up and adapted by other industrial sectors, leading to the development of the management of uncertainties and Health and Usage Monitoring Systems. In this industry, reliability assessment approaches have matured. There are now methods, data and tools available that can be used with confidence for many industrial applications. The purpose of this book is to present and illustrate them with real study cases.

The book addresses the evolution of reliability methods, experience feedback and expertise (as data is essential for estimating reliability), the reliability of socio-technical systems and probabilistic safety assessments, the structural reliability and probabilistic models in mechanics, the reliability of equipment and the impact of maintenance on their behavior, human and organizational factors and the impact of big data on reliability. Finally, some R&D perspectives that can be developed in the future are presented. Written by several engineers, statisticians and human and organizational factors specialists in the nuclear sector, this book is intended for all those who are faced with a reliability assessment of their installations or equipment: decision-makers, engineers, designers, operation or maintenance engineers, project managers, human and organizational factors specialists, experts and regulatory authority inspectors, teachers, researchers and doctoral students.

Page count: 524

Year of publication: 2022
Reliability of Multiphysical Systems Set

coordinated by Abdelkhalak El Hami

Volume 14

Reliability of Nuclear Power Plants

Methods, Data and Applications

Edited by

André Lannoy

First published 2022 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd, 27-37 St George’s Road, London SW19 4EU, UK

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

www.iste.co.uk

www.wiley.com

© ISTE Ltd 2022. The rights of André Lannoy to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s), contributor(s) or editor(s) and do not necessarily reflect the views of ISTE Group.

Library of Congress Control Number: 2022938736

British Library Cataloguing-in-Publication Data: A CIP record for this book is available from the British Library.

ISBN 978-1-78630-761-3

Foreword by Philippe Le Poac

The authors asked me to write this foreword in my capacity as President of the IMdR (Institut pour la Maîtrise des Risques [Institute for Risk Management]), but also perhaps in my capacity as someone who has spent 40 years of his professional life at the CEA, the Commissariat à l’Energie Atomique (French Atomic Energy Commission), created by order of General de Gaulle on October 18, 1945, and which became the Commissariat à l’Energie Atomique et aux Energies Alternatives (Alternative Energies and Atomic Energy Commission) in 2010.

You must therefore forgive me if this foreword is strewn with personal recollections.

Operational safety and dependability are parts of the DNA of the IMdR. And operational safety and dependability include reliability, maintainability, availability and security.

The first chapter recalls the history of the notion of reliability and of the word “reliability” itself. There are domains where reliability is independent of security. We may even think of cases where systems had functionally failed but were still safe, as they had been safely switched off. One essential characteristic of the nuclear domain is that reliability is inseparable from security.

I am obliged to talk about vocabulary. For my entire professional life, like all nuclear professionals, I used the words sûreté nucléaire, a long-established translation of the English term “nuclear safety”.

Law 2006-686 of June 13, 2006 relating to Transparency and Security in the Nuclear field, called the TSN law, officially created the ASN (Autorité de Sûreté Nucléaire [Nuclear Safety Authority]) and defines nuclear safety as the technical provisions and organizational measures relating to the design, construction, operation, halting and dismantling of static nuclear facilities, as well as to the transport of radioactive substances, taken with a view to preventing accidents or limiting their effects.

After I became President of the IMdR and was faced with other industrial terminologies, I noticed that the English word safety meant sécurité. Outside the nuclear domain, it is accepted that sécurité (safety) involves the protection of individuals, property and the environment from the risks that a system could create. And sûreté (security) involves the fight against any ill-intent that could threaten the system. The authors, aware of this linguistic problem, moreover, use both words security/safety in several places.

Chapter 3 presents the principles of calculating reliability in PSA (Probabilistic Safety Assessment); at Level 1, its main objective is to calculate the risk of core meltdown. These PSAs rely on two types of model: fault trees and event trees.

The starting point for an event tree is what is called an initiating event. This may be a simple fault, such as a broken pipe, or the result of a fault tree. The same applies to the events that correspond to the branch points of event trees.

The graphical representation of a fault tree combined with an event tree recalls the “bowtie” diagram currently used outside the nuclear field in the hazard studies required by the European SEVESO directive for chemical, oil and gas facilities.
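In miniature, the two model types articulate as follows. The component names, probabilities and frequencies below are invented purely for illustration, and all basic events are assumed independent:

```python
# Toy fault tree + event tree sketch, with invented figures
# (not from any real plant study).

def p_or(*ps):
    """Probability that at least one of several independent events occurs."""
    q = 1.0
    for p in ps:
        q *= 1.0 - p
    return 1.0 - q

# Fault tree for a top event "cooling function lost":
# (pump fails AND backup pump fails) OR (power supply fails).
p_pump, p_backup, p_power = 1e-2, 5e-2, 1e-3
p_top = p_or(p_pump * p_backup, p_power)   # AND gate feeding an OR gate

# Event tree: from an initiating event (e.g. a broken pipe), each branch
# asks whether a safety function succeeds; as noted in the text, the
# branch probability can itself be the result of a fault tree.
f_init = 1e-3                 # initiating event frequency per year (invented)
freq_core_damage_sequence = f_init * p_top
```

The frequency of each accident sequence is the product of the initiating event frequency and the branch probabilities along that sequence; the fault trees supply those branch probabilities.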

To construct event trees, the nuclear domain makes intense use of physical modeling and digital simulation, which has led to considerable progress because of the enormous increase in computer processing capacities.

Thus, the deterministic thermohydraulic code CATHARE, developed in a collaboration between the CEA, EDF, AREVA and the IRSN, makes it possible, paired with a neutronics code, to calculate precisely the evolution of temperature and pressure in the primary circuit in normal or accident situations.

Reliability requires input data: feedback and expertise (Chapter 2). Operation feedback includes “event” and “material” feedback.

There are active components that are not repairable but that can be replaced during corrective or preventive maintenance. There are also active components that are repairable. For these components, statistical data make it possible to evaluate their reliability.

One particular case concerns active components on standby: they are not usually in use, but should be usable to ensure safety functions when needed. These components should be tested and failures during tests feed into the data needed to evaluate reliability.
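The classical estimates that such feedback supports can be sketched as follows, assuming a constant failure rate (exponential model); the failure count, cumulative operating time and test interval are invented figures:

```python
import math

# Point estimate of a constant failure rate from pooled operating
# feedback: n failures observed over cumulative time T (invented figures).
n_failures = 4
T_hours = 2.0e5
lam = n_failures / T_hours          # estimated failures per hour

# Reliability of one component over a one-year mission (8760 h),
# under the exponential model implied by a constant rate.
R_year = math.exp(-lam * 8760)

# Standby component tested every tau hours: with failures revealed only
# at tests, the classical mean-unavailability approximation is lam*tau/2.
tau = 720.0                         # monthly test interval (invented)
mean_unavailability = lam * tau / 2
```

Shortening the test interval reduces the mean unavailability of the standby function, at the cost of more frequent testing, which is one of the trade-offs test programs must settle.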

Chapter 5 presents the main statistical and probabilistic modeling used for the reliability of industrial equipment, underlining the diversity of approaches due to their nature, the impact of maintenance, the quality of feedback and the complexity of the real world.

The effect of maintenance on repairable equipment is modeled whether the maintenance is corrective, that is, carried out after a failure is detected and intended to return the equipment to a state in which it can accomplish the required function, or preventive, that is, carried out at predetermined intervals or according to prescribed criteria and intended to reduce the likelihood of component failure.
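The corrective/preventive trade-off can be illustrated by a toy age-replacement simulation; the Weibull lifetime parameters and cost figures below are invented and are not taken from the book's case studies:

```python
import math
import random

# Toy age-replacement policy: the component is replaced preventively at
# age tau, or correctively on failure. Lifetimes are Weibull with an
# increasing failure rate (wear-out); all figures are invented.
random.seed(0)
SHAPE, SCALE = 2.5, 1000.0          # Weibull shape and scale (hours)
C_CORRECTIVE, C_PREVENTIVE = 10.0, 1.0   # a failure costs 10x a replacement

def cost_rate(tau, n=50_000):
    """Mean maintenance cost per unit time under replacement age tau."""
    cost = time = 0.0
    for _ in range(n):
        # Inverse-transform sampling of a Weibull lifetime.
        life = SCALE * (-math.log(1.0 - random.random())) ** (1 / SHAPE)
        if life < tau:
            cost += C_CORRECTIVE
            time += life
        else:
            cost += C_PREVENTIVE
            time += tau
    return cost / time

# With wear-out and expensive failures, a finite preventive interval
# should beat purely corrective maintenance (tau = infinity).
purely_corrective = cost_rate(float("inf"))
with_prevention = cost_rate(400.0)
```

Preventive replacement only pays off under an increasing failure rate; with a constant rate (exponential lifetimes), replacing a working component early buys nothing, which is why the choice of lifetime model matters.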

It is noted that all the frequentist approaches rely exclusively on the data available from the feedback: the quality of the results of the reliability study depends strongly on the quality of the data.

HUMS (Health and Usage Monitoring Systems) monitoring data are collected with the help of an increasing number of sensors, with increasingly tight time sampling. The mass of data is becoming so large that the authors of this book speak of a transition from too little to too much, and devote one chapter (Chapter 7) to the methods developed to tackle the impact of big data on reliability. Many questions arise: it is a matter of collecting relevant data and of being able to pre-process and validate massive data. While machine learning techniques can help, there is no question of putting blind faith in black-box models. Influencing factors, unsuspected at the outset, can be identified. But it will only be possible to trust big data algorithms if their results are explicable and explained. And the authors insist on the need for recourse to physical models and to analyses of organizational and human factors.

When the data are insufficient, Bayesian methods are needed, combined with expert opinion.
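A minimal sketch of such a combination, assuming exponential lifetimes and a conjugate Gamma prior encoding the expert opinion (the prior and feedback figures below are invented):

```python
# Conjugate Bayesian update combining expert opinion with sparse
# feedback; all numerical values are invented for illustration.

# Expert opinion encoded as a Gamma(a, b) prior on the failure rate:
# prior mean a/b, with b acting as "virtual" observation time.
a_prior, b_prior = 1.0, 5.0e4        # expert's best guess: 2e-5 per hour

# Feedback: n failures over cumulative time T. With exponential
# lifetimes the Gamma prior is conjugate, so the posterior is simply
# Gamma(a + n, b + T).
n, T = 0, 1.0e5                      # here: zero observed failures
a_post, b_post = a_prior + n, b_prior + T
posterior_mean = a_post / b_post     # remains finite even with n = 0
```

Because the prior contributes "virtual" evidence, the posterior mean stays finite even when no failure has been observed, which is precisely the situation with very reliable equipment.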

There are instances where feedback reveals no failures. This is the case when the equipment studied is very reliable or designed with a broad safety margin.

It is also the case with a non-repairable passive component such as the reactor vessel.

Structural reliability is therefore necessary. This is the subject of Chapter 4.

Failure of a structure occurs when a stress, written S, exceeds a resistance, written R. In the stress-resistance method, the stress and the resistance are expressed by variables that are considered random, and their comparison is carried out with a numerical mechanical code for the structures. The probability of failure is then determined by propagating the uncertainties through the physico-mathematical simulation code. The number of calculations required by the Monte Carlo method becomes prohibitive for rare events, and other methods are presented.
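A crude Monte Carlo estimate of the failure probability P(S > R), assuming (for illustration only) that S and R are independent normal variables with invented parameters:

```python
import math
import random

# Stress-resistance sketch: estimate P(S > R) by crude Monte Carlo.
# The normal distributions and their parameters are invented.
random.seed(1)

def pf_monte_carlo(n=200_000):
    failures = 0
    for _ in range(n):
        S = random.gauss(200.0, 20.0)    # stress (e.g. MPa)
        R = random.gauss(300.0, 25.0)    # resistance (e.g. MPa)
        if S > R:
            failures += 1
    return failures / n

# For two independent normals the exact result is available, a useful
# check: P(S > R) = Phi((mu_S - mu_R) / sqrt(sd_S**2 + sd_R**2)).
z = (300.0 - 200.0) / math.sqrt(20.0**2 + 25.0**2)
exact = 0.5 * math.erfc(z / math.sqrt(2.0))
estimate = pf_monte_carlo()
```

With a failure probability near 1e-3, 200,000 samples yield only a couple of hundred "hits"; for the much rarer events of structural reliability, the sample size needed makes crude Monte Carlo prohibitive, which is why the chapter presents other methods.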

Mechanical tests make it possible to determine the material’s resistance variables.

An important part of nuclear safety relies on the integrity of the vessel. The mechanical resistance of this thick metal structure has given rise to important research in fracture mechanics, the science of crack propagation. The probabilistic approach to the risk of sudden fracture of the vessel consists of propagating uncertainties through the mechanical fracture model for various thermohydraulic transients. The toughness of the material, a measure of its resistance to crack propagation, is the first variable considered as random. Studies carried out by the CEA, EDF and AREVA concluded that a probabilistic representation of resistance can be given by a three-parameter Weibull law.
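The three-parameter form adds a location (threshold) parameter to the usual Weibull law; the numerical values below are invented, not those of the CEA/EDF/AREVA studies:

```python
import math

# Three-parameter Weibull CDF, as a model for a random toughness K_Ic:
# location (threshold) gamma, scale eta, shape beta.
# All numerical values are invented for illustration.
def weibull3_cdf(x, gamma=20.0, eta=80.0, beta=4.0):
    """P(K_Ic <= x); zero below the threshold gamma."""
    if x <= gamma:
        return 0.0
    return 1.0 - math.exp(-(((x - gamma) / eta) ** beta))

# The threshold gamma expresses a minimum toughness below which failure
# is never predicted -- the practical interest of the third parameter.
p_low = weibull3_cdf(30.0)    # probability of a toughness below 30
```

The third (location) parameter is what distinguishes this law from the two-parameter Weibull often used elsewhere in reliability: it encodes a physical lower bound on the resistance variable.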

I offer a personal aside. As a young engineer who had just left college, I discovered linear elastic fracture mechanics (LEFM) and the ASTM E399 standard (Standard Test Method for Linear-Elastic Plane-Strain Fracture Toughness of Metallic Materials) for determining KIC, the critical stress intensity factor, a measure of the material’s resistance. I was one of the French researchers who developed elasto-plastic fracture mechanics (EPFM) in the middle of the 1970s to extend linear elastic fracture mechanics and take into account the plasticity of the steel from which the vessel is made. And I carried out, with others, the first research on the J-integral. Much later, I had the honor of directing the Department of Nuclear Materials at the Saclay CEA Center. Research on the J-integral eventually resulted in the ASTM E813 standard (test method for JIC, a measure of fracture toughness).

Reliability and security/safety are not only evaluated at the moment of design and implementation; they are evaluated periodically. Extending the lifetime of nuclear power plants necessitates studying the ageing of materials and, more particularly, the effects of radiation. Indeed, since ageing amounts to reducing resistance or, in other words, increasing fragility, the evolution of this characteristic over time and as a function of the cumulative effects of radiation has to be studied.

To do this, samples are irradiated in a test reactor or in a power-generating reactor. It is then necessary to measure the properties of the irradiated samples in “hot labs”, that is, laboratories equipped with hot cells, able to receive highly radioactive materials in complete safety, and with remote manipulators making it possible to carry out experiments.

But the reliability and security/safety of complex sociotechnical systems are not only the concern of the engineering sciences. The human and social sciences also contribute. The accident at Three Mile Island in 1979, which occurred in the wake of inappropriate actions by the operators, showed the importance of the human factor.

Chapter 6 addresses the human and organizational dimensions of reliability and nuclear safety.

The organizational dimension has been added because it is not only individual human error, which is always possible, that should be taken into account. The whole organization should be examined, since reliability and security/safety also depend on management, who make decisions, arbitrate, allocate resources, manage skills, organize feedback, and develop values and culture.

The TSN law cited above says that nuclear security includes nuclear safety, radioprotection and the prevention of and the fight against malicious activity, as well as civil security actions in case of accident.

Nuclear security therefore includes nuclear safety, but adds other considerations such as the fight against malicious activity.

A human, a friend of the system, can commit an involuntary error but there may also be other humans, enemies of the system, who may seek to harm the system voluntarily. Reliability and security/safety also depend on the lines of defense, both technical and organizational, that will have been planned against malicious attacks, whether physical or based on information technology.

The present work considers reliability studies carried out in the domain of nuclear electricity production, but some approaches and methods may be applicable to other industrial sectors. Society, which has always been demanding of nuclear power given the risks involved, is becoming just as demanding of other economic activities. The experience acquired in the nuclear domain could usefully inspire those in charge of the reliability, operational safety and security of complex sociotechnical systems.

Philippe LE POAC

President of the IMdR

Foreword by Antoine Grall

Society is attentive to risks to individuals and the environment; faced with ever more sophisticated production systems, and subject to the goals of optimization and cost management, operational safety is taking an ever greater place. Many unwanted events can, when considered individually, be judged very unlikely and inconsequential. Nevertheless, within complex systems, cumulative and interaction effects can be very significant. An operational safety approach does not aim to eliminate all risk, but rather to keep it under control. This means determining and characterizing a design and an operational mode that allow a good balance between the different criteria linked to the operation of the system and to its interaction with the environment. The exact characterization of the “right” balance can vary greatly from one application domain to another, depending on the importance given to economic, reliability, environmental and other aspects.

This book focuses on electricity production systems and on the nuclear domain. “Reliability” and “nuclear energy” are terms that naturally go together. No one can deny the important role of the progress made in methods and studies on operational safety in the context of work carried out in the sector of nuclear power production. This book, coordinated by André Lannoy, brings together contributions exclusively from authors with recognized competencies in reliability and significant experience in the nuclear field. It demonstrates the broad spectrum and complexity of the questions tackled.

Should we believe that there is a reliability specific to the nuclear domain? In other words, is it relevant for readers to immerse themselves in the different chapters if they are interested in reliability in other application domains? The answer, in my opinion, is definitely affirmative. Independently of the subject studied, any operational safety engineer calls on the same set of techniques and methods. Their work involves using these as well as, where possible, starting from a refined understanding of the dysfunctional mechanisms at play. A nuclear energy production plant is a complex system, within which various issues co-exist, transverse to many domains, linked for example to the analysis of passive components (structures), dynamic systems (active components and instrumentation and control), human performance, etc. In the field of nuclear safety, there is a continuous demand for improvement that leads to periodic re-examinations and to consideration of the evolution of knowledge. The points to emphasize, the procedures and the regulatory framework may be specific, but junior and experienced specialists alike will be able to draw significant benefit from the shared expertise offered by the authors of the different chapters.

Many aspects are addressed throughout these pages, and the frameworks for reading may be diverse. Data are present at every stage of reliability studies. Whether they come from accelerated testing, from expertise, from operational experience feedback or from sensors in real time, it is vital to use them. Three chapters are particularly associated with data. Chapter 2 provides a general perspective on feedback and the associated concepts, highlighting the contributions of events and of materials. Chapter 5 gives the reader an overall vision of the main stochastic modeling frameworks based on data and used in reliability. They aim to describe the behavior of specific sub-systems or components and, in practice, are chosen, adjusted and validated by confrontation with the observed data. Finally, Chapter 7 addresses a current issue: collecting and processing massive surveillance and monitoring data. This new challenge has emerged in particular because of modern monitoring devices. It is a particular challenge for the field of reliability, which has historically had to deal with little data.

Monitoring and feedback data are clearly not the only source of information available on the state and behavior of a system or a component. As for passive components in particular, physical or mechanical behavioral models are available. Analyses of the reliability of structures represent one particularly interesting example of association between these deterministic behavioral models and uncertainties on the variables. The methodological framework is presented in Chapter 4 and applied to examples.

An electricity production system based on nuclear energy is a complex system, and two chapters focus on two particular aspects of this complexity. The first aspect concerns probabilistic safety studies. These studies involve very large models linking fault trees and event trees. Given the large number of interacting components, combinatorial problems are often present. In this context, it is particularly interesting to understand the practical application. More recent approaches are also presented. The second aspect of complexity hinges on the human and organizational dimensions, which require a global and systemic vision. In particular, they mobilize competencies from the human and social sciences and demonstrate how much reliability is destined to be an interdisciplinary field. The question of making human performance reliable is discussed in its different aspects.

To conclude, I would like to emphasize the benefits of this book for anyone interested in the field of reliability. As a teacher of future researchers, I can only recommend that students read it. Reading it makes it possible to acquire a broad view of the many aspects of reliability, and to complete, very favorably, the essential theoretical initiation that can be acquired in an academic framework. Through clear and precise presentations, the book demonstrates the applicability to real systems of methods that are too often presented only through simple illustrative examples.

As a researcher who cares about the applicability of his work, it seems helpful to me to pay attention to the perspectives developed and to the avenues for R&D mentioned by the various authors. They are linked to methodological obstacles that we must succeed in overcoming, and they give precise indications as to which hypotheses of the existing theoretical frameworks are the most restrictive.

Professor Antoine GRALL

Technical University of Troyes

Deputy director of the doctoral school

“Science for the Engineer”

May 2022

Preface

The authors of this edited collection wish to honor Henri Procaccia, a former expert research engineer, former group leader and former deputy head of department at EDF R&D, cofounder, honorary member and member of the ESReDA (European Safety, Reliability & Data Association) governing committee, member of the IMdR (Institut pour la Maîtrise des Risques [Institute for Risk Management]), for his pioneering role in the field of reliability in nuclear power.

Henri Procaccia was one of the active players in reliability and we are indebted to him for a great deal of research addressing this subject in depth. From the start of the 1970s, well before the coupling of the first pressurized water reactor (using US technology) in France at Fessenheim, Henri Procaccia worked on system reliability and on the structuring of operation feedback for pressurized water reactors and fast neutron reactors.

In the 1980s, the low quantity of feedback data issuing from power stations that were still at their initial stage led him to adhere to the Bayesian approach, which he promoted both in his teaching and practice. He was responsible for the first data handbook on reliability, the SRDF (Collection and Analysis System for Reliability Data). At the end of the 1980s, on the basis of his skill in physical testing, he convinced the EDF to engage in analyzing structural reliability and in fatigue monitoring.

In the 1990s, his field of investigation primarily focused on optimizing maintenance and on decision support. In all these topics, reliability emerged as an essential parameter. The authors are grateful to Henri Procaccia for having thus aided the development of reliability methods, their application and their use. He was also motivated by a profound sense of humanity.

May 2022

Acknowledgments

The authors of this book would like to warmly thank Monsieur Abdelkhalak El Hami, Professor at INSA Rouen and Director of the Department of Mechanics and “Industrial performance and innovation”, for suggesting that we write a state-of-the-art account of knowledge on the theme of the reliability of nuclear power plants. We are indebted to him and his team for technical support in creating this book.

First of all, we thank Mohamed Eid, former research engineer at CEA Saclay, teacher and now consultant and President of ESReDA, for his encouragement, his advice (especially on methods), the support he has given so generously, and his precise and perspicacious reviewing, an essential contribution without which we could not have completed this book.

We must also thank:

– Hervé Boll, Deputy Director of Programmes at EDF R&D, for reviewing Chapters 3 to 5 on systems, structures and components.

– Vincent Sorel, Project Manager, Sûreté Nouveau Nucléaire (New Nuclear Safety) and PSA expert, Cécile Luzoir, director of EPS Agressions (PSA aggressions), both at the EDF Technical Department, and Julien Tissot, head of the Group “EPS et disponibilité des systèmes” (PSA and system availability) in the “PERICLES” department at EDF R&D, for reviewing Chapter 3 of this book.

– Philippe Bryla, Mechanics and Materials expert at the EDF Hydro General Technical Department, Group C2M, for reviewing part of Chapter 4.

– Vincent Chabridon and Roman Sueur, research engineers at the PRISME (Performance, Industrial Risk and Monitoring for Maintenance and Operation) department at EDF R&D, for their relevant and constructive comments on Chapter 5.

– Franck Anner, at IRSN, head of the human and organizational expertise office, Sarah Fourgeaud, head of the Homme, Organisation, Technologie (Man, Organization, Technology) Service, Olivier Dubois, Deputy-Director of safety expertise at REP, and at CEA, Didier Balestrieri, Head of the Service de Soutien et de Gestion de Crise (SSGC) (Crisis Support and Management Service) at the Direction de la Sécurité et de la Sûreté Nucléaire (DSSN) (Department of Security and Nuclear Safety) at CEA, for their reviews on Chapter 6.

These comments and conversations and these varied experiences have enriched our thinking, and have enabled us to complete our approach to reliability and nuclear safety.

Thanks to Alexandra Jordan, who carried out the work of translating this book into English while providing us with precise and insightful suggestions.

Thank you again.

May 2022

Author Biographies

Emmanuel Ardillon, graduated from the École Nationale des Ponts et Chaussées (1990), holder of an M.Phil in “Probabilities and Applications” from the University of Paris VI, is a research engineer at EDF R&D (PRISME Department) in the field of reliability of structures, probabilistic approaches to physical phenomena and uncertainty processing. He is the author of structural reliability analyses (methodology, applications) in the nuclear and hydraulic field, research project manager, editor of the collective work Structural Reliability Analyses into System Risk Assessment (ESReDA, 2010), contributor to the collective work La fiabilité en mécanique – Des méthodes aux applications (Presses des Mines, 2018) and leader of the working group “structural reliability and safety” of the Institut pour la Maîtrise des Risques (IMdR).

Marc Bouissou, graduate of the École Nationale Supérieure des Mines de Paris (1980), holder of an accreditation to supervise research (HDR) from the University of Science and Technology of Lille and qualified for the post of University Professor in 2008, is a senior research engineer at EDF R&D (PERICLES Department). He has devoted most of his career to the development of innovative methods and tools for the automation of system dependability studies. He has more than 100 publications to his credit in this field, including several book chapters, and he has written the book Gestion de la complexité dans les études quantitatives de sûreté de fonctionnement de systèmes (Lavoisier, 2008). He has given numerous courses to master’s students on systems reliability, in particular as a part-time professor at the École Centrale Paris from 2009 to 2015. Since its creation, he has been leading the IMdR’s “methodological research” working group.

Nicolas Dechy, graduate of the École des Mines de Douai (1999), is a research engineer at the Institut de Radioprotection et de Sûreté Nucléaire (IRSN), within the Human Organization and Technology Service of the Nuclear Safety Pole, where, since 2010, he has been carrying out expert assessments as a specialist in organizational and human factors and assessing risk management by operators. Since 1999, first in a design office (AINF), then at INERIS, he has carried out studies, expert assessments and research on risk management, in particular on experience feedback, accident analysis, crisis management, maintenance management and subcontracting. He contributes to the reflections, exchanges and activities of the associations IMdR, ESReDA, Institut pour une Culture de Sécurité Industrielle-Fondation pour une Culture de Sécurité Industrielle (ICSI-FonCSI) and Collectif Heuristique pour l’Analyse Organisationnelle de Sécurité (CHAOS).

Yves Dien began his career at EDF R&D in the early 1980s, notably through his participation in a design and ergonomic assessment project for a fully computerized control room for nuclear power plants. He was also involved in the design and evaluation of assistance to nuclear power plant operators (incident and accident control procedures, computerized tools in conventional control rooms, etc.). Then, for 10 years, he was responsible for nuclear affairs for the countries of Central and Eastern Europe (European TACIS and PHARE funds, bilateral conventions). Finally, back at EDF R&D, he oversaw a project on the organizational factors of incidents, accidents and crises. He contributed to the discussions and reflections of the IMdR and ESReDA associations, as well as the ICSI-FonCSI. Currently retired, he is a member of the CHAOS association which conducts research on the organizational factors of industrial security.

Mohamed Eid graduated in Nuclear Energy Engineering from the Department of Nuclear Engineering of the University of Alexandria (Egypt) in 1977, then became a Doctor of Engineering in Nuclear Reactor Physics at the University of Paris-XI in 1985; he joined the CEA as a research engineer, where he developed multidisciplinary expertise in neutronics, thermo-hydraulics, radiation protection and probabilistic risk analysis. From 1987, he was in charge of continuing education courses on simulation by the Monte Carlo method and its applications in dependability at several engineering schools: École Centrale Paris, École Nationale Supérieure de Techniques Avancées and Supaero. He also worked as an Associate Professor at the Institut National des Sciences Appliquées in Rouen between 2005 and 2020. He has published around 50 scientific articles and co-authored several books in English. He is a member of several editorial boards of scientific journals. He is an active member of the European Safety and Reliability Association (ESRA) and the IMdR, and President of ESReDA. After his retirement from the CEA in 2020, he founded his consulting and studies firm in Risk Modeling & Analysis, “RiskLyse”. Within the framework of this book, he advised all the authors and re-read several chapters, in particular Chapters 1, 3 and 5.

André Lannoy is a research engineer, doctor of detonics, previously scientific advisor to EDF R&D, author or co-author of numerous books and communications, moderator of the IMdR product commission (projects, scientific days, training), member of the IMdR working groups “feedback from technical experience” and “structural reliability and safety”, and honorary member of ESReDA.

Emmanuel Remy, graduate of the Institute of Statistics of the University of Paris (2001), is an expert research engineer at EDF R&D (PRISME, Performance, Industrial Risk and Monitoring for Maintenance and Operation) in the field of probabilistic and statistical methods for equipment in nuclear, hydraulic and wind power generation. In addition to his applied research (in close collaboration with the academic world) and teaching activities, he has contributed to the “reliability and uncertainties” group of the French Statistical Society since its creation in 2009 and, since 2014, he has co-coordinated the IMdR working group “feedback from technical experience”.

Jean-François Vautier is a specialist in Organizational and Human Factors at the CEA, where he has run the skills center and the associated network for 20 years. Since 2013, he has also been one of the facilitators of the working group “organization and risk management”. For the Technical Engineering Editions, he directs a collection of collective works stemming from the working intergroup organized by the IMdR, as well as a collection on complex systems. He has published articles on systemics in addition to others on organizational and human factors.

1 Aims and Introduction

This introductory chapter presents the aim of this book, which is to explain the methods and reliability studies employed in the nuclear power production sector. It is not a book on advanced reliability but rather a guide to good practice in current use, which does not refrain from proposing R&D actions to make further progress. One section retraces very briefly the evolution of ideas and methods in the field of reliability from its earliest days. With the design and implementation of nuclear power plants in the 1950s, and then their operation and maintenance, concerns about reliability became central to the challenges of safety and performance. Methods, tools and knowledge have, moreover, greatly evolved since the 1970s. The content of this book is presented at the end of this chapter.

1.1. The aims of this work

This edited collection follows a suggestion by Professor Abdelkhalak El Hami, Director of the Mechanics and Performance Department at INSA Rouen Normandy (Institut National des Sciences Appliquées; National Institute of Applied Sciences), editor-in-chief of a series of books on “applied reliability” for ISTE and John Wiley & Sons. The series concerns reliability as it is practiced, estimated, approached, interpreted and integrated in the various industrial sectors (aerospace, automotive, nuclear, oil and gas, etc.).

The present work emphasizes reliability studies carried out in the nuclear energy production sector, although the approaches, methods and results described can be applied to other domains of power generation, such as hydraulic or wind power, and to other sectors of the process or transport industries, such as offshore platforms or rail transport.

Even if these approaches and methods are clearly generic, the reader who uses them must nevertheless check that the context is appropriate and that the operational, maintenance and environmental conditions are substantially similar. Their use must also conform to the regulatory context of the industrial sector concerned.

This book is not a piece of academic or advanced research on reliability. It sketches the current state of the art of industrial reliability practice, presenting the main approaches and methods used at present in the nuclear energy production sector. In this sense, it can be considered a guide to good practice, or rather a vade mecum that can easily be consulted. The authors do not, however, refrain from listing avenues for progress and R&D, or indeed from proposing new developments.

1.2. Reliability, an application of probability theory

1.2.1. What is reliability?

The definition given by standard EN 13306 (edition 2, 2010) is as follows: “the ability of an item to perform a required function under given conditions for a given time interval”.

The first remark is that the standard refers to an “item”. This term may not be unanimously accepted in the reliability community. Although it satisfies working reliability specialists, design reliability specialists or modelers might have preferred the term “functional entity”.

The second remark to be made is that reliability is first of all a qualitative notion. It must be verified that the item concerned really guarantees all its functions considered necessary to provide a given service.

The third remark is that reliability lends itself well to a probabilistic metric: it designates the probability of an item being in good working order during a well-defined interval of time. Reliability can therefore be quantified in the form of a probability or a probabilistic performance indicator. As early as 1969, Kaufmann presented it as a confidence indicator, which he called technical confidence [KAU 69]. Lemaire [LEM 05] refers to the definition taken from standard AFNOR NF X50-120 of 1988: “the ability of an item to accomplish a required function in given conditions, over a given duration… the term is also used as a characteristic designating a probability of success or a percentage of success”.

Another quantitative characteristic of reliability is the instantaneous failure rate λ(t), which designates the proportion, per unit of time, of devices that, having survived up to an arbitrary instant t, are no longer “living” at instant t + dt. λ(t) is thus a reliability indicator signaling imminent failure at instant t.
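As a small illustration of these definitions (a sketch, not one of the book’s study cases), the code below computes R(t) for an exponential lifetime law and checks numerically that its instantaneous failure rate λ(t) = −R′(t)/R(t) is constant; the rate value is an arbitrary assumption:

```python
import math

def reliability_exp(lam, t):
    """Survival probability R(t) = exp(-lam * t) for a constant failure rate lam."""
    return math.exp(-lam * t)

def hazard_rate(survival, t, dt=1e-6):
    """Numerical instantaneous failure rate: lambda(t) ~ [R(t) - R(t + dt)] / [dt * R(t)]."""
    return (survival(t) - survival(t + dt)) / (dt * survival(t))

lam = 1e-4  # illustrative failure rate per hour (an assumption, not plant data)
R = lambda t: reliability_exp(lam, t)

# For the exponential law the hazard rate is constant: lambda(t) = lam at any t
print(hazard_rate(R, 100.0))   # ~1e-4
print(hazard_rate(R, 5000.0))  # ~1e-4
```

The same numerical hazard-rate check can be applied to any survival function, which is how the non-constant rates of the lifetime laws discussed later can be visualized.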

We can distinguish several reliabilities depending on the phases of the lifecycle of an item [LAN 06]:

– comparative reliability, calculated from past feedback data, used in preliminary studies in the pre-project phase;

– allocated reliability, a requirement or reference threshold value, defined in the specification phase;

– planned or theoretical reliability, calculated during design, and compared to the allocated reliability;

– practical reliability or operational reliability or measured reliability, estimated from operational feedback, when the equipment is in use;

– predictive reliability, anticipated reliability or extrapolated reliability or reliability prognosis, estimated from an operational feedback analysis at a given instant and considering contexts and future conditions of use.

1.2.2. The early days of reliability

Security and safety have always been pressing needs for societies [LAN 18]. From the 3rd millennium BC, the resistance of houses to the threat of collapse was a great worry for the Cycladic civilizations. Around 1730 BC, Hammurabi, the sixth king of Babylon, drew up one of the first regulations intended to guarantee a level of safety in homes (articles 229 and 230 of the Code of Hammurabi, now in the Louvre). This was the first legal code, and it remained in force for a millennium in Mesopotamia [LEM 05].

Vitruvius (1st century BC) appears to be the first proponent of reliability in history (De architectura). He seems to have been the first to be concerned with dependability:

– needs and utility (utilitas) (he was the first to carry out a functional analysis, and we recall that any study of reliability, whatever it is, begins with a functional analysis);

– reliability, durability and the robustness of the item (firmitas);

– capitalization of existing knowledge, the codification of constructive provisions, resistance to natural harms;

– esthetics (venustas).

The French word fiabilité (reliability) appeared in the course of the second half of the 13th century; in the 20th century, it became a very commonly used word for the mastery of technological risk. It derives from the Latin fides (faith, confidence), which yields confidence and good faith, and from the Latin fido (to trust, to have confidence in, to count on).

It was only in the wake of the “geometry of chance”, initiated in 1654 by Blaise Pascal, that the first probabilistic studies appeared, focused on the risks of gambling and leading to the establishment of mortality statistics. At the start of the 18th century, Daniel Bernoulli [BER 38] defined risk and considered it a two-dimensional measure: “the probability or likelihood of an event” and the gravity of its consequences.

The posthumous work by Thomas Bayes [BAY 62], An Essay towards solving a Problem in the Doctrine of Chances, was published in 1763. This work would revolutionize reliability studies in the 20th century; the Bayesian method has now become indispensable.

In 1776, Buffon [BUF 76] observed that a guild, the carpenters, had a very imperfect knowledge of the strength and resistance of wood, which depended on the internal structure of the wood and on particular circumstances. Over several years, he carried out tests on wooden beams, recording length, breadth, mass, load, breaking time and strain. These were, to our knowledge, the first mechanical reliability tests. Regression methods did not exist at the time, so it was not possible to derive a lifetime law from the results. They show, however, that reliability can also be assessed experimentally.

It was only with the 19th-century railways, and then from the 1930s onward, that reliability studies developed, driven first by the needs of the aeronautics and electrical industries and later extended to civil engineering structures. We note Max Mayer’s [MAY 26] proposal to use average values and parameter variations in design.

Between 1930 and 1940, Sir Alfred Grenville Pugsley expressed the first quantified risk objectives. He demanded that an airplane’s accident rate, “considering all the causes of breakdown likely to lead to an accident”, should not exceed 10⁻⁵ per hour, including 10⁻⁷ per hour for causes linked to the airplane’s structure [VIL 88]. Pugsley went on to apply probabilistic objectives to civil engineering at the end of the 1940s.

1.2.3. The birth of modern reliability

This period extended from 1937 to 1948; the war years created a considerable boom in reliability research.

In 1937, Bruno De Finetti [DEF 37] laid the foundations of the “subjective, operational” conception of probability.

In 1939, the Swedish engineer and mathematician Waloddi Weibull published his work on the distribution now known as the Weibull distribution, used for mechanical tests, especially fatigue tests [WEI 39]. It is now also much used in probability. It can be noted that Weibull himself thought that his law could not be used for reliability; it was nevertheless used for this purpose after the war, and other distributions would then follow in the field of reliability.

The Second World War then began, and industry had to cope with intense production for military ends. The idea that production chains are as reliable as their weakest link came into question: collaborating with Wernher Von Braun on the V1 production lines, Eric Pieruschka, together with Robert Lusser, established the calculation of a chain’s reliability as the product of the probabilities of survival of each of the chain’s components. Operational research appeared in 1940 in England, and then in the United States, for military ends: for the United Kingdom, it was a matter of making the best use of military means that were insufficient at the time (airplanes, anti-aircraft forces, naval resources). The fundamental idea was to take as much care as possible in using the means available for design and construction.
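The Pieruschka–Lusser product rule described above can be sketched in a few lines; the component values are hypothetical:

```python
from math import prod

def chain_reliability(component_reliabilities):
    """Lusser's law: the reliability of a series chain is the product of the
    survival probabilities of its components, not that of its weakest link."""
    return prod(component_reliabilities)

# Hypothetical chain of ten components, each 99% reliable over the mission:
# the chain as a whole is noticeably less reliable than its weakest link.
print(chain_reliability([0.99] * 10))  # ~0.904
```

This is why long series chains of individually excellent components can still be disappointing as a whole, the observation that motivated the V1 reliability work.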

1.2.4. The development of modern reliability 1948–1960

In 1949, Murphy’s law was announced, perhaps better known as the “law of maximum aggravation” or the “buttered toast law”: if anything can go wrong, it will! A law never demonstrated, but always experimentally verified.

At the start of the 1950s, Epstein and Sobel [EPS 53] presented their work on the different distribution forms of a material’s lifespan. The exponential law of lifetimes corresponds to the particular hypothesis of a constant instantaneous failure rate, particularly well suited to electronic equipment and very convenient for modeling. The use of the Weibull law would come to the foreground in the 1990s, with studies on ageing.
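A minimal sketch of the contrast drawn here, assuming a two-parameter Weibull law with an arbitrary characteristic life η: the shape parameter β controls whether the instantaneous failure rate decreases (early failures), stays constant (the exponential case of Epstein and Sobel) or increases (ageing):

```python
def weibull_hazard(t, beta, eta):
    """Instantaneous failure rate of a two-parameter Weibull law:
    lambda(t) = (beta / eta) * (t / eta) ** (beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

eta = 1000.0  # assumed characteristic life (hours), for illustration only
# beta < 1: decreasing rate (early failures); beta = 1: constant rate
# (the exponential law); beta > 1: increasing rate (ageing, wear-out)
for beta in (0.5, 1.0, 3.0):
    print(beta, weibull_hazard(100.0, beta, eta), weibull_hazard(900.0, beta, eta))
```

The three regimes together trace the classic “bathtub curve” of equipment life, which is one reason the Weibull law became central to the ageing studies mentioned above.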

Moreover, the threats of the Cold War would, on the one hand, accelerate the development of effective weapons systems for deterrence and, on the other hand, create a lively rivalry between the United States and the USSR over questions of national prestige. The fallout from military and space technology for telecommunications was plentiful (intelligence satellites for transmitting images, geolocation, mobile data, mobile telephony, etc.).

The US army developed FMEA, then FMECA. Military standard Mil-P-1629, entitled “Procedures for Performing a Failure Mode, Effects and Criticality Analysis”, is dated November 9, 1949. The method was employed as a technique for evaluating failures in order to determine the reliability of a piece of equipment or a system, especially electronic systems. Developed by the aviation industry and the US army with the aim of making aerospace flight equipment reliable and secure, its use was then extended to industry to optimize the operational safety of components and systems presenting a significant risk for individuals and the environment; it is now universally used in all sectors. Three concepts would come to complete that of reliability: availability, maintainability and logistic support, together making up dependability.

The first studies on the human factor were launched in the United States.

The AIEE (American Institute of Electrical Engineers), which became the IEEE (Institute of Electrical and Electronics Engineers) in 1963, held its first conference on reliability in 1954. This period also saw the first industrial use of Markov processes.

In 1955, the first reliability study in France was published by the CNET (National Center for Telecommunications Studies), the pioneer of reliability studies. It was then that the French word “fiabilité” (reliability) came into use, under the leadership of Paul Blanquart and Guy Peyrache [LAN 08]. The Académie Française went on to accept the term in October 1965.

1.2.5. The advent of reliability specialists 1960–1974

With the design and implementation of nuclear power plants, security and safety concerns became significant. Potential accidents were classified, placed in a hierarchy and analyzed in detail, as much for their seriousness as for their frequency. Thus was born the defense-in-depth principle in the nuclear industry. The development of other industries (automotive, chemical, oil and gas, etc.) would take advantage of this impetus.

These advances were first of all methodological: knowledge, approaches, methods, etc. But they also brought very many benefits to society as a whole, and considerable ones to industrial societies: increased lifetimes, safer industries and transport, scientific and technical progress, greater security and safety linked to better prevention and protection policies, the approval procedures required for industries involving hazardous processes (the aim being to limit potential harmful impacts from the design stage), quality approaches, and the consumer actions and demonstrations that became widespread between 1970 and 1976 in defense of consumers’ rights and better quality of products and food (Ralph Nader).

Barlow and Proschan [BAR 65] showed the impact of maintenance on reliability: it is only possible to calculate the reliability of an item if we estimate the effects of maintenance.

The very well-known Mil-Hdbk-217 (edition B) on electronic reliability was published in 1974.

Frank Reginald Farmer [FAR 67] was interested in the acceptability of risks and tackled the question of the maximum credible accident. From his point of view, all potentially dangerous events should be considered. The risk is represented by a curve called fC (frequency–consequences), showing at the same time events with a low probability and severe consequences and frequent events with low-level consequences. This notion was introduced in 1967 for iodine-131 fallout from nuclear reactors. We therefore find fC curves and FN (frequency–number of fatalities) curves traced in log–log plots; such a plot is drawn in most risk analyses because of its simple and informative character. We thus speak of the Farmer diagram, curve or plot, which considers three domains: (i) events with high frequency and high severity, (ii) events with average or median frequency but substantial severity, and (iii) events with low frequency and extreme seriousness (the case of a major accident). These notions were reprised more recently in [TAB 10, LEM 14], which distinguish the “medianistan” (the first two domains, where the probability can be estimated by frequentist or Bayesian methods) from the third domain, called the “extremistan”, where the decision is made with regard to the severity of the major accident. These were the first steps in the probabilistic approach.
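The medianistan/extremistan distinction can be caricatured in a few lines of code; the frequency and severity cut-offs below are purely illustrative assumptions, not values from [FAR 67], [TAB 10] or [LEM 14]:

```python
def farmer_domain(frequency, severity, freq_cut=1e-5, sev_cut=1e3):
    """Rough placement of an event on a Farmer-type frequency-consequence
    diagram. freq_cut (events per year) and sev_cut (consequence units)
    are arbitrary, illustrative thresholds."""
    if frequency >= freq_cut:
        return "medianistan"   # frequent enough to estimate statistically
    if severity >= sev_cut:
        return "extremistan"   # rare event with extreme consequences
    return "medianistan"       # rare but with limited consequences

print(farmer_domain(1e-2, 10.0))  # a frequent, low-severity event
print(farmer_domain(1e-8, 1e5))   # a rare, extreme event (major accident)
```

In the extremistan, by construction, frequencies cannot be estimated reliably from feedback data, which is why decisions there rest on severity rather than on probability.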

We also cite the first treatises on reliability in French: the French translation of Bazovsky [BAZ 66], Chapouille and De Pazzis [CHA 68], Schwob and Peyrache [SCH 69], and Ligeron and Marcovici [LIG 74].

1.2.6. The “safety culture decade” 1975–1990

Studies on nuclear safety intensified with the report by Rasmussen [USN 75], in which the concepts of the fault tree (for causes) and the event tree (for consequences) appeared. These methods would also be used to study the Canvey Island oil complex (1978), which also saw the development of the use of physical models. These were the first published safety evaluations, both deterministic and probabilistic. These analyses showed the strategic benefit of operational feedback, which was beginning to develop in most industries. Operational feedback is in fact the base material, the input data for all risk and dependability studies and for controlling risks.

Human factors, the importance of which was signaled in Wash 1400, emerged in French nuclear safety institutions shortly after the accident at the Three Mile Island nuclear power plant. Several analyses of this accident highlighted the role of humans as one of the essential links in safety. Ergonomists and human sciences specialists then appeared in nuclear safety. The best-known work [SWA 83] is a fundamental reference for all later works: its methodology, THERP (Technique for Human Error Rate Prediction), estimates a probability of human error (defined as a human action that has the potential to degrade a system or aggravate a situation), the human being considered as one of the system’s components. These data are used universally in probabilistic safety assessments.

We can point out the publication of the book by Michel Gondran and Alain Pagès on the reliability of systems in 1980 [GON 80] and the publication of the synthesis work by Alain Villemeur in 1988 [VIL 88] on dependability methods in industrial systems.

1.2.7. Maximizing efficiency, performances and profits 1990–2007

The leitmotiv of company managers at this time was clearly “always as good or better, and always cheaper”. This was about meeting the expectations of the triumphant, ever-growing market economy in terms of the quality of services and products. It was a constant quest for better efficiency, better performance and greater profitability. The “bare necessity” sought in design was then transposed elsewhere, especially to maintenance. The RCM (reliability-centered maintenance) method was developed and applied, on the one hand, to lower the costs of preventive maintenance while increasing availability and profitability, but also to better ensure the safety of industrial systems. At the end of the 1990s, many methods were devoted to the study of ageing in systems, structures and components (SSC), taking account of the amount of capital invested in nuclear power, refineries, and air, rail and maritime transport. New concepts appeared with lifecycle management (LCM). Re-design, replacements, refurbishment, maintenance programs and investments were optimized by asset management (AM). These studies were the first acts of predictive maintenance.

Moreover, since the existing methods for predictive reliability calculations for electronic components had not been updated for several years, various manufacturers wanted a new methodology that would be both precise and realistic. It was in 1999 that the FIDES study was launched; developed over three years, it became a reference guide from 2005.

This period was also that of the Gulf War in Kuwait, during which the world noted the logistical capacities of the US army. Logistic support suddenly became a considerable industrial challenge, and design processes came to integrate logistic support, like maintenance, optimized from the outset.

The works of Reason [REA 97] established a classification of human errors, distinguishing so-called active errors made by the operator from latent errors linked to the organization. These works would highlight the importance of organizational factors, which came to complete human factors (Organizational and Human Factors).

This period was also marked by the successful transition into the year 2000 (the “Y2K bug”), showing the usefulness and effectiveness of dependability methods.

1.2.8. The return to safety, risk aversion 2007–2020

This period was marked by an ever greater aversion to risk. Already at the start of the 2000s, events out of the ordinary had occurred: the attacks of September 11, 2001, the Indian Ocean tsunami of 2004, etc. The general public lost confidence in the competence, capacities and seriousness of scientists, experts, politicians, economists, etc., to cope with unthinkable events, and held them all responsible for these catastrophes. Distrust grew between the general public and the intellectual elite making decisions on major projects. The aversion increased even more with the subprime mortgage crisis in 2008, storm Xynthia in 2010, Deepwater Horizon in 2010, Fukushima in 2011, the rail disasters of 2013, Tianjin in 2015, the collapse of the Genoa viaduct (Morandi Bridge) in 2018, etc.

It was particularly after the fatal collapse of a highway bridge in Minneapolis in 2007 that authorities and business leaders became aware that available budgets no longer made it possible to carry out grand projects. This awareness led to the development of studies on structural reliability, on predictive maintenance for SSC, on ageing and durability, and on the optimized management of industrial assets throughout the lifecycle, from preliminary design to deconstruction.

1.3. Generating nuclear power

Enrico Fermi and his team at the University of Chicago were the first to develop an atomic pile: the first controlled fission chain reaction was obtained on December 2, 1942, in the graphite-moderated pile CP-1, which produced no electricity.

A first production of electricity from nuclear fission was achieved on December 20, 1951, by the EBR 1 (Experimental Breeder Reactor 1), a fast neutron reactor tested at the Idaho National Laboratory, with an electrical production of 200 kW (for 1.4 thermal MW).

The first connections dated from the 1950s:

– Obninsk, in the USSR, on June 27, 1954 (5 electrical MW);

– Sellafield (United Kingdom), in 1956;

– Shippingport (USA) in December 1957 (68 electrical MW);

– Marcoule G2 and G3 (France), in April 1959 (39 electrical MW) and April 1960 (40 electrical MW) respectively.

Construction works on Chinon A1 began in February 1957 and the power plant was connected to the network in June 1963 (70 electrical MW). Nuclear sectors were then developed, especially in the 1970s, due to the oil crisis.

On December 31, 2017, there were 448 nuclear power reactors worldwide (including 292 PWRs [pressurized water reactors] and 75 BWRs [boiling water reactors]) connected to an electricity network, and 59 (including 49 PWRs and 4 BWRs) under construction. Note that 166 reactors had already been shut down. The share of nuclear power in global electricity production was 10.4%.

Since the first construction of nuclear power plants, operators and authorities have never ceased to be concerned with the challenges of security and safety. Studies on nuclear safety were systematized from the first grid connections. Reliability studies developed from the end of the 1970s, leading to the publication of a book on system reliability [GON 80] and the creation of focus groups across the nuclear industry. At the start, the concerns were above all technical, at component level as well as at system level. Since the publication of the Wash 1400 report [USN 75], which also highlighted the importance of the human factor, confirmed some years later by the TMI 2 accident (1979), the question of human reliability has become an essential subject. It was in the 1970s and 1980s that Probabilistic Safety Assessment (PSA) appeared, and it has continually progressed and developed since.

We distinguish three levels of PSA: