An indispensable guide for engineers and data scientists in design, testing, operation, manufacturing, and maintenance
A road map to the current challenges and available opportunities for the research and development of Prognostics and Health Management (PHM), this important work covers all areas of electronics.
Prognostics and Health Management of Electronics also explains how to understand statistical techniques and machine learning methods used for diagnostics and prognostics. Using this valuable resource, electrical engineers, data scientists, and design engineers will be able to fully grasp the synergy between IoT, machine learning, and risk assessment.
Page count: 1542
Publication year: 2018
Cover
Dedication
About the Editors
List of Contributors
Preface
About the Contributors
Acknowledgment
List of Abbreviations
Chapter 1: Introduction to PHM
1.1 Reliability and Prognostics
1.2 PHM for Electronics
1.3 PHM Approaches
1.4 Implementation of PHM in a System of Systems
1.5 PHM in the Internet of Things (IoT) Era
1.6 Summary
References
Chapter 2: Sensor Systems for PHM
2.1 Sensor and Sensing Principles
2.2 Sensor Systems for PHM
2.3 Sensor Selection
2.4 Examples of Sensor Systems for PHM Implementation
2.5 Emerging Trends in Sensor Technology for PHM
References
Chapter 3: Physics‐of‐Failure Approach to PHM
3.1 PoF‐Based PHM Methodology
3.2 Hardware Configuration
3.3 Loads
3.4 Failure Modes, Mechanisms, and Effects Analysis (FMMEA)
3.5 Stress Analysis
3.6 Reliability Assessment and Remaining‐Life Predictions
3.7 Outputs from PoF‐Based PHM
3.8 Caution and Concerns in the Use of PoF‐Based PHM
3.9 Combining PoF with Data‐Driven Prognosis
References
Chapter 4: Machine Learning: Fundamentals
4.1 Types of Machine Learning
4.2 Probability Theory in Machine Learning: Fundamentals
4.3 Probability Mass Function and Probability Density Function
4.4 Mean, Variance, and Covariance Estimation
4.5 Probability Distributions
4.6 Maximum Likelihood and Maximum A Posteriori Estimation
4.7 Correlation and Causation
4.8 Kernel Trick
4.9 Performance Metrics
References
Chapter 5: Machine Learning: Data Pre‐processing
5.1 Data Cleaning
5.2 Feature Scaling
5.3 Feature Engineering
5.4 Imbalanced Data Handling
References
Chapter 6: Machine Learning: Anomaly Detection
6.1 Introduction
6.2 Types of Anomalies
6.3 Distance‐Based Methods
6.4 Clustering‐Based Methods
6.5 Classification‐Based Methods
6.6 Statistical Methods
6.7 Anomaly Detection with No System Health Profile
6.8 Challenges in Anomaly Detection
References
Chapter 7: Machine Learning: Diagnostics and Prognostics
7.1 Overview of Diagnosis and Prognosis
7.2 Techniques for Diagnostics
7.3 Techniques for Prognostics
References
Chapter 8: Uncertainty Representation, Quantification, and Management in Prognostics
8.1 Introduction
8.2 Sources of Uncertainty in PHM
8.3 Formal Treatment of Uncertainty in PHM
8.4 Uncertainty Representation and Interpretation
8.5 Uncertainty Quantification and Propagation for RUL Prediction
8.6 Uncertainty Management
8.7 Case Study: Uncertainty Quantification in the Power System of an Unmanned Aerial Vehicle
8.8 Existing Challenges
8.9 Summary
References
Chapter 9: PHM Cost and Return on Investment
9.1 Return on Investment
9.2 PHM Cost‐Modeling Terminology and Definitions
9.3 PHM Implementation Costs
9.4 Cost Avoidance
9.5 Example PHM Cost Analysis
9.6 Example Business Case Construction: Analysis for ROI
9.7 Summary
References
Chapter 10: Valuation and Optimization of PHM‐Enabled Maintenance Decisions
10.1 Valuation and Optimization of PHM‐Enabled Maintenance Decisions for an Individual System
10.2 Availability
10.3 Future Directions
References
Chapter 11: Health and Remaining Useful Life Estimation of Electronic Circuits
11.1 Introduction
11.2 Related Work
11.3 Electronic Circuit Health Estimation Through Kernel Learning
11.4 RUL Prediction Using Model‐Based Filtering
11.5 Summary
References
Chapter 12: PHM‐Based Qualification of Electronics
12.1 Why is Product Qualification Important?
12.2 Considerations for Product Qualification
12.3 Review of Current Qualification Methodologies
12.4 Summary
References
Chapter 13: PHM of Li‐ion Batteries
13.1 Introduction
13.2 State of Charge Estimation
13.3 State of Health Estimation and Prognostics
13.4 Summary
References
Chapter 14: PHM of Light‐Emitting Diodes
14.1 Introduction
14.2 Review of PHM Methodologies for LEDs
14.3 Simulation‐Based Modeling and Failure Analysis for LEDs
14.4 Return‐on‐Investment Analysis of Applying Health Monitoring to LED Lighting Systems
14.5 Summary
References
Chapter 15: PHM in Healthcare
15.1 Healthcare in the United States
15.2 Considerations in Healthcare
15.3 Benefits of PHM
15.4 PHM of Implantable Medical Devices
15.5 PHM of Care Bots
15.6 Canary‐Based Prognostics of Healthcare Devices
15.7 Summary
References
Chapter 16: PHM of Subsea Cables
16.1 Subsea Cable Market
16.2 Subsea Cables
16.3 Cable Failures
16.4 State‐of‐the‐Art Monitoring
16.5 Qualifying and Maintaining Subsea Cables
16.6 Data‐Gathering Techniques
16.7 Measuring the Wear Behavior of Cable Materials
16.8 Predicting Cable Movement
16.9 Predicting Cable Degradation
16.10 Predicting Remaining Useful Life
16.11 Case Study
16.12 Future Challenges
16.13 Summary
References
Chapter 17: Connected Vehicle Diagnostics and Prognostics
17.1 Introduction
17.2 Design of an Automatic Field Data Analyzer
17.3 Case Study: CVDP for Vehicle Batteries
17.4 Summary
References
Chapter 18: The Role of PHM at Commercial Airlines
18.1 Evolution of Aviation Maintenance
18.2 Stakeholder Expectations for PHM
18.3 PHM Implementation
18.4 PHM Applications
18.5 Summary
References
Chapter 19: PHM Software for Electronics
19.1 PHM Software: CALCE Simulation Assisted Reliability Assessment
19.2 PHM Software: Data‐Driven
19.3 Summary
Chapter 20: eMaintenance
20.1 From Reactive to Proactive Maintenance
20.2 The Onset of eMaintenance
20.3 Maintenance Management System
20.4 Sensor Systems
20.5 Data Analysis
20.6 Predictive Maintenance
20.7 Maintenance Analytics
20.8 Knowledge Discovery
20.9 Integrated Knowledge Discovery
20.10 User Interface for Decision Support
20.11 Applications of eMaintenance
20.12 Internet Technology and Optimizing Technology
References
Chapter 21: Predictive Maintenance in the IoT Era
21.1 Background
21.2 Benefits of a Predictive Maintenance Program
21.3 Prognostic Model Selection for Predictive Maintenance
21.4 Internet of Things
21.5 Predictive Maintenance Based on IoT
21.6 Predictive Maintenance Usage Cases
21.7 Machine Learning Techniques for Data‐Driven Predictive Maintenance
21.8 Best Practices
21.9 Challenges in a Successful Predictive Maintenance Program
21.10 Summary
References
Chapter 22: Analysis of PHM Patents for Electronics
22.1 Introduction
22.2 Analysis of PHM Patents for Electronics
22.3 Trend of Electronics PHM
22.4 Summary
References
Chapter 23: A PHM Roadmap for Electronics-Rich Systems
23.1 Introduction
23.2 Roadmap Classifications
23.3 Methodology Development
23.4 Nontechnical Barriers
References
Appendix A: Commercially Available Sensor Systems for PHM
A.1 SmartButton – ACR Systems
A.2 OWL 400 – ACR Systems
A.3 SAVER™ 3X90 – Lansmont Instruments
A.4 G‐Link®‐LXRS® – LORD MicroStrain® Sensing Systems
A.5 V‐Link®‐LXRS® – LORD MicroStrain Sensing Systems
A.6 3DM‐GX4–25™ – LORD MicroStrain Sensing Systems
A.7 IEPE‐Link™‐LXRS® – LORD MicroStrain Sensing Systems
A.8 ICHM® 20/20 – Oceana Sensor
A.9 Environmental Monitoring System 200™ – Upsite Technologies
A.10 S2NAP® – RLW Inc.
A.11 SR1 Strain Gage Indicator – Advance Instrument Inc.
A.12 P3 Strain Indicator and Recorder – Micro‐Measurements
A.13 Airscale Suspension‐Based Weighing System – VPG Inc.
A.14 Radio Microlog – Transmission Dynamics
Appendix B: Journals and Conference Proceedings Related to PHM
B.1 Journals
B.2 Conference Proceedings
Appendix C: Glossary of Terms and Definitions
Index
End User License Agreement
Chapter 1
Table 1.1 Examples of failure mechanisms, loads, and failure models in electronics via FMMEA, where T, H, V, M, J, and S indicate temperature, humidity, voltage, moisture, current density, and stress, respectively, and Δ and ∇ mean cyclic range and gradient.
Table 1.2 Examples of life‐cycle loads.
Table 1.3 Potential failure precursors for electronics [56].
Table 1.4 Monitoring parameters based on reliability concerns in hard drives.
Chapter 2
Table 2.1 Examples of sensor measurands for PHM.
Table 2.2 Chemical sensor principles.
Table 2.3 Considerations for sensor selection.
Table 2.4 Characteristics of sensor systems identified.
Chapter 3
Table 3.1 Examples of failure mechanisms [10].
Table 3.2 PoF‐based PHM for different electronic products.
Table 3.3 FMMEA of lithium‐ion cells (partial results) [31].
Table 3.4 FMMEA of LED lighting [32].
Table 3.5 Failure mechanisms, relevant loads, and models in electronics.
Chapter 4
Table 4.1 A confusion matrix.
Chapter 5
Table 5.1 Example of a dataset with missing data.
Table 5.2 A confusion matrix without the use of oversampling algorithms, MCC = 0.7293.
Table 5.3 A confusion matrix with the SMOTE algorithm, MCC = 0.8134.
Table 5.4 A confusion matrix with the ADASYN algorithm, MCC = 0.8944.
Chapter 6
Table 6.1 Distance‐based anomaly detection methods.
Table 6.2 Clustering‐based anomaly detection methods.
Table 6.3 Classification‐based anomaly detection methods.
Table 6.4 Statistical anomaly detection methods.
Table 6.5 A summary of advantages and disadvantages of distance‐based anomaly detection methods.
Table 6.6 A summary of advantages and disadvantages of clustering‐based anomaly detection methods.
Table 6.7 A summary of advantages and disadvantages of classification‐based anomaly detection methods.
Table 6.8 A summary of advantages and disadvantages of statistical anomaly detection methods.
Chapter 7
Table 7.1 AdaBoost algorithm.
Chapter 8
Table 8.1 Battery model parameters.
Table 8.2 Variable amplitude loading statistics.
Chapter 9
Table 9.1 Categories of nonmonetary considerations for PHM.
Table 9.2 Data defining unscheduled maintenance operational profile.
Table 9.3 Data assumptions for example cases presented in this section.
Table 9.4 Implementation costs and categories.
Table 9.5 Unscheduled maintenance costs and events.
Table 9.6 Operational profile.
Table 9.7 Spares inventory.
Table 9.8 Comparison of total life‐cycle costs per socket for various maintenance approaches.
Chapter 11
Table 11.1 Performance results of developed health estimation method on Sallen–key bandpass filter.
Table 11.2 Performance results of developed health estimation method on DC–DC converter system.
Chapter 12
Table 12.1 JESD22 qualification tests.
Chapter 13
Table 13.1 Fitted model parameter list and statistics list of model fitting.
Chapter 14
Table 14.1 Summary of material properties in the electrical simulation.
Table 14.2 Summary of optical properties of each layer in the optical simulation.
Table 14.3 PHM investment costs (I_SHM) per LRU [161, 162].
Table 14.4 LRU‐level implementation costs.
Table 14.5 System implementation costs.
Chapter 15
Table 15.1 Contributing factors in medical device mishaps.
Table 15.2 Physical stresses related to implantation environment.
Table 15.3 PHM information requirements.
Chapter 16
Table 16.1 Typical characteristics of 132‐kV HVAC cables.
Table 16.2 Subsea cable faults over a 15‐year period (up to 2006).
Table 16.3 Types of existing test standards for subsea cables.
Table 16.4 Cable layer material properties used for the derivation of the wear coefficient.
Table 16.5 Wear coefficients of layer materials from Taber experiments.
Chapter 17
Table 17.1 Common driver behaviors and corresponding features.
Table 17.2 BMS evaluation features for vehicle battery diagnostics.
Table 17.3 Features generated for battery SOC case study.
Table 17.4 Accuracies P (%) for kernel‐SVM based feature ranking approach.
Table 17.5 Accuracies P (%) for kernel‐SVM based feature ranking approach.
Table 17.6 Accuracies P (%) for kernel‐SVM based feature‐ranking approach.
Table 17.7 Accuracies P (%) of “filter” approaches.
Chapter 19
Table 19.1 Example dataset.
Table 19.2 Lithium‐ion capacity versus cycle data.
Chapter 21
Table 21.1 Comparison of predictive and preventive maintenance programs.
Table 21.2 Sample predictive maintenance usage cases.
Table 21.3 Sample data preparation tasks.
Chapter 22
Table 22.1 Data distribution of typical patents in semiconductor products and computers.
Table 22.2 Distribution of typical patents in batteries.
Table 22.3 Distribution of typical patents in electric motors.
Table 22.4 Distribution of typical patents in circuits and systems.
Table 22.5 Distribution of typical patents in electrical devices in automobiles and airplanes.
Table 22.6 Distribution of typical patents in networks and communications facilities.
Table 22.7 Distribution of typical patents in other subsystems.
Chapter 23
Table 23.1 PHM roadmap for electronic components.
Table 23.2 PHM roadmap for systems.
Table 23.3 Data‐driven approaches to feature discovery.
Chapter 1
Figure 1.1 Framework for prognostics and health management.
Figure 1.2 Application of health monitoring for product re‐use. (a) Usage as per design, (b) More severe usage than intended design, and (c) Less severe usage than intended design.
Figure 1.3 CALCE PHM methodology.
Figure 1.4 PoF‐based prognostics approach [32].
Figure 1.5 CALCE life consumption monitoring methodology.
Figure 1.6 Remaining life estimation of test board.
Figure 1.7 Load feature extraction.
Figure 1.8 Uncertainty implementation for prognostics.
Figure 1.9 Sun Microsystems' approach to PHM.
Figure 1.10 A general procedure of a data‐driven approach to prognostics.
Figure 1.11 Fusion PHM approach [32].
Figure 1.12 Technology stack for supporting IoT [32].
Figure 1.13 Inclusion of IoT‐based PHM in a predictive warranty service [32].
Chapter 2
Figure 2.1 Integrated sensor system for in‐situ environmental monitoring.
Figure 2.2 Sensor system selection procedure.
Chapter 3
Figure 3.1 PoF‐based PHM approach [1].
Figure 3.2 Prioritization of failure mechanisms.
Figure 3.3 Basic structure of a lithium‐ion cell [31].
Figure 3.4 Materials and structure of an LED chip: (a) LED packages, and (b) LED lighting lamps [32].
Figure 3.5 Strain measurement by strain gauge at the back side of the PCB for each BGA [36].
Figure 3.6 Acceleration measurement by a sensor at the center of the PCB [36].
Figure 3.7 FEA analysis that helps “extrapolate” the measured PCB strain to the actual local strain at the solder joint [36].
Figure 3.8 Damage calculation approach for temperature and vibration data.
Figure 3.9 Remaining‐life estimation of test board.
Figure 3.10 Dependence of the rate of stress voiding on the temperature‐dependent creep and tensile stress‐induced vacancy movement in the interconnect line. The rate of degradation is highest at some intermediate temperature where both creep (atomic diffusion) and the tensile stress are moderate.
Chapter 4
Figure 4.1 A high‐level overview of the ML approach to diagnosis.
Figure 4.2 A labeled training dataset for supervised learning.
Figure 4.3 Regression concept.
Figure 4.4 An unlabeled training dataset for unsupervised learning.
Figure 4.5 Instance‐based learning concept.
Figure 4.6 Model‐based learning concept.
Figure 4.7 Graph of the probability mass function (pmf) of a fair dice.
Figure 4.8 Graph of the probability density function (pdf).
Figure 4.9 (a) A two‐class, linearly separable dataset and (b) the decision boundary of a linear SVM on the dataset, where the solid line is the boundary.
Figure 4.10 (a) A dataset in ℝ², not linearly separable; (b) a circular decision boundary that can separate the outer ring from the inner ring; and (c) a dataset transformed by the transformation.
Figure 4.11 Example of an ROC space.
Figure 4.12 (a) Milestones on the path to object system failure and (b) the end‐of‐prediction (EOP) time t_EOP to measure the goodness‐of‐fit between the actual performance degradation trend y and the estimated degradation trend.
Chapter 5
Figure 5.1 Data pre‐processing tasks generally required in PHM.
Figure 5.2 Feature extraction methods.
Figure 5.3 PCA, where PC1 and PC2 indicate the first and second principal components obtained from PCA.
Figure 5.4 LDA, where the variables μ and s indicate the mean and standard deviation obtained from a given class (i.e. healthy or faulty class), and the objective of LDA is to find a new axis that maximizes the separability.
Figure 5.5 PCA versus LDA.
Figure 5.6 Structure of the SOM.
Figure 5.7 KS statistic.
Figure 5.8 A pictorial schematic of a support vector machine (SVM).
Figure 5.9 A methodology to assess the effectiveness of oversampling algorithms for bearing fault diagnosis. Here, MCC stands for the Matthews correlation coefficient (see Chapter 4).
Figure 5.10 Synthetic observations.
Chapter 6
Figure 6.1 Example of point anomalies.
Figure 6.2 Example of a contextual anomaly.
Figure 6.3 Example of a collective anomaly.
Figure 6.4 MD‐based anomaly detection.
Figure 6.5 Visualization of an anomaly threshold.
Figure 6.6 Concept of clustering.
Figure 6.7 Concept of clustering‐based anomaly detection.
Figure 6.8 Procedure of k‐means clustering.
Figure 6.9 SOM training process.
Figure 6.10 Concept of one‐class classification‐based anomaly detection.
Figure 6.11 Classification margin.
Figure 6.12 Issues in soft‐margin SVMs.
Figure 6.13 Example of the OC‐SVM by Schölkopf et al. [46].
Figure 6.14 Classification using the k‐NN algorithm.
Figure 6.15 Effect of k values in k‐NN classification.
Figure 6.16 k‐NN anomaly detection concept.
Figure 6.17 Concept of multi‐class classification‐based anomaly detection.
Figure 6.18 OAO multi‐class classification strategy.
Figure 6.19 OAA multi‐class classification strategy.
Figure 6.20 A general three‐layer, feedforward neural network structure.
Figure 6.21 SPRT null and alternative hypotheses for a normal distribution.
Figure 6.22 Concept of null hypothesis acceptance or rejection based on missed‐alarm and false‐alarm probabilities.
Figure 6.23 SPRT procedure.
Figure 6.24 Correlation analysis result: (a) sensor values at the initial (healthy) state, (b) sensor values after the initial state including faults, and (c) PCCs between two sensor values. (See color plate section for the color representation of this figure.)
Figure 6.25 iForest‐based anomaly detection.
Chapter 7
Figure 7.1 Machine learning‐based data‐driven diagnosis.
Figure 7.2 Prognostics concept.
Figure 7.3 DT visualization (visit https://github.com/calceML/PHM.git for hands‐on practice). Here, x_1 to x_5 correspond to the dimensions of the input instances. Likewise, integer numbers 0 to 3 at the leaf nodes are the classes (i.e. four failure modes).
Figure 7.4 Effect of ensemble learning in terms of classification performance.
Figure 7.5 Changes in variance and bias as a function of the prediction model.
Figure 7.6 Bootstrap resampling.
Figure 7.7 (a) A residual building unit (RBU), where the residual mapping is any nonlinear function of a given x; (b) an example of the DRN architecture, in which “ReLU” is a ReLU activation function, “BN” is batch normalization, “Conv 3 × 3” refers to a convolutional layer with convolutional kernel size of 3 × 3, and “GAP” is global average pooling. More details about the role of the convolutional layer can be found in [22].
Figure 7.8 Safety‐critical parts in automobiles.
Figure 7.9 Classification accuracy using handcrafted features for diagnosis of safety‐critical parts in automobiles.
Figure 7.10 An overview of a DRN for feature learning‐powered diagnosis. (See color plate section for the color representation of this figure.)
Figure 7.11 Learned features using the DRN for diagnosis. For visualization, linear discriminant analysis (see Chapter 5) was applied to the features learned at each layer. (See color plate section for the color representation of this figure.)
Figure 7.12 Example of RUL estimation using linear regression analysis.
Figure 7.13 Example of nonlinear relationship between the predictor (no. of cycles) and target (discharge capacity).
Figure 7.14 Effect of polynomial regression.
Figure 7.15 Effect of the regularization term in ridge regression.
Figure 7.16 Effect of the regularization term in LASSO regression.
Figure 7.17 k‐NN regression concept.
Figure 7.18 Particle filter general algorithm flowchart.
Chapter 8
Figure 8.1 PHM‐related activities.
Figure 8.2 Sources of uncertainty.
Figure 8.3 Architecture for prognostics and uncertainty quantification.
Figure 8.4 Definition of R = G(X).
Figure 8.5 Battery equivalent circuit [10].
Figure 8.6 EOD prediction at multiple time‐instants.
Figure 8.7 EOD prediction at T = 800 seconds (near failure).
Figure 8.8 Multimodal RUL probability distribution.
Chapter 9
Figure 9.1 Data‐driven (precursor to failure monitoring) modeling approach. Symmetric triangular distributions are chosen for illustration. Note, the LRU TTF pdf (left) and the data‐driven TTF pdf (right) are not the same (they could have different shapes and sizes).
Figure 9.2 Model‐based (LRU‐independent) modeling approach. Symmetric triangular distributions are chosen for illustration. Note that the LRU TTF pdf (left) and the model‐based method TTF pdf (right) are not the same (they could have different shapes and sizes).
Figure 9.3 Temporal ordering of implementation cost inclusion in the discrete‐event simulation.
Figure 9.4 Variation of the effective life‐cycle cost per socket with the fixed‐schedule maintenance interval (10 000 sockets simulated with no random failures assumed).
Figure 9.5 Variation of the effective life‐cycle cost per socket with the safety margin and prognostic distance for various LRU TTF distribution widths and constant PHM structure TTF width (10 000 sockets simulated).
Figure 9.6 Variation of the effective life‐cycle cost per socket with the safety margin and prognostic distance for various PHM structure TTF and constant LRU TTF distribution widths (10 000 sockets simulated).
Figure 9.7 Variation of the effective life‐cycle cost per socket and failures avoided, with the safety margin and prognostic distance for 2000 hours LRU TTF distribution widths and 1000 hours PHM distribution widths, with and without random failures included (10 000 sockets simulated).
Figure 9.8 Multisocket timeline example.
Figure 9.9 TTF distributions for LRUs used in multisocket analysis examples. The plot on the right shows the cost of single‐socket systems made from these two LRUs as a function of time using a prognostic distance of 500 hours for the LRU in Socket 1 (note the results for 10 000 instances of each socket are shown). All data other than the LRU TTF are given in Table 9.3.
Figure 9.10 Mean life‐cycle cost per system of two dissimilar sockets. Socket 1 LRU, location parameter = 19 900 hours (health monitoring); socket 2 LRU, FFOP = 9900 hours (unscheduled maintenance) (10 000 systems simulated).
Figure 9.11 Mean life‐cycle cost per system of two or three similar sockets. All LRUs, location parameter = 19 900 hours (data‐driven); 10 000 systems simulated.
Figure 9.12 Mean life‐cycle cost per system of mixed sockets; 10 000 systems simulated.
Figure 9.13 Weibull distribution of TTFs. TTF 1: β = 1.1 [71], η = 1200 hours [68], and γ = 25 000 hours; TTF 2: β = 3, η = 25 000 hours, and γ = 0.
Figure 9.14 Variation of life‐cycle cost with data‐driven PHM prognostic distance (5000 LRUs sampled). Left: the TTF 1 distribution in Figure 9.13 (left); and right: the TTF 2 distribution in Figure 9.13 (right).
Figure 9.15 Socket cost histories over the system support life (5000 LRUs sampled). These graphs correspond to the TTF 1 distribution on the left side of Figure 9.13.
Figure 9.16 Histogram of ROI for a 5000‐socket population.
Figure 9.17 Mean ROI as a function of the annual infrastructure cost of PHM per LRU (5000 LRUs sampled).
Figure 9.18 System socket availability associated with unscheduled and PHM maintenance approaches (5000 LRUs sampled). Note a 24‐month lead time for spare replenishment (as defined in Table 9.7) was assumed.
Chapter 10
Figure 10.1 Simple predictive maintenance value formulation [2].
Figure 10.2 An example of the ROA valuation [2].
Figure 10.3 Left – cumulative revenue loss (R_L); middle – avoided corrective maintenance cost (C_A); and right – predictive maintenance value paths (V_PM) for a single system (100 paths are shown) [2].
Figure 10.4 Expected predictive maintenance option value curve (predictive maintenance opportunity is once per hour) and the histogram of ARUL_C [2].
Figure 10.5 Expected predictive maintenance option value curve when the predictive maintenance opportunity is once every 48, 72, or 96 hours [2].
Figure 10.6 Left – cumulative revenue loss; middle – avoided corrective maintenance cost; and right – predictive maintenance value paths for turbines 1 and 2 (100 paths are shown).
Figure 10.7 Expected predictive maintenance option value curves for turbines 1 and 2 when managed using a PPA or an “as‐delivered” contract (predictive maintenance opportunity is once every 48 hours).
Figure 10.8 Expected predictive maintenance option value curve for turbines 1 and 2 when the number of turbines down is varying (predictive maintenance opportunity is once every 48 hours).
Figure 10.9 Expected predictive maintenance option value curves for when turbine 1 is managed in isolation, and when turbines 1 and 2 are managed in a wind farm (predictive maintenance opportunity is once every 48 hours).
Figure 10.10 Computed maximum allowable ILT for two different maintenance policies [15].
Chapter 11
Figure 11.1 Example plots for parametric drifts exhibited by electronic components. (a) Degradation of electrolytic capacitors under isothermal aging is accompanied by decrease in capacitance parameters. (b) Increase in resistance between the collector and emitter (R_CE) terminals of an insulated gate bipolar transistor due to die‐attach degradation. (c) Decrease in capacitance with degradation of embedded capacitors under combined temperature and voltage aging. (d) Increase in resistance with solder joint degradation of surface mount resistors under thermal cycling conditions.
Figure 11.2 Typical steps involved in a prognostic approach.
Figure 11.3 Examples where linear separability between healthy and failure classes ensures d_hh < d_hf either in (a) Euclidean space or (b) principal component space.
Figure 11.4 Illustration of the principle underlying kernel‐based learning methods.
Figure 11.5 Overview of the proposed circuit health estimation method.
Figure 11.6 Particle filtering approach for optimization of hyperparameters. See text for explanation.
Figure 11.7 Schematic of a Sallen–Key bandpass filter centered at 25 kHz. The table represents the critical components and their failure ranges.
Figure 11.8 Magnitude (top) and phase (bottom) of Sallen–Key bandpass filter's transfer function with and without faults. (See color plate section for the color representation of this figure.)
Figure 11.9 Example of a sweep (test) signal.
Figure 11.10 (a) Illustration of wavelet decomposition using filter banks, and (b) frequency range coverings for the details and approximation for three levels of decomposition.
Figure 11.11 Plot of training error rate with respect to iteration number.
Figure 11.12 (a) Progression of parametric fault in C_1 of Sallen–Key bandpass filter. (b) Health estimates using the developed kernel and MD‐based method for fault in C_1.
Figure 11.13 (a) Progression of parametric fault in C_2 of Sallen–Key bandpass filter. (b) Health estimates using the developed kernel and MD‐based method for fault in C_2.
Figure 11.14 (a) Progression of parametric fault in R_2 of Sallen–Key bandpass filter. (b) Health estimates using the developed kernel and MD‐based method for fault in R_2.
Figure 11.15 (a) Progression of parametric fault in R_3 of Sallen–Key bandpass filter. (b) Health estimates using the developed kernel and MD‐based method for fault in R_3.
Figure 11.16 DC–DC buck converter system design abstraction levels.
Figure 11.17 Schematic of a LC (inductor capacitor) low‐pass filter circuit in a DC–DC converter system.
Figure 11.18 Low‐pass filter circuit health estimated using the kernel method (lower curve) in comparison with the actual health (upper curve) for the progression of parametric fault in C – Run 1.
Figure 11.19 Low‐pass filter circuit health estimated using the kernel method (lower curve) in comparison with the actual health (upper curve) for the progression of parametric fault in C – Run 2.
Figure 11.20 Low‐pass filter circuit health estimated using the kernel method (lower curve) in comparison with the actual health (upper curve) for the progression of parametric fault in C – Run 3.
Figure 11.21 Low‐pass filter circuit health estimated using the kernel method (lower curve) in comparison with the actual health (upper curve) for the progression of parametric fault in C – Run 4.
Figure 11.22 Schematic of voltage divider feedback circuit in a DC–DC converter system.
Figure 11.23 Voltage divider feedback circuit health estimated using the kernel method (lower curve) in comparison with the actual health (upper curve) for the progression of parametric fault in R_1: (a) and (b) represent two different degradation trends.
Figure 11.24 Degradation trends in voltage divider feedback circuit health estimated using the kernel method (lower curve) in comparison with the actual health (upper curve) for the progression of parametric fault in R_3: (a) and (b) represent two different degradation trends.
Figure 11.25 Prognostics illustration.
Figure 11.26 Simple one‐component circuit for degradation modeling illustration.
Figure 11.27 Illustration of the steps involved in a simple particle filter: (a) initial condition, (b) particle sampling from initial distribution, (c) one‐step prediction, and (d) state update.
Figure 11.28 Observed and estimated degradation in health of low‐pass filter circuit due to progression of a fault in the electrolytic capacitor.
Figure 11.29 Estimated deviation in capacitance of the liquid electrolytic capacitor.
Figure 11.30 RUL estimation result for low‐pass filter circuit using model‐based filtering method.
Figure 11.31 Observed and estimated degradation in voltage feedback circuit health due to progression of a fault in R_1.
Figure 11.32 Estimated deviation in resistance R_1 of the voltage feedback circuit.
Figure 11.33 RUL estimation result in voltage feedback circuit due to progression of a fault in R_1 using a model‐based filtering method.
Figure 11.34 Observed and estimated degradation in voltage feedback circuit health due to progression of a fault in R_3.
Figure 11.35 Estimated deviation in resistance R_3 of the voltage feedback circuit.
Figure 11.36 RUL estimation result in voltage feedback circuit due to progression of a fault in R_3 using a model‐based filtering method.
Figure 11.37 Estimated voltage feedback circuit health due to simulated progression of a fault in component R_3.
Figure 11.38 Estimated deviation in resistance R_3 of the voltage feedback circuit with simulated component degradation.
Figure 11.39 RUL estimation result in voltage feedback circuit due to simulated progression of fault in R_3 using model‐based filtering method.
Figure 11.40 RUL prediction results for the voltage divider feedback circuit with (a) random walk model for θ_t and (b) first‐principles‐based model for θ_t.
Figure 11.41 Predicted RUL distribution for voltage divider feedback circuit with random walk model (a, c) and first‐principles‐based model (b, d) for θ_t at (a, c) 100 h and (b, d) 50 h before failure.
Chapter 12
Figure 12.1 Product qualification steps.
Figure 12.2 Current market trends in electronics.
Figure 12.3 Complexity of the computer supply chain [16].
Figure 12.4 (a) Healthy solder joint and (b) failed solder joint.
Figure 12.5 Accelerated thermal cycling test conditions.
Figure 12.6 Comparison of accelerated test and use condition time‐to‐failure.
Figure 12.7 PoF approach for qualification testing [26] (see Chapter 1).
Figure 12.8 RUL method to reduce light‐emitting diode qualification time [43].
Figure 12.9 Temperature cycle versus solder joint temperature [44].
Figure 12.10 Solder joints after (a) 1500 cycles, and (b) 4500 cycles [44].
Figure 12.11 A fusion prognostics‐based qualification test methodology [18].
Figure 12.12 (a) Target and (b) canary resistors [35].
Figure 12.13 Failure distribution of canary and standard resistors [35].
Chapter 13
Figure 13.1 The schematic diagram of a lithium‐ion cell.
Figure 13.2 The structure of a multilayer feed‐forward neural network.
Figure 13.3 Battery testing profiles: (a) DST profile, which was used as training data; (b) US06 Highway Driving Schedule, which was used as testing data; and (c) federal urban driving schedule (FUDS).
Figure 13.4 Neural network results for US06 testing data at different temperatures: (a) 10°C, (b) 25°C, (c) 40°C, and (d) 50°C.
Figure 13.5 SOC estimation results for US06 at different temperatures after UKF filtering: (a) 10°C, (b) 25°C, (c) 40°C, and (d) 50°C.
Figure 13.6 OCV curve at 20°C.
Figure 13.7 Schematic of the internal resistance (Rint) model of the battery.
Figure 13.8 DST profile at 20°C: (a) measured current and (b) measured voltage.
Figure 13.9 (a) OCV–SOC curves between 30% and 80% SOC at different temperatures, and (b) the SOC corresponding to the specified OCVs at 0, 20, and 40°C.
Figure 13.10 Curve fitting for C(T) and C(50°C) for model validation.
Figure 13.11 FUDS profile (at 20°C) used for model validation: (a) measured current; (b) measured voltage; and (c) cumulative SOC.
Figure 13.12 (a) The estimated error of Uterm, and (b) true and estimated SOC using two different OCV–SOC tables when FUDS was operated at 40°C.
Figure 13.13 The curve fitting of the model (Eq. 13.13) to the battery capacity fade data.
Figure 13.14 Flowchart of the proposed scheme for battery prognostics.
Figure 13.15 Prediction result at 18 cycles for battery A4. The BMC prognostic model was initialized by DS theory. The prediction error is 1 cycle, and the standard deviation of RUL estimation is 6 cycles.
Figure 13.16 Prediction result at 32 cycles for battery A4. The BMC prognostic model was initialized by DS theory. BMC accurately predicted the failure time. The standard deviation of the RUL estimation is 2 cycles.
Chapter 14
Figure 14.1 Available prognostic methods/models for LEDs, and categorization.
Figure 14.2 FMMEA for LED from die to lighting system [4].
Figure 14.3 Illustration of (a) the LED chip structure; (b) electrode geometry; and (c) simplified simulation model [112].
Figure 14.4 Illustration of the modeling process of the electro‐optical simulation.
Figure 14.5 Illustration of the optical model.
Figure 14.6 Discretization of the MQW layer of the chip model.
Figure 14.7 (a) Vector plot of the density of current flowing through the MQW layer; (b) contour plot of the simulated current density distribution on the MQW layer [112]. (See color plate section for the color representation of this figure.)
Figure 14.8 (a) Predicted 0° and 90° angular light intensity distribution patterns of the LED chip; (b) experimental and predicted light intensity distribution patterns of the LED chip [112].
Figure 14.9 Structure of a Pc‐white LED chip scale package with multicolor phosphors.
Figure 14.10 (a) A Pc‐white LED CSP soldered onto an Al2O3 ceramic substrate with a silver surface; (b) thermal distribution tested by an IR camera (ambient temperature is 55°C); and (c) thermal distribution simulation result [122]. (See color plate section for the color representation of this figure.)
Figure 14.11 Luminescence mechanism of multicolor phosphors (G: G525, O: O5544, R: R6535, and gray part represents silicone).
Figure 14.12 Experimental and simulation results of initial spectrum power distributions for two Pc‐white LED CSPs [122].
Figure 14.13 (a) A 12‐W LED down lamp; (b) its thermal dissipation simulation; and (c) the thermal distribution simulation of the LED module.
Figure 14.14 Process flow for analyzing the ROI of a precursor‐to‐failure PHM approach using SHM, relative to unscheduled maintenance [159].
Figure 14.15 Weibull distributions of TTF1, TTF2, and TTF3.
Figure 14.16 Weibull distributions of TTF4, TTF5, and TTF6.
Figure 14.17 Variation of life‐cycle cost with precursor‐to‐failure PHM prognostic distance for the exponential distributions (i.e. TTF1 to TTF3).
Figure 14.18 Variation of life‐cycle cost with precursor‐to‐failure PHM prognostic distance for the normal distributions (i.e. TTF4 to TTF6).
Figure 14.19 Mean life‐cycle costs per socket using TTF1.
Figure 14.20 Mean life‐cycle costs per socket using TTF2.
Figure 14.21 Mean life‐cycle costs per socket using TTF3.
Figure 14.22 Mean life‐cycle costs per socket using TTF4.
Figure 14.23 Mean life‐cycle costs per socket using TTF5.
Figure 14.24 Mean life‐cycle costs per socket using TTF6.
Figure 14.25 System availability for the unscheduled and PHM with SHM maintenance approaches based on TTF1, TTF2, and TTF3 exponential failure distributions (100 000 LRUs sampled).
Figure 14.26 System availability for the unscheduled and PHM with SHM maintenance approaches based on TTF4, TTF5, and TTF6 normal failure distributions (100 000 LRUs sampled).
Figure 14.27 ROI of LED lighting systems using exponential failure distributions of TTF1 and TTF2.
Figure 14.28 ROI of LED lighting systems using an exponential failure distribution of TTF3.
Figure 14.29 ROI of LED lighting systems using normal failure distributions of TTF4, TTF5, and TTF6.
Chapter 15
Figure 15.1 Factors influencing reliability of implantable medical devices.
Figure 15.2 Anticipated implantable medical device duration of use [12]. (See color plate section for the color representation of this figure.)
Figure 15.3 Milestones on the path to object system failure.
Figure 15.4 PHM challenges with implantable medical devices.
Figure 15.5 Uncertainties in prognostics relevant to medically implanted electronics.
Figure 15.6 Failure probability density distributions for canaries and actual products, showing prognostic distance or RUL.
Chapter 16
Figure 16.1 Three‐phase HVAC cable.
Figure 16.2 Inner structure of a modern HVDC cable.
Figure 16.3 Taber abrasive wear apparatuses: (a) single head tester and (b) double head tester.
Figure 16.4 Stainless steel accumulated volume loss plot versus Taber abrasive wheel rolling distance.
Figure 16.5 Forces acting on a cable.
Figure 16.6 A catenary model with concentrated loadings.
Figure 16.7 The most common tidal pattern.
Figure 16.8 Schematic view of layer volumes in stage three.
Figure 16.9 The bathtub curve for product failure rate.
Figure 16.10 Modeling methodology for predicting lifetime of subsea cables.
Figure 16.11 Graphical user interface (GUI) for CableLife software.
Figure 16.12 High‐level illustration of CableLife software flow diagram.
Figure 16.13 Schematic plot of the sliding distances, lengths, and the tidal current flow rate of each of the zones.
Figure 16.14 Lifetime (RUL) prediction of single‐armored cable at zone 7 using wear coefficient extracted from using H10, H18, and H38 Taber abrasive wheels.
Chapter 17
Figure 17.1 A high‐level illustration of the vehicle design process. For clarity, not all feedback loops are illustrated.
Figure 17.2 An illustration of the connected vehicle diagnostics and prognostics concept.
Figure 17.3 Diagram of an automatic field data analyzer.
Figure 17.4 Illustration of prediction performance change during the feature elimination process.
Figure 17.5 The experiment time‐series data example contains SOC, voltage, temperature, and current for one vehicle in Nebraska. The units of SOC, voltage, temperature, and current are %, V, °C, and A, respectively.
Figure 17.6 Histograms of highly ranked features based on 10‐fold cross‐validation using (a) kernel‐SVM wrapper approach, first iteration; (b) kernel‐SVM wrapper approach, second iteration without features 4 and 6; (c) LDA approach; (d) PCA approach; (e) SLPP approach; and (f) LPP approach.
Figure 17.7 Classification results for all data samples using the kernel‐SVM method as the classifier and feature 6 as the input feature. The y‐axis is the sample density for SOC decrease or increase (%).
Figure 17.8 A segment of experiment data with correct SOC estimation: (a) one‐day driving cycle and (b) SOC change during driving.
Figure 17.9 A segment of experiment data with incorrect SOC estimation: (a) one‐day driving cycle and (b) SOC change during driving.
Figure 17.10 Classification results for all data samples based on the kernel‐SVM method over features 4 and 5.
Chapter 18
Figure 18.1 Progression of maintenance tasks for commercial aviation, MSG‐1 through MSG‐3.
Figure 18.2 Evolution of aircraft maintenance in relation to PHM capabilities.
Figure 18.3 Remaining useful life prediction [12].
Figure 18.4 Two‐spool hi‐bypass aircraft engine with major components and station numbers [15].
Figure 18.5 Information flow for engine diagnostics [18].
Figure 18.6 Environmental control system (ECS) pack.
Figure 18.7 Aircraft air distribution fan.
Figure 18.8 Commercial aircraft wheel brake.
Figure 18.9 Fuel inerting system.
Figure 18.10 Impact of the high cost of fuel.
Figure 18.11 Slat actuator (left) and flap actuator (right).
Figure 18.12 Gearbox‐mounted starter generator.
Figure 18.13 Power distribution.
Figure 18.14 Commercial aircraft landing system.
Chapter 19
Figure 19.1 CALCE's PoF‐based approach to prognostics.
Figure 19.2 Screenshot of the CALCE SARA printed wiring board analysis modules.
Figure 19.3 The CALCE SARA time to failure plot.
Figure 19.4 Thermal analysis of a printed wiring assembly.
Figure 19.5 A screen capture of vibration analysis through the calcePWA.
Figure 19.6 Available PHM algorithms in CALCE PHM software for a PHM analysis.
Figure 19.7 Data flow through CALCE PHM software. Input data flow to the first connected step via the upper path, then the data take either the path through the steps or the path below the steps.
Figure 19.8 CALCE PHM software overview.
Figure 19.9 Data Pre‐processing with (a) nothing chosen; (b) filtering chosen; and (c) normalization chosen.
Figure 19.10 Feature Discovery with (a) nothing chosen; (b) PCA chosen; (c) kPCA chosen with polynomial kernel; and (d) statistical feature selection chosen.
Figure 19.11 Anomaly Detection with (a) nothing chosen; (b) k‐means clustering chosen; (c) fuzzy c‐means clustering chosen; (d) Mahalanobis distance (MD) chosen; (e) GMM chosen; (f) OC‐SVM chosen; and (g) SPRT chosen.
Figure 19.12 Results of z‐score normalization using the healthy data as a reference for all data normalization.
Figure 19.13 Projection of the data onto the first two principal components of the healthy data after first z‐score normalizing the data (left), and variance contained in each principal component of the healthy data (right).
Figure 19.14 Result of classification of the data using a SVM with a Gaussian function kernel and after first z‐score normalizing the data and projecting the data onto the first two principal components of the healthy data.
Figure 19.15 Confusion matrix (left) and performance matrix (right) for the support vector machine classification shown in Figure 19.14.
Figure 19.16 Raw data (above) and filtered data (below) for the lithium‐ion battery capacity degradation.
Figure 19.17 Support vector regression (SVR) with kernel parameter sigma equal to: 3.5 (upper left); 0.75 (upper right); 1.9 (lower left); and residuals for sigma equal to 1.9 (lower right).
Figure 19.18 Projection of the SVR model for a lithium‐ion battery under test. Assuming the failure threshold is 0.4, the RUL is 153 steps, or in the case of the battery, 153 cycles.
Chapter 20
Figure 20.1 Description of the physical and virtual sensors used to assess the condition of equipment.
Figure 20.2 Framework for integrated knowledge discovery in databases (KDD) and maintenance decision support [5].
Figure 20.3 Conceptual model of maintenance information logistics.
Figure 20.4 The four phases of maintenance analytics [5].
Figure 20.5 eMaintenance solution for maintenance analytics [5].
Figure 20.6 A generic knowledge discovery process.
Figure 20.7 Context‐aware modular web framework architecture [5].
Figure 20.8 A subset of rolling‐stock vehicles compared with the full set.
Figure 20.9 Time series analysis of railway vehicle flange height.
Figure 20.10 The IT infrastructure of the maintenance solution for forming presses.
Figure 20.11 Positions of the MEMS vibration sensors in a demonstration press.
Figure 20.12 Wireless condition‐monitoring solution and the network system gateway.
Figure 20.13 Basic tasks of the ECEM system.
Figure 20.14 Cloud–clients and server.
Figure 20.15 eMaintenance alarm database architecture.
Figure 20.16 The alarm management system for RUL estimation of frame components.
Figure 20.17 Implementation of virtual sensor as a cloud service.
Figure 20.18 Pictures of the user dashboard.
Figure 20.19 Screenshot of a single sensor dashboard with accumulated maximum stress values.
Figure 20.20 Cloud‐based overall structure of the project [22].
Chapter 21
Figure 21.1 Evolution of the maintenance paradigm in conjunction with changes in production.
Figure 21.2 Enhanced P–F interval curve, which is adapted to detect potential failure earlier in the curve via predictive maintenance.
Figure 21.3 The bathtub curve.
Figure 21.4 Prognostic approaches.
Figure 21.5 Adaptive machine‐learning process versus human cognitive thinking.
Figure 21.6 Machine‐learning pipeline for a predictive maintenance program.
Figure 21.7 k‐fold cross‐validation.
Figure 21.8 Continuous monitoring of predictive models in production.
Chapter 22
Figure 22.1 Number of PHM patents for electrical systems per year.
Figure 22.2 Top 10 PHM patent holders, 2000–2015.
Figure 22.3 The main categories of PHM use in electrical and electronic systems. (See color plate section for the color representation of this figure.)
Edited by
Michael G. Pecht and Myeongsu Kang
University of Maryland, USA
This edition first published 2018
© 2018 John Wiley and Sons Ltd
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
The right of Michael G. Pecht and Myeongsu Kang to be identified as the authors of the editorial material in this work has been asserted in accordance with law.
Registered Office(s)
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK
Editorial Office
The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Wiley also publishes its books in a variety of electronic formats and by print-on-demand. Some content that appears in standard print versions of this book may not be available in other formats.
Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
Library of Congress Cataloging-in-Publication Data
Names: Pecht, Michael G., editor. | Kang, Myeongsu, 1980- editor.
Title: Prognostics and health management of electronics : fundamentals,
machine learning, and the internet of things / edited by Michael G. Pecht, Ph.D.,
PE, Myeongsu Kang, Ph.D.
Description: Second edition | Hoboken, NJ : John Wiley & Sons, 2018. |
Includes bibliographical references and index. |
Identifiers: LCCN 2018029737 (print) | LCCN 2018031572 (ebook) | ISBN
9781119515302 (Adobe PDF) | ISBN 9781119515357 (ePub) | ISBN 9781119515333
(hardcover)
Subjects: LCSH: Electronic systems-Maintenance and repair.
Classification: LCC TK7870 (ebook) | LCC TK7870 .P754 2018 (print) | DDC
621.381028/8-dc23
LC record available at https://lccn.loc.gov/2018029737
Cover image: © monsitj/iStockphoto
Cover design by Wiley
Dr. Myeongsu Kang passed away before the final publication of this book. This book is dedicated to Dr. Kang, his wife Yeoung-seon Kim, and children Mark and Matthew.
Michael G. Pecht ([email protected]) received a BS in physics, an MS in electrical engineering, and an MS and PhD in engineering mechanics from the University of Wisconsin at Madison, USA. He is a Professional Engineer, and a Fellow of the IEEE, ASME, SAE, and IMAPS. He is Editor‐in‐Chief of IEEE Access, served as chief editor of the IEEE Transactions on Reliability for nine years and chief editor for Microelectronics Reliability
