Description

A unique approach to understanding the foundations of statistical quality control with a focus on the latest developments in nonparametric control charting methodologies

Statistical Process Control (SPC) methods have a long and successful history and have revolutionized many facets of industrial production around the world. This book addresses recent developments in statistical process control bringing the modern use of computers and simulations along with theory within the reach of both the researchers and practitioners. The emphasis is on the burgeoning field of nonparametric SPC (NSPC) and the many new methodologies developed by researchers worldwide that are revolutionizing SPC.

Over the last several years research in SPC, particularly on control charts, has seen phenomenal growth. Control charts are no longer confined to manufacturing and are now applied for process control and monitoring in a wide array of applications, from education, to environmental monitoring, to disease mapping, to crime prevention. This book addresses quality control methodology, especially control charts, from a statistician’s viewpoint, striking a careful balance between theory and practice. Although the focus is on the newer nonparametric control charts, the reader is first introduced to the main classes of the parametric control charts and the associated theory, so that the proper foundational background can be laid. 

  • Reviews basic SPC theory and terminology, the different types of control charts, control chart design, sample size, sampling frequency, control limits, and more
  • Focuses on the distribution-free (nonparametric) charts for the cases in which the underlying process distribution is unknown
  • Provides guidance on control chart selection, choosing control limits and other quality related matters, along with all relevant formulas and tables
  • Uses computer simulations and graphics to illustrate concepts and explore the latest research in SPC

Offering a uniquely balanced presentation of both theory and practice, Nonparametric Statistical Process Control is a vital resource for students, interested practitioners, researchers, and anyone with an appropriate background in statistics interested in learning about the foundations of SPC and the latest developments in NSPC.

Page count: 658

Year of publication: 2019




Nonparametric Statistical Process Control


Subhabrata Chakraborti

University of Alabama, Tuscaloosa, USA

 

Marien Alet Graham

University of Pretoria, Pretoria, South Africa

Copyright

This edition first published 2019

© 2019 John Wiley & Sons Ltd

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Subhabrata Chakraborti and Marien Alet Graham to be identified as the authors of this work has been asserted in accordance with law.

Registered Offices

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

Editorial Office

9600 Garsington Road, Oxford, OX4 2DQ, UK

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.

Limit of Liability/Disclaimer of Warranty

While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging‐in‐Publication Data

Names: Chakraborti, Subhabrata, author. | Graham, Marien Alet, author.

Title: Nonparametric statistical process control / Subhabrata Chakraborti,

Marien Alet Graham.

Description: Hoboken, NJ : John Wiley & Sons, 2019. | Includes

bibliographical references and index. |

Identifiers: LCCN 2018023685 (print) | LCCN 2018032639 (ebook) | ISBN

9781118890677 (Adobe PDF) | ISBN 9781118890578 (ePub) | ISBN 9781118456033

(hardcover)

Subjects: LCSH: Nonparametric statistics. | Process control—Statistical

methods.

Classification: LCC QA278.8 (ebook) | LCC QA278.8 .C445 2018 (print) | DDC

519.5–dc23

LC record available at https://lccn.loc.gov/2018023685

Cover Design: Wiley

Cover Image: © Rachelle Burnside/Shutterstock

Dedication

To the memory of my parents,

Himangshu and Pratima

To my wife, Anuradha, and our son, Siddhartha Neil

SC

To the memory of my grandmother, Marie Lochner

MAG

About the Authors

Dr. Subhabrata Chakraborti is Professor of Statistics and Morrow Faculty Excellence Fellow at the University of Alabama, USA. He is a Fellow of the American Statistical Association and an elected member of the International Statistical Institute. Professor Chakraborti has published many peer‐reviewed journal articles in a number of areas, including censored/survival data analysis, income distribution analysis (including poverty and income inequality), industrial statistics, and general statistical inference. He has been the recipient of various teaching and research awards. His current research interests span applications of statistical methods, including nonparametric methods, in the area of statistical process control. He is the co‐author of three editions of a highly acclaimed book, Nonparametric Statistical Inference (2010), published by Taylor & Francis. Professor Chakraborti has been a Fulbright scholar to South Africa and has spent time as a visiting professor in several countries, including Turkey, the Netherlands, India, and Brazil. He has mentored many students and scholars, chaired special topics sessions, delivered invited lectures and keynote addresses, and conducted workshops at conferences around the world. Professor Chakraborti has been serving as an Associate Editor of Communications in Statistics for over twenty years.

Dr. Marien Alet Graham received her MSc and PhD in Mathematical Statistics from the University of Pretoria, South Africa. Her research interests are in statistical process control, nonparametric statistics, and statistics education. She has published several peer‐reviewed journal articles and has made research presentations at many national and international conferences. She has been awarded several bursaries by the University of Pretoria and the National Research Foundation (NRF) of South Africa. Recently, she was awarded an NRF Y1 rating.

Preface

Statistical process control (SPC) methods have a long and successful history. Starting in the 1920s and 1930s, led by Walter Shewhart and later W. Edwards Deming, these methods have stood the test of time and have revolutionized many facets of industrial production, not just in the United States but also in countries around the world. Over the last several years, research in SPC, particularly on control charts, has seen phenomenal growth. Control charts are no longer confined to manufacturing and are now applied for process control and monitoring in a wide array of applications, from education to disease mapping and crime prevention. During his visit to the Department of Statistics at the University of Pretoria in South Africa in 2004 on a Fulbright fellowship, the first author had an opportunity to develop and teach a postgraduate course on SPC. That visit and the course turned out to be the beginning of a very successful decade‐long relationship, including the establishment of new undergraduate and postgraduate courses and the completion of several honors essays, master's theses, and two doctoral dissertations under his supervision. Over this long period of teaching and collaborative research, it was realized that, although there were good textbooks on SPC, most of them focused on the applied side, and much of the more recent technical progress, particularly that of the last decade or so, was not fully embraced or covered. Many of the relevant details were scattered across the literature, which often created an inconvenient gap for users and researchers. Hence, the idea for a new book on SPC was born. The goal was to write a book that would bring some of the more recent developments, including the use of computers and simulations, within the reach of both the researchers and the practitioners of SPC.

During this period, another interesting development took place. The vast majority of the control charts found in the literature are developed under the assumption that the process is normally distributed. While these charts are useful, there has been some uneasiness among both users and researchers in the quarters where SPC is applied, since the process distribution is often unknown (or the normality assumption is untenable), and hence the efficacy of the parametric control charts is either unknown or in question. Thus, as has happened in the fields of classical statistical estimation and hypothesis testing over the years, the subject of nonparametric SPC (NSPC) came to the forefront of research some fifteen or so years ago. The field has now grown so much that researchers from many parts of the world are developing new methodologies that challenge the conventional wisdom in SPC. The first author, his students, and collaborators have played, and continue to play, a key leadership role in pushing this exciting methodological front forward. Yet, there remains much work to be done.

In this book, written with Marien Graham, the second author and a former doctoral student from the University of Pretoria, an attempt is made to cover some of the key ideas in the field of univariate control charts. Although the focus is on the newer nonparametric control charts, the reader is first introduced to the main classes of parametric (normal theory) control charts and the associated theory, so that a proper foundational background can be laid.

The book is aimed at the advanced undergraduate (honors) and postgraduate (masters' and doctoral) students and researchers working in SPC. The reader is expected to have at least a basic background in statistical methods as well as in probability and statistical inference (mathematical statistics). A related background in mathematics (knowledge of calculus) is assumed. The goal is to write a book for the student, the interested practitioner, the self‐learner and the researcher, so a careful balance is struck between theory and practice. It is intended to be a vital source of reference for the researcher so that relevant theories are presented along with many references, including some of the most recent advances in the literature. Another important distinguishing feature of this book is its use of computers and software, including simulations and graphics to illustrate concepts and research ideas in SPC.

The book is partly based on the research done by the authors with their collaborators and the lecture notes developed over several years of teaching at the Department of Information Systems, Statistics and Management Science, University of Alabama, at the Department of Statistics and the Department of Science, Mathematics and Technology Education at the University of Pretoria. At the Department of Statistics, these courses have been taught successfully over the last several years and now comprise an integral part of the statistics honors and masters' programs.

In a nutshell, in this book, quality control methodology, mostly control charts, is presented from a statistician's viewpoint, striking a careful balance between theory and practice, explanations and implementations. The first chapter reviews the background mathematical statistics, covering well‐known topics such as basic probability, random variables and their distributions, expected values, moments, variance, skewness, the central limit theorem, order statistics, and so on. The second chapter introduces the basic concepts and theory of statistical quality control. Here we discuss basic terminology and concepts, such as the types of variability encountered, the definition of a control chart, the different types of control charts, the design of control charts, the sample size, the sampling frequency, the choice of control limits, and so on. In the third chapter, parametric control charts are considered, where one has knowledge about the underlying process distribution, which is typically the normal distribution. In the fourth chapter, the distribution‐free (nonparametric) charts are discussed for the case where the underlying process distribution is unknown. Finally, in Chapter 5, some miscellaneous nonparametric control charts are discussed. Throughout the book, guidance is given on the choice of control chart, the choice of the charting constants (i.e., control limits), and other quality‐related matters. Required formulas and tables are provided.

The book has been several years in the making. A work of this magnitude could not have been completed without the support and cooperation of many people, who have contributed their time, work, and knowledge, directly or indirectly, enriching our knowledge and understanding of the subject matter. They all deserve our gratitude. This group includes our current and former students, colleagues, research collaborators, and friends. It would be hard to list them all individually, so we name a special few. The authors are especially thankful to Professor Nico Crowther and Professor Chris Smit of the Department of Statistics, University of Pretoria, for their interest, enthusiastic support, and cooperation over the long haul, without which much of the impetus for this project would simply not exist. The support of the South African Research Chairs Initiative (SARChI) is also gratefully acknowledged. Finally, the chain reaction that started all of these activities would most likely not have happened without the first author's visit to the University of Pretoria on a Fulbright Fellowship awarded by the Council for International Exchange of Scholars, USA. That visit was also made possible by the strong support of the University of Alabama, Tuscaloosa, and in particular of Professor Edward Mansfield, the head of the Department of Information Systems, Statistics and Management Science at the time, where the first author is a professor. We owe a lot to these people and organizations.

We would like to acknowledge the editorial and production team at Wiley, including Blesy Regulas, for their support. We are grateful to Ms. Debbie Jupe, a former editor at Wiley, who played a key role in initiating this project. We are also thankful to the authors and publishers who have given us permission to reproduce their work, such as tables, figures, etc., in our book.

Last but not least, we thank our family members for their sacrifices, encouragement, and support during the years this book was written. Without their full cooperation, this project could not have been completed.

Subhabrata Chakraborti
Tuscaloosa, Alabama, USA

Marien Alet Graham
Pretoria, South Africa

About the companion website

The companion website for this book is at

www.wiley.com/go/chakraborti/Nonparametric_Statistical_Process_Control

The website includes:

– SAS programs

– Microsoft Excel data files

1 Background/Review of Statistical Concepts

Chapter Overview

This chapter gives an overview of some key statistical concepts as they relate to statistical process control (SPC). This will aid in familiarizing the reader with concepts and terminology that will be helpful in reading the following chapters.

1.1 Basic Probability

The term probability indicates how likely an event is, or what the chance is that the event will happen. Most events cannot be predicted with total certainty; the best we can do is say how likely they are to happen, and quantify that likelihood or chance using the concept of probability. A probability is a real number between (and including) zero and one. When an event is certain to happen, its probability equals one, whereas when it is impossible for the event to happen, its probability equals zero. Otherwise, the event happens or occurs with a certain probability, expressed as a fraction between zero and one. For example, when a coin is tossed, there are two possible outcomes, namely, that a head (H) or a tail (T) is observed. Note that an outcome is the result of a single trial of an experiment, and the sample space (S) constitutes all possible outcomes of an experiment (the sample space is exhaustive). In the coin-tossing example, the sample space is given by S = {H,T}. If the coin is unbiased (or fair), the probability (P) of observing a head is the same as the probability of observing a tail, each of which equals 1/2. The probability of the set of all possible experimental outcomes in the sample space must equal one. In this example, this is evident since P(H) + P(T) = 0.5 + 0.5 = 1. When all experimental outcomes in the sample space are equally likely, probabilities are assigned by what is referred to as the classical method, illustrated in the coin example. Another example of the classical method of assigning probabilities is when a die is thrown. In this case, the sample space is given by S = {1,2,3,4,5,6} and, if the die is unbiased (or fair), the probability of observing a one on the die is the same as that of observing any other value on the die, each of which equals 1/6. Mathematically, we can write

P(Ei) = 1/6 for i = 1, 2, …, 6,

where Ei denotes the ith experimental outcome, i.e.,

E1 = 1: Observed value on the die is a one
E2 = 2: Observed value on the die is a two
E3 = 3: Observed value on the die is a three
E4 = 4: Observed value on the die is a four
E5 = 5: Observed value on the die is a five
E6 = 6: Observed value on the die is a six

Again, note that the probabilities of the possible experimental outcomes add to one, since

P(E1) + P(E2) + … + P(E6) = 6 × (1/6) = 1.
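The classical method above can be sketched in a few lines of Python (an illustration added here, not code from the book; exact fractions keep the arithmetic exact):

```python
from fractions import Fraction

# Sample space for a fair six-sided die: six equally likely outcomes.
sample_space = [1, 2, 3, 4, 5, 6]

# Classical method: each outcome gets probability 1 / (number of outcomes).
p = {outcome: Fraction(1, len(sample_space)) for outcome in sample_space}

print(p[1])             # 1/6
print(sum(p.values()))  # 1
```

The same dictionary-of-probabilities pattern works for any finite sample space with equally likely outcomes.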

In the two examples given above, the experimental outcomes are equally likely. Let's consider an experiment where the experimental outcomes are not equally likely. Suppose that a glass jar contains four red, eight green, three blue, and five yellow marbles. If a single marble is chosen at random from the jar, what is the probability of choosing a specific color, say, a red marble? In general, the probability of an event occurring is calculated by dividing the number of ways an event can occur by the total number of possible experimental outcomes.

P(Red) = (Number of red marbles)/(Total number of marbles) = 4/20 = 0.20

P(Green) = (Number of green marbles)/(Total number of marbles) = 8/20 = 0.40

P(Blue) = (Number of blue marbles)/(Total number of marbles) = 3/20 = 0.15

P(Yellow) = (Number of yellow marbles)/(Total number of marbles) = 5/20 = 0.25

Again, note that the probabilities of the possible experimental outcomes add to one, since

P(Red) + P(Green) + P(Blue) + P(Yellow) = 4/20 + 8/20 + 3/20 + 5/20 = 1.

When the experimental outcomes are not all equally likely, probabilities can be assigned using what is referred to as the relative frequency method, which is illustrated in the marble example.
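The marble calculation can be sketched the same way (a minimal illustration, not code from the book); exact fractions make it easy to confirm that the four probabilities add to one:

```python
from fractions import Fraction

# Marble counts in the jar: four red, eight green, three blue, five yellow.
counts = {"Red": 4, "Green": 8, "Blue": 3, "Yellow": 5}
total = sum(counts.values())  # 20 marbles in total

# P(color) = (number of marbles of that color) / (total number of marbles).
prob = {color: Fraction(n, total) for color, n in counts.items()}

print(prob["Red"])         # 1/5, i.e., 4/20 = 0.20
print(sum(prob.values()))  # 1
```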

Next, we consider random variables and their distributions, which play the most important roles in statistics and probability.

1.2 Random Variables and Their Distributions

A random variable, denoted by X, can take on a value, or an interval of values, with an associated probability. A random variable can be univariate (one variable), bivariate (two), or even multivariate (more than two). There are two major types of random variables, namely, discrete and continuous. Although there are situations with a mixed random variable, which is partly discrete and partly continuous, we focus on the discrete and continuous cases here. To illustrate a discrete random variable, consider the coin example where either a head or a tail is observed in a trial (a coin toss). Suppose that a coin is tossed five times and the random variable X denotes the number of heads observed. Then X can only take on the integer values in S = {0,1,2,3,4,5} and, accordingly, X is a discrete random variable. Another example of a discrete random variable is an X that denotes the number of members in a household. Alternatively, a continuous random variable can take on any value within some range; the probability that a continuous variable takes on any specific value is zero. If X denotes the height of a tree, then it is possible for a tree to have a height of 2.176 m or even 2.1765482895 m; the number of decimal places depends on the accuracy of the measuring instrument. Thus X can take on values other than integer values, within some range of values, and, accordingly, X is a continuous random variable. Another example of a continuous random variable is an X that denotes the lifetime of a light bulb.

A random variable has an associated probability mass function (pmf) if discrete, or a probability density function (pdf) if continuous. We first define the cumulative distribution function (cdf), before defining the pmf and the pdf for discrete and continuous random variables, respectively.

Every random variable X has a cumulative distribution function (cdf) that defines its distribution. The cdf is a function that gives the probability that the random variable is less than or equal to some real value x, that is, F(x) = P(X ≤ x). In the case of a discrete random variable, the cdf is calculated by adding the probabilities up to and including x, whereas for a continuous random variable, the cdf is calculated by finding the area under its pdf (integrating) up to x. The cdf is a monotone non‐decreasing right‐continuous function, which is a step function for a discrete random variable (see Figure 1.1) and a continuous function for a continuous random variable (see Figure 1.2). For Figure 1.1, it should be noted that the jump points x1 < x2 < … are the discrete values that the random variable can take on. For more details on the properties of a cdf, see any mathematical statistics book.

Figure 1.1 The cdf for a discrete random variable.

Figure 1.2 The cdf for a continuous random variable.
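As a minimal illustration (added here, not from the book), the sketch below builds the step-function cdf of a fair die by summing its pmf, and evaluates the continuous cdf of an EXP(1) random variable, F(x) = 1 − e^(−x), obtained by integrating its pdf:

```python
import math
from fractions import Fraction

# Discrete case: the cdf of a fair die, F(x) = P(X <= x), is a step function.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

def die_cdf(x):
    # Add the probabilities of all outcomes less than or equal to x.
    return sum((p for value, p in pmf.items() if value <= x), Fraction(0))

print(die_cdf(3))    # 1/2
print(die_cdf(3.5))  # 1/2: the cdf is flat between the jump points

# Continuous case: the cdf of an EXP(1) variable is the area under its pdf,
# F(x) = 1 - e^(-x) for x >= 0, and zero for x < 0.
def exp_cdf(x):
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

print(exp_cdf(0.0))  # 0.0
```

Evaluating die_cdf between jump points shows the right-continuous, flat-between-jumps behavior sketched in Figure 1.1.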

The pmf of a discrete random variable X is a function that gives the probability that the random variable takes on the value x, that is, f(x) = P(X = x). More formally, the pmf f(x) = P(X = x) must satisfy the following two conditions: (i) f(x) ≥ 0 for all x, and (ii) Σ f(x) = 1, where the sum is over all values x that X can take on.

The pdf f(x) of a continuous random variable X is the first derivative of its cdf F(x); that is, f(x) = dF(x)/dx. More formally, the pdf must satisfy the following two conditions: (i) f(x) ≥ 0 for all x, and (ii) ∫ f(x) dx = 1, where the integral is taken over the entire range of X.

The cdf (or, equivalently, the pmf or the pdf) describes the distribution of a random variable over its values, or its range or domain of values; that is, how the total probability (which equals one) is distributed or spread out over the values or the range of values of the random variable(s). Probabilities may be marginal, joint, or conditional. A marginal probability is the probability of the occurrence of a single event. It may be thought of as an unconditional probability, since it is not conditioned on (or dependent on) another event. An example of a marginal probability is the probability that a red card is drawn from a deck of cards, which is given by P(Red) = 26/52 = 1/2, since 26 out of 52 cards, that is, half the cards in a deck of cards, are red. A joint probability is the probability of the joint occurrence (or the intersection) of at least two events. The probability of the intersection of two events, A and B, may be written as P(A ∩ B); for example, the probability of drawing a red ace from a deck of cards is given by P(Red Ace) = 2/52 = 1/26, since there are two red aces in a deck of 52 cards, namely, the ace of hearts and the ace of diamonds. A conditional probability is the probability of event A occurring, given that event B occurs, and is denoted by P(A | B); for example, the probability of drawing an ace, given that the card is red, is given by P(Ace | Red) = 2/26 = 1/13, since there are two aces among the total of 26 red cards, namely, the ace of hearts and the ace of diamonds. The definition of a conditional probability is given by

P(A | B) = P(A ∩ B) / P(B), provided P(B) > 0. (1.1)

This formula shows the relationship between the marginal, the joint, and the conditional probability. Returning to the example of the deck of cards, P(Ace | Red) = P(Ace ∩ Red)/P(Red) = (2/52)/(26/52) = 1/13, which is the same answer as found previously. Typically, a marginal probability relates to an event associated with a single (scalar) random variable, whereas both joint and conditional probabilities relate to events associated with two or more random variables, that is, with a bivariate or a multivariate distribution.
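The card probabilities can be checked by brute-force enumeration of the deck; a small Python sketch (illustrative only, not code from the book):

```python
from fractions import Fraction
from itertools import product

# A standard 52-card deck as (rank, suit) pairs.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = list(product(ranks, suits))

def prob(event):
    # Classical probability: number of favorable cards over 52.
    return Fraction(sum(1 for card in deck if event(card)), len(deck))

def is_red(card):
    return card[1] in ("hearts", "diamonds")

def is_ace(card):
    return card[0] == "A"

p_red = prob(is_red)                                 # marginal probability
p_red_ace = prob(lambda c: is_red(c) and is_ace(c))  # joint probability
p_ace_given_red = p_red_ace / p_red                  # conditional, via Eq. (1.1)

print(p_red, p_red_ace, p_ace_given_red)  # 1/2 1/26 1/13
```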

Next, consider the distribution of two random variables, X and Y. We may like to know how the two random variables relate to each other, say, how one might affect or influence the other. We thus use the term joint probability distribution, or the joint distribution, to describe the probability distribution of the two random variables as they vary jointly, or together, over the mass points or the ranges of the random variables under consideration. The joint distribution can then be calculated from the joint probabilities that two events, related to the two random variables, occur simultaneously. For example, for two discrete random variables, X and Y, the joint pmf can be written as

f(x, y) = f(x | y) f(y) = f(y | x) f(x)

where f(x | y) and f(y | x) are the conditional probabilities of X given Y = y, and of Y given X = x, respectively, and f(x) and f(y) are the marginal probabilities of X and Y, respectively. On the other hand, for two continuous random variables, the joint pdf can be written as

f(x, y) = f(x | y) f(y) = f(y | x) f(x)

where f(x | y) and f(y | x) are the conditional pdfs of X given Y = y, and of Y given X = x, respectively, and f(x) and f(y) are the marginal pdfs of X and Y, respectively. The latter, that is, a marginal probability distribution, is defined next. Often, when confronted with the joint probability distribution of two random variables, we wish to restrict our attention to the distribution of one of the individual variables. These probability distributions are called the marginal probability distributions of the respective individual random variables, and each of them can be obtained by summing (or integrating) the joint distribution over the range of the other variable. Two events are said to be independent if the occurrence of one event does not affect the probability of the other event. In this case, the joint pmf for independent discrete random variables is given by f(x, y) = f(x) f(y), and for independent continuous random variables we likewise have f(x, y) = f(x) f(y). Finally, note that the joint cdf of two or more random variables (discrete or continuous) can be obtained by summing (or integrating) the joint pmf (or pdf) of the corresponding random variables over the relevant values (or range of values). In the continuous case, the joint pdf is obtained by calculating the partial derivatives of the joint cdf. Any standard probability book can be consulted for more details.
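A hedged sketch with a made-up 2 × 2 joint pmf (the values are hypothetical, chosen only for illustration) shows how marginals, a conditional pmf, and an independence check fall out of the definitions:

```python
from fractions import Fraction

# A hypothetical joint pmf f(x, y) on a 2 x 2 grid (values are illustrative only).
joint = {
    (0, 0): Fraction(1, 8), (0, 1): Fraction(1, 8),
    (1, 0): Fraction(1, 4), (1, 1): Fraction(1, 2),
}
xs = sorted({x for x, _ in joint})
ys = sorted({y for _, y in joint})

# Marginal pmfs: sum the joint pmf over the other variable.
fx = {x: sum(joint[(x, y)] for y in ys) for x in xs}
fy = {y: sum(joint[(x, y)] for x in xs) for y in ys}

# Conditional pmf of X given Y = 1: f(x | 1) = f(x, 1) / f(1).
f_x_given_1 = {x: joint[(x, 1)] / fy[1] for x in xs}

# X and Y are independent only if f(x, y) = f(x) f(y) holds in every cell.
independent = all(joint[(x, y)] == fx[x] * fy[y] for x in xs for y in ys)

print(fx[1], fy[1], independent)  # 3/4 5/8 False
```

For this particular joint pmf the product of the marginals does not reproduce the joint probabilities, so X and Y are dependent.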

The expected value of a random variable is a (one) measure of its location or central tendency. For a random variable X, the expected value, or the mean, of its distribution is given by E(X) = Σ x f(x) when X is discrete and E(X) = ∫ x f(x) dx when X is continuous. Some well‐known properties of the expected value are as follows. If X and Y are random variables and c is any constant, then (i) E(c) = c, (ii) E(cX) = cE(X), (iii) E(X + c) = E(X) + c, and (iv) E(X + Y) = E(X) + E(Y). The reader can consult any standard probability and statistics book for these details.
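The expectation formula and its properties can be verified numerically for the fair-die pmf; a minimal sketch (illustrative, not from the book):

```python
from fractions import Fraction

# pmf of a fair die; E(X) = sum over x of x * f(x).
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

def expect(g):
    # E[g(X)] by direct summation over the pmf.
    return sum(g(x) * p for x, p in pmf.items())

ex = expect(lambda x: x)
print(ex)  # 7/2

# Numerical check of the properties of expectation, with c = 5:
c = Fraction(5)
print(expect(lambda x: c) == c)           # E(c) = c
print(expect(lambda x: c * x) == c * ex)  # E(cX) = cE(X)
print(expect(lambda x: x + c) == ex + c)  # E(X + c) = E(X) + c
```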

The expectation of a discrete random variable has important applications in SPC as a chart performance measure. This is because the run length of a control chart, a random variable to be defined later, is a discrete random variable.

Next, we consider the conditional expectation in a bivariate distribution with two random variables, X and Y. The conditional expectation of X given Y = y is given by E(X | Y = y) = Σ x f(x | y) when X and Y are two discrete random variables. When the random variables are continuous, E(X | Y = y) = ∫ x f(x | y) dx.

An important result relates the unconditional, or marginal, expectation of X, that is, E(X), to the conditional expectation of X given Y = y, that is, E(X | Y = y). It can be shown that E(X) = E[E(X | Y)]. This result shows that, for any two random variables, the unconditional expectation of one variable can be obtained by taking the expectation of the conditional expectation of that variable given the other variable. This result plays an important role in studying the performance of control charts when parameters are estimated, as will be seen in later chapters. Note that the conditional expectation E(X | Y = y) is a function of y, and E[E(X | Y)] is obtained simply by taking the expectation of this function over the distribution of Y.

We prove this result when X and Y are discrete random variables with a joint distribution. The continuous case can be treated in a similar way.

Proof

E[E(X | Y)] = Σy E(X | Y = y) f(y) = Σy [Σx x f(x | y)] f(y) = Σx x Σy f(x | y) f(y) = Σx x Σy f(x, y) = Σx x f(x) = E(X).

Along with the expectation, the variance of the conditional distribution is also important. The conditional variance of X given Y = y is given by Var(X | Y = y) = E[(X − E(X | Y = y))² | Y = y], which can equivalently be written as Var(X | Y = y) = E(X² | Y = y) − [E(X | Y = y)]². A useful result is that the variance of the unconditional distribution of, say, X can be obtained from the expectation and the variance of the conditional distribution of X given Y:

Var(X) = E[Var(X | Y)] + Var[E(X | Y)].

The verification of these results is left to the reader.
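The conditional-expectation and conditional-variance results discussed above can be checked numerically; the sketch below uses a made-up joint pmf (the values are hypothetical, for illustration only):

```python
from fractions import Fraction

# A small hypothetical joint pmf f(x, y); the values are illustrative only.
joint = {(0, 0): Fraction(1, 8), (0, 1): Fraction(1, 8),
         (1, 0): Fraction(1, 4), (1, 1): Fraction(1, 2)}
xs = sorted({x for x, _ in joint})
ys = sorted({y for _, y in joint})
fx = {x: sum(joint[(x, y)] for y in ys) for x in xs}
fy = {y: sum(joint[(x, y)] for x in xs) for y in ys}

def cond_mean(y):
    # E(X | Y = y) = sum over x of x * f(x | y).
    return sum(x * joint[(x, y)] / fy[y] for x in xs)

def cond_var(y):
    # Var(X | Y = y) = E[(X - E(X | Y = y))^2 | Y = y].
    m = cond_mean(y)
    return sum((x - m) ** 2 * joint[(x, y)] / fy[y] for x in xs)

ex = sum(x * p for x, p in fx.items())              # unconditional E(X)
tower = sum(cond_mean(y) * fy[y] for y in ys)       # E[E(X | Y)]

vx = sum((x - ex) ** 2 * p for x, p in fx.items())  # unconditional Var(X)
e_cv = sum(cond_var(y) * fy[y] for y in ys)         # E[Var(X | Y)]
v_cm = sum((cond_mean(y) - tower) ** 2 * fy[y] for y in ys)  # Var[E(X | Y)]

print(ex == tower)        # True: E(X) = E[E(X | Y)]
print(vx == e_cv + v_cm)  # True: Var(X) = E[Var(X | Y)] + Var[E(X | Y)]
```

Because exact fractions are used, both identities hold with equality rather than merely to floating-point tolerance.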

Some common discrete and continuous distributions are given in Tables 1.1 and 1.2, respectively, along with general formulae for their means, medians, and variances. Column (a) of Tables 1.1 and 1.2 displays the name and parameters of each distribution. Column (b) displays the pmf and the pdf for the discrete and continuous probability distributions, respectively. For the discrete distributions, in Table 1.1, Columns (c) and (d) display the mean and the variance of each distribution, respectively. For the continuous distributions, in Table 1.2, Columns (c), (d), and (e) display the mean, the median, and the variance, respectively. Note that, for discrete distributions, the percentiles, which include the median, need to be defined with care so that they are unique. This is why they are omitted from Table 1.1, but we consider them in later chapters.

Table 1.1 Some common discrete probability distributions.

(a) Distribution | (b) Probability mass function (pmf) | (c) Mean | (d) Variance

Binomial, BIN(n, p): f(x) = (n choose x) p^x (1 − p)^(n − x), x = 0, 1, …, n; mean np; variance np(1 − p)

Poisson, POI(λ): f(x) = e^(−λ) λ^x / x!, x = 0, 1, 2, …; mean λ; variance λ

Geometric, GEO(p): f(x) = p(1 − p)^(x − 1), x = 1, 2, …; mean 1/p; variance (1 − p)/p²
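The means and variances in Table 1.1 can be checked by direct summation of the pmfs; a short sketch (added for illustration; the infinite Poisson and geometric sums are truncated, which is an approximation):

```python
from math import comb, exp, factorial

# Binomial BIN(n, p): compare direct summation with the formulas np and np(1 - p).
n, p = 10, 0.3
bpmf = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]
bmean = sum(x * f for x, f in enumerate(bpmf))
bvar = sum((x - bmean) ** 2 * f for x, f in enumerate(bpmf))
print(abs(bmean - n * p) < 1e-12, abs(bvar - n * p * (1 - p)) < 1e-12)

# Poisson POI(lam): mean equals lam (infinite sum truncated at 100 terms).
lam = 2.0
ppmf = [exp(-lam) * lam**x / factorial(x) for x in range(100)]
pmean = sum(x * f for x, f in enumerate(ppmf))
print(abs(pmean - lam) < 1e-9)

# Geometric GEO(q), x = 1, 2, ...: mean 1/q (sum truncated at 2000 terms).
q = 0.25
gmean = sum(x * q * (1 - q) ** (x - 1) for x in range(1, 2000))
print(abs(gmean - 1 / q) < 1e-9)
```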

Table 1.2 Some common continuous probability distributions.

(a) Distribution | (b) Probability density function (pdf) | (c) Mean | (d) Median | (e) Variance

Standard Normal, N(0,1): f(z) = (1/√(2π)) e^(−z²/2), −∞ < z < ∞; mean 0; median 0; variance 1

Student's t: f(t) = [Γ((ν + 1)/2) / (√(νπ) Γ(ν/2))] (1 + t²/ν)^(−(ν + 1)/2), −∞ < t < ∞, where ν denotes the degrees of freedom; mean 0 (for ν > 1); median 0; variance ν/(ν − 2) for ν > 2

Gamma, GAM(α, β): f(x) = x^(α − 1) e^(−x/β) / (Γ(α) β^α), x > 0, where α and β denote the shape and scale parameters; mean αβ; median: no simple closed form; variance αβ²

The gamma function (denoted Γ) is defined for all complex numbers z (except the non‐positive integers) as Γ(z) = ∫₀^∞ t^(z − 1) e^(−t) dt. Note that Γ(z + 1) = zΓ(z), with Γ(1) = 1, and Γ(n) = (n − 1)! when n is a positive integer, with the convention that 0! = 1.

Beta, BETA(a, b): f(x) = x^(a − 1) (1 − x)^(b − 1) / B(a, b) for 0 < x < 1, where the function B(a, b) = Γ(a)Γ(b)/Γ(a + b) is called the beta function; mean a/(a + b); median: no simple closed form; variance ab/[(a + b)²(a + b + 1)]

Logistic, Logistic(μ, s): f(x) = e^(−(x − μ)/s) / (s[1 + e^(−(x − μ)/s)]²), −∞ < x < ∞, where μ and s denote the location and scale parameters; mean μ; median μ; variance π²s²/3

Log‐Logistic, Log‐Logistic(α, β): f(x) = (β/α)(x/α)^(β − 1) / [1 + (x/α)^β]², x > 0, where α and β denote the scale and shape parameters; mean (πα/β)/sin(π/β) if β > 1, else undefined; median α; variance α²[2π/(β sin(2π/β)) − π²/(β² sin²(π/β))] for β > 2

Laplace or Double Exponential, DE(μ, b): f(x) = (1/(2b)) e^(−|x − μ|/b), −∞ < x < ∞, where μ and b denote the location and scale parameters; mean μ; median μ; variance 2b²

Uniform, UNIF(a, b): f(x) = 1/(b − a) for a < x < b; mean (a + b)/2; median (a + b)/2; variance (b − a)²/12

Some notes on Table 1.2 follow. The gamma distribution is positively skewed, and its skewness increases as the shape parameter decreases. Also note that the GAM(1,1) distribution is the exponential distribution with mean 1, EXP(1). The skewness of a distribution is typically defined in terms of the moments and is given later.

The contaminated normal (CN) distribution is a mixture of two normal distributions and has interesting applications in SPC; however, it is not included in Table 1.2, since its formulae are too wide to fit into the table. The definition and some properties of the CN distribution are given below. The pdf of the CN distribution is given by

f(x) = (1 − α) n(x; μ1, σ1²) + α n(x; μ2, σ2²)

where n(x; μ, σ²) is the pdf of a normal distribution with mean μ and variance σ², and α (0 ≤ α ≤ 1) denotes the level of contamination. It can be shown that the expected value and the variance of the CN distribution are given by

E(X) = (1 − α)μ1 + αμ2

and

Var(X) = (1 − α)(σ1² + μ1²) + α(σ2² + μ2²) − [E(X)]²

respectively.
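A sketch that checks the mixture moments by simulation (added for illustration; the parameter values are hypothetical: a standard normal contaminated by an N(0, 9) component at level α = 0.1):

```python
import random

# Hypothetical CN parameters: N(0, 1) contaminated by N(0, 9) at level alpha = 0.1.
alpha = 0.1
mu1, s1 = 0.0, 1.0  # mean and standard deviation of the main component
mu2, s2 = 0.0, 3.0  # mean and standard deviation of the contaminating component

# Closed-form mean and variance of the two-component normal mixture.
mean = (1 - alpha) * mu1 + alpha * mu2
var = (1 - alpha) * (s1**2 + mu1**2) + alpha * (s2**2 + mu2**2) - mean**2
print(mean, var)  # 0.0 and 1.8 (up to floating-point rounding)

# Monte Carlo check: each draw comes from the contaminating component
# with probability alpha, otherwise from the main component.
random.seed(1)
draws = [random.gauss(mu2, s2) if random.random() < alpha else random.gauss(mu1, s1)
         for _ in range(200_000)]
m = sum(draws) / len(draws)
v = sum((x - m) ** 2 for x in draws) / len(draws)
print(round(m, 2), round(v, 2))  # close to 0.0 and 1.8
```

Even with no shift in the mean, the contamination inflates the variance well above 1, which is what makes the CN distribution a useful test case for control chart robustness.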