Statistical Process Monitoring
The purpose of Statistical Process Monitoring is to determine if the performance of a process is maintaining an acceptable level of quality.
Any process will experience natural variability, that is, variability due to essentially unimportant and uncontrollable sources of variation.
A process may also experience more serious, significant variability in key performance measures.
Sources of variability may arise from one of several types of non-random “causes,” such as operator errors or improperly adjusted dials on a machine.
A production process is often subject to variability. There are two types:
- variability due to the effect of many small, essentially unavoidable causes (a process that operates only with such common causes is said to be in (statistical) control)
- variability due to special causes, such as improperly adjusted machines, operator errors, defective materials, etc. (the variability is typically much larger than for common causes, and the process is said to be out of (statistical) control)
The aim of statistical process monitoring (SPM) is to identify the occurrence of special causes.
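The two types of variability can be illustrated with a small simulation. This is a sketch, not from the text: the process parameters (mean 10, standard deviation 0.5) and the size of the special-cause shift are chosen purely for illustration.

```python
import random

random.seed(42)

# In-control process: common-cause variation only (illustrative parameters).
in_control = [random.gauss(10, 0.5) for _ in range(100)]

# Out-of-control process: a special cause (e.g. a misadjusted machine)
# shifts the process mean upward halfway through the run.
out_of_control = ([random.gauss(10, 0.5) for _ in range(50)]
                  + [random.gauss(12, 0.5) for _ in range(50)])

mean_in = sum(in_control) / len(in_control)
mean_late = sum(out_of_control[50:]) / 50
```

The second run's later observations sit well away from the common-cause level, which is exactly the kind of departure SPM is meant to flag.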
So far, we have treated samples as if they arose as a result of a random experiment, i.e. $x_1, \dots, x_n$ is drawn from some distribution with population mean $\mu$ and population variance $\sigma^2$, and we use $\overline{x}$ and $s^2$ as estimates of $\mu$ and $\sigma^2$.
In practice, the index $i$ is often a time index, which is to say that the $x_i$ are observed in sequence. In this case, we say that the sample is a time series.
If distribution changes over time due to external factors (war, pandemic, election, etc.) or internal factors (modification of the manufacturing process, policy change, etc.), the sample mean and the sample variance might not provide a useful summary of the situation.
To get a sense of what is going on, it is preferable to plot the data in the order that it has been collected, where the horizontal coordinate is the time of collection (order, day, week, quarter, year, etc.) and the vertical coordinate is the observation $x_i$. We look for trends, cycles, shifts, etc.
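As a minimal sketch of looking for a shift in time-ordered data, one can compare the level of the first half of the series to that of the second half (an illustrative rule, not from the text; the data values below are made up, and a real analysis would start with the plot itself):

```python
# Observations listed in collection order; the later values show a shift.
data = [5.1, 5.3, 4.9, 5.0, 5.2, 6.8, 6.9, 7.1, 7.0, 6.7]

half = len(data) // 2
first, second = data[:half], data[half:]

# Difference between the second-half and first-half means:
# a large value suggests the distribution changed over time.
shift = sum(second) / len(second) - sum(first) / len(first)
print(f"apparent shift in level: {shift:.2f}")
```

A single sample mean over all ten values would hide this change, which is why plotting (or summarizing) in time order matters.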
A control chart consists of observed values of a statistic, such as $\overline{x}$ or $s$, plotted as a time series.
If the true mean $\mu$ and the true standard deviation $\sigma$ of the process are known, then the CLT implies that
$$\overline{X} \approx \mathcal{N}\!\left(\mu, \sigma^2/n\right),$$
and one would expect that the observed sample means would lie in the interval
$$\left[\mu - 3\frac{\sigma}{\sqrt{n}},\ \mu + 3\frac{\sigma}{\sqrt{n}}\right]$$
roughly $99.7\%$ of the time.
The upper control limit (UCL) is the upper end of the interval, the lower control limit (LCL) is the lower end of the interval, and the central line (CL) is $\mu$:
$$\mathrm{UCL} = \mu + 3\frac{\sigma}{\sqrt{n}}, \qquad \mathrm{LCL} = \mu - 3\frac{\sigma}{\sqrt{n}}, \qquad \mathrm{CL} = \mu.$$
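Computing the control limits from known process parameters is a one-liner each. The numbers below (process mean 50, standard deviation 2, samples of size 4) are illustrative, not from the text:

```python
from math import sqrt

# Control limits for the sample mean when the true process mean and
# standard deviation are known (illustrative values).
mu, sigma, n = 50.0, 2.0, 4   # process mean, process sd, sample size

cl = mu                         # central line
ucl = mu + 3 * sigma / sqrt(n)  # upper control limit
lcl = mu - 3 * sigma / sqrt(n)  # lower control limit
```

With $\sigma/\sqrt{n} = 1$ here, the limits land three units on either side of the central line.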
For such charts, if we observe $\overline{x}_i > \mathrm{UCL}$ or $\overline{x}_i < \mathrm{LCL}$, we have an indication that the process is unstable and potentially out of (statistical) control.
The parameter $\alpha$ is again interpreted as the probability of a type I error:
$$\alpha = P(\text{signal of instability} \mid \text{process is stable}).$$
Typically, we use $3\sigma$ limits, i.e. $\alpha \approx 0.0027$. Since a stable process so rarely produces such a signal, even one value outside the control limits is enough to make us suspect that something is off.
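The value of $\alpha$ for $3\sigma$ limits follows from the standard normal distribution: it is the probability that a standard normal variable lands more than three standard deviations from its mean. A quick check using the error function from the standard library:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Probability that a sample mean from a stable process falls outside
# the 3-sigma control limits: the type I error alpha.
alpha = 2 * (1 - phi(3))
print(f"alpha = {alpha:.4f}")
```

This gives roughly 0.0027, i.e. about 3 false alarms per 1000 in-control samples.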
In practice, however, $\mu$ and $\sigma$ are not known. In that case, we estimate $\mu$ by the observed grand mean $\overline{\overline{x}}$, with the help of the observed mean of standard deviations $\overline{s}$:
$$\overline{\overline{x}} = \frac{1}{m}\sum_{j=1}^{m} \overline{x}_j, \qquad \overline{s} = \frac{1}{m}\sum_{j=1}^{m} s_j.$$
In this case, the UCL, LCL, and CL are, respectively:
$$\mathrm{UCL} = \overline{\overline{x}} + 3\frac{\overline{s}}{\sqrt{n}}, \qquad \mathrm{LCL} = \overline{\overline{x}} - 3\frac{\overline{s}}{\sqrt{n}}, \qquad \mathrm{CL} = \overline{\overline{x}}.$$
A control chart consists of:
- points representing a sample statistic taken from the process at different times
- the grand mean and the mean standard deviation of the sample statistic, which are computed using all observations and are used to determine the control limits
- the center line, which is drawn at the value of the grand mean
- the upper and lower control limits, which indicate the threshold at which the process output is considered statistically unlikely (typically three standard deviations away from the central line).
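The steps above can be sketched end to end for a set of samples. The four samples of size five below are made up for illustration, and the limits use the simple $3\overline{s}/\sqrt{n}$ form given earlier (some texts add a bias-correction constant to $\overline{s}$, which is omitted here):

```python
from math import sqrt

# m = 4 illustrative samples, each of size n = 5.
samples = [
    [9.8, 10.2, 10.0, 9.9, 10.1],
    [10.0, 10.3, 9.7, 10.1, 9.9],
    [9.9, 10.0, 10.2, 9.8, 10.1],
    [10.1, 9.9, 10.0, 10.2, 9.8],
]
n = len(samples[0])

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    """Sample standard deviation (divisor n - 1)."""
    m = mean(xs)
    return sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# Grand mean of the sample means, and mean of the sample sds.
grand_mean = mean([mean(s) for s in samples])
s_bar = mean([sd(s) for s in samples])

cl = grand_mean                          # center line
ucl = grand_mean + 3 * s_bar / sqrt(n)   # upper control limit
lcl = grand_mean - 3 * s_bar / sqrt(n)   # lower control limit
```

New sample means falling outside `[lcl, ucl]` would then signal possible special-cause variation.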