DATA SCIENCE, AUTOMATION, SIX SIGMA

Data Science in Supply Chain Series

Creating a Control Chart with Scripted Rule Checks

using: python, pandas, numpy, matplotlib

Below is a quick walkthrough of building and analyzing a control chart using basic Six Sigma methodologies.

Libraries

I use Python for most of the data analysis I do for supply chain. In this case, I will be importing the NumPy and Pandas libraries to add some functionality. I will also be bringing in Matplotlib to create the graphs we will use to visualize the control charts.

Numpy

NumPy allows us to work with arrays mathematically, so we can multiply an array [0,1,2,3,4] by 2 and get the entire array multiplied, as in [0,2,4,6,8]. This is known as vector and matrix math and is extremely useful when working with “spreadsheet” data.
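
A quick standalone illustration of that element-wise behavior:

import numpy as np

arr = np.array([0, 1, 2, 3, 4])
print(arr * 2)   # the multiplication applies to every element

#=> [0 2 4 6 8]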

Pandas

Pandas allows us to interact with CSV and Excel files fairly easily, a must when working with data sent to you or exported from many supply chain software packages.
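
For example, a minimal sketch of loading spreadsheet data (the file names here are hypothetical):

import pandas as pd

df = pd.read_csv('measurements.csv')        # hypothetical CSV file
# df = pd.read_excel('measurements.xlsx')   # Excel loads the same way
print(df.head())                            # peek at the first few rows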

Matplotlib

Matplotlib gives us the ability to quickly and easily create Excel-like charts while customizing them for our needs.

Data

Our data consists of an Excel sheet of measurements. The scenario here is that we are measuring a dimension on a part which is supposed to be 1.2 inches with a tolerance of 0.005. These are measurements of every 5th part off a Swiss lathe, so the chart is also a run chart graphing a time series. In other words, the order of observations matters here.

We are looking to make sure the machine is “in control”. As we measure parts off the line, we use the data to make adjustments to the tooling and machine settings. The key here is that just because all the measurements are within tolerance, it doesn’t mean the process is in control.

Statistics

In statistics, there is a rule known as the Empirical Rule, which states that if observations are Gaussian (normally distributed), then we will find

  • 68% of observations within +/- 1 standard deviation of the average,
  • 95% of observations within +/- 2 standard deviations of the average,
  • 99.7% of observations within +/- 3 standard deviations of the average,
  • and the remainder would be outliers.

We can use this rough approximation to understand what could happen in the future. If we knew the average and standard deviation of all the future parts that are going to be produced, we would know they follow the above rule… roughly. However, we don’t know this. So we use a sample of the parts already produced to estimate the average and the standard deviation.

For a part with an acceptable range of 1.00 to 1.10 inches, an observed average measurement of 1.05, and an observed standard deviation of 0.025, we can anticipate problems. Even if no actual measurements are out of tolerance in our test run, we can anticipate that about 5% of our parts will be out of tolerance given the Empirical Rule, since the tolerance limits sit exactly 2 standard deviations from the average.
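
As a sanity check, here is a minimal sketch of that calculation using the normal CDF from Python’s standard library (the numbers are the hypothetical ones from this example, not our dataset):

from math import erf, sqrt

avg, std = 1.05, 0.025
lower_tol, upper_tol = 1.00, 1.10

def normal_cdf(x, mu, sigma):
    # Probability that a normal observation falls at or below x.
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# Tail mass below the lower limit plus tail mass above the upper limit.
reject_rate = normal_cdf(lower_tol, avg, std) + (1 - normal_cdf(upper_tol, avg, std))
print(round(reject_rate, 3))

#=> 0.046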

Beyond this, there are other rules, which we will cover below, that can help us estimate whether the average has shifted or something changed in the machine in the middle of the run. Things like trends and clusters that have a statistically low chance of occurring can clue us in to this.

So let’s get to the analysis.

Setting up the environment

Let’s bring in our libraries first.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

%matplotlib inline

I am going to set the GD&T variables manually here.

# GD&T Callout
target = 1.2
tolerance = 0.005

Now let’s calculate the upper and lower tolerance from the information above.

upper_tol = target + tolerance
lower_tol = target - tolerance

And last, we’ll import our data and take a quick look at it to make sure we understand it.

measurement_data = pd.read_excel('../data/line measurements.xlsx')

This dataset has two columns, real simple.

measurement_data.columns

#=> Index(['Measurement Number', 'Measurement'], dtype='object')

We’ll now parse each column into a separate variable to make it easier to use in analysis.

meas_num = measurement_data['Measurement Number']
meas = measurement_data.Measurement

Because we are going to need to do some vector-wise calculations and graphing, we’ll need each measurement row to have the tolerances in it. So we’ll create those columns here (this helps us draw the lines where the tolerances are).

measurement_data['target'] = target
measurement_data['upper_tol'] = upper_tol
measurement_data['lower_tol'] = lower_tol

plt.plot(meas_num, meas, color='b', marker='o')
plt.plot(meas_num, measurement_data['target'], color='g', linewidth=0.75)
plt.plot(meas_num, measurement_data['upper_tol'], color='r', linewidth=0.5, linestyle='dashed')
plt.plot(meas_num, measurement_data['lower_tol'], color='r', linewidth=0.5, linestyle='dashed')
plt.xlabel('Measurement Number')
plt.ylabel('Measurement')
plt.title('Tolerance Chart')
plt.show()

tolerance chart

We can see a few measurements are out of tolerance. I pushed these out a bit on purpose to exercise some of the zone analysis below.

Let’s move to creating a control chart now.

We are going to plot each measurement as is first to get a base chart.

plt.plot(meas_num, meas, color='b')
plt.xlabel('Measurement Number')
plt.ylabel('Measurement')
plt.title('X-Bar Run Chart')
plt.show()

run chart

Now we are going to add markers and zone lines.

First let’s find the average and standard deviation using the NumPy functions on the Pandas Series.

# Statistics
avg = np.average(meas)
measurement_data['avg'] = avg

# Note: np.std defaults to the population standard deviation (ddof=0);
# use np.std(meas, ddof=1) if you want the sample standard deviation.
std = np.std(meas)

Now we’ll create the zone lines with the standard deviation calculated. We will also create columns that have those numbers so we can graph them as lines and not just points.

upper_1 = avg + std*1
upper_2 = avg + std*2
upper_3 = avg + std*3

lower_1 = avg - std*1
lower_2 = avg - std*2
lower_3 = avg - std*3

measurement_data['upper_1'] = upper_1
measurement_data['upper_2'] = upper_2
measurement_data['upper_3'] = upper_3

measurement_data['lower_1'] = lower_1
measurement_data['lower_2'] = lower_2
measurement_data['lower_3'] = lower_3

Finally, we will create the graph. You can see that for the meas series (which is our measured data) we added a marker of 'o', which plots the actual points and not just the line.

plt.plot(meas_num, meas, color='b', marker='o')
plt.plot(meas_num, measurement_data['avg'], linestyle='solid', linewidth=0.75, color='g')
plt.plot(meas_num, measurement_data['upper_1'], linestyle='dashed', linewidth=0.5, color='r')
plt.plot(meas_num, measurement_data['upper_2'], linestyle='dashed', linewidth=0.5, color='r')
plt.plot(meas_num, measurement_data['upper_3'], linestyle='dashed', linewidth=0.5, color='r')
plt.plot(meas_num, measurement_data['lower_1'], linestyle='dashed', linewidth=0.5, color='r')
plt.plot(meas_num, measurement_data['lower_2'], linestyle='dashed', linewidth=0.5, color='r')
plt.plot(meas_num, measurement_data['lower_3'], linestyle='dashed', linewidth=0.5, color='r')
plt.xlabel('Measurement Number')
plt.ylabel('Measurement')
plt.title('X-Bar Run Chart')
plt.show()
print(round(upper_3,4))
print(round(upper_2,4))
print(round(upper_1,4))
print(round(avg,4))
print(round(lower_1,4))
print(round(lower_2,4))
print(round(lower_3,4))

control chart

1.212
1.2086
1.2051
1.2017
1.1983
1.1948
1.1914

Now for the automated checks!

As mentioned above, it is important to make sure that there is not a statistically high chance of future part rejections based on the data we have so far. The basic rules you can use to check this are below.

Control Chart Rules

  1. Any point beyond the limits (outlier)
  2. 2 out of 3 consecutive in the outer region
  3. 4 out of 5 consecutive in the middle region or beyond
  4. 7 or more consecutive on the same side of the average
  5. 7 consecutive trending in the same direction (trend)
  6. 8 consecutive with none in the outer region (mixture)
  7. 15 consecutive in the inner region (stratification)
  8. 14 consecutive alternating up and down around the average (over-control)
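
Each of these checks scans the measurements with a fixed-size moving window, which is what the loops below do. As a side note, here is a minimal sketch of how those windowed counts could be vectorized instead (assuming NumPy 1.20 or newer for sliding_window_view; the helper name is my own):

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def windowed_counts(values, window, condition):
    # One row per window of consecutive values; count how many
    # elements in each window satisfy the condition.
    windows = sliding_window_view(np.asarray(values), window)
    return condition(windows).sum(axis=1)

# For example, rule 2 (2 out of 3 consecutive in the outer region):
# counts = windowed_counts(absolute_differentials, 3, lambda w: w > std*2)
# if np.any(counts >= 2):
#     errors.add("Outer Zone Clusters")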

We are going to create a set called errors. As the checks find problems, they will add them to this set. At the end, we will have a list of all the problems with our data. Because I run this multiple times while exploring the data, I always clear the set of previous errors before doing the checks.

errors = set()
errors.clear()

I am also going to create a few calculations that will make my life easier later.

  • differentials to target will be the mathematical distance from the GD&T callout.
  • differentials to the avg will be the mathematical distance from the average.
  • first differences will be the change from the previous measurement.

differentials_to_target = meas - target
differentials_to_avg = meas - avg
first_differences = np.ediff1d(meas)
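
Since the trend and over-control checks below both lean on first_differences, here is a quick look at what np.ediff1d actually returns:

print(np.ediff1d(np.array([1.0, 1.3, 1.2])))

#=> [ 0.3 -0.1]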

Our first check will be to see if any measurements fall outside the tolerance. In this case they do.

### Tolerances

if np.max(np.absolute(differentials_to_target)) > tolerance:
    errors.add("Dimensions are out of tolerance")

Next we will see if there are outliers in our data. These will be observations more than 3 standard deviations from the average. Instead of looking for measurements above +3 and below -3 standard deviations, I take the absolute value of the differentials so I only have to check the upper side. Also, since I am not counting how many points failed, I just look at the max differential.

### Outliers

absolute_differentials = np.absolute(differentials_to_avg)

if np.max(absolute_differentials) > std*3:
    errors.add("Outliers Exist")

errors

#=> {'Dimensions are out of tolerance'}

Next we’ll check rule 2. We will look at all the measurements in blocks of 3, moving down the line. If any of the windows has 2 or more in the outer region (more than 2 standard deviations from the average), we have a process that is not matching the Empirical Rule about where observations should fall. This shows us that the standard deviation (or mean) is not what we thought it was; it is most likely higher. This will result in more future measurements falling farther out than we initially anticipated or have yet observed.

### Outer Regions

for index, i in enumerate(absolute_differentials):
    if index < 2:
        continue
    count = np.count_nonzero(absolute_differentials[index-2:index+1] > (std*2))
    if count >= 2:
        errors.add("Outer Zone Clusters")

errors

#=> {'Dimensions are out of tolerance'}

In this case, we look good so far.

We are going to look for similar evidence in the other zones below using rules 3 and 4.

### Middle Regions

for index, i in enumerate(absolute_differentials):
    if index < 4:
        continue
    count = np.count_nonzero(absolute_differentials[index-4:index+1] > (std*1))
    if count >= 4:
        errors.add("Middle Zone Clusters")
errors

#=> {'Dimensions are out of tolerance', 'Middle Zone Clusters'}

Here we do have something that should not occur given our calculated average and standard deviation. These shifts are small but statistically significant. They should not occur if “nothing” changed. Changes can be occurrences such as:

  • different material,
  • different operator,
  • environment changes like temperature or floor vibration,
  • or setup procedure.

The next check implements rule 4: 7 or more consecutive measurements on the same side of the average, which suggests the process center has shifted.

### Inner Regions

for index, i in enumerate(differentials_to_avg):
    if index < 6:
        continue
    count = np.count_nonzero(differentials_to_avg[index-6:index+1] > 0)
    if count >= 7:
        errors.add("Inner Zone Clusters")
    count = np.count_nonzero(differentials_to_avg[index-6:index+1] < 0)
    if count >= 7:
        errors.add("Inner Zone Clusters")

errors

#=> {'Dimensions are out of tolerance', 'Middle Zone Clusters'}

The next thing we are going to look for is trends. These are representative of something wrong. Trends, especially of this magnitude, should not occur in random data. Some of the causes of trends can be tooling wear or ongoing temperature changes.

### Trends

for index, i in enumerate(first_differences):
    if index < 5:
        continue
    count = np.count_nonzero(first_differences[index-5:index+1] > 0)
    if count >= 6:
        errors.add("Trending Data is Present")
    count = np.count_nonzero(first_differences[index-5:index+1] < 0)
    if count >= 6:
        errors.add("Trending Data is Present")
    
errors

#=> {'Dimensions are out of tolerance', 'Middle Zone Clusters'}

The next check is for including too many processes in the control chart. The distribution is not spread out enough, so there might be 2 or more controls being performed before this measurement is taken. For example, maybe a part goes through 2 processes, and this measurement is taken after the second process (grinding after machining, say). So the machining operator is already disposing of bad parts before they even get to you. This would result in a mixture or a stratification problem.

### Mixture

for index, i in enumerate(absolute_differentials):
    if index < 7:
        continue
    count = np.count_nonzero(absolute_differentials[index-7:index+1] > (std*2))
    if count == 0:
        errors.add("No Mixture")
errors

#=> {'Dimensions are out of tolerance', 'Middle Zone Clusters', 'No Mixture'}

### Stratification

for index, i in enumerate(absolute_differentials):
    if index < 14:
        continue
    count = np.count_nonzero(absolute_differentials[index-14:index+1] > (std*1))
    if count == 0:
        errors.add("No Stratification")
errors

#=> {'Dimensions are out of tolerance',
     'Middle Zone Clusters',
     'No Mixture',
     'No Stratification'}

The final check is for manipulation, cherry-picking results, or tweaking the process mid-measurement. The results are “too good”.

### Over-Control

def sign_change(a, b):
    # Return 1 when the signs of a and b differ, 0 when they match.
    # Note that a difference of exactly 0 counts as a change here.
    if a > 0 and b > 0:
        return 0
    elif a < 0 and b < 0:
        return 0
    else:
        return 1

changes = []

for index, i in enumerate(first_differences):
    if index == 0:
        continue
    change = sign_change(first_differences[index],first_differences[index-1])
    changes.append(change)

for index, i in enumerate(changes):
    if index < 11:
        continue
    # 14 alternating points = 13 first differences = 12 consecutive sign changes
    if np.array(changes[index-11:index+1]).sum() >= 12:
        errors.add("Over Control")
errors

#=> {'Dimensions are out of tolerance',
     'Middle Zone Clusters',
     'No Mixture',
     'No Stratification'}

Afterthoughts

What we end up with is a control chart with a postscript that looks for all the problems and helps identify next steps.

One future build would be to chart where the errors are occurring by marking those points red, something I have planned, but it isn’t terribly necessary since a quick visual scan can often find these just as fast.
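
A rough sketch of how that overlay could look, reusing the variables from the script above (an idea for a future build, not part of the current analysis):

# Overlay out-of-tolerance points in red on top of the base series.
out_of_tol = np.absolute(meas - target) > tolerance

plt.plot(meas_num, meas, color='b', marker='o')
plt.plot(meas_num[out_of_tol], meas[out_of_tol], color='r', marker='o', linestyle='none')
plt.xlabel('Measurement Number')
plt.ylabel('Measurement')
plt.title('Tolerance Chart with Flagged Points')
plt.show()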