Stat-Ease Blog


Greg’s DOE Adventure - Simple Comparisons

posted by Greg on July 26, 2019

Disclaimer: I’m not a statistician. Nor do I want you to think that I am. I am a marketing guy (with a few years of biochemistry lab experience) learning the basics of statistics, design of experiments (DOE) in particular. This series of blog posts is meant to be a light-hearted chronicle of my travels in the land of DOE, not be a textbook for statistics. So please, take it as it is meant to be taken. Thanks!

As I learn about design of experiments, it’s natural to start with simple concepts, such as an experiment where one of the inputs is changed to see if the output changes. That seems simple enough.

For example, let’s say you know from historical data that if 100 children brush their teeth with a certain toothpaste for six months, 10 will have cavities. What happens when you change the toothpaste? Does the number with cavities go up, down, or stay the same? That is a simple comparative experiment.

“Well then,” I say, “if you change the toothpaste and 6 months later 9 children have cavities, then that’s an improvement.”

Not so fast, I’m told. I’ve already forgotten about that thing called variability that I defined in my last post. Great.

In that first example, where 10 kids got cavities, that result comes from that particular sample of 100 kids. A different sample of 100 kids may produce 9 cavities; yet another may produce 11. There is some variability in there. It’s not 10 every time.

[Note: You can and should remove as much variability as you can. Make sure the children brush their teeth twice a day. Make sure it’s for exactly 2 minutes each time. But there is still going to be some variation in the experiment. Some kids are just more prone to cavities than others.]

How do you know when your observed differences are due to the changes to the inputs, and not from the variation?

It’s called the F-Test.

I’ve seen it written as:

\(F = \frac{n \, s_{\bar{Y}}^2}{s_{pooled}^2}\)

Where:

s = standard deviation

s² = variance

n = sample size

y = response

ȳ (“y bar”) = average response

In essence, the top of that fraction is the variance among the group averages (multiplied by the number of observations in each group), and the bottom is the pooled variance of the individual observations, that is, the variation already present in the experiment.

Now that, by itself, does not mean much to me (see disclaimer above!). But I was told to think of it as the ratio of signal to noise. The top part of that equation is the amount of signal you are getting from the new condition; it’s the amount of change you are seeing from the established mean with the change you made (new toothpaste). The bottom part is the total variation you see in all your data. So, the way I’m interpreting this F-Test is (again, see disclaimer above): measuring the amount of change you see versus the amount of change that is naturally there.

If that ratio is close to 1, more than likely there is no real difference between the two. In our example, changing the toothpaste probably makes no difference in the number of cavities.

As the F-value goes up, we start to see differences that can likely be credited to the new toothpaste. The higher the F-value, the less likely it is that we are seeing that difference by chance and the more likely it is due to the change in the input (toothpaste).
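To make that signal-to-noise idea concrete, here is a minimal Python sketch of my own (not from the original post), using made-up cavity counts for two hypothetical groups of kids. The by-hand calculation follows the signal-over-noise ratio described above and should match the F-value that scipy.stats.f_oneway reports for a one-way ANOVA.

```python
# Minimal sketch of the F-test as a signal-to-noise ratio (illustrative data).
import numpy as np
from scipy import stats

old_paste = np.array([10, 9, 11, 10, 12, 8])  # cavities per 100 kids, old toothpaste
new_paste = np.array([9, 8, 10, 7, 9, 8])     # cavities per 100 kids, new toothpaste

n = len(old_paste)  # observations per group (balanced groups assumed)

# "Signal": n times the variance of the group averages
signal = n * np.var([old_paste.mean(), new_paste.mean()], ddof=1)

# "Noise": pooled variance of the individual observations within each group
noise = (np.var(old_paste, ddof=1) + np.var(new_paste, ddof=1)) / 2

print("F by hand:", signal / noise)

# scipy's one-way ANOVA gives the same F plus a p-value
F, p = stats.f_oneway(old_paste, new_paste)
print("F from scipy:", F, "p-value:", p)
```

The bigger the F-value (and the smaller the p-value), the less plausible it is that the difference between the two toothpastes is just the natural kid-to-kid variation.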

Trivia

Question: Why is this thing called an F-Test?

Answer: It is named after Sir Ronald Fisher. He was a geneticist and statistician who developed this test while working on agricultural experiments. “Gee Greg, what kind of experiments?” He was looking at how different kinds of manure affected the growth of potatoes. Yup. How “Peculiar Poop Promotes Potato Plants”. At least that would have been my title for the research.


Four Tips for Graduate Students' Research Projects

posted by Shari on May 22, 2019

Graduate students are frequently expected to use design of experiments (DOE) in their thesis project, often without much DOE background or support. This results in some classic mistakes.

  1. Designs that were popular in the 1970s-1990s (before computers were widely available) have been replaced with more sophisticated alternatives. A common mistake – using a Plackett-Burman (PB) design either for screening purposes, or to gain process understanding for a system that is highly likely to have interactions. PB designs are badly aliased resolution III, so any interactions present in the system will bias many of the main effect estimates (see the sketch after this list). This increases the internal noise of the design and can easily lead to misleading and inaccurate results. Better designs for screening are regular two-level factorials at resolution IV or minimum-run (MR) designs. For details on PB, regular, and MR designs, read DOE Simplified.
  2. Reducing the number of replicated points will likely result in losing important information. A common mistake – reducing the number of center points in a response surface design down to one. The replicated center points provide an estimate of pure error, which is necessary to calculate the lack of fit statistic. Perhaps even more importantly, they reduce the standard error of prediction in the middle of the design space. Eliminating the replication may mean that results in the middle of the design space (where the optimum is likely to be) have more prediction error than results at the edges of the design space!
  3. If you plan to use DOE software to analyze the results, then use the same software at the start to create the design. A common mistake – designing the experiment based on traditional engineering practices, rather than on statistical best practices. The software very likely has recommended defaults that will produce a better design than what you can plan on your own.
  4. Plan your experimentation budget to include confirmation runs after the DOE has been run and analyzed. A common mistake – assuming that the DOE results will be perfectly correct! In the real world, a process is not improved unless the results can be proven. It is necessary to return to the process and test the optimum settings to verify the results.
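To see what “badly aliased” means in practice, here is a small numpy sketch of my own (not from the tips above) using a resolution III half-fraction of a three-factor design built with the generator C = AB. The column for factor C is identical to the A×B interaction column, so the two effects cannot be separated; a Plackett-Burman design has the same kind of problem, only with partial aliasing smeared across many interactions.

```python
import numpy as np

# Hypothetical resolution III half-fraction of a 2^3 design, generator C = AB.
A = np.array([-1,  1, -1,  1])
B = np.array([-1, -1,  1,  1])
C = A * B  # the design sets the C column equal to the AB interaction column

# The C and AB columns are identical, so any real AB interaction in the
# system shows up as (and biases) the estimated main effect of C.
print(np.column_stack([A, B, C, A * B]))
print("C aliased with AB:", np.array_equal(C, A * B))
```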

The number one thing to remember is this: using previous students’ theses as a basis for yours means that you may be repeating their mistakes and propagating poor practices! Don’t be afraid to forge a new path and showcase your talent for using state-of-the-art statistical designs and best practices.


Greg's DOE Adventure: Important Statistical Concepts behind DOE

posted by Greg on May 3, 2019

If you read my previous post, you will remember that design of experiments (DOE) is a systematic method used to find cause and effect. That systematic method includes a lot of (frightening music here!) statistics.

[I’ll be honest here. I was a biology major in college. I was forced to take a statistics course or two. I didn’t really understand why I had to take it. I also didn’t understand what was being taught. I know a lot of others who didn’t understand it either. But it’s now starting to come into focus.]

Before getting into the concepts of DOE, we must get into the basic concepts of statistics (as they relate to DOE).

Basic Statistical Concepts:

Variability
In an experiment or process, you have inputs you control, the output you measure, and uncontrollable factors that influence the process (things like humidity). These uncontrollable factors (along with other things like sampling differences and measurement error) are what lead to variation in your results.

Mean/Average
We all pretty much know what this is right? Add up all your scores, divide by the number of scores, and you have the average score.

Normal distribution
Also known as a bell curve due to its shape. The peak of the curve is the average, and then it tails off to the left and right.

Variance
Variance is a measure of the variability in a system (see above). Let’s say you have a bunch of data points from an experiment. You can find the average of those points (above). For each data point, subtract that average (so you see how far each piece of data is from the average). Then square that. Why? That way you get rid of the negative numbers; we only want positive numbers. Why? Because the next step is to add them all up, and you want a sum of all the differences without the negatives cancelling out the positives. Now divide that sum by one less than the number of data points you started with (n - 1). You are essentially taking an average of the squared differences from the mean.

That is your variance. Summarized by the following equation:

\(s^2 = \frac{\Sigma(Y_i - \bar{Y})^2}{(n - 1)}\)

In this equation:


Yᵢ is a data point
Ȳ is the average of all the data points
n is the number of data points
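Here is a quick Python sketch of my own (not from the post) of that calculation, using made-up data points and checking the by-hand result against the statistics module in the standard library, which also uses the (n - 1) divisor for sample variance.

```python
# Minimal sketch of the sample variance formula, with made-up data.
import statistics

data = [4.1, 5.0, 3.8, 4.6, 5.3]
n = len(data)
mean = sum(data) / n

# Sum of squared differences from the mean, divided by (n - 1)
variance = sum((y - mean) ** 2 for y in data) / (n - 1)

print(variance)                   # by-hand value
print(statistics.variance(data))  # the standard library agrees
```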

Standard Deviation
Take the square root of the variance. The variance is the average of the squares of the differences from the mean. Now you are taking the square root of that number to get back to the original units. One item I just found out: even though standard deviations are in the original units, you can’t add and subtract them. You have to keep it as variance (s²), do your math, then convert back.
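And a short follow-on sketch (again my own illustration, with invented numbers) of that last point: to combine independent sources of variation you add the variances, then take the square root at the end; adding the standard deviations themselves gives the wrong answer.

```python
import math

# Two independent sources of variation (made-up standard deviations)
sd_process = 3.0
sd_measurement = 4.0

# Wrong: adding standard deviations directly
print(sd_process + sd_measurement)  # 7.0

# Right: add the variances, then convert back with a square root
combined_sd = math.sqrt(sd_process**2 + sd_measurement**2)
print(combined_sd)  # 5.0
```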


Greg's DOE Adventure: What is Design of Experiments (DOE)?

posted by Greg on April 19, 2019


Hi there. I’m Greg. I’m starting a trip. This is an educational journey through the concept of design of experiments (DOE). I’m doing this to better understand the company I work for (Stat-Ease), the product we create (Design-Expert® software), and the people we sell it to (industrial experimenters). I will be learning as much as I can on this topic, then I’ll write about it. So, hopefully, you can learn along with me. If you have any comments or questions, please feel free to comment at the bottom.

So, off we go. First things first.

What exactly is design of experiments (DOE)?

When I first decided to do this, I went to Wikipedia to see what they said about DOE. No help there.

“The design of experiments (DOE, DOX, or experimental design) is the design of any task that aims to describe or explain the variation of information under conditions that are hypothesized to reflect the variation.” –Wikipedia

The what now?

That’s not what I would call a clearly conveyed message. After some more research, I have compiled this ‘definition’ of DOE:

Design of experiments (DOE), at its core, is a systematic method used to find cause-and-effect relationships. So, as you are running a process, DOE determines how changes in the inputs to that process change the output.

Obviously, that works for me since I wrote it. But does it work for you?

So, conceptually I’m off and running. But why do we need ‘designed experiments’? After all, isn’t all experimentation about combining some inputs, measuring the outputs, and looking at what happened?

The key words above are ‘systematic method’. It turns out that if we stick to statistical concepts, we can get a lot more out of our experiments. That is what I’m here for: understanding these ‘concepts’ within this ‘systematic method’, and how they give us an advantage.

Well, off I go on my journey!


Correlation vs. causality

posted by Greg on April 5, 2019


Recently, Stat-Ease Founding Principal Pat Whitcomb was interviewed to get his thoughts on design of experiments (DOE) and industrial analytics. It was very interesting, especially to this relative newbie to DOE. One passage really jumped out at me:

“Industrial analytics is all about getting meaning from data. Data is speaking and analytics is the listening device, but you need a hearing aid to distinguish correlation from causality. According to Pat Whitcomb, design of experiments (DOE) is exactly that. ‘Even though you have tons of data, you still have unanswered questions. You need to find the drivers, and then use them to advance the process in the desired direction. You need to be able to see what is truly important and what is not,’ says Pat Whitcomb, Stat-Ease founder and DOE expert. ‘Correlations between data may lead you to assume something and lead you on a wrong path. Design of experiments is about testing if a controlled change of input makes a difference in output. The method allows you to ask questions of your process and get a scientific answer. Having established a specific causality, you have a perfect point to use data, modelling and analytics to improve, secure and optimize the process.’"

It was the line ‘distinguish correlation from causality’ that got me thinking. It’s a powerful difference, one that most people don’t understand.

As I was mulling over this topic, I got into my car to drive home and played one of the podcasts I listen to regularly. It happened to be an interview with psychologist Dr. Fjola Helgadottir and her research into social media and mental health. As you may know, there has been a lot of attention paid to depression and social media use. When she brought up the concept of correlation and causality it naturally caught my attention. (And no, let’s not get into Jung’s concept of Synchronicity and whether this was a meaningful coincidence or not.)

The interesting thing that Dr. Helgadottir brought up was the correlation between social media and depression. That correlation is misunderstood by the general population as causality. She went on to say that recent research has not shown any causality between the two, but has shown that people who are depressed tend to use social media more than other people. So there is a correlation between social media use and depression, but one does not cause the other.

So, back to Pat’s comments. The data is speaking. We all need a listening device to tell us what it’s saying. For those of you in the world of industrial experimentation, experimental design can be that device, the one that distinguishes correlation from causality.