Stat-Ease Blog

Blog

Modeling both mean and standard deviation to achieve on-target results with minimal variation

posted by Mark Anderson on May 28, 2024

My colleague Richard Williams just completed a thorough three-part series of blogs detailing experiment designs aimed at building robustness against external noise factors, internal process variation, and combinations of both. In this follow-up, I present another, simpler approach to achieving on-target results with minimal variation: model not only the mean outcome but also the standard deviation. Experimenters making multiple measurements for every run in their design often overlook this opportunity.

For example, consider the paper helicopter experiment done by students in my annual DOE class at South Dakota Mines. The performance of these flying machines depends on paper weight, wing and body dimensions, and other easily controlled factors such as adding a paper clip to stabilize rotation. To damp down variability in launching and air currents, students are strongly encouraged to drop each of their ‘copters three times and model the means of the flight time and distance from target. I also urge them to analyze the standard deviations of these two measures. Those who do discover that ‘copters without paper clips exhibit significantly higher variability in on-target landings. This can be seen in the interaction plot pictured, which came from a split-plot factorial on paper helicopters done by me and colleagues at Stat-Ease (detailed here).


Interaction plot of factors D (body width) and E (clip) on a helicopter experiment

Putting on a paper clip dramatically decreased the standard deviation of distance from target for wide-bodied ‘copters, but not for narrow-bodied ones. Good to know!
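To make the per-run summaries concrete, here is a minimal Python sketch of what this approach models. The three flight times per build are invented for illustration, not data from the class experiment.

```python
from statistics import mean, stdev

# Hypothetical flight times (seconds) for three drops of each 'copter build.
# These numbers are invented for illustration, not actual class data.
drops = {
    "narrow body, no clip": [2.1, 2.4, 1.8],
    "narrow body, clip":    [2.0, 2.1, 2.2],
    "wide body, no clip":   [2.6, 1.9, 3.1],
    "wide body, clip":      [2.5, 2.6, 2.4],
}

for build, times in drops.items():
    m = mean(times)
    sd = stdev(times)  # sample standard deviation (n - 1 divisor)
    print(f"{build:22s} mean = {m:.2f} s, sd = {sd:.2f} s")
```

Each run then contributes two responses, the mean and the standard deviation, and both get modeled (standard-deviation responses are often log-transformed before modeling).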

When optimizing manufacturing processes via response surface methods, measuring variability as well as the mean response can provide valuable insights. For example, see the paper by me and Pat Whitcomb on Response Surface Methods (RSM) for Peak Process Performance at the Most Robust Operating Conditions for more details. The variability within the sample collection should represent the long-term variability of the process. As few as three measurements per experimental run may be needed, provided they are properly spaced to capture that long-term variation.

By simply capturing the standard deviation, experimenters equip themselves to deal with unknown external sources of variation. If the design is an RSM, this does not prevent them from also applying propagation of error (POE) to minimize internal variation transmitted to responses from poorly controlled process factors. However, for the greatest assurance of a robust operating process, take one of the more proactive approaches suggested by Richard.


Achieving robust processes via three experiment-design options (part 3)

posted by Richard Williams on May 1, 2024

The goal of robustness studies is to demonstrate that our processes will be successful upon implementation in the field when they are exposed to anticipated noise factors. There are several assumptions and underlying concepts that need to be understood when setting out to conduct a robustness study. Carefully considering these principles, three distinct types of designs emerge that address robustness:

I. Having settled on process settings, we desire to demonstrate the system is insensitive to external noise-factor variation, i.e., robust against Z factor influence.

II. Given we may have variation between our selected process settings and the actual factor conditions that may be seen in the field, we wish to find settings that are insensitive to this variation. In other words, we may set our controlled X factors, but these factors wander from their set points and cause variation in our results. Our goal is to achieve our desired Y values while minimizing the variation stemming from imperfect process factor control. The impact of external noise factors (Z’s) is not explored in this type of study.

III. Given a system having controllable factors (X’s) and non-controllable noise factors (Z’s) impacting one or more desirable properties (Y’s), we wish to find the ideal settings for the controllable factors that simultaneously maximize the properties of interest, while minimizing the impact of variation from both types of noise factors.

Read part 1 here.

Read part 2 here.

Design-Type III: A combination of the first two types

The idea here is to simultaneously involve the process factors (X’s) and the noise factors (Z’s) in the same DOE so as to identify the right process (controllable) factor settings to deliver the intended responses (Y’s) with minimal variation when both the process and noise factors vary.

One of the first to consider this holistic approach was Taguchi in the 1980s. Taguchi envisioned a factorial space whereby the controllable factors are changed in the usual way, and the noise factors are studied at each corner of the factorial space as a secondary factorial design. In his vocabulary, there was an inner array (of controllable factors) and an outer array (of noise factors). The design principle is shown below, with 16 data points collected as indicated in blue (for two process factors and two noise factors).


Taguchi array with 16 data points
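As a sketch, the crossed (inner × outer) array can be enumerated in a few lines of Python; the factor names and coded ±1 levels here are generic placeholders.

```python
from itertools import product

# Inner array: 2^2 factorial in the controllable process factors X1, X2.
# Outer array: 2^2 factorial in the noise factors Z1, Z2.
# Crossing them repeats the full outer array at each inner-array corner.
inner = list(product([-1, +1], repeat=2))
outer = list(product([-1, +1], repeat=2))

runs = [(x1, x2, z1, z2) for (x1, x2) in inner for (z1, z2) in outer]
print(len(runs))  # 4 x 4 = 16 runs, matching the 16 blue points
for run in runs[:4]:  # the four noise conditions at the first X corner
    print(run)
```

The cost of this crossing grows multiplicatively, which is one practical motivation for the combined-array alternatives discussed next.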

The principle of Taguchi’s approach is sound. There were, however, challenges regarding the analytical approach, and these led to further efforts by others to advance the science. Several concepts have been proposed. A solid candidate is the dual response surface approach, where process factors and noise factors are combined in the same study, and two responses are measured: the process mean (predicted Y values) and the variance of the predicted Y values at any given point within the design space. Armed with this knowledge, the experimenter can seek regions within the design space where the desired Y values are achieved but are also relatively insensitive to the variation of both process and noise factors.

How are these dual response surface studies done? Essentially the same as described before under Type II robustness studies, with one key exception: the external noise factors are included as factors within the study, and their influence is evaluated as though they were controllable factors (which they are, of course, during the DOE itself). The propagated error from all factors upon the responses is then evaluated as per Type II.

The difference comes during the numeric optimization. Since Z factors cannot be controlled in the field, these factors are set to their nominal values (usually the center of the range studied) during numeric optimization. The standard deviations of these factors still influence the POE assessment for responses.

Then the experimenter can use numeric optimization to achieve the desired Y value criteria while simultaneously minimizing the POE response, resulting in the identification of a robust region that is relatively insensitive to variation in both the X and Z factors.
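A rough sketch of that optimization step follows. Everything here is hypothetical: the model coefficients, the factor standard deviations, and the target of 62 are invented for illustration. The noise factor z is held at its coded nominal (0) while its standard deviation still feeds the POE term, and a simple grid search over the X space finds the lowest-POE settings that hit the target mean.

```python
import math

# Hypothetical fitted model: mean response as a function of two coded
# process factors (x1, x2) and one noise factor (z).  Coefficients invented.
def y_hat(x1, x2, z):
    return 60 + 5*x1 + 3*x2 + 2*z - 2*x1*x1 + 1.5*x1*z

def poe(x1, x2, z, sds=(0.1, 0.1, 0.2), h=1e-5):
    # POE = sqrt of summed squared (partial derivative * factor sd) terms,
    # with derivatives estimated by central finite differences.
    total = 0.0
    for i, s in enumerate(sds):
        hi = [x1, x2, z]; lo = [x1, x2, z]
        hi[i] += h; lo[i] -= h
        d = (y_hat(*hi) - y_hat(*lo)) / (2 * h)
        total += (d * s) ** 2
    return math.sqrt(total)

# Grid search: z fixed at nominal 0; keep settings whose predicted mean is
# within 0.5 of a target of 62, then pick the one with the smallest POE.
candidates = [
    (x1 / 10, x2 / 10)
    for x1 in range(-10, 11) for x2 in range(-10, 11)
    if abs(y_hat(x1 / 10, x2 / 10, 0) - 62) < 0.5
]
best = min(candidates, key=lambda p: poe(p[0], p[1], 0))
print(best, round(poe(best[0], best[1], 0), 3))
```

Software such as Stat-Ease does this with desirability-based numeric optimization rather than a grid, but the logic is the same: hit the Y criteria while driving POE down.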

For additional information on using a combined array, and an introduction to using POE to address both process factor and noise factor variation, read the 2002 white paper by Mark Anderson and Shari Kraber on Cost-Effective and Information-Efficient Robust Design For Optimizing Processes And Accomplishing Six Sigma Objectives.

Read a follow-up by Mark Anderson here.


Achieving robust processes via three experiment-design options (part 2)

posted by Richard Williams on April 11, 2024

The goal of robustness studies is to demonstrate that our processes will be successful upon implementation in the field when they are exposed to anticipated noise factors. There are several assumptions and underlying concepts that need to be understood when setting out to conduct a robustness study. Carefully considering these principles, three distinct types of designs emerge that address robustness:

I. Having settled on process settings, we desire to demonstrate the system is insensitive to external noise-factor variation, i.e., robust against Z factor influence.

II. Given we may have variation between our selected process settings and the actual factor conditions that may be seen in the field, we wish to find settings that are insensitive to this variation. In other words, we may set our controlled X factors, but these factors wander from their set points and cause variation in our results. Our goal is to achieve our desired Y values while minimizing the variation stemming from imperfect process factor control. The impact of external noise factors (Z’s) is not explored in this type of study.

III. Given a system having controllable factors (X’s) and non-controllable noise factors (Z’s) impacting one or more desirable properties (Y’s), we wish to find the ideal settings for the controllable factors that simultaneously maximize the properties of interest, while minimizing the impact of variation from both types of noise factors.

Read part 1 here.

Design-Type II: Robustness against variation in our set points for process factors

In this type of analysis, we aim to find the process factor settings that satisfy our requirements while being the most insensitive to expected variations in those settings. For example, we may decide baking temperature and baking time impact the rise height of bread, per the results of a lab-scale DOE. But we anticipate that on an industrial scale, changes in conveyor speed could affect baking time, and large ovens may cycle in temperature, giving rise to variation.

We can use propagation of error (POE) in our time-temperature DOE to find the sweet spot where such fluctuations in process factors yield the smallest amount of variation in results, i.e., the most robust settings for success in the field.
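To illustrate the POE calculation itself, here is a minimal Python sketch using an invented quadratic model for rise height; the coefficients and factor standard deviations are hypothetical, not from any actual bread study.

```python
import math

# Hypothetical fitted model: rise height (mm) versus coded baking time (x1)
# and temperature (x2).  Coefficients are invented for illustration.
def rise(x1, x2):
    return 50 + 4*x1 + 6*x2 - 3*x1*x1 - 2*x2*x2 + 1.5*x1*x2

def poe(x1, x2, sd_x1, sd_x2, h=1e-5):
    # Transmitted variation ~ sqrt(sum of (slope * factor sd)^2), with the
    # slopes (partial derivatives) estimated by central finite differences.
    d1 = (rise(x1 + h, x2) - rise(x1 - h, x2)) / (2 * h)
    d2 = (rise(x1, x2 + h) - rise(x1, x2 - h)) / (2 * h)
    return math.sqrt((d1 * sd_x1) ** 2 + (d2 * sd_x2) ** 2)

# Steeper regions of the surface transmit more factor variation:
print(round(poe(0.0, 0.0, 0.1, 0.1), 3))  # at the center point
print(round(poe(0.5, 0.8, 0.1, 0.1), 3))  # flatter spot, smaller POE
```

The "sweet spot" is wherever the surface flattens out, since small wobbles in time and temperature then barely move the response.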

For additional detail on using POE as a tool for robust design see Pat Whitcomb’s 2020 Overview of Robust Design, Propagation of Error and Tolerance Analysis.

Read part 3 here.

Read a follow-up by Mark Anderson here.


Achieving robust processes via three experiment-design options (part 1)

posted by Richard Williams on March 29, 2024

The goal of robustness studies is to demonstrate that our processes will be successful upon implementation in the field when they are exposed to anticipated noise factors. There are several assumptions and underlying concepts that need to be understood when setting out to conduct a robustness study.

First, it is necessary that the noise factors be identified and controllable during the robustness study itself. Noise factors that cannot be controlled cannot be evaluated. They will merely serve to increase variation within the study and exacerbate the unexplained variation encountered, i.e., they increase the residual (error) term.

Second, it is necessary to consider the type of noise variation being studied. Our modeling will be built upon X’s – the factors we control within our process – and Z’s – noise factors external to our system that we (eventually) cannot control. Variation around the chosen set points for X factors creates one source of noise that influences our responses (Y’s). Variation of identified external noise (Z’s) creates another source of influence. These Z factors, however, do not have a “set point.” In the field, they randomly appear and cause variation in process responses.

Some DOE experts differentiate the two groups of terms (variation in X’s versus variation in Z’s) by using the terms robustness for stability against X-factor variation, and ruggedness to express stability against Z-factor variation. That differentiation is not universally used and nowadays the term ruggedness is less common, so I will refer to both concepts under the umbrella term, “robust design.”

Carefully considering these principles, three distinct types of designs emerge that address robustness:

I. Having settled on process settings, we desire to demonstrate the system is insensitive to external noise-factor variation, i.e., robust against Z factor influence.

II. Given we may have variation between our selected process settings and the actual factor conditions that may be seen in the field, we wish to find settings that are insensitive to this variation. In other words, we may set our controlled X factors, but these factors wander from their set points and cause variation in our results. Our goal is to achieve our desired Y values while minimizing the variation stemming from imperfect process factor control. The impact of external noise factors (Z’s) is not explored in this type of study.

III. Given a system having controllable factors (X’s) and non-controllable noise factors (Z’s) impacting one or more desirable properties (Y’s), we wish to find the ideal settings for the controllable factors that simultaneously maximize the properties of interest, while minimizing the impact of variation from both types of noise factors.

DOE tools for the first type of robustness study are discussed below, with more to come in future blog posts.

Design-Type I: Robustness against external noise factors

Here, we aim to prove that our process is robust against expected noise factors due to field conditions. For example, we may settle on baking conditions for making bread, but we now wish to demonstrate that anticipated changes in ambient temperature, ambient humidity, and flour brand do not impact the rise height of the bread.

The first thing we need to ask ourselves is our threshold of acceptance. Do we care about 0.1 mm of rise height? Or perhaps anything under 10 mm (1 cm) of change from baseline is inconsequential. Whatever value we settle on is our response change delta (ΔY) of interest – the amount of noise factor impact deemed to be alarming. If an actual noise factor causes a change less than that value, we accept that we may not detect it in this DOE.

We also need some assessment of the natural variation in the system, i.e., how much variation in results is seen when repeatedly running the same conditions. This is our standard deviation, sigma (σ). Note that this value would be reported in the fit statistics if a prior DOE had been run on this system (generally the case prior to doing a robustness study).

For this type of robustness study, a resolution III two-level factorial design suffices, provided we have sufficient power (>80%) in the design, which is driven by the delta-to-sigma ratio (ΔY/σ). Power is the probability of detecting an effect of the chosen ΔY value, if indeed it exists. (Stat-Ease software conveniently calculates power when you construct factorial experiments using its design wizard.) If we do not have sufficient power, then more runs must be done, the best option being an upgrade to a resolution IV design. Due to their greater flexibility in the number of runs, Plackett-Burman designs are another commonly used resolution III template for these types of robustness studies.
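To see how power depends on the ΔY/σ ratio and the run count, here is a simplified normal-approximation sketch. Stat-Ease software uses a more exact calculation; this approximation, and the numbers fed into it, are illustrative only.

```python
from math import sqrt
from statistics import NormalDist

def factorial_power(delta, sigma, n_runs, alpha=0.05):
    # Normal-approximation power for detecting a two-level factorial effect
    # of size delta, given run-to-run sd sigma and n_runs total runs (half
    # at each factor level).  The standard error of an estimated effect in
    # a two-level factorial is 2*sigma/sqrt(N).
    se_effect = 2 * sigma / sqrt(n_runs)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test cutoff
    return NormalDist().cdf(delta / se_effect - z_crit)

# Power improves with a bigger delta-to-sigma ratio or more runs:
for n in (8, 12, 16):
    print(n, round(factorial_power(delta=10, sigma=5, n_runs=n), 2))
```

With ΔY/σ = 2, even modest designs reach the 80% threshold; halve that ratio and many more runs are needed.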

The hope of course is that we find nothing of significance, i.e., none of the noise factors cause the response to change by more than our chosen ΔY. That would be the goal of this type of robustness DOE.

If we do see noise factors appearing as significant, we have a dilemma: a resolution III experiment won’t reveal whether the indicated factor is responsible, or whether an interaction between two other factors is the real culprit. All we will know for sure is that the system is NOT robust against the noise factors we have evaluated, at least at the selected value for ΔY. Note, however, that if we ran a resolution IV DOE, we would have greater confidence that the indicated noise factor was the cause. But we really can’t be certain of that conclusion with anything less than a resolution V experiment.

It is for this reason that we do not recommend using a resolution III experiment for anything other than this type of robustness evaluation, where we merely wish to prove our process is insensitive to external noise factors.

For additional information, see Mark Anderson’s 2021 webinar on DOE for Ruggedness Testing.

Read part 2 here.

Read part 3 here.

Read a follow-up by Mark Anderson here.


Know the SCOR for a winning strategy of experiments

posted by Mark Anderson on Jan. 22, 2024

Observing process improvement teams at Imperial Chemical Industries in the late 1940s, George Box, the prime mover for response surface methods (RSM), realized that, as a practical matter, statistical plans for experimentation must be very flexible and allow for a series of iterations. Box and other industrial statisticians continued to hone the strategy of experimentation to the point where it became standard practice for stats-savvy industrial researchers.

Via their Management and Technology Center (sadly, now defunct), Du Pont then trained legions of engineers, scientists, and quality professionals on a “Strategy of Experimentation” called “SCO” for its sequence of screening, characterization and optimization. This now-proven SCO strategy, illustrated in the flow chart below, begins with fractional two-level designs to screen for previously unknown factors. During this initial phase, experimenters seek to discover the vital few factors that create statistically significant effects of practical importance for the goal of process improvement.

SCOR flowchart

The ideal DOE for screening resolves main effects free of any two-factor interactions (2FIs) in a broad and shallow two-level factorial design. I recommend the “resolution IV” choices color-coded yellow on our “Regular Two-Level” builder (shown below). To get a handy (pun intended) primer on resolution, watch at least the first part of this Institute of Quality and Reliability YouTube video on Fractional Factorial Designs, Confounding and Resolution Codes.
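For a concrete (if tiny) picture of how such a fraction is built, here is a sketch of a 2^(4-1) resolution IV design, generated by aliasing a fourth factor with the three-factor interaction (defining relation I = ABCD). The factor labels are generic placeholders.

```python
from itertools import product

# Base design: full 2^3 factorial in factors A, B, C at coded levels -1/+1.
# The fourth factor D is assigned the generator D = ABC, so main effects are
# aliased only with three-factor interactions (resolution IV) and therefore
# come out clear of two-factor interactions.
base = list(product([-1, +1], repeat=3))
design = [(a, b, c, a * b * c) for (a, b, c) in base]

for a, b, c, d in design:
    print(f"{a:+d} {b:+d} {c:+d} {d:+d}")
print(len(design), "runs instead of 16 for the full factorial")
```

Every row satisfies A·B·C·D = +1, which is exactly what the defining relation I = ABCD says; the trade-off is that two-factor interactions remain aliased in pairs.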

If you would like to screen more than 8 factors, choose one of our unique “Min-Run Screen” designs. However, I advise you to accept the program default of adding 2 runs, which makes the experiment less susceptible to botched runs.

Stat-Ease® 360 and Design-Expert® software conveniently color-code and label different designs.

After throwing the trivial many factors off to the side (preferably by holding them fixed or blocking them out), the experimental program enters the characterization phase (the “C”), where interactions become evident. This requires a higher resolution of V or better (green Regular Two-Level or Min-Run Characterization), or possibly full (white) two-level factorial designs. Also, add center points at this stage so curvature can be detected.

If you encounter significant curvature (per the very informative test provided in our software), use our design tools to augment your factorial design into a central composite for response surface methods (RSM). You then enter the optimization phase (the “O”).

However, if curvature is of no concern, skip to ruggedness (the “R” that finalizes “SCOR”) and, hopefully, confirm with a low-resolution (red) two-level design or a Plackett-Burman design (found under “Miscellaneous” in the “Factorial” section). Ideally you then find that your improved process can withstand field conditions. If not, you will need to go back to the beginning for a do-over.

The SCOR strategy, with some modification due to the nature of mixture DOE, works equally well for developing product formulations as it does for process improvement. For background, see my October 2022 blog on Strategy of Experiments for Formulations: Try Screening First!

Stat-Ease provides all the tools and training needed to deploy the SCOR strategy of experiments. For more details, watch my January webinar on YouTube. Then to master it, attend our Modern DOE for Process Optimization workshop.

Know the SCOR for a winning strategy of experiments!