Design-Expert® software, v12, offers formulators a simplified modeling option crafted to maximize essential mixture-process interaction information while minimizing experimental costs. This new tool is nicknamed the “KCV model” after the initials of its developers – Scott Kowalski, John Cornell, and Geoff Vining. Below, Geoff reminisces on the development of these models.
To help learn this innovative methodology, first view a recorded webinar on the subject.
Next, sign up for the workshop "Mixture Design for Optimal Formulations".
The origin of the KCV designs goes back to a mixtures short-course that I taught for Doug Montgomery at an adhesives company in Ohio. One of the topics was mixture-process variables experiments, and Doug's notes for the course contained an example using ratios of mixture proportions with the process variables. Looking at the resulting designs, I recognized that ratios did not cover the mixture design space well. Scott Kowalski was beginning his dissertation at the time. Suddenly, he had a new chapter (actually two new chapters)!
The basic idea underlying the KCV designs is to start with a true second-order model in both the mixture and process variables and then to apply the mixture constraint. The mixture constraint is subtle and can produce several models, each with a different number of terms. A fundamental assumption underlying the KCV designs is that the mixture by process variable interactions are of serious interest, especially in production. Typically, corporate R&D develops the basic formulation, often under extremely pristine conditions. Too often, R&D makes the pronouncement that "Thou shall not play with the formula." However, there are situations where production is much smoother if we can take advantage of a mixture component by process variable interaction that improves yields or minimizes a major problem. Of course, that change requires changing the formula.
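To make this concrete, here is a minimal sketch in my own notation (not taken from the KCV papers), for q mixture components x_i and process variables z_k. The full second-order model in both sets of variables is

$$
y = \beta_0 + \sum_i \beta_i x_i + \sum_{i<j} \beta_{ij} x_i x_j + \sum_i \beta_{ii} x_i^2
+ \sum_k \gamma_k z_k + \sum_{k<l} \gamma_{kl} z_k z_l + \sum_k \gamma_{kk} z_k^2
+ \sum_i \sum_k \delta_{ik} x_i z_k .
$$

Because \(\sum_i x_i = 1\), the intercept, the pure quadratic mixture terms, and the linear process terms can be absorbed into other terms (for example, \(\beta_0 = \beta_0 \sum_i x_i\) and \(x_i^2 = x_i - \sum_{j \neq i} x_i x_j\)), and one resulting Scheffé-type form is

$$
y = \sum_i \beta_i^{*} x_i + \sum_{i<j} \beta_{ij}^{*} x_i x_j
+ \sum_i \sum_k \delta_{ik}^{*} x_i z_k
+ \sum_{k<l} \gamma_{kl} z_k z_l + \sum_k \gamma_{kk} z_k^2 ,
$$

which keeps the mixture-by-process interaction terms front and center, in line with the point above that the constraint can produce several different models.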
Cornell (2002) is the definitive text for all things dealing with mixture experiments. It covers every possible model for every situation. However, in his research John always tended to treat the process variables as nuisance factors. In fact, John's intuitions on Taguchi's combined array go back to the famous fish patty experiment in his book. The fish patty experiment looked at combining three different types of fish and involved three different processing conditions. John's focus was on creating the best formulation robust to the processing conditions, recognizing that these fish patties were destined for a major fast-food chain. In actual practice, the processing conditions typically were in the hands of teenagers who may or may not follow the protocol precisely.
John's basic instincts followed the corporate R&D-production divide. He rarely, if ever, truly worried about the mixture by process variable interactions. In addition, his first instinct always was to cross a full mixture component experiment with a full factorial experiment in the process variables. If he needed to fractionate a mixture-process experiment, he always fractionated the process variable experiment, because it primarily provided the "noise".
The basic KCV approach reversed the focus. Why can we not fractionate the mixture experiment in such a way that if a process variable is not significant, the resulting design projects down to a standard full mixture experiment? In the process, we also can see the impact of possible mixture by process variable interactions.
I still vividly remember when Scott presented the basic idea in his very first dissertation committee meeting. Of course, John Cornell was on Scott's committee. In Scott's first committee meeting, he outlined the full second-order model in both the mixture and process variables and then proceeded to apply the mixture constraint in such a way as to preserve the mixture by process variable interactions. John, who was not the biggest fan of optimal designs when a standard mixture experiment would work well, immediately jumped up and ran to the board where Scott was presenting the basic idea. John was afraid that we were proposing to use this model and apply our favorite D-optimal algorithm, which may or may not look like a standard mixture design. John and I were very good friends. I simply told him to sit down and wait a few minutes. He reluctantly did. Five minutes later, Scott presented our design strategy for the basic KCV designs, illustrating the projection properties where if a process variable was unimportant the mixture design collapsed to a standard full mixture experiment. John saw that this approach addressed his basic concerns about the blind use of an optimal design algorithm. He immediately became a convert, hence the C in KCV. He saw that we were basically crossing a good design in the process variables, which itself could be a fraction, with a clever fraction of the mixture component experiment.
John's preference to cross a mixture experiment with a process variable design meant that it was very easy to extend these designs to split-plot structures. As a result, we had two very natural chapters for Scott's dissertation. The first paper (Kowalski, Cornell, and Vining 2000) appeared in Communications in Statistics in a special issue guest edited by Norman Draper. The second paper (Kowalski, Cornell, and Vining 2002) appeared in Technometrics.
There are several benefits to the KCV design strategy. First, these designs have very nice projection properties; of course, they were constructed specifically to achieve this goal. Second, they can significantly reduce the overall design size while still preserving the ability to estimate highly informative models. Third, unlike the ratio approach that I taught in Ohio, the KCV designs cover the mixture experimental design space much better while providing equivalent information, since the underlying models for both approaches are equivalent.
It has been very gratifying to see Design-Expert incorporate the KCV designs. We hope that Design-Expert users find them valuable.
References
Cornell, J.A. (2002). Experiments with Mixtures: Designs, Models, and the Analysis of Mixture Data, 3rd ed. New York: John Wiley and Sons.
Kowalski, S.M., Cornell, J.A., and Vining, G.G. (2000). “A New Model and Class of Designs for Mixture Experiments with Process Variables,” Communications in Statistics – Theory and Methods, 29, pp. 2255-2280.
Kowalski, S.M., Cornell, J.A., and Vining, G.G. (2002). “Split-Plot Designs and Estimation Methods for Mixture Experiments with Process Variables,” Technometrics, 44, pp. 72-79.
This blog post is from James Cawse, Consultant and Principal at Cawse and Effect, LLC. Jim uses his unique blend of chemical knowledge, statistical skills, industrial process experience, and quality commitment to find solutions for his clients' difficult experimental and process problems. He received his Ph.D. in Organic Chemistry from Stanford University. On top of all that, he's a great guy! Visit his website to find out more about Jim, his background, and his company.
The basic rationale for using a statistically based DOE in any science is straightforward. The DOE method provides:
DOE works so well in most scientific disciplines because Mother Nature is kind. In general:
Y = B0 + B1*x1 + B2*x2 + B12*x1*x2 + B11*x1² + …
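As a minimal sketch (my own illustration with made-up numbers, not from the original post), a polynomial of this form can be fit to experimental data by ordinary least squares:

```python
import numpy as np

# Hypothetical data: two factors (x1, x2) and a measured response y
x1 = np.array([-1, 1, -1, 1, 0, 0])
x2 = np.array([-1, -1, 1, 1, 0, 0])
y = np.array([4.1, 6.3, 5.0, 9.2, 6.0, 6.2])

# Model matrix for Y = B0 + B1*x1 + B2*x2 + B12*x1*x2 + B11*x1^2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2])

# Ordinary least-squares fit of the polynomial coefficients
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["B0", "B1", "B2", "B12", "B11"], coeffs.round(3))))
```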
In contrast, chemistry offers unique challenges to the team of experimenter and statistician. Chemistry is a science replete with nonlinearities, complex interactions, and nonquantitative factors and responses. Chemical experiments therefore require more forethought and better planning than most DOEs, and chemistry-specific elements must be considered.
Above all, chemists make mixtures of ‘stuff’. These may be catalysts, drugs, personal care items, petrochemicals, or others. A beginner trying to apply DOE to a mixture system may think to start with a conventional cubic factorial design. It soon becomes clear, however, that this creates an impossible situation: the (+1, +1, +1) corner requires 100% of A and B and C! The actual experimental space of a mixture is a triangular simplex. This can be rotated into the plane to show a simplex design, and it easily extends to higher dimensions, such as a tetrahedron for four components.
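As a hedged illustration (my own code, not from the original post), the blends in a {q, m} simplex-lattice design can be enumerated directly, since every proportion is a multiple of 1/m and the proportions must sum to 1:

```python
from itertools import product

def simplex_lattice(q, m):
    """Generate {q, m} simplex-lattice points: each proportion is a
    multiple of 1/m and the q proportions sum exactly to 1."""
    points = []
    for combo in product(range(m + 1), repeat=q):
        if sum(combo) == m:
            points.append(tuple(c / m for c in combo))
    return points

# {3, 2} lattice: 6 blends on the triangular (simplex) mixture space
for point in simplex_lattice(3, 2):
    print(point)
```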
It is rare that a real mixture experiment will actually use 100% of the components as points. A real experiment will be constrained by upper and lower bounds, or by proportionality requirements. The active ingredients may also be tiny amounts in a solvent. The response to a mixture may be a function of the amount used (fertilizers or insecticides, for example). And the conditions of the process in which the mixture is used may also be important, as in baking a cake – or optimizing a pharmaceutical reaction. All of these will require special designs.
Fortunately, all of these simple and complex mixture designs have been extensively studied and are covered by Cornell3, Anderson et al4, and Design-Expert® software.
The goal of a kinetics study is an equation which describes the progress of the reaction. The fundamental reality of chemical kinetics is
Rate = f(concentrations, temperature).
However, the form of the equation is highly dependent on the details of the reaction mechanism! The very simplest reaction has the first-order form
Rate = k*C1
which is easily treated by regression. The next most complex reaction has the form
Rate = k*C1*C2
in which the critical factors are multiplied – no longer the additive form of a typical linear model. The complexity continues to increase with multistep reactions.
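As a minimal sketch (made-up data, my own illustration rather than anything from the original post), a multiplicative rate law such as Rate = k*C1*C2 can be fit by nonlinear regression, for example with scipy:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical rate data at several concentration pairs (made-up numbers)
C1 = np.array([0.1, 0.2, 0.4, 0.1, 0.2, 0.4])
C2 = np.array([0.1, 0.1, 0.1, 0.3, 0.3, 0.3])
rate = np.array([0.0031, 0.0059, 0.0122, 0.0088, 0.0181, 0.0355])

def second_order(X, k):
    """Rate = k * C1 * C2 -- multiplicative, not additive."""
    c1, c2 = X
    return k * c1 * c2

k_est, k_cov = curve_fit(second_order, (C1, C2), rate)
print(f"estimated rate constant k = {k_est[0]:.3f}")
```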
Catalysis studies are chemical kinetics taken to the highest degree of complication! In industry, catalysts are often improved over years or decades. This process frequently results in increasingly complex catalyst formulations with components that interact in increasingly complex ways. A basic catalyst may have as many as five active co-catalysts. We now find multiple 2-factor interactions pointing to 3-factor interactions. As the catalyst is further refined, the Law of Diminishing Returns sets in: as you get closer to the theoretical limit, any improvement disappears in the noise!
As we look at the actual chemicals which may appear as factors in our experiments, we often find numbers appearing as part of their names. Often the only difference among these molecules is the length of the chain (C-12, 14, 16, 18), and it is tempting to incorporate this as numeric levels of the factor. Actually, this is not a continuous numeric factor; calling it numeric invites serious error! The correct description, now available in Design-Expert, is 'Discrete Numeric'.
The real message, however, is that the experimenters must never take off their 'chemist hat' when putting on a 'statistics hat'!
Reference Materials:
[Disclaimer: I’m not a statistician. Nor do I want you to think that I am. I am a marketing guy (with a few years of biochemistry lab experience) learning the basics of statistics, design of experiments (DOE) in particular. This series of blog posts is meant to be a light-hearted chronicle of my travels in the land of DOE, not a textbook for statistics. So please, take it as it is meant to be taken. Thanks!]
So, I’ve gotten through some of the basics (Greg's DOE Adventure: Important Statistical Concepts behind DOE and Greg’s DOE Adventure - Simple Comparisons). These are the ‘building blocks’ of design of experiments (DOE). However, I haven’t explored an actual DOE method yet. I start today with factorial design.
In this case, the horizontal line (x-axis) is time and the vertical line (y-axis) is temperature. The area inside the box they form is called the Experimental Space. Each corner of this box is labeled as follows:
1 – low time, low temperature (resulting in crunchy, matchstick-like pasta), which can be coded minus-minus (-,-)
2 – high time, low temperature (+,-)
3 – low time, high temperature (-,+)
4 – high time, high temperature (making a mushy mass of nasty) (+,+)
One takeaway at this point is that when a test is run at each point above, we have 2 results for each level of each factor (i.e., 2 tests at low time, 2 tests at high time). In factorial design, the estimates of the effects (that the factors have on the results) are based on the average of these two points, increasing the statistical power of the experiment.
Power is the chance that an effect will be found when there is an effect to be found. In statistical speak, power is the probability that an experiment correctly rejects the null hypothesis when the alternative hypothesis is true.
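To make the effect-averaging idea concrete, here is a minimal sketch with made-up pasta scores (my own illustration, not part of the original post):

```python
import numpy as np

# 2x2 factorial in coded units: (time, temperature) -> hypothetical "pasta quality" score
runs = {
    (-1, -1): 2.0,   # low time, low temp (crunchy)
    (+1, -1): 6.5,   # high time, low temp
    (-1, +1): 7.0,   # low time, high temp
    (+1, +1): 3.0,   # high time, high temp (mushy)
}

levels = np.array(list(runs.keys()), dtype=float)
y = np.array(list(runs.values()))

# Each main effect is the average of the two runs at the high level
# minus the average of the two runs at the low level of that factor.
time_effect = y[levels[:, 0] == 1].mean() - y[levels[:, 0] == -1].mean()
temp_effect = y[levels[:, 1] == 1].mean() - y[levels[:, 1] == -1].mean()
print(f"time effect: {time_effect:+.2f}, temperature effect: {temp_effect:+.2f}")
```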
If we look at the same experiment from the perspective of altering just one factor at a time (OFAT), things change a bit. In OFAT, we start at point #1 (low time, low temp) just like in the factorial design we just outlined.
Here, we go from point #1 to #2 by lengthening the time in the water. Then we go from #1 to #3 by changing the temperature. See how this limits the number of data points we have? To achieve the same power as the factorial design, the experimenter has to make 6 different tests (2 runs at each point).
After seeing these results of factorial design vs OFAT, you may be wondering why OFAT is still used. First of all, OFAT is what we are taught from a young age in most science classes. It’s easy for us, as humans, to comprehend. When multiple factors are changed at the same time, we don’t process that information very well. The advantage these days is that we live in a computerized world. A computer running software like Design-Expert® can break it all down by doing the math for us and helping us visualize the results.
Additionally, with the factorial design, because we have results from all 4 corners of the design space, we have a good idea of what is happening in the upper right-hand area of the map. This allows us to look for interactions between factors.
That is my introduction to Factorial Design. I will be looking at more of the statistical end of this method in the next post or two. I’ll try to dive in a little deeper to get a better understanding of the method.
Expand your design of experiments (DOE) expertise at the 2019 Fall Technical Conference.
Martin Bezener will be teaching a pre-conference short course on Practical DOE and giving a session talk on binary data (abstracts are below). Shari Kraber will be hosting our exhibit booth and can discuss your DOE needs.
Short Course Abstract (Wed, Sep 25): Practical DOE: ‘Tricks of the Trade’
In this dynamic short course, Stat-Ease consultant Martin Bezener reveals DOE tricks of the trade that help you make the most of statistical design and analysis of experiments. Come and learn many secrets for design of experiments (DOE) success, such as:
Session 6A Abstract (Fri, Sep 27, 1:30pm): Practical Considerations in the Design of Experiments for Binary Data
Binary data is very common in experimental work. In some situations, it is not possible to measure a continuous response. While the analysis of binary data is a well-developed field with an abundance of tools, design of experiments (DOE) for binary data has received little attention, especially in the practical aspects that are most useful to experimenters. Most of the work, in our experience, has been too theoretical to put into practice. Many of the well-established designs that assume a continuous response don’t work well for binary data, yet they are often used for teaching and consulting purposes. In this talk, I will briefly motivate the problem with some real-life examples we’ve seen in our consulting work. I will then provide a review of the work that has been done up to this point. Then I will explain some outstanding open problems and propose some solutions. Some simulation results and a case study will conclude the talk.
Be sure to mark these as must-attend events on your FTC schedule!
Hurry – Early Bird Registration ends Aug 25. Hotel block ends Aug 27. Final Registration ends September 13.
We recently published the July-August edition of The DOE FAQ Alert. One of the items in that publication was the question below, and it's too interesting not to share here as well.
Original question from a Research Scientist:
"Empowered by the Stat-Ease class on mixture DOE and the use of Design-Expert, I have put these tools to good use for the past couple of years. However, I am having to more and more defend why a mixture design is more appropriate than factorials or response surface methods when experimenting on formulations. Do you have any resources, blogs posts, or real-world data that would better articulate why trying to use a full factorial or central composite design on mixture components is not the most effective option?"
Answer from Stat-Ease Consultant Martin Bezener:
“First, I assume you are talking about factorials or response surface method (RSM) designs involving the proportions of the components. It makes no sense to use a factorial or RSM if you are dealing with amounts, since doubling the amount of everything should not affect the response, but it will in a factorial or response-surface model.
"There are some major issues with factorial designs. For one thing, the upper bounds of all the components need to sum to less than 1. For example, let’s say you experimented on three components with the following ranges:
A. X1: 10 - 20%
B. X2: 5 - 6%
C. X3: 10 - 90%
then the full-factorial design would lay out a run at all-maximum levels, which makes no sense as that gives a total of 116% (20+6+90). Oftentimes people get away with this because there is a filler component (like water) that takes the formulation to a fixed total of 100%, but this doesn't always happen.
"Also, a factorial design will only consider the extreme combinations (lows/highs) of the mixture. So, you'll get tons of vertices but no points in the interior of the space. This is a waste of resources, since a factorial design doesn't allow fitting anything beyond an interaction model.
"An RSM design can be ‘crammed’ into mixture space to allow curvature fits, but this is generally a very poor design choice. Using ratios of components provides a work-around, but that has its own problems.
"Whenever you try to make the problem fit the design (rather than the other way around), you lose valuable information. A very nice illustration of this was provided in the by Mark Anderson in his article on the “Peril of Parts & the Failure of Fillers as Excuses to Dodge Mixture Design” in the May 2013 Stat-Teaser.”
An addendum from Mark Anderson, Principal of Stat-Ease and author of The DOE FAQ Alert:
"The 'problems' Martin refers to for using ratios (tedious math!) are detailed in RSM Simplified Chapter 11: 'Applying RSM to Mixtures'. You can learn more about this book and the others in the Simplified series ('DOE' and 'Formulation') on our website."