Thanksgiving is fast approaching—time to begin the meal planning. With this in mind, the NBC Today show’s October 22nd tips for "75 Thanksgiving desserts for the sweetest end to your feast" caught my eye, in particular the Donut Loaf pound cake. My 11 grandkids would love this “giant powdered sugar donut” (and their Poppa, too!).
I became a big fan of pound cake in the early 1990s while teaching DOE to food scientists at Sara Lee Corporation. Their ready-made pound cakes really hit the spot. However, it is hard to beat starting from scratch and baking your own pound cake. The recipe goes back hundreds of years to a time when many people could not read, thus it simply called for a pound each of flour, butter, sugar and eggs. Not having a strong interest in baking and wanting to minimize ingredients and complexity (other than adding milk for moisture and baking powder for tenderness), I made this formulation my starting point for a mixture DOE, using the Sara Lee classic pound cake as the standard for comparison.
As I always advise Stat-Ease clients, before designing an experiment, begin with first principles. I took advantage of my work with Sara Lee to gain insights on the food science of pound cake. Then I checked out Rose Levy Beranbaum’s The Cake Bible from my local library. I was a bit dismayed to learn from this research that the experts recommended cake flour, which costs about four times more than the all-purpose (AP) variety. Having worked in a flour mill during my time at General Mills as a process engineer, I was skeptical. Therefore, I developed a way to ‘have my cake and eat it too’: via a multicomponent constraint (MCC), my experiment design incorporated both varieties of flour. Figure 1 shows how to enter this in Stat-Ease software.
Figure 1. Setting up the pound cake experiment with a multicomponent constraint on the flours
By the way, as you can see in the screen shot, I scaled back the total weight of each experimental cake to 1 pound (16 ounces by weight), keeping each of the four ingredients in a specified range with the MCC preventing the combined amount of flour from going out of bounds.
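To make the MCC idea concrete, here is a minimal Python sketch of the feasibility check such a constraint imposes. The numeric bounds below are hypothetical placeholders, not the actual limits from my design, and this is only an illustration of the logic, not how Stat-Ease software handles it.

```python
# A minimal sketch of a multicomponent constraint (MCC) check.
# The five components (ounces by weight) must total 16 oz, and the
# two flours combined must stay within bounds. All ranges here are
# hypothetical placeholders, not the actual design limits.

TOTAL = 16.0               # 1-lb cake: 16 ounces by weight
FLOUR_BOUNDS = (4.0, 6.0)  # hypothetical bounds on AP + cake flour combined

def feasible(ap_flour, cake_flour, butter, sugar, eggs, tol=1e-9):
    """Return True if the blend satisfies the mixture total and the MCC."""
    components = [ap_flour, cake_flour, butter, sugar, eggs]
    if any(x < 0 for x in components):
        return False
    if abs(sum(components) - TOTAL) > tol:    # mixture must sum to 16 oz
        return False
    lo, hi = FLOUR_BOUNDS
    return lo <= ap_flour + cake_flour <= hi  # the multicomponent constraint

# Example: a blend heavy on AP flour but inside the combined-flour bounds
print(feasible(5.0, 0.5, 3.5, 4.0, 3.0))  # True: sums to 16, flours total 5.5
```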
The trace plot shown in Figure 2 provides the ingredient directions for a pound cake that pleases kids (based on the tastes of my young family of five at the time), and they are straightforward: more sugar, fewer eggs, and go with the cheap AP flour (its track not appreciably different from that of the cake flour).
Figure 2. Trace plot for pound cake experiment
For all the details on my pound cake experiment, refer to "Mixing it up with Computer-Aided Design"—the manuscript for an article published in the November 1997 issue of Today's Chemist at Work. This DOE is also featured in “MCCs Made as Easy as Making a Pound Cake” in Chapter 6 of Formulation Simplified: Finding the Sweet Spot through Design and Analysis of Experiments with Mixtures.
The only thing I would do differently nowadays is pour a lot of powdered sugar over the top a la the Today show recipe. One thing that I will not do, despite its popularity during the Halloween/Thanksgiving season, is add pumpkin spice. But go ahead if you like—do your own thing while experimenting on pound cake for your family’s feast. Happy holidays! Enjoy!
To learn more about MCCs and master DOE for food, chemical, pharmaceutical, cosmetic or any other recipe improvement projects, enroll in a Stat-Ease “Mixture Design for Optimal Formulations” public workshop or arrange for a private presentation to your R&D team.
Hi folks! It was wonderful to meet with so many new prospects and long-standing clients at the Advanced Manufacturing Minneapolis expo last week. One highlight of the show was running our own design of experiments (DOE) in-booth: a test to pinpoint the height and distance of a foam cat launched from our Cat-A-Pult toy. Visitors got to choose a cat and launch it based on our randomized run sheet. We got lots of takers coming in to watch the cats fly, and we even got a visit from local mascot Goldy Gopher!
Mark, Tony, and Rachel are all UMN alums - go Gophers!
But I’m getting a bit ahead of myself: this experiment primarily shows off the ease of use and powerful analytical capabilities of Design-Expert® and Stat-Ease® 360 software. I’m no statistician – the last math class I took was in high school, over a decade ago – but even a marketer like me was able to design, run, and analyze a DOE with just a little advice. Here’s how it worked.
Let’s start at the beginning, with the design. My first task was to decide what factors I wanted to test. There were lots of options! The two most obvious were the built-in experimental parts of the toy: the green and orange knobs on either side of the Cat-A-Pult, with spring tension settings from 1 to 5.
However, there were plenty of other places where there could be variation in my ‘pulting system:

- How hard should the launcher be pressed for each launch?
- Would the launch surface (carpet versus hard flooring) change the results?
- Did the different colors of cat vary in mass?
Some of these questions can be answered with subject matter knowledge – in the case of launch pressure, by reading the instruction manual.
For our experiment, the surface question was moot: we had no way to test it, as the convention floor was covered in carpet. We also had no way to test beforehand if there were differences in mass between colors of cat, since we lacked a tool with sufficient precision. I settled on just testing the experimental knobs, but decided to account for some of this variation in other ways. We divided the experiment into blocks based on which specific Cat-A-Pult we were using, and numbered them from 1 to 5. And, while I decided to let people choose their cat color to enhance the fun aspect, we still tracked which color of cat was launched for each run - just in case.
Since my two chosen categoric factors had five levels each, I decided to use the Multilevel Categoric design tool to set up my DOE. One thing I learned from Mark is that these are an “ordinal” type of categoric factor: there is an order to the levels, as opposed to a factor like the color of the cat or the type of flooring (a “nominal” factor). We decided to just test 3 of the 5 Cat-A-Pults, trying to be reasonable about how many folks would want to play with the cats, so we set the design to have 3 replicates separated out into 3 blocks. This would help us identify if there were any differences between the specific Cat-A-Pults.
For my responses, I chose the Cat-A-Pult’s recommended ones: height and distance. My Stat-Ease software then gave me the full 5×5 (25-run) factorial design, with a total of 75 runs for the 3 replicates blocked by 'pult, meaning we would test every combination of green knob level and orange knob level on each Cat-A-Pult. More runs means more accurate and precise modeling of our system, and we expected to be able to get 75 folks to stop by and launch a cat.
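Out of curiosity, here is how you might mock up a similar run sheet in plain Python. This is only an illustrative sketch of the structure (full factorial, one replicate per block, randomized within each block), not how Stat-Ease software actually builds its designs.

```python
import itertools
import random

random.seed(42)  # reproducible shuffle for this illustration

GREEN = [1, 2, 3, 4, 5]   # green knob settings (ordinal categoric factor)
ORANGE = [1, 2, 3, 4, 5]  # orange knob settings (ordinal categoric factor)
PULTS = [1, 2, 3]         # blocks: one replicate per Cat-A-Pult

run_sheet = []
for pult in PULTS:
    combos = list(itertools.product(GREEN, ORANGE))  # full 5x5 factorial
    random.shuffle(combos)                           # randomize within block
    for green, orange in combos:
        run_sheet.append({"block": pult, "green": green, "orange": orange})

print(len(run_sheet))  # 75 runs: 25 combinations x 3 blocks
print(run_sheet[0])    # first randomized run of block 1
```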
And so, armed with my run sheet, I set up our booth experiment! I brought two measuring tapes for the launch zone: one laid along the side of it to measure distance, and one hanging from the booth wall to measure height. My measurement process was, shall we say, less than precise: for distance, the tester and I eyeballed the point at which the cat first landed after launch, then drew a line over to our measuring tape. For height, I took a video of the launch, then scrolled back to the frame at the apex of the cat’s arc and once again eyeballed the height measurement next to it. In addition to blocking the specific Cat-A-Pult used, we tracked which color of cat was selected in case that became relevant. (We also had to append A and B to the orange cat after the first orange cat was mistaken for swag!)
Whee! I'm calling that one at 23 inches.
Over the course of the conference, we completed 50 runs, getting through the full range of settings for ‘pults 1 and 2. While that’s less than we had hoped, it’s still plenty for a good analysis. I ran the analysis for height, following the steps I learned in our Finding the Vital Settings via Factorial Analysis eLearning module.
The half-normal plot of effects and the ANOVA table for Height.
The green knob was the only significant effect on Height, but the relatively low Predicted R² value of 0.36 tells us that there’s a lot of noise that the model doesn’t explain. Mark directed me to check the coefficients, where we discovered that there was a 5-inch variation in height between the two Cat-A-Pults! That’s a huge difference, considering that our Height response peaked at 27 inches.
With that caveat in mind, we looked at the diagnostic plots and the one-factor plot for Height. The diagnostics all looked fine, but the Least Significant Difference bars showed us something interesting: there didn’t seem to be significant differences between setting the green knob at 1-3, or between settings 4-5, but there was a difference between those two groups.
One-factor plot for Height.
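Stat-Ease drew those LSD bars for me, but if you want to probe pairwise differences yourself, a related (and more conservative) approach is Tukey's HSD. Here is a hedged Python sketch on invented height data that mimics the two groupings we saw; it is not our booth data.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)

# Made-up heights (inches) mimicking the pattern we observed: green knob
# settings 1-3 cluster low, settings 4-5 cluster high. Not the real data.
levels, heights = [], []
for setting, mean in zip([1, 2, 3, 4, 5], [14, 15, 15, 22, 23]):
    samples = rng.normal(mean, 3, size=10)
    levels += [setting] * 10
    heights += samples.tolist()

# Tukey's HSD tests every pair of settings at once, a stricter
# cousin of the LSD bars on the one-factor plot.
print(pairwise_tukeyhsd(np.array(heights), np.array(levels), alpha=0.05))
```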
With this analysis under my belt, I moved on to Distance. This one was a bit trickier, because while both knobs were clearly significant to the model, I wasn’t sure whether or not to include the interaction. I decided to include it because that’s what multifactor DOE is for, as opposed to one-factor-at-a-time experimentation: we’re trying to look for interactions between factors. So once again, I turned to the diagnostics.
The three main diagnostic plots for Distance.
Here's where I ran into a complication: our primary diagnostic tools told me there was something off with our data. There’s a clear S-shaped pattern in the Normal Plot of Residuals and the Residuals vs. Predicted graph shows a slight megaphone shape. No transform was recommended according to the Box-Cox plot, but Mark suggested I try a square-root transform anyways to see if we could get more of the data to fit the model. So I did!
The diagnostics again, after transforming.
Unfortunately, that didn’t fix the issues I saw in the diagnostics. In fact, it revealed that there’s a chance two of our runs were outliers: runs #10 and #26. Mark and I reviewed the process notes for those runs and found that run #10 might have suffered from operator error: he was the one helping our experimenter at the booth while I ran off for lunch, and he reported that he didn’t think he accurately captured the results the way I’d been doing it. With that in mind, I decided to ignore that run when analyzing the data. This didn’t result in a change in the analysis for Height, but it made a large difference when analyzing Distance. The Box-Cox plot recommended a log transform for analyzing Distance, so I applied one. This tightened the p-value for the interaction down to 0.03 and brought the diagnostics more into line with what we expected.
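For the curious, the Box-Cox search for a transform can be reproduced outside of Stat-Ease software with scipy. The sketch below runs it on fabricated, right-skewed stand-in data, not our runs; a lambda near 0 points to the log transform, and a lambda near 0.5 to the square root.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Fabricated right-skewed "distance" data -- stand-ins, not our runs.
distances = rng.lognormal(mean=3.5, sigma=0.4, size=50)

transformed, lam = stats.boxcox(distances)  # maximum-likelihood lambda
print(f"Box-Cox lambda: {lam:.2f}")         # near 0 => log transform indicated
```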
The two-factor interaction plot for Distance.
While this interaction plot is a bit trickier to read than the one-factor plot for Height, we can still clearly see that there’s a significant difference between certain sets of setting combinations. It’s obvious that setting the orange knob to 1 keeps the distance significantly lower than other settings, regardless of the green knob’s setting. The orange knob’s setting also seems to matter more as the green knob’s setting increases.
Normally, this is when I’d move on to optimization, and figuring out which setting combinations will let me accurately hit a “sweet spot” every time. However, this is where I stopped. Given the huge amount of variation in height between the two Cat-A-Pults, I’m not confident that any height optimization I do will be accurate. If we’d gotten those last 25 runs with ‘pult #3, I might have had enough data to make a more educated decision; I could set a Cat-A-Pult on the floor and know for certain that the cat would clear the edge of the litterbox when launched! I’ll have to go back to the “lab” and collect more data the next time we’re out at a trade show.
One final note before I bring this story to a close: the instruction manual for the Cat-A-Pult actually tells us what the orange and green knobs are supposed to do. The orange knob controls the release point of the Cat-A-Pult, affecting the trajectory of the cat, and the green knob controls the spring tension, affecting the force with which the cat is launched.
I mentioned this to Mark, and it surprised us both! The intuitive assumption would be that the trajectory knob would primarily affect height, but the results showed that the orange knob’s settings didn’t significantly affect the height of the launch at all. “That,” Mark told me, “is why it’s good to run empirical studies and not assume anything!”
We hope to see you the next time we’re out and about. Our next planned conference is our 8th European DOE User Meeting in Amsterdam, the Netherlands on June 18-20, 2025. Learn more here, and happy experimenting!
As a chemical engineer with roots as an R&D process developer, I find the appeal of design of experiments (DOE) in its ability to handle multiple factors simultaneously. Traditional scientific methods restrict experimenters to one factor at a time (OFAT), which is inefficient and does not reveal interactions. However, a simple-comparative OFAT often suffices for a process improvement. If this is all that’s needed, you may as well do it right statistically. As industrial-statistical guru George Box reportedly said, “DOE is a wonderful comparison machine.”
A fellow named William Sealy Gosset developed the statistical tools for simple-comparative experiments (SCE) in the early 1900s. As Head Experimental Brewer for Guinness in Dublin, he evaluated the soft-resin content of hops from various regions—a critical ingredient for the bitterness that preserves their beer.1 To compare the results from one source versus another with statistical rigor, Gosset invented the t-test—a great tool for DOE even today (and far easier to do with modern software!).
The t-test simply compares two means relative to the standard deviation of the difference. The result can be easily interpreted with a modicum of knowledge about normal distributions: as t increases beyond 2 standard deviations, the difference becomes more and more significant. Gosset’s breakthrough came from his adjustment of the distribution for small sample sizes, which makes the tails of the bell-shaped curve slightly fatter and the peak somewhat lower, as shown in Figure 1. The correction, in this case for a test comparing a sample of 4 results at one level versus 4 at the other, is minor but very important for getting the statistics right.
Figure 1. Normal curve versus t-distribution (probabilities plotted by standard deviations from zero)
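For anyone who wants to replay Gosset's comparison in modern software, here is a minimal Python sketch of a two-sample t-test on invented 4-versus-4 data, echoing the sample sizes behind Figure 1.

```python
from scipy import stats

# Invented soft-resin results, four samples per hop source.
source_a = [5.2, 5.6, 5.4, 5.8]
source_b = [6.1, 5.9, 6.4, 6.0]

# Two-sample t-test assuming equal variances (Gosset's classic setup).
# With n=4 per group there are only 6 degrees of freedom, so the
# fatter-tailed t-distribution, not the normal, sets the p-value.
t_stat, p_value = stats.ttest_ind(source_a, source_b)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```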
To illustrate a simple-comparative DOE, consider a case study on the filling of 16-ounce plastic bottles with two production machines—line 1 and line 2.2 The packaging engineers must assess whether they differ. To make this determination, they set up an experiment to randomly select 10 bottles from each machine. Stat-Ease software makes this easy via its Factorial, Randomized, Multilevel Categoric design option, as shown by the screen shot in Figure 2.
Figure 2. Setting up a simple comparative DOE in Stat-Ease software
The resulting volumes in ounces are shown below (mean outcome shown in parentheses).
Stat-Ease software translates the mean difference between the two machines (0.01 ounce) into a t value of 0.7989, that is, less than one standard deviation apart, which produces a p-value of 0.4347—far above the generally accepted standard of p<0.05 for significance. Its Model Graph in Figure 3 displays all the raw data, the means of each level, and their least significant difference (LSD) bars based on a t-test at p of 0.05—notice how they overlap from left to right—clearly the difference is not significant.
Figure 3. Graph showing effect on fill from one machine line to the other
Thus, from the stats and at first glance of the effect graph, it seems that the packaging engineers need not worry about any differences between the two machine lines. But hold on before jumping to a final conclusion: What if a difference of 0.01 ounce adds up to a big expense over a long period of time? The managers overseeing the annual profit and loss for the filling operation would then be greatly concerned. Before doing any designed experiment, it pays to do a power calculation to work out how many runs are needed to see a minimal difference (signal ‘delta’) of importance relative to the variation (noise ‘sigma’). In this case, a sample size of 10, a delta of 0.01 ounce, and a sigma (standard deviation) of 0.028 ounces generate (per Stat-Ease software) a power of only 11.8%—far short of the generally acceptable level of 80%. Further calculations reveal that if a difference this small really needed to be detected, they should fill 125 or more bottles on each line.
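Those power figures can be reproduced approximately with a sketch along these lines in statsmodels; the standardized effect size is just the quoted delta divided by the sigma.

```python
from statsmodels.stats.power import TTestIndPower

delta, sigma = 0.01, 0.028   # minimal important difference and noise
effect_size = delta / sigma  # standardized effect, roughly 0.357

analysis = TTestIndPower()

# Power with 10 bottles per line: comes out near the 11.8% cited above.
power = analysis.power(effect_size=effect_size, nobs1=10, alpha=0.05)
print(f"power at n=10 per line: {power:.1%}")

# Sample size per line needed to reach the standard 80% power.
n_needed = analysis.solve_power(effect_size=effect_size, power=0.8, alpha=0.05)
print(f"bottles per line for 80% power: {n_needed:.0f}")  # roughly 125
```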
In conclusion, it turns out that simple-comparative DOEs are not all that simple to do correctly from a statistical perspective. Some keys to getting these two-level OFAT experiments done right are:

- Randomize the selection of samples (or the run order) to protect against lurking variables.
- Run a power calculation up front to size the experiment for the smallest difference worth detecting.
- Weigh statistical significance (the p-value) against the practical importance of the difference.
In a previous Stat-Ease blog, my colleague Shari Kraber provided insights into Improving Your Predictive Model via a Response Transformation. She highlighted the most commonly used transformation: the log. As a follow-up to this article, let’s delve into another transformation: the square root, which deals nicely with count data such as imperfections. Counts follow the Poisson distribution, where the standard deviation is a function of the mean. This is not normal, which can invalidate ordinary-least-squares (OLS) regression analysis. An alternative modeling tool called Poisson regression (PR) provides a more precise way to deal with count data. However, to keep it simple statistically (KISS), I prefer the better-known method of OLS with application of the square root transformation as a work-around.
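A quick simulation shows why the square root works as a "KISS" fix: Poisson variance equals the mean, but the variance of the square root of a Poisson count settles near a constant 1/4, which is the stable behavior OLS expects. A minimal Python sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson counts at several means: the raw variance tracks the mean,
# but the variance of sqrt(counts) hovers near a constant ~0.25.
for mean in [5, 20, 80]:
    counts = rng.poisson(mean, size=100_000)
    print(f"mean={mean:3d}  var(counts)={counts.var():6.1f}  "
          f"var(sqrt)={np.sqrt(counts).var():.3f}")
```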
When Stat-Ease software first introduced PR, I gave it a go via a design of experiments (DOE) on making microwave popcorn. In prior DOEs on this tasty treat I worked at reducing the weight of off-putting unpopped kernels (UPKs). However, I became a victim of my own success by reducing UPKs to a point where my kitchen scale could not provide adequate precision.
With the tools of PR in hand, I shifted my focus to a count of the UPKs to test out a new cell-phone app called Popcorn Expert. It listens to the “pops” and via the “latest machine learning achievements” signals users to turn off their microwave at the ideal moment that maximizes yield before they burn their snack. I set up a DOE to compare this app against two optional popcorn settings on my General Electric Spacemaker™ microwave: standard (“GE”) and extended (“GE++”). As an additional factor, I looked at preheating the microwave with a glass of water for 1 minute—widely publicized on the internet to be the secret to success.
Table 1 lays out my results from a replicated full factorial of the six combinations, done in random order (run numbers shown in parentheses). Due to a few mistakes following the software’s plan (oops!), I added a few more runs along the way, increasing the number from 12 to 14. All of the popcorn produced tasted great, but as you can see, the yield varied severalfold.
| A: Preheat | B: Timing | UPKs, Rep 1 | UPKs, Rep 2 | UPKs, Rep 3 |
|---|---|---|---|---|
| No | GE | 41 (2) | 92 (4) | |
| No | GE++ | 23 (6) | 32 (12) | 34 (13) |
| No | App | 28 (1) | 50 (8) | 43 (11) |
| Yes | GE | 70 (5) | 62 (14) | |
| Yes | GE++ | 35 (7) | 51 (10) | |
| Yes | App | 50 (3) | 40 (9) | |
I then analyzed the results via OLS with and without a square root transformation, and then advanced to the more sophisticated Poisson regression. In this case, PR prevailed: It revealed an interaction, displayed in Figure 1, that did not emerge from the OLS models.
Figure 1: Interaction of the two factors—preheat and timing method
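For readers who want to try the comparison themselves, below is a rough Python sketch fitting a Poisson regression with the two-factor interaction to the counts in Table 1 via statsmodels. My actual analysis was done in Stat-Ease software, so the output details will differ.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# The 14 UPK counts from Table 1.
data = pd.DataFrame({
    "preheat": ["No"]*8 + ["Yes"]*6,
    "timing":  ["GE", "GE", "GE++", "GE++", "GE++", "App", "App", "App",
                "GE", "GE", "GE++", "GE++", "App", "App"],
    "upks":    [41, 92, 23, 32, 34, 28, 50, 43,
                70, 62, 35, 51, 50, 40],
})

# Poisson regression (a GLM with log link) models the counts directly,
# with the preheat-by-timing interaction included.
model = smf.glm("upks ~ preheat * timing", data=data,
                family=sm.families.Poisson()).fit()
print(model.summary())
```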
Going to the extended popcorn timing (GE++) on my Spacemaker makes time-wasting preheating unnecessary—actually producing a significant reduction in UPKs. Good to know!
By the way, the app worked very well, but my results showed that I do not need my cell phone to maximize the yield of tasty popcorn.
To succeed in experiments on counts, they must be:
For more details on the various approaches I’ve outlined above, view my presentation on Making the Most from Measuring Counts at the Stat-Ease YouTube Channel.
My colleague Richard Williams just completed a very thorough three-part series of blogs detailing experiment designs aimed at building robustness against external noise factors, internal process variation, and combinations of both. In this follow-up, I present another, simpler approach to achieving on-target results with minimal variation: model not only the mean outcome but also the standard deviation. Experimenters making multiple measurements for every run in their design often overlook this opportunity.
For example, consider the paper helicopter experiment done by students of my annual DOE class at South Dakota Mines. The performance of these flying machines depends on paper weight, wing and body dimensions, and other easily controlled factors such as putting on a paper clip to stabilize rotation. To dampen variability from launching and air currents, students are strongly encouraged to drop each of their ‘copters three times and model the means of the flight time and distance from target. I also urge them to analyze the standard deviations of these two measures. Those who do discover that ‘copters without paper clips exhibit significantly higher variability in on-target landings. This can be seen in the interaction plot pictured, which came from a split-plot factorial on paper helicopters done by me and colleagues at Stat-Ease (see this detailed here).
Putting on a paper clip dramatically decreased the standard deviation of distance from target for wide bodied ‘copters, but not for narrow bodied ones. Good to know!
When optimizing manufacturing processes via response surface methods, measuring variability as well as the mean response can provide valuable insights. For example, see this paper by me and Pat Whitcomb on Response Surface Methods (RSM) for Peak Process Performance at the Most Robust Operating Conditions for more details. The variability within the sample collection should represent the long-term variability of the process. As few as three measurements per experimental run may be needed, given proper spacing.
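Mechanically, this approach is mostly bookkeeping: keep every repeat measurement, collapse each run to a mean and a standard deviation, and analyze both as responses. Here is a minimal pandas sketch with made-up helicopter drops, not data from the actual class experiment.

```python
import pandas as pd

# Made-up flight distances (inches from target), three drops per run.
runs = pd.DataFrame({
    "run":  [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "clip": ["yes"]*3 + ["no"]*3 + ["yes"]*3,
    "dist": [4.1, 5.0, 4.6, 9.8, 15.2, 6.3, 5.2, 4.4, 4.9],
})

# Collapse the repeats: one mean and one standard deviation per run.
# Each column then becomes its own response in the DOE analysis
# (the log of the standard deviation is often modeled instead).
responses = runs.groupby(["run", "clip"])["dist"].agg(["mean", "std"])
print(responses)
```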
Simply by capturing the standard deviation, experimenters equip themselves to deal with unknown external sources of variation. If the design is an RSM, this does not prevent them from also applying propagation of error (POE) to minimize internal variation transmitted to responses from poorly controlled process factors. However, for the greatest assurance of a robust operating system, take one of the more proactive approaches suggested by Richard.