Stat-Ease
Vol: 14 | No: 6 | Nov/Dec '14
The DOE FAQ Alert
     
 


Dear Experimenter,
Here’s another set of frequently asked questions (FAQs) about doing design of experiments (DOE), plus alerts to timely information and free software updates. If you missed the previous DOE FAQ Alert click here.

To open another avenue of communications with fellow DOE and Stat-Ease fans, sign up for The Stat-Ease Professional Network on LinkedIn.  A recent thread features “For fans of Stats, Hockey and the Minnesota Wild.”

 
Topics in the body text of this DOE FAQ Alert are headlined below (the "Expert" ones, if any, delve into statistical details):

1:  FAQ: Can you have too much power?
2:  Expert FAQ: Including insignificant terms in response surface method (RSM) models
3:  Events alert: Talk on Quality by Design (QbD) Design Space at the 2015 Annual Meeting of the International Forum on Process Analytical Chemistry (IFPAC) in Washington, D.C.
4:  Info alert: Great primer on DOE for chemical, pharmaceutical, paints and other process industries
5:  Workshop alert: Time to tool up on DOE!
 
 


PS. Quote for the month: Thinking outside the box.




1: FAQ: Can you have too much power?

Original question from Military Test Engineer:
“Using your new Design-Expert® version 9 software, I set up a split-plot multilevel categorical design with three hard-to-change (HTC) factors and one that is easy to change (ETC).  We have two vehicles, so we replicated the design in two blocks.  This created a 32-run test plan with 84.4% power on the HTC factors and 99.9% on the ETC.

Is 99.9% power too high?”

Answer from Stat-Ease Consultant Shari Kraber:
“The strong power on the easy-to-change (ETC) factor means that even small effects could be reported as statistically significant, even though the change they induce is of no practical interest to you.  So the worst that can happen is that you find an extra effect that you don't care about.  That's certainly better than not finding effects!  Otherwise, it looks like you are in good shape!”

I answered the same question, not knowing that Shari had already done so.  Here’s what I said:

“My view is that sizing the design as you did, based on the HTCs, which naturally over-powers the ETCs, is most sensible.  However, in general terms, if an experimenter is really more interested in the ETC and the ETC-HTC interactions (which are powered at the ETC level) and runs are quite expensive, then perhaps paring back the design to only 80% power on the ETCs (thus under-powering the HTCs) would be reasonable.  So there you go—even though I am an engineer, I’ve learned from statisticians not to give one definite answer, but rather multiple “it depends” scenarios. ;)”

Then I asked our head Consultant Pat Whitcomb to look over both answers and weigh in with his.  Here is what Pat said:

“Mark, both you and Shari gave acceptable answers.  Shari’s is more to the point of the question.  Your answer is more comprehensive and general, which I like.  However, I don’t think it fits this particular situation very well.  With three HTC factors and only one ETC factor, there are six HTC effects (three main effects and three two-factor interactions).  Because three out of four factors are HTC, my guess is that power at the whole-plot level is important.  Also, the only easy way to reduce runs is to perform just one replicate, in which case the power for the HTC main effects drops to only 42.2%, while the ETC power stays high at 92.7%.  Trying to pare back further to get 80% power on the ETC factor isn’t possible with three HTC factors using our standard templates.”
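
For readers who want to peek at the mechanics behind power numbers like these, below is a minimal Python sketch (scipy; the function name factorial_power and the assumption of 10 model terms and a signal-to-noise ratio of 2 are made up for this illustration) of the textbook noncentral-t power calculation for a single effect in a two-level factorial.  It deliberately ignores the split-plot error structure that Design-Expert applies to the HTC factors, so it will not reproduce the 84.4% and 99.9% figures above; it only shows how power shrinks when a replicated 32-run plan is pared back to a single 16-run replicate.

  # Rough sketch: power to detect one effect of size delta = 2*sigma in a
  # two-level factorial, ignoring the split-plot (whole-plot) error structure.
  from scipy import stats

  def factorial_power(n_runs, n_model_terms, signal_to_noise=2.0, alpha=0.05):
      df_error = n_runs - n_model_terms - 1              # residual degrees of freedom
      if df_error <= 0:
          return 0.0
      ncp = (signal_to_noise / 2.0) * n_runs ** 0.5      # noncentrality of the t test
      t_crit = stats.t.ppf(1 - alpha / 2, df_error)
      # Power = probability the noncentral t statistic lands outside +/- t_crit
      return (1 - stats.nct.cdf(t_crit, df_error, ncp)
              + stats.nct.cdf(-t_crit, df_error, ncp))

  print(f"32-run (replicated) plan: {factorial_power(32, 10):.1%}")
  print(f"16-run single replicate:  {factorial_power(16, 10):.1%}")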


(Learn more about power by attending the two-day computer-intensive workshop Experiment Design Made Easy.  Follow up with the half-day add-on class that details Factorial Split-Plot Designs for Hard-to-Change Factors.  Click on the titles for a description of these classes and link from their pages to the course outlines and schedules.  Then, if you like, enroll online.)




2: Expert FAQ: Including insignificant terms in response surface method (RSM) models

Original question from Master Black Belt in Six Sigma:
“In an ISO standard with selected illustrations of response surface methodology using central composite designs, they include insignificant terms in their final models, which I find very strange.  How do you view this?”

Answer:
The advice you question makes sense in the context of the “SCO” strategy of experimentation (screening, characterization and optimization*), because by that stage an RSM design includes only factors known to affect the system and presumed to create curvature in the response.  In that event there’s little value in paring terms out of the model—the surface maps do not change appreciably.  Furthermore, without careful oversight by statistically savvy subject-matter experts, automated regression algorithms can pare out too many terms.

On the other hand, when models get really big, such as for a mixture-process combined experiment, we’ve found that as a practical matter it does make sense to take out the resulting plethora of insignificant terms—provided they are not required to maintain model hierarchy.
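
To make the hierarchy rule concrete, here is a tiny hypothetical Python sketch (the term coding and the helper enforce_hierarchy are invented for this example).  Interactions are written as letter pairs such as "AB" and squared terms as doubled letters such as "CC"; retaining any such term pulls its parent main effects back into the reduced model.

  # Rough sketch: keep the parent main effects of any retained higher-order
  # term so that the reduced model remains hierarchical.
  def enforce_hierarchy(terms):
      keep = set(terms)
      for term in terms:
          for factor in term:          # e.g. "AB" contributes parents "A" and "B"
              keep.add(factor)
      return sorted(keep, key=lambda t: (len(t), t))

  print(enforce_hierarchy(["A", "AB", "CC"]))   # ['A', 'B', 'C', 'AB', 'CC']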

Here is what my co-author Pat Whitcomb and I said on this issue in our book RSM Simplified, in a Chapter 4 sidebar titled “Are You a ‘Tosser’ or a ‘Keeper’?”:

“Have you observed that, whenever two people operate in close proximity, one will become obsessively neat, tossing anything considered superfluous, and the other keeps any stuff that may conceivably have value? This can be a big source of friction for office- or housemates. Similarly, controversy persists as to the utility of clearing out insignificant model terms not required to maintain hierarchy (family structure). This is easy enough to do with various algorithms for reduction, as noted in the chapter 2 sidebar “A Brief Word on Algorithmic Model Reduction”, but is it right or wrong? Here’s what some of the experts on DOE say:

  • “In response surface work it is customary to fit the full model…”
    Myers and Montgomery, 2002, p. 742. [Response Surface Methodology]
  • “Choose the smallest order such that no significant terms are excluded.”
    Oehlert, 2000, p. 56. [A First Course in Design and Analysis of Experiments]

Our interpretation of this advice is that statisticians make allowances for reducing a quadratic model, for which the typical RSM design is geared, provided that the end result is a full linear or 2FI polynomial. Since insignificant terms by definition will not create much impact on the shape of the response surface, this issue may be moot as a practical matter. However, be on the lookout for cases where a factor and every one of its dependents (for example: A, AB, AC, A²) are all insignificant. Eliminating these terms would then reduce the dimensionality of the surface, thus making interpretation easier. If this happens to you, watch out, because it begs the question: Why didn’t you screen out this non-factor before doing an in-depth optimization experiment?”

I welcome input from any readers who wish to weigh in on this somewhat controversial statistical topic.

P.S. from Pat: “When there are more than a few insignificant terms, you may see a large difference between the adjusted and predicted R-squared values.  In that case, try reducing the model and see if doing so increases the predicted R-squared and brings it closer to the adjusted R-squared.  The predicted R-squared is sensitive to over-parameterization of the model, while the adjusted R-squared is not.”
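
To see Pat’s point in numbers, here is a minimal hypothetical Python sketch (numpy only; the data and the helper fit_stats are invented for illustration) that computes the adjusted R-squared and the PRESS-based predicted R-squared for a full quadratic model versus a reduced model with one truly inert squared term removed.  With a single superfluous term the gap between the two statistics is small; it widens as more insignificant terms pile into the model, which is the symptom Pat describes.

  # Rough sketch: adjusted vs. PRESS-based predicted R-squared for a full
  # quadratic model and a reduced model, on made-up two-factor RSM data.
  import numpy as np

  rng = np.random.default_rng(1)

  def fit_stats(X, y):
      n, p = X.shape
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      resid = y - X @ beta
      hat = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)        # leverages
      press = np.sum((resid / (1 - hat)) ** 2)               # leave-one-out errors
      ss_tot = np.sum((y - y.mean()) ** 2)
      r2_adj = 1 - (np.sum(resid**2) / (n - p)) / (ss_tot / (n - 1))
      r2_pred = 1 - press / ss_tot
      return r2_adj, r2_pred

  A, B = rng.uniform(-1, 1, size=(2, 30))                    # coded factor settings
  y = 5 + 2*A - 1.5*B + 1.2*A*B + 0.8*A**2 + rng.normal(0, 0.5, 30)   # no true B^2 effect

  ones = np.ones_like(A)
  full    = np.column_stack([ones, A, B, A*B, A**2, B**2])
  reduced = np.column_stack([ones, A, B, A*B, A**2])         # B^2 dropped

  for name, X in [("full quadratic", full), ("reduced", reduced)]:
      adj, pred = fit_stats(X, y)
      print(f"{name:15s}  adjusted R2 = {adj:.3f}  predicted R2 = {pred:.3f}")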

*See my June 20, 2011 StatsMadeEasy blog “Strategy of experimentation: Break it into a series of smaller stages”.

(Learn more about modeling by attending the two-day computer-intensive workshop Response Surface Methods for Process Optimization.  Click on the title for a complete description.  Link from this page to the course outline and schedule.  Then, if you like, enroll online.)




3: Events alert: Talk on Quality by Design (QbD) Design Space at the 2015 Annual Meeting of the International Forum on Process Analytical Chemistry (IFPAC) in Washington, D.C.

Stat-Ease Consultant Pat Whitcomb will present a talk on “Using Propagation of Error with Tolerance Intervals to Define a Design Space” at IFPAC 2015, January 26-28 in Washington, D.C.  I will tag along to exhibit our software.  Pat’s talk follows up on the one I presented at last year’s IFPAC conference, “Managing Uncertainty in Design Space,” which detailed the application of tolerance intervals to produce a conservative QbD design space from operating windows based on response surface methods.  Pat’s new presentation shows how experimenters can account for variation transmitted from process inputs via a mathematical tool called propagation of error (POE).  The resulting design space then becomes even more reliable over the long term for defining an operating window within which critical-to-quality (CTQ) characteristics will meet specifications.
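
For readers new to POE, the first-order idea is that a fitted response y = f(x1, x2, ...) transmits variation from its inputs in proportion to its local slopes: the variance of y is approximately the sum of (df/dxi)^2 times the variance of xi, plus the residual variance.  Below is a minimal hypothetical Python sketch (the function poe and the example surface are invented for illustration and are not Design-Expert's implementation) that evaluates this approximation with numerical derivatives; note how the steeper location on the surface transmits more variation.

  # Rough sketch of propagation of error (POE): standard deviation transmitted
  # to a fitted response from variation in the process inputs, via the
  # first-order (Taylor-series) approximation.
  import numpy as np

  def poe(f, x, sigma_x, sigma_resid, h=1e-5):
      x = np.asarray(x, dtype=float)
      var = sigma_resid ** 2                               # variance not explained by the model
      for i, s in enumerate(sigma_x):
          step = np.zeros_like(x)
          step[i] = h
          slope = (f(x + step) - f(x - step)) / (2 * h)    # central-difference derivative
          var += slope ** 2 * s ** 2                       # variance transmitted by input i
      return np.sqrt(var)

  # Made-up fitted surface in two coded factors
  f = lambda x: 80 + 4*x[0] - 3*x[1] + 2.5*x[0]*x[1] - 1.5*x[0]**2
  print(poe(f, x=[0.5, -0.2], sigma_x=[0.10, 0.05], sigma_resid=1.0))   # flatter spot
  print(poe(f, x=[1.0,  0.8], sigma_x=[0.10, 0.05], sigma_resid=1.0))   # steeper spot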

Click here for more details on IFPAC and their Annual Conference, including a link to register.

PS.  Do you need a speaker on DOE for a learning session within your company or technical society at regional, national, or even international levels?  If so, contact me.  It may not cost you anything if Stat-Ease has a consultant close by, or if a web conference will be suitable.  However, for presentations involving travel, we appreciate reimbursement for travel expenses.  In any case, it never hurts to ask Stat-Ease for a speaker on this topic.




4: Info alert: Great primer on DOE for chemical, pharmaceutical, paints and other process industries

Professor Wilhelm Kleppmann of Aalen University, author of a book on “Versuchsplanung” (experimental design), now in its 8th edition from Hanser, wrote this great primer, “Design of Experiments (DoE): Optimizing Products and Processes Efficiently,” for the November issue of Chemical Engineering magazine.  Check it out!  (Note: You must create a complimentary account to read the article.)




5: Workshop alert: Time to tool up on DOE!

All classes listed below will be held at the Stat-Ease training center in Minneapolis unless otherwise noted.  If possible, enroll at least 4 weeks prior to the date so your place can be assured.  Also, take advantage of a $400 discount when you take two complementary 2-day workshops that are offered on consecutive days.

*Receive a $200 discount per class when you enroll 2 or more students or enroll in consecutive 2-day workshops. Receive a $100 discount for enrolling in the FSPD workshop along with another class.

** Take both MIX and MIX2 to earn $400 off the combined tuition!

See this web page for complete schedule and site information on all Stat-Ease workshops open to the public.  To enroll, scroll down to the workshop of your choice and click on it, or call Rachel Pollack at 612-746-2038.  If spots remain available, bring along several colleagues and take advantage of quantity discounts in tuition.  Or, consider bringing in an expert from Stat-Ease to teach a private class at your site. Once you achieve a critical mass of about 6 students, it becomes very economical to sponsor a private workshop, which is most convenient and effective for your staff.  For a quote, e-mail [email protected].




I hope you learned something from this issue. Address your general questions and comments to me at: [email protected].

Please do not send me requests to subscribe or unsubscribe—follow the instructions at the end of this message.
Sincerely,

Mark

Mark J. Anderson, PE, CQE
Principal, Stat-Ease, Inc.
2021 East Hennepin Avenue, Suite 480
Minneapolis, Minnesota 55413 USA

PS. Quote for the month—thinking outside the box:

 
"Opportunity lies not in the norm, but in the variance.”

—Pam Henderson, CEO of NewEdge, Inc., keynote speaker at the American Society for Quality (ASQ) 2014 Technical Communities Conference (TCC)

Trademarks: Stat-Ease, Design-Ease, Design-Expert and Statistics Made Easy are registered trademarks of Stat-Ease, Inc.

Acknowledgements to contributors:
—Students of Stat-Ease training and users of Stat-Ease software
—Stat-Ease consultants Pat Whitcomb, Shari Kraber, Wayne Adams and Brooks Henderson
—Statistical advisor to Stat-Ease: Dr. Gary Oehlert
—Stat-Ease programmers led by Neal Vaughn
—Heidi Hansel Wolfe, Stat-Ease sales and marketing director, Karen Dulski, and all the remaining staff that provide such supreme support!

For breaking news from Stat-Ease go to this Twitter site.

DOE FAQ Alert ©2014 Stat-Ease, Inc.
Circulation: 6300 worldwide
All rights reserved.


 