Stat-Ease Blog


Experimental Design in Chemistry: A Review of Pitfalls (Guest Post)

posted by James Cawse on Nov. 1, 2019

This blog post is from James Cawse, Consultant and Principal at Cawse and Effect, LLC. Jim uses his unique blend of chemical knowledge, statistical skills, industrial process experience, and quality commitment to find solutions for his clients' difficult experimental and process problems. He received his Ph.D. in Organic Chemistry from Stanford University. On top of all that, he's a great guy! Visit his website to find out more about Jim, his background, and his company.

Introduction

Getting the best information from chemical experimentation using design of experiments (DOE) is a concept that has been around for decades, although it is still painfully underused in chemistry. In a recent article, Leardi (1) pointed this out with an excellent tutorial on basic DOE for chemistry. The classic DOE text Statistics for Experimenters (2) also used many chemical illustrations of DOE methodology. In my consulting practice, however, I have encountered numerous situations where 'vanilla' DOE – whether from a book, software, or a Six Sigma course – struggles mightily because of the inherent complications of chemistry.

The basic rationale for using a statistically based DOE in any science is straightforward. The DOE method provides:

  • Points distributed in a rational fashion throughout “experimental space”.
  • Noise reduction by averaging and application of efficient statistical tools.
  • 'Synergy', typically the result of interactions between two or more factors, which is easily determined in a DOE.
  • An equation (model) that can then be used to predict further results and optimize the system.
All of these are provided in a typical DOE, which generally starts simply with a factorial design.
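
To make this concrete, here is a minimal sketch (in Python, not part of the original post) of how the coded runs of a two-level full factorial are laid out; the factor names are hypothetical placeholders:

    # Coded design points of a 2^3 full factorial.
    # Factor names are hypothetical placeholders.
    from itertools import product

    factors = ["temp", "time", "conc"]

    # Every combination of low (-1) and high (+1): 2^3 = 8 runs.
    runs = list(product([-1, +1], repeat=len(factors)))

    for i, run in enumerate(runs, start=1):
        print(f"Run {i}: " + ", ".join(f"{name}={level:+d}" for name, level in zip(factors, run)))

Each run sits at a corner of the experimental 'cube', which is what distributes the points rationally through the space.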

DOE works so well in most scientific disciplines because Mother Nature is kind. In general:

  • Most experiments can be performed with small numbers of 'well behaved' factors, typically simple numeric or qualitative factors at 2-3 levels.
  • Interactions typically involve only 2 factors; three-factor and higher interactions can be ignored.
  • The experimental space is relatively smooth; there are no cliffs (e.g. phase changes).
As a result, additive models are a good fit to the space and can be determined by straightforward regression.

Y = β₀ + β₁x₁ + β₂x₂ + β₁₂x₁x₂ + β₁₁x₁² + …
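
As a rough illustration of how such a model is fit, here is a short Python sketch using ordinary least squares; all data and coefficient values are simulated purely for the example:

    # Fitting the additive model above by ordinary least squares.
    # All data and coefficients are simulated for illustration only.
    import numpy as np

    rng = np.random.default_rng(1)
    x1 = rng.uniform(-1, 1, 30)
    x2 = rng.uniform(-1, 1, 30)

    # Assumed "true" surface plus noise, mirroring the model form.
    y = 5.0 + 2.0*x1 - 1.5*x2 + 0.8*x1*x2 + 0.5*x1**2 + rng.normal(0, 0.2, 30)

    # Model matrix: intercept, main effects, interaction, curvature.
    X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2])

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(dict(zip(["b0", "b1", "b2", "b12", "b11"], beta.round(2))))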

In contrast, chemistry offers unique challenges to the team of experimenter and statistician. Chemistry is a science replete with nonlinearities, complex interactions, and nonquantitative factors and responses. Chemical experiments therefore require more forethought and better planning than most DOEs. Chemistry-specific elements must be considered.

Mixtures

Above all, chemists make mixtures of 'stuff'. These may be catalysts, drugs, personal care items, petrochemicals, or others. A beginner trying to apply DOE to a mixture system may think to start with a conventional cubic factorial design. It soon becomes clear, however, that this creates an impossible situation: the (+1, +1, +1) corner requires 100% of A and B and C simultaneously! The actual experimental space of a mixture is a triangular simplex. This can be rotated into the plane to show a simplex design, and it extends easily to higher dimensions such as a tetrahedron.
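
For readers who want to see the geometry, here is a small Python sketch (component names are placeholders) that generates the classic {3,2} simplex-lattice: every blend of three components with proportions drawn from {0, 1/2, 1} that sums to exactly 1:

    # Generate a {3,2} simplex-lattice mixture design.
    from itertools import product
    from fractions import Fraction

    levels = [Fraction(i, 2) for i in range(3)]  # 0, 1/2, 1

    # Keep only blends whose proportions total exactly 1.
    blends = [p for p in product(levels, repeat=3) if sum(p) == 1]

    for a, b, c in blends:
        print(f"A={float(a):.2f}  B={float(b):.2f}  C={float(c):.2f}")

The six points that survive the constraint are the three pure components and the three 50/50 binary blends – the vertices and edge midpoints of the triangle.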

It is rare that a real mixture experiment will actually use 100% of the components as design points. A real experiment will be constrained by upper and lower bounds, or by proportionality requirements. The active ingredients may also be present in tiny amounts in a solvent. The response to a mixture may be a function of the amount used (fertilizers or insecticides, for example). And the conditions of the process in which the mixture is used may also be important, as in baking a cake – or optimizing a pharmaceutical reaction. All of these require special designs.

Fortunately, all of these simple and complex mixture designs have been extensively studied; they are covered by Cornell (3), Anderson et al. (4), and Design-Expert® software.

Kinetics

The goal of a kinetics study is an equation which describes the progress of the reaction. The fundamental reality of chemical kinetics is

Rate = f(concentrations, temperature).

However, the form of the equation is highly dependent on the details of the reaction mechanism! The very simplest reaction has the first-order form

Rate = k·C₁

which is easily treated by regression. The next most complex reaction has the form

Rate = k·C₁·C₂

in which the critical factors are multiplied – no longer the additive form of a typical linear model. The complexity continues to increase with multistep reactions.
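
To show why this matters in practice, here is a hedged sketch (Python with SciPy; the rate constant and concentrations are simulated, not measured) fitting the multiplicative second-order rate law by nonlinear regression rather than an additive linear model:

    # Fit Rate = k*C1*C2 by nonlinear least squares.
    # Concentrations and rates are simulated for illustration only.
    import numpy as np
    from scipy.optimize import curve_fit

    def rate_law(C, k):
        c1, c2 = C
        return k * c1 * c2

    rng = np.random.default_rng(7)
    c1 = rng.uniform(0.1, 1.0, 25)
    c2 = rng.uniform(0.1, 1.0, 25)
    rate = 0.35 * c1 * c2 * (1 + rng.normal(0, 0.05, 25))  # assumed k = 0.35

    (k_hat,), _ = curve_fit(rate_law, (c1, c2), rate, p0=[0.1])
    print(f"estimated k = {k_hat:.3f}")

A log transform of both sides would also linearize this particular form, but multistep mechanisms generally will not linearize so neatly.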

Catalysis studies are chemical kinetics taken to the highest degree of complication! In industry, catalysts are often improved over years or decades. This process frequently results in increasingly complex catalyst formulations, with components that interact in increasingly complex ways. A basic catalyst may have as many as five active co-catalysts. We now find multiple 2-factor interactions pointing to 3-factor interactions. As the catalyst is further refined, the Law of Diminishing Returns sets in: as you get closer to the theoretical limit, any improvement disappears in the noise!

Chemicals are not Numbers

As we look at the actual chemicals which may appear as factors in our experiments, we often find numbers appearing as part of their names. Often the only difference among these molecules is the length of the carbon chain (C-12, 14, 16, 18), and it is tempting to incorporate the chain length as numeric levels of the factor. Actually, this is a qualitative factor; calling it numeric invites serious error! The correct description, now available in Design-Expert, is 'Discrete Numeric'.
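
As a small illustration of the difference, here is a Python sketch (the data are invented) of dummy coding, which gives each chain length its own model term instead of forcing a straight-line trend in carbon number:

    # Treat chain length as categorical: one dummy column per level.
    import numpy as np

    chain = np.array(["C12", "C14", "C16", "C18", "C12", "C16"])

    levels = sorted(set(chain))
    dummies = np.column_stack([(chain == lvl).astype(int) for lvl in levels])

    print(levels)
    print(dummies)  # no assumption that the response is linear in chain length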

The real message, however, is that experimenters must never take off their 'chemist hat' when putting on a 'statistics hat'!


Reference Materials:

  1. Leardi, R., "Experimental design in chemistry: A tutorial." Anal Chim Acta 2009, 652 (1-2), 161-72.
  2. Box, G. E. P.; Hunter, J. S.; Hunter, W. G., Statistics for Experimenters. 2nd ed.; Wiley-Interscience: Hoboken, NJ, 2005.
  3. Cornell, J. A., Experiments with Mixtures. 3rd ed.; John Wiley and Sons: New York, 2002.
  4. Anderson, M.J.; Whitcomb, P.J.; Bezener, M.A., Formulation Simplified. Routledge: New York, 2018.


How Can I Convince Colleagues Working on Formulations to Use Mixture Design Rather than Factorials or Response Surface Methods as They Would Do for Process Studies?

posted by Martin Bezener, Ph.D., on Aug. 12, 2019

We recently published the July-August edition of The DOE FAQ Alert. One of the items in that publication was the question below, and it's too interesting not to share here as well.

Original question from a Research Scientist:

"Empowered by the Stat-Ease class on mixture DOE and the use of Design-Expert, I have put these tools to good use for the past couple of years. However, I am having to more and more defend why a mixture design is more appropriate than factorials or response surface methods when experimenting on formulations. Do you have any resources, blogs posts, or real-world data that would better articulate why trying to use a full factorial or central composite design on mixture components is not the most effective option?"

Answer from Stat-Ease Consultant Martin Bezener:

“First, I assume you are talking about factorials or response surface method (RSM) designs involving the proportions of the components. It makes no sense to use a factorial or RSM if you are dealing with amounts, since doubling the amount of everything should not affect the response, but it will in a factorial or response-surface model.

"There are some major issues with factorial designs. For one thing, the upper bounds of all the components need to sum to less than 1. For example, let’s say you experimented on three components with the following ranges:

A. X1: 10 - 20%
B. X2: 5 - 6%
C. X3: 10 - 90%

then the full-factorial design would lay out a run at all-maximum levels, which makes no sense as that gives a total of 116% (20+6+90). Oftentimes people get away with this because there is a filler component (like water) that takes the formulation to a fixed total of 100%, but this doesn't always happen.
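
As a quick sanity check on that arithmetic, a few lines of Python (using the hypothetical ranges above) enumerate the eight factorial corners and flag the impossible totals:

    # Enumerate the 2^3 factorial corners for the ranges above
    # and flag any total over 100%. Ranges are the hypothetical ones.
    from itertools import product

    ranges = {"X1": (10, 20), "X2": (5, 6), "X3": (10, 90)}

    for corner in product(*ranges.values()):
        total = sum(corner)
        flag = "  <-- impossible blend" if total > 100 else ""
        print(f"{corner}: total = {total}%{flag}")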

"Also, a factorial design will only consider the extreme combinations (lows/highs) of the mixture. So, you'll get tons of vertices but no points in the interior of the space. This is a waste of resources, since a factorial design doesn't allow fitting anything beyond an interaction model.

"An RSM design can be ‘crammed’ into mixture space to allow curvature fits, but this is generally a very poor design choice. Using ratios of components provides a work-around, but that has its own problems.

"Whenever you try to make the problem fit the design (rather than the other way around), you lose valuable information. A very nice illustration of this was provided in the by Mark Anderson in his article on the “Peril of Parts & the Failure of Fillers as Excuses to Dodge Mixture Design” in the May 2013 Stat-Teaser.”

An addendum from Mark Anderson, Principal of Stat-Ease and author of The DOE FAQ Alert:

"The 'problems' Martin refers to for using ratios (tedious math!) are detailed in RSM Simplified Chapter 11: 'Applying RSM to Mixtures'. You can learn more about this book and the others in the Simplified series ('DOE' and 'Formulation') on our website."




Four Tips for Graduate Students' Research Projects

posted by Shari on May 22, 2019

Graduate students are frequently expected to use design of experiments (DOE) in their thesis project, often without much DOE background or support. This results in some classic mistakes.

  1. Designs that were popular in the 1970s-1990s (before computers were widely available) have been replaced with more sophisticated alternatives. A common mistake – using a Plackett-Burman (PB) design either for screening purposes, or to gain process understanding for a system that is highly likely to have interactions. PB designs are badly aliased resolution III designs, so any interactions present in the system will bias many of the main effect estimates (see the sketch after this list). This increases the internal noise of the design and can easily lead to misleading and inaccurate results. Better designs for screening are regular two-level factorials at resolution IV or minimum-run (MR) designs. For details on PB, regular, and MR designs, read DOE Simplified.
  2. Reducing the number of replicated points will likely result in losing important information. A common mistake – reducing the number of center points in a response surface design down to one. The replicated center points provide an estimate of pure error, which is necessary to calculate the lack of fit statistic. Perhaps even more importantly, they reduce the standard error of prediction in the middle of the design space. Eliminating the replication may mean that results in the middle of the design space (where the optimum is likely to be) have more prediction error than results at the edges of the design space!
  3. If you plan to use DOE software to analyze the results, then use the same software at the start to create the design. A common mistake – designing the experiment based on traditional engineering practices, rather than on statistical best practices. The software very likely has recommended defaults that will make a better design than what you could plan on your own.
  4. Plan your experimentation budget to include confirmation runs after the DOE has been run and analyzed. A common mistake – assuming that the DOE results will be perfectly correct! In the real world, a process is not improved unless the results can be proven. It is necessary to return to the process and test the optimum settings to verify the results.
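
Regarding the Plackett-Burman aliasing in tip 1, here is a minimal sketch (Python; not from the original post) that builds the standard 12-run PB design from its generator row and shows that a main effect is partially aliased with a two-factor interaction that does not involve it:

    # Partial aliasing in a 12-run Plackett-Burman design.
    import numpy as np

    # Standard N=12 generator row, cycled 11 times, plus a row of -1's.
    gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
    rows = [np.roll(gen, i) for i in range(11)]
    rows.append(-np.ones(11, dtype=int))
    X = np.array(rows)          # 12 runs x 11 factor columns

    A = X[:, 0]                 # main effect column for factor A
    BC = X[:, 1] * X[:, 2]      # two-factor interaction B*C

    r = np.corrcoef(A, BC)[0, 1]
    print(f"correlation of A with BC: {r:+.3f}")  # -1/3: partial aliasing

If the BC interaction is active, its effect leaks into the estimate of A – exactly the bias described above.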

The number one thing to remember is this: using previous students' theses as a basis for yours means that you may be repeating their mistakes and propagating poor practices! Don't be afraid to forge a new path and showcase your talent for using state-of-the-art statistical designs and best practices.