Numerical Optimization Algorithm

Numerical optimization is a hill climbing technique. In addition to the design points, a set of random points is checked to see if there is a more desirable solution.

1. Let X, a vector of x_i for i = 1 .. n, represent the design variables over the optimization space, which is a subset of the design space.

2. Let y_j for j = 1 .. m be responses with upper bounds U_j and/or lower bounds L_j serving as constraints.

3. Let y(X) be the response to be optimized. Then f(X) = y(X) for minimization and f(X) = -y(X) for maximization. Define the constraints as a series of discontinuous functions (see the sketch after this list):

../../../_images/numerical-optimization-algorithm-1.png
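For illustration, the following is a minimal Python sketch of one way such bound-violation functions could be computed. The piecewise form shown here (zero inside the bounds, the amount of violation outside) is an assumption about the equation above, and the function name constraint_violations is purely illustrative.

```python
import numpy as np

def constraint_violations(y_values, lower, upper):
    """Return one violation term g_j per response y_j.

    Assumed form: g_j is zero when L_j <= y_j <= U_j and equals the
    amount by which the bound is exceeded otherwise.  A bound that does
    not apply can be passed as -inf or +inf.
    """
    y = np.asarray(y_values, dtype=float)
    lo = np.asarray(lower, dtype=float)
    up = np.asarray(upper, dtype=float)
    return np.maximum(0.0, lo - y) + np.maximum(0.0, y - up)
```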

These constraint functions produce a system of m constraints, allowing the constrained problem to be solved as an unconstrained one via a penalty function approach:

../../../_images/numerical-optimization-algorithm-2.png

where p is a penalty parameter > 0 applied for j = 1 to m. (The penalty parameter p starts at 1 and increases by a factor of 100 with each iteration. The number of iterations is limited to 15, which gives a maximum penalty factor of 10^30.) The goal of the penalty function is to provide a hill to climb when the optimization starts in an undesirable location.
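As a sketch of how the penalized objective might be assembled, the snippet below builds F(X) from f(X) and the violation terms. The quadratic weighting of the violations is an assumption; the document only states that a penalty parameter p > 0 weights the m constraint terms, and the name penalized_objective is illustrative.

```python
import numpy as np

def penalized_objective(f, violations, p):
    """Build the unconstrained objective F(X) = f(X) + p * sum_j g_j(X)^2.

    The squared-violation form is an assumption.  p is the penalty
    parameter, which starts at 1 and grows by a factor of 100 on each
    outer iteration, up to a maximum of 1e30.
    """
    def F(x):
        g = np.asarray(violations(x), dtype=float)
        return f(x) + p * np.sum(g ** 2)
    return F
```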

Finding an initial feasible region can be difficult. We start with a small value of the penalty parameter in a downhill simplex (Nelder-Mead) multi-dimensional pattern search (Numerical Recipes in Pascal by William H. Press et al., p. 326), which converges at either a stationary point or a design space boundary. Limits of the design space are maintained by setting f(X) to +10^10 at the design boundaries. The search is then restarted around the initial convergence point using a larger penalty value. Convergence is achieved when the distance moved or the change in the objective function is less than a ratio of 10^-6.
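Putting the pieces together, here is a hedged sketch of the restart strategy described above, using SciPy's Nelder-Mead implementation in place of the Numerical Recipes routine. The boundary handling (+10^10 outside the design space), the 100x penalty growth capped at 15 outer iterations, and the 10^-6 convergence ratio follow the text; the helper names and the squared-violation penalty are assumptions carried over from the sketches above.

```python
import numpy as np
from scipy.optimize import minimize

def optimize(f, violations, x0, x_lower, x_upper,
             max_outer=15, tol=1e-6, boundary_value=1e10):
    """Penalty-function optimization with Nelder-Mead restarts (sketch)."""
    p = 1.0
    x = np.asarray(x0, dtype=float)

    def bounded_F(xk):
        # Outside the design space the objective is forced to +1e10,
        # as described in the text, so the simplex stays inside.
        if np.any(xk < x_lower) or np.any(xk > x_upper):
            return boundary_value
        g = np.asarray(violations(xk), dtype=float)
        return f(xk) + p * np.sum(g ** 2)   # assumed squared-violation penalty

    prev = bounded_F(x)
    for _ in range(max_outer):
        res = minimize(bounded_F, x, method="Nelder-Mead",
                       options={"xatol": tol, "fatol": tol})
        # Converged when neither the point nor the objective moved by
        # more than a 1e-6 ratio between restarts.
        moved = np.linalg.norm(res.x - x) / max(np.linalg.norm(x), 1.0)
        changed = abs(res.fun - prev) / max(abs(prev), 1.0)
        x, prev = res.x, res.fun
        if moved < tol and changed < tol:
            break
        p = min(p * 100.0, 1e30)   # penalty grows by 100x, capped at 1e30
    return x
```

Because Nelder-Mead is derivative-free, the discontinuities introduced by the penalty terms and the design space boundary handling do not require gradient information.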