With the test runs complete (see the D entries in Figure 5.3), the analysis can be performed to determine the best empirical model fit to the data. In Minitab you select the option through the following menu picks: Stat > DOE > Response Surface > Analyze Response Surface Design. You then select the output response of interest (D in this case) and check to make sure all the terms are included in this initial regression analysis. After the terms are selected, click OK and Minitab will perform a regression analysis to estimate the best model fit to the collected Modeling DOE data.
For this particular case these initial results are shown in Figure 5.4. The first thing to notice is that the coefficient of determination (R-Sq) is 99%, well above the 80% minimum standard, which is a very good result. Of the ten potential terms in the model, the p-values indicate that six are significant. [Remember: p-values > 0.1 indicate that the contribution of that term is not significant.]
A revised model should then be fit by deselecting the four terms with large p-values. Minitab then regenerates the model, reporting the coefficients of the six retained terms, as illustrated in Figure 5.5.
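Since the actual run matrix from Figure 5.3 is not reproduced here, the following sketch uses invented data to illustrate what the Minitab analysis does under the hood: fit the full quadratic response surface model by least squares, compute R-Sq, and flag terms whose p-values exceed 0.1. All data and coefficients below are hypothetical.

```python
# Sketch of the response-surface regression Minitab performs, using
# invented data (the real Figure 5.3 runs are not reproduced here).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20
L1 = rng.uniform(50, 70, n)     # hypothetical factor ranges
L4 = rng.uniform(28, 36, n)
RA = rng.uniform(160, 180, n)
# Hypothetical "true" system: only some second-order terms matter.
D = 3.0*L1 + 5.0*L4 + 1.2*RA + 0.02*L1*RA + rng.normal(0, 2.0, n)

# Full quadratic model: intercept, 3 linear, 3 square, 3 interaction terms.
X = np.column_stack([np.ones(n), L1, L4, RA, L1**2, L4**2, RA**2,
                     L1*L4, L1*RA, L4*RA])
beta, *_ = np.linalg.lstsq(X, D, rcond=None)
resid = D - X @ beta
dof = n - X.shape[1]
s2 = resid @ resid / dof                     # residual variance estimate
cov = s2 * np.linalg.inv(X.T @ X)            # coefficient covariance
t = beta / np.sqrt(np.diag(cov))
p = 2 * stats.t.sf(np.abs(t), dof)           # two-sided p-value per term

r_sq = 1 - (resid @ resid) / np.sum((D - D.mean())**2)
keep = p < 0.1                               # retain significant terms only
```

Refitting with only the `keep` columns of `X` gives the reduced model, mirroring the deselect-and-regenerate step described above.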
The next step is to optimize the system. In this case we need to find a set of nominal settings for the three control factors that meets the range goal of 400 feet. This can be accomplished either graphically, using contour plots, or by working directly with the response surface model. Minitab allows you to generate contour plots. On the right is a plot of distance D versus L1 and RA for a fixed value of L4 (= 32 ft). The red line indicates all possible combinations of L1 and RA that result in a predicted range of 400 feet.
In this case we decide that smaller values of L1 are desired, so if we set L1 = 60 ft and L4 = 32 ft, we can solve for RA, obtaining 169.3 degrees.
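Working directly with the response surface model means solving the fitted equation for RA once L1 and L4 are pinned down. A minimal sketch of that step is below; the model coefficients are invented placeholders, not the Figure 5.5 values, so the RA it finds will not match the 169.3 degrees quoted above.

```python
# Solve a fitted range model for RA at L1 = 60 ft, L4 = 32 ft, target 400 ft.
# Coefficients are hypothetical placeholders, not the Figure 5.5 values.
def predicted_distance(L1, L4, RA):
    return 1.0*L1 + 2.0*L4 + 1.2*RA + 0.00722*L1*RA  # assumed reduced model

def solve_ra(target, L1, L4, lo=150.0, hi=190.0, tol=1e-6):
    # Bisection: predicted distance is monotone in RA over this bracket.
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if predicted_distance(L1, L4, mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

ra = solve_ra(400.0, 60.0, 32.0)   # release angle meeting the 400 ft goal
```

Reading the same answer off the 400 ft contour line and solving the equation numerically are equivalent; the contour plot simply shows the whole family of (L1, RA) solutions at once.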
We are now ready to verify the predictions. We set up the trebuchet at the optimized settings for all control factors and run a series of tests. Because of the importance of the impending siege, we are able to make 20 runs at this test condition. The data fail to reject the hypothesis that the target range of 400 feet was achieved, so we can claim the system models were verified.
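The verification step is a one-sample t-test of the 20 runs against the 400 ft target. A sketch using invented shot data (the real verification measurements are not reproduced here) and only the standard library:

```python
# One-sample t-test on 20 hypothetical verification shots: does the mean
# range differ from the 400 ft target? (Data invented for illustration.)
import statistics as st

shots = [398.2, 401.5, 399.8, 402.1, 397.6, 400.9, 399.2, 401.8, 398.7, 400.3,
         402.4, 397.9, 400.6, 399.5, 401.1, 398.4, 400.8, 399.9, 401.3, 398.9]
n = len(shots)
mean, sd = st.mean(shots), st.stdev(shots)
t_stat = (mean - 400.0) / (sd / n**0.5)   # test statistic vs. 400 ft
t_crit = 2.093                            # two-sided 5% critical value, 19 dof
reject = abs(t_stat) > t_crit             # False -> fail to reject, verified
```

"Fail to reject" is the best the test can do: it does not prove the mean is exactly 400 feet, only that the data are consistent with that claim at the chosen significance level.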
The final step in this process is to flow down the system requirement (that is, the Commanding Officer says we must hit the range window of 390-410 feet with no more than one miss per 100 attempts). The Monte Carlo Simulation (MCS) method will be used to allocate the requirement to the identified control factors (L1, L4, and RA). The first step is to convert the defect rate into a statistical measure. It can be shown that, for normally distributed variables, a 99% two-sided confidence interval (a shot can miss long or short) corresponds to +/- 2.57 standard deviations for a centered process. This corresponds to a Cp value of 2.57/3, or 0.86.
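The conversion from defect rate to Cp can be checked directly with the standard library's inverse normal CDF:

```python
# Convert "at most 1 miss per 100 shots" (99% two-sided coverage) into a
# z-value and the corresponding Cp for a centered process.
from statistics import NormalDist

z = NormalDist().inv_cdf(1 - 0.01/2)   # 99% two-sided -> z ~ 2.576
cp = z / 3                             # Cp = z/3 for a centered process ~ 0.86
```

The 2.57 quoted above is this z-value rounded to two decimals; dividing by 3 reflects the Cp convention that the specification half-width is expressed in units of 3 standard deviations.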
Using the derived empirical model and the MCS template shown earlier in Section 2.7 (Figure 2.6), an allocation can be made. The adjustable parameters are the assumed distributions for the three random variable inputs (L1, L4, and RA). In this case all were assumed to be normally distributed (so in the cells the formula =NORMINV(RAND(), M1, S1) was used, where M1 is the mean value and S1 is the standard deviation). These values were adjusted until the Cp and Cpk values for the output response (estimated distance) were at the desired value of 0.86.
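The spreadsheet MCS template can be sketched in a few lines of code: draw the three inputs from their assumed normal distributions, propagate each draw through the fitted model, and score the resulting range distribution with Cp and Cpk. The model coefficients and the mean/standard-deviation allocations below are invented for illustration, chosen so the output lands near the 0.86 target.

```python
# Monte Carlo allocation sketch: propagate assumed normal tolerances on
# L1, L4, RA through a hypothetical reduced model and score Cp/Cpk of range.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
L1 = rng.normal(60.0, 0.9, N)    # assumed mean / std-dev allocations
L4 = rng.normal(32.0, 0.6, N)
RA = rng.normal(169.0, 1.9, N)

# Hypothetical reduced model (coefficients invented for illustration).
D = 1.0*L1 + 2.0*L4 + 1.2*RA + 0.00722*L1*RA

LSL, USL = 390.0, 410.0          # the 390-410 ft requirement window
mu, sigma = D.mean(), D.std(ddof=1)
cp  = (USL - LSL) / (6*sigma)                    # process capability
cpk = min(USL - mu, mu - LSL) / (3*sigma)        # capability with centering
```

In practice the three standard deviations are the knobs: tighten whichever input the craftsmen can control most cheaply, rerun the simulation, and iterate until Cp and Cpk reach the required value.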
After finding an acceptable set of input distributions, one can then define the specification limits (LSL and USL) and the required Cp and Cpk values for each control factor. The Ingeniators talked with the craftsmen who build the trebuchets and determined that these values could be controlled with proper monitoring and inspections.
Of course, the rest is history. We institute the above requirements as part of an overall CPM strategy, which flows down to the craftsmen and their Statistical Quality Control program. We defeat the SixSigma Dynasty, and our trebuchet becomes the best-selling product in the world. All because some
Ingeniator decided to use a systematic data-driven approach (which 1000 years later would come to be called “Design of Experiments”).