Probabilistic design

Probabilistic design is a discipline within engineering design. It deals primarily with the consideration and minimization of the effects of random variability upon the performance of an engineering system during the design phase. Typically, the effects studied and optimized relate to quality and reliability. It differs from the classical approach to design by assuming a small, quantified probability of failure instead of relying on a safety factor. Probabilistic design is used in a variety of different applications to assess the likelihood of failure. Disciplines which extensively use probabilistic design principles include product design, quality control, systems engineering, machine design, civil engineering (particularly useful in limit state design) and manufacturing.

Objective and motivations
When using a probabilistic approach to design, the designer no longer thinks of each variable as a single value or number. Instead, each variable is viewed as a continuous random variable with a probability distribution. From this perspective, probabilistic design predicts the flow of variability (or distributions) through a system.

Because there are so many sources of random and systemic variability when designing materials and structures, it is greatly beneficial for the designer to model the factors studied as random variables. By considering this model, a designer can make adjustments to reduce the flow of random variability, thereby improving engineering quality. Proponents of the probabilistic design approach contend that many quality problems can be predicted and rectified during the early design stages and at a much reduced cost.

Typically, the goal of probabilistic design is to identify the design that exhibits the smallest effects of random variability. Minimizing random variability is essential to probabilistic design because it limits uncontrollable factors while also providing a much more precise determination of the failure probability. The resulting design could be the one option out of several that proves most robust. Alternatively, it could be the only design option available, but with the optimum combination of input variables and parameters. This second approach is sometimes referred to as robustification, parameter design or design for six sigma.

Sources of variability
Though the laws of physics dictate the relationships between variables and measurable quantities such as force, stress, strain, and deflection, there are still three primary sources of variability when considering these relationships.

The first source of variability is statistical, arising from the use of a finite sample size to estimate parameters such as yield stress, Young's modulus, and true strain. Of the three sources, this measurement uncertainty is the most easily minimized, since the variance of an estimated mean is inversely proportional to the sample size.

We can represent the variance due to measurement uncertainty through a corrective factor $$B$$, which multiplies the true mean $$X$$ to yield the measured mean $$\bar X$$; that is, $$\bar X = B X$$.

This yields the result $$\bar B = \frac{\bar X}{X}$$, and the variance of the corrective factor $$B$$ is given as:

$$Var[B]= \frac{Var[\bar X]}{X^2} = \frac{Var[X]}{nX^2}$$

where $$B$$ is the correction factor, $$X$$ is the true mean, $$\bar X$$ is the measured mean, and $$n$$ is the number of measurements made.
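
As a rough illustration of this scaling, the following Python sketch simulates repeated measurement campaigns and compares the empirical variance of the corrective factor $$B$$ with the prediction $$Var[X]/(nX^2)$$. The 250 MPa true mean and 10 MPa measurement spread are assumed values chosen for the example, not taken from any real dataset.

```python
import random
import statistics

# Hypothetical illustration: true yield stress 250 MPa, measurement
# standard deviation 10 MPa (assumed values for this sketch).
random.seed(42)
TRUE_MEAN = 250.0
SIGMA = 10.0

def corrective_factor(n):
    """One realization of B = (sample mean) / (true mean) for n measurements."""
    xbar = statistics.fmean(random.gauss(TRUE_MEAN, SIGMA) for _ in range(n))
    return xbar / TRUE_MEAN

def var_of_B(n, trials=20_000):
    """Empirical Var[B] over many repeated experiments of size n."""
    return statistics.variance([corrective_factor(n) for _ in range(trials)])

for n in (5, 20, 80):
    predicted = SIGMA**2 / (n * TRUE_MEAN**2)   # Var[X] / (n X^2)
    print(f"n={n:3d}  empirical={var_of_B(n):.2e}  predicted={predicted:.2e}")
```

Increasing the number of measurements per experiment shrinks the variance of $$B$$ in direct proportion to $$1/n$$, which is why this source of variability is the easiest to control.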

The second source of variability stems from the inaccuracies and uncertainties of the model used to calculate such parameters. These include the physical models we use to understand loading and their associated effects in materials. The uncertainty from the model of a physical measurable can be determined if both theoretical values according to the model and experimental results are available.

The measured value $$\hat H(\omega)$$ is equivalent to the theoretical model prediction $$H(\omega)$$ multiplied by a model error of $$\phi(\omega)$$, plus the experimental error $$\varepsilon(\omega)$$. Equivalently,

$$\hat H(\omega) = H(\omega) \phi(\omega) + \varepsilon(\omega)$$

and the model error takes the general form:

$$\phi(\omega) = \sum_{i = 0}^n a_i \omega^{i}$$

where $$a_i$$ are coefficients of regression determined from experimental data.
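
The regression above can be sketched in Python. The frequency points, theoretical values, and measurements below are invented for illustration, and the model error is fitted as a first-degree polynomial $$\phi(\omega) = a_0 + a_1 \omega$$ by ordinary least squares on the ratio of measured to theoretical values.

```python
# Hypothetical illustration: a measured response H_hat deviates from a
# theoretical model H by a smooth model-error factor phi(w). We recover
# phi(w) = a0 + a1*w by least squares on the ratio H_hat/H.
omegas   = [1.0, 2.0, 3.0, 4.0, 5.0]    # frequencies (assumed)
H_theory = [10.0, 8.0, 6.5, 5.5, 4.8]   # model predictions H(w) (assumed)
H_meas   = [10.6, 8.8, 7.4, 6.5, 5.9]   # measurements H_hat(w) (assumed)

ratios = [m / t for m, t in zip(H_meas, H_theory)]  # estimates of phi(w)

# Ordinary least squares, closed form for a straight-line fit
n = len(omegas)
sw  = sum(omegas)
sr  = sum(ratios)
sww = sum(w * w for w in omegas)
swr = sum(w * r for w, r in zip(omegas, ratios))
a1 = (n * swr - sw * sr) / (n * sww - sw * sw)
a0 = (sr - a1 * sw) / n

print(f"phi(w) = {a0:.3f} + {a1:.3f} w")
```

Higher-degree fits follow the same pattern; in practice the degree is kept low to avoid fitting the experimental noise $$\varepsilon(\omega)$$ into the model error.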

Finally, the last source of variability comes from the intrinsic variability of any physical measurable. A fundamental random uncertainty is associated with all physical phenomena, and of the three sources it is the most difficult to minimize. Thus, each physical variable and measurable quantity can be represented as a random variable with a mean and a variance.

Comparison to classical design principles
Consider the classical approach to performing tensile testing in materials. The stress experienced by a material is given as a single value (i.e., the applied force divided by the cross-sectional area perpendicular to the loading axis). The yield stress, the stress at which the material begins to deform plastically, is also given as a single value. Under this approach, there is a 0% chance of material failure below the yield stress and a 100% chance of failure above it. However, these assumptions break down in the real world. The yield stress of a material is often known only to a certain precision, meaning that there is an uncertainty, and therefore a probability distribution, associated with the known value. Let the probability density function of the yield strength be given as $$f(R)$$.

Similarly, the applied or predicted load can be known only to a certain precision, so the stress the material will actually experience is likewise uncertain. Let its probability density function be given as $$f(S)$$.

The probability of failure is the probability that the applied stress exceeds the strength, i.e., the probability mass in the region of overlap (interference) between the two distributions. For independent $$R$$ and $$S$$, mathematically:

$$P_f = P(R<S)= \int\limits_{-\infty}^{\infty}\int\limits_{R}^{\infty}f(R)f(S)\,dS\,dR$$

or equivalently, if we let the difference between the yield stress and the applied load be a third random variable $$R-S = Q$$, then:

$$P_f = \int\limits_{-\infty}^{\infty}\int\limits_{R}^{\infty}f(R)f(S)\,dS\,dR = \int\limits_{-\infty}^{0} f(Q)\,dQ$$

where, for independent $$R$$ and $$S$$, the variance of the difference $$Q$$ is given by $$\sigma_Q^{2} = \sigma_R^{2}+ \sigma_S^{2}$$.
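
A minimal sketch of this stress-strength calculation, assuming (hypothetically) that strength and load are independent normal random variables, compares the closed-form $$P_f$$ with a Monte Carlo estimate. The means and standard deviations below are invented for the example.

```python
import math
import random

# Assumed, for illustration only: R ~ N(300, 20^2) MPa, S ~ N(250, 25^2) MPa
random.seed(0)
MU_R, SIG_R = 300.0, 20.0
MU_S, SIG_S = 250.0, 25.0

# Closed form: Q = R - S is normal with mean MU_R - MU_S and variance
# SIG_R^2 + SIG_S^2, so P_f = P(Q < 0) = Phi(-mu_Q / sigma_Q).
mu_q = MU_R - MU_S
sigma_q = math.sqrt(SIG_R**2 + SIG_S**2)
p_exact = 0.5 * math.erfc(mu_q / (sigma_q * math.sqrt(2)))

# Monte Carlo estimate of P(R < S)
TRIALS = 200_000
fails = sum(random.gauss(MU_R, SIG_R) < random.gauss(MU_S, SIG_S)
            for _ in range(TRIALS))
p_mc = fails / TRIALS

print(f"closed form P_f = {p_exact:.4f}, Monte Carlo P_f = {p_mc:.4f}")
```

For non-normal or correlated distributions the closed form no longer applies, but the Monte Carlo estimate works unchanged, which is one reason sampling methods are so widely used in probabilistic design.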

Probabilistic design principles thus permit a quantitative determination of the failure probability, whereas the classical model assumes that no failure can occur below the yield strength. Because the classical applied-load versus yield-stress model has these limitations, modeling the variables with probability distributions and computing a failure probability is the more informative approach: it associates a quantitative probability with failure under any loading condition, in place of a definitive yes or no.

Methods used to determine variability
In essence, probabilistic design focuses upon the prediction of the effects of variability. In order to be able to predict and calculate variability associated with model uncertainty, many methods have been devised and utilized across different disciplines to determine theoretical values for parameters such as stress and strain. Examples of theoretical models used alongside probabilistic design include:


 * Finite element analysis
 * Stochastic finite element method
 * Boundary element method
 * Meshfree methods
 * Analytical methods (refer to classical design principles)

Additionally, there are many statistical methods used to quantify and predict the random variability in the desired measurable. Some methods that are used to predict the random variability of an output include:


 * the Monte Carlo method (including Latin hypercube sampling)
 * propagation of error
 * design of experiments (DOE)
 * the method of moments
 * statistical interference
 * quality function deployment
 * failure mode and effects analysis
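
As a small illustration, the sketch below applies two of these methods, first-order propagation of error and plain Monte Carlo sampling, to a simple measurable: the stress $$\sigma = F/A$$. The load and cross-sectional area statistics are assumed values chosen for the example.

```python
import math
import random

# Assumed input statistics, for illustration only
random.seed(1)
MU_F, SIG_F = 1000.0, 50.0     # load F in N
MU_A, SIG_A = 0.002, 0.0001    # area A in m^2

# First-order propagation of error: for sigma = F/A with independent
# inputs, (sig_sigma/sigma)^2 ~= (sig_F/F)^2 + (sig_A/A)^2.
mean_stress = MU_F / MU_A
prop_sd = mean_stress * math.sqrt((SIG_F / MU_F)**2 + (SIG_A / MU_A)**2)

# Monte Carlo: push random samples of F and A through the same relation
samples = [random.gauss(MU_F, SIG_F) / random.gauss(MU_A, SIG_A)
           for _ in range(100_000)]
mc_mean = sum(samples) / len(samples)
mc_sd = math.sqrt(sum((s - mc_mean)**2 for s in samples)
                  / (len(samples) - 1))

print(f"propagation: sd = {prop_sd:.0f} Pa;  Monte Carlo: sd = {mc_sd:.0f} Pa")
```

The two estimates agree closely here because the input spreads are small relative to their means; for strongly nonlinear relations or large variabilities, the first-order approximation degrades and sampling methods become the more reliable choice.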