Line sampling

Line sampling is a method used in reliability engineering to compute small (i.e., rare event) failure probabilities encountered in engineering systems. The method is particularly suitable for high-dimensional reliability problems in which the performance function exhibits moderate non-linearity with respect to the uncertain parameters. The method is suitable for analyzing black box systems, and unlike the importance sampling method of variance reduction, does not require detailed knowledge of the system.

The basic idea behind line sampling is to refine estimates obtained from the first-order reliability method (FORM), which may be incorrect due to the non-linearity of the limit state function. Conceptually, this is achieved by averaging the result of different FORM simulations. In practice, this is made possible by identifying the importance direction $$\boldsymbol \alpha$$ in the input parameter space, which points towards the region that contributes most strongly to the overall failure probability. The importance direction can be closely related to the center of mass of the failure region, or to the failure point with the highest probability density, which, once the random variables of the problem have been transformed into the standard normal space, often coincides with the point on the limit state surface closest to the origin. Once the importance direction has been set to point towards the failure region, samples are randomly generated in the standard normal space and lines are drawn through them parallel to the importance direction in order to compute the distance to the limit state function, which enables the probability of failure to be estimated for each sample. These failure probabilities can then be averaged to obtain an improved estimate.

Mathematical approach
First, the importance direction must be determined. This can be achieved by finding the design point, or by computing the gradient of the limit state function.
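The gradient-based choice can be sketched in a few lines of Python. The performance function `g` below is a hypothetical linear example (failure when `g(x) <= 0`); the finite-difference step and the use of the origin of the standard normal space as the expansion point are assumptions of this sketch, not part of the method's definition.

```python
import math

def importance_direction(g, dim, h=1e-6):
    """Estimate the importance direction as the negative, normalized
    gradient of the performance function g at the origin of the
    standard normal space, using central finite differences."""
    grad = []
    for i in range(dim):
        xp = [0.0] * dim; xp[i] += h
        xm = [0.0] * dim; xm[i] -= h
        grad.append((g(xp) - g(xm)) / (2 * h))
    norm = math.sqrt(sum(c * c for c in grad))
    # g decreases towards the failure region (g <= 0), so point along -grad
    return [-c / norm for c in grad]

# hypothetical linear performance function: failure when g(x) <= 0
g = lambda x: 3.0 - (x[0] + x[1]) / math.sqrt(2)
alpha = importance_direction(g, 2)
# for this g, alpha points along (1, 1) / sqrt(2)
```

For a linear performance function this recovers the exact FORM importance direction; for non-linear functions it is only a local approximation at the chosen expansion point.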

A set of samples is generated using Monte Carlo simulation in the standard normal space. For each sample $$\boldsymbol x$$, the probability of failure along the line parallel to the importance direction is defined as:



$$p_f(\boldsymbol x) = \int_{-\infty}^{+\infty} I(\boldsymbol x + \beta \cdot \boldsymbol \alpha)\varphi (\beta ) \, d\beta $$

where the indicator function $$I(\cdot)$$ is equal to one for samples that fall in the failure domain $$\Omega_f$$, and is zero otherwise:



$$I(\boldsymbol x) = \begin{cases} 1 & \text{if } \boldsymbol x \in \Omega_f  \\ 0 & \text{else} \end{cases} $$

$$\boldsymbol \alpha$$ is the importance direction, $$\varphi$$ is the probability density function of the standard Gaussian distribution, and $$\beta$$ is a real number parameterizing the position along the line. In practice the roots of a nonlinear function must be found to estimate the partial probabilities of failure along each line. This is done either by interpolating a few samples along the line, or by using the Newton–Raphson method.
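The root-finding step can be sketched as follows. This sketch makes several simplifying assumptions: it uses bisection rather than Newton–Raphson, assumes a single crossing of the limit state within a fixed bracket, and projects the sample onto the hyperplane orthogonal to $$\boldsymbol \alpha$$ so that the per-line probability reduces to $$\Phi(-\beta^*)$$, where $$\beta^*$$ is the root; the linear performance function `g` is hypothetical.

```python
import math

Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2)))  # standard normal CDF

def line_failure_probability(g, x, alpha, lo=-10.0, hi=10.0, tol=1e-10):
    """Partial failure probability along one line: find the root beta* of
    g(x_perp + beta * alpha) = 0 by bisection and return Phi(-beta*).
    Assumes exactly one sign change of g on [lo, hi] (safe side at lo)."""
    # project the sample onto the hyperplane orthogonal to alpha,
    # so beta measures standard-normal distance along the line
    proj = sum(xi * ai for xi, ai in zip(x, alpha))
    x_perp = [xi - proj * ai for xi, ai in zip(x, alpha)]
    along = lambda beta: g([xp + beta * ai for xp, ai in zip(x_perp, alpha)])
    a, b = lo, hi
    while b - a > tol:
        m = 0.5 * (a + b)
        if along(a) * along(m) <= 0:
            b = m
        else:
            a = m
    beta_star = 0.5 * (a + b)
    return Phi(-beta_star)

# hypothetical linear limit state: every line crosses it at beta* = 3
alpha = [1 / math.sqrt(2), 1 / math.sqrt(2)]
g = lambda x: 3.0 - (x[0] + x[1]) / math.sqrt(2)
p = line_failure_probability(g, [0.5, -1.2], alpha)
# for this linear g, p equals Phi(-3) regardless of the sample
```

For this linear example every line yields the same partial probability, so a single line already gives the exact answer; for non-linear limit states the roots, and hence the partial probabilities, vary from line to line.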

The global probability of failure is the mean of the probability of failure on the lines:



$$\tilde{p}_f = \frac{1}{N_L} \sum_{i=1}^{N_L} p_f^{(i)} $$

where $$N_L$$ is the total number of lines used in the analysis and $$p_f^{(i)}$$ is the partial probability of failure estimated along the $$i$$-th line.
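The full estimator can be put together in a short, self-contained Python sketch. As above, it uses bisection with a fixed bracket as the root finder and a hypothetical linear limit state whose exact failure probability is $$\Phi(-3)$$; these are illustrative assumptions, not prescriptions of the method.

```python
import math
import random

Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2)))  # standard normal CDF

def line_sampling(g, alpha, n_lines=100, seed=1):
    """Line sampling estimate of the failure probability: average the
    per-line probabilities Phi(-beta_i*) over n_lines random lines
    parallel to the importance direction alpha."""
    rng = random.Random(seed)
    dim = len(alpha)
    acc = 0.0
    for _ in range(n_lines):
        x = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        # project the sample orthogonally to alpha
        proj = sum(xi * ai for xi, ai in zip(x, alpha))
        x_perp = [xi - proj * ai for xi, ai in zip(x, alpha)]
        along = lambda b: g([xp + b * ai for xp, ai in zip(x_perp, alpha)])
        lo, hi = -10.0, 10.0  # bracket assumed to contain a single root
        while hi - lo > 1e-10:
            mid = 0.5 * (lo + hi)
            if along(lo) * along(mid) <= 0:
                hi = mid
            else:
                lo = mid
        acc += Phi(-0.5 * (lo + hi))  # partial failure probability
    return acc / n_lines

# hypothetical linear limit state: exact failure probability is Phi(-3)
alpha = [1 / math.sqrt(2), 1 / math.sqrt(2)]
g = lambda x: 3.0 - (x[0] + x[1]) / math.sqrt(2)
p_hat = line_sampling(g, alpha)
```

Because the limit state here is linear and aligned with $$\boldsymbol \alpha$$, every line contributes the same value and the estimator has zero variance; a crude Monte Carlo estimate of a probability of order $$10^{-3}$$ would instead need on the order of $$10^5$$ samples for comparable accuracy.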

For problems in which the performance function is only moderately non-linear with respect to the parameters modeled as random variables, setting the importance direction to the gradient vector of the performance function in the underlying standard normal space leads to highly efficient line sampling. In general it can be shown that the variance obtained by line sampling is always smaller than that obtained by conventional Monte Carlo simulation, and hence the line sampling algorithm converges more quickly. Convergence is accelerated further by recent developments that allow the importance direction to be updated repeatedly during the simulation; this variant is known as adaptive line sampling.



Industrial application
The algorithm is particularly useful for performing reliability analysis on computationally expensive industrial black box models, since the limit state function can be non-linear and the number of samples required is lower than for other reliability analysis techniques such as subset simulation. The algorithm can also be used to efficiently propagate epistemic uncertainty in the form of probability boxes, or random sets. A numerical implementation of the method is available in the open source software OpenCOSSAN.