Interval predictor model

In regression analysis, an interval predictor model (IPM) is an approach to regression that yields bounds on the function to be approximated. This differs from other techniques in machine learning, where one usually wishes to estimate point values or an entire probability distribution. Interval predictor models are sometimes referred to as a nonparametric regression technique, because a potentially infinite set of functions is contained by the IPM, and no specific distribution is implied for the regressed variables.

Multiple-input multiple-output IPMs for multi-point data, commonly used to represent functions, have recently been developed. These IPMs prescribe the parameters of the model as a path-connected, semi-algebraic set using sliced-normal or sliced-exponential distributions. A key advantage of this approach is its ability to characterize complex parameter dependencies at varying fidelity levels, which enables the analyst to adjust the desired level of conservatism in the prediction.

As a consequence of the theory of scenario optimization, in many cases rigorous predictions can be made regarding the performance of the model at test time. Hence an interval predictor model can be seen as a guaranteed bound on quantile regression. Interval predictor models can also be seen as a way to prescribe the support of random predictor models, of which a Gaussian process is a specific case.

Convex interval predictor models
Typically the interval predictor model is created by specifying a parametric function, which is usually chosen to be the product of a parameter vector and a basis. The basis is commonly made up of polynomial features, although a radial basis is sometimes used. A convex set is then assigned to the parameter vector, and the size of the convex set is minimized subject to the constraint that every data point can be predicted by at least one possible value of the parameters. Ellipsoidal parameter sets were used by Campi (2009), which yield a convex optimization program to train the IPM. Crespo (2016) proposed the use of a hyperrectangular parameter set, which results in a convenient, linear form for the bounds of the IPM. Hence the IPM can be trained with a linear optimization program:

$$ \operatorname{arg\,min}_p \left\{\mathbb{E}_x(\overline{y}_p(x) - \underline{y}_p(x)) : \overline{y}_p(x^{(i)}) > y^{(i)} > \underline{y}_p(x^{(i)}),\; i=1,\ldots,N \right\}, $$ where the training data examples are $$y^{(i)}$$ and $$x^{(i)}$$, and the interval predictor model bounds $$\underline{y}_p(x)$$ and $$\overline{y}_p(x)$$ are parameterised by the parameter vector $$p$$. The reliability of such an IPM is obtained by noting that for a convex IPM the number of support constraints is less than the dimensionality of the trainable parameters, and hence the scenario approach can be applied.
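The linear program above can be sketched in code. The following is a minimal illustration, not an implementation from the cited works: it assumes a polynomial basis and parameterises the upper and lower bounds by two coefficient vectors, minimising the average interval width over the sample subject to the containment constraints.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy scenarios of an underlying function (assumed data, for illustration only)
N = 50
x = rng.uniform(-1.0, 1.0, N)
y = x**2 + 0.1 * rng.standard_normal(N)

# Polynomial basis phi(x) = [1, x, x^2] (an assumption; any basis works)
def phi(x):
    return np.stack([np.ones_like(x), x, x**2], axis=-1)

Phi = phi(x)                       # shape (N, d)
d = Phi.shape[1]

# Decision variables p = [p_upper, p_lower]; the bounds are
#   y_upper(x) = phi(x) @ p_upper   and   y_lower(x) = phi(x) @ p_lower.
# Objective: the sample estimate of E_x[y_upper(x) - y_lower(x)].
c = np.concatenate([Phi.mean(axis=0), -Phi.mean(axis=0)])

# Containment constraints for every scenario:
#   y_lower(x_i) <= y_i   and   y_i <= y_upper(x_i)
A_ub = np.block([[-Phi, np.zeros((N, d))],   # -Phi @ p_upper <= -y
                 [np.zeros((N, d)), Phi]])   #  Phi @ p_lower <=  y
b_ub = np.concatenate([-y, y])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * d))
p_upper, p_lower = res.x[:d], res.x[d:]

# Every training point lies inside the interval (up to solver tolerance)
assert np.all(Phi @ p_upper >= y - 1e-6)
assert np.all(Phi @ p_lower <= y + 1e-6)
```

Because both the objective and the constraints are linear in the parameters, any LP solver suffices; the hyperrectangular formulation of Crespo (2016) arrives at a program of this linear form.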

Lacerda (2017) demonstrated that this approach can be extended to situations where the training data is interval valued rather than point valued.

Non-convex interval predictor models
In Campi (2015) a non-convex theory of scenario optimization was proposed. This involves measuring the number of support constraints, $$S$$, of the interval predictor model after training, and hence making predictions about the reliability of the model. This enables non-convex IPMs to be created, such as a single-layer neural network. Campi (2015) demonstrates an algorithm, in which the scenario optimization program is solved only $$S$$ times, that can determine the reliability of the model at test time without prior evaluation on a validation set. This is achieved by solving the optimisation program

$$ \operatorname{arg\,min}_p \left\{h : |\hat{y}_p(x^{(i)}) - y^{(i)}| < h,\; i=1,\ldots,N\right\}, $$ where the interval predictor model center line is $$\hat{y}_p(x) = \frac{1}{2}(\overline{y}_p(x) + \underline{y}_p(x))$$ and the model width is $$h = \frac{1}{2}(\overline{y}_p(x) - \underline{y}_p(x))$$. This results in an IPM which makes predictions with homoscedastic uncertainty.
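When the center line is linear in its parameters, the program above reduces to a minimax (Chebyshev) fit that can again be solved as a linear program. The following sketch assumes a polynomial center line; the data and basis are illustrative, not from the cited works.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Toy scenarios (assumed data, for illustration only)
N = 40
x = rng.uniform(-1.0, 1.0, N)
y = np.sin(3 * x) + 0.05 * rng.standard_normal(N)

# Basis for the center line y_hat(x) = phi(x) @ p (polynomial, an assumption)
def phi(x):
    return np.stack([np.ones_like(x), x, x**2, x**3], axis=-1)

Phi = phi(x)
d = Phi.shape[1]

# Variables z = [p (d entries), h]; minimise h subject to
#   -h <= y_i - phi(x_i) @ p <= h   for every scenario i.
c = np.zeros(d + 1)
c[-1] = 1.0
A_ub = np.block([[ Phi, -np.ones((N, 1))],   #  Phi @ p - h <=  y
                 [-Phi, -np.ones((N, 1))]])  # -Phi @ p - h <= -y
b_ub = np.concatenate([y, -y])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (d + 1))
p, h = res.x[:d], res.x[-1]

# Every training point lies within +/- h of the center line
assert np.all(np.abs(Phi @ p - y) <= h + 1e-6)
```

The optimal $$h$$ is the half-width of the resulting interval, which is constant in $$x$$; this is the homoscedastic uncertainty referred to above. With a neural-network center line the same program becomes non-convex, which is the setting Campi (2015) addresses.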

Sadeghi (2019) demonstrates that the non-convex scenario approach of Campi (2015) can be extended to train deeper neural networks which predict intervals with heteroscedastic uncertainty on datasets with imprecision. This is achieved by proposing generalizations of the max-error loss function given by

$$ \mathcal{L}_{\text{max-error}} = \max_i |y^{(i)}-\hat{y}_p(x^{(i)})|, $$ which is equivalent to solving the optimisation program proposed by Campi (2015).
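With a neural-network center line the max-error loss is non-convex in the parameters, and minimising it directly is one way to train such an IPM. The sketch below is a hypothetical toy setup, not the training procedure of the cited works: a tiny single-hidden-layer network is fitted by a derivative-free optimiser, and the achieved loss value is the half-width $$h$$ of the resulting homoscedastic interval.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Toy scenarios (assumed data, for illustration only)
N = 30
x = rng.uniform(-1.0, 1.0, N)
y = np.tanh(2 * x) + 0.05 * rng.standard_normal(N)

# A tiny single-hidden-layer network as the (non-convex) center line.
# Parameter layout: w1 (3), b1 (3), w2 (3), b2 (1) -> 10 parameters.
def predict(theta, x):
    w1, b1, w2, b2 = theta[:3], theta[3:6], theta[6:9], theta[9]
    hidden = np.tanh(np.outer(x, w1) + b1)   # shape (N, 3)
    return hidden @ w2 + b2

# The max-error loss from the text: L = max_i |y_i - y_hat(x_i)|
def max_error_loss(theta):
    return np.max(np.abs(y - predict(theta, x)))

# Nelder-Mead copes with the nonsmooth max; a sketch, not a robust trainer.
res = minimize(max_error_loss, rng.standard_normal(10), method="Nelder-Mead",
               options={"maxiter": 5000})
h = res.fun   # half-width of the trained homoscedastic interval

# By construction every training point lies within +/- h of the center line
assert np.all(np.abs(y - predict(res.x, x)) <= h + 1e-9)
```

In practice, Sadeghi (2019) considers deeper networks and generalized losses that allow the interval width to vary with the input, yielding heteroscedastic rather than constant-width predictions.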

Applications
Initially, scenario optimization was applied to robust control problems.

Crespo (2015) and (2021) applied Interval Predictor Models to the design of space radiation shielding and to system identification.

In Patelli (2017), Faes (2019), and Crespo (2018), interval predictor models were applied to the structural reliability analysis problem. Brandt (2017) applies interval predictor models to fatigue damage estimation of offshore wind turbine jacket substructures.

Garatti (2019) proved that Chebyshev layers (i.e., the minimax layers around functions fitted by linear $$\ell_\infty$$-regression) belong to a particular class of Interval Predictor Models, for which the reliability is invariant with respect to the distribution of the data.

Software implementations
OpenCOSSAN provides a MATLAB implementation of the work of Crespo (2015).