Stability is a principle in computational learning theory used to assess how well a machine learning algorithm generalizes. A stable learning algorithm is one whose output does not change much when the training set is slightly modified, for example when a single training example is removed or replaced.
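As a concrete illustration (a minimal sketch, not part of the article's formal treatment), the Python snippet below fits ridge regression on a training set and again on a copy with one example replaced, then measures how much the resulting predictions move. Ridge regression is a standard example of a stable algorithm, so the change is small; the function names and the regularization strength are illustrative choices.

```python
# Illustrative sketch (assumption: ridge regression as the learning algorithm).
# We train on a set S and on S' (S with one example replaced) and compare outputs.
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Ridge-regression weights: (X^T X + lam * I)^{-1} X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(0)
n, d = 100, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# S': identical to S except for the first training example.
X2, y2 = X.copy(), y.copy()
X2[0], y2[0] = rng.normal(size=d), rng.normal()

w, w2 = ridge_fit(X, y), ridge_fit(X2, y2)

# Stability in action: predictions on fresh test points barely change.
X_test = rng.normal(size=(1000, d))
print("max prediction change:", np.max(np.abs(X_test @ (w - w2))))
```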

Stability analysis is a mathematical technique for guaranteeing that learning algorithms generalize: that they perform accurately on new, unseen examples after being trained on a finite number of examples, rather than merely overfitting the training data. Stability analysis applies mathematical sensitivity analysis to a learning algorithm with respect to its training data, in order to obtain theoretical generalization bounds.
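One standard way to formalize this sensitivity analysis (sketched here with notation chosen for illustration) is uniform stability: an algorithm that produces hypothesis $$f_S$$ from training set $$S$$ is uniformly stable with rate $$\beta$$ if removing any single training example changes its loss $$V$$ on every point $$z$$ by at most $$\beta$$,

$$\sup_{z}\left|V(f_S, z) - V(f_{S^{\setminus i}}, z)\right| \le \beta \quad \text{for all } S \text{ and all } i,$$

where $$S^{\setminus i}$$ denotes $$S$$ with its $$i$$-th example removed. When $$\beta$$ decreases on the order of $$1/n$$ in the number of training examples $$n$$, this kind of stability can be converted into generalization bounds.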

History
The first prominently used method for finding generalization bounds of a learning algorithm was to prove that the algorithm was consistent, using the uniform convergence of empirical quantities to their means. This method could be used to obtain generalization bounds for algorithms where the hypothesis space $$H$$ of possible solutions is bounded and known. Uniform convergence was used to obtain generalization bounds for a large class of learning algorithms known as Empirical Risk Minimization (ERM) algorithms. A general result, proved by Vladimir Vapnik, is that for any target function and input distribution, any hypothesis space $$H$$ with VC dimension $$d$$, and $$n$$ training examples, an ERM classification algorithm is consistent and will produce a training error that is at most $$O(\sqrt{d/n})$$ (up to logarithmic factors) from the true error. This result was later extended to almost-ERM algorithms with function classes that do not have unique minimizers.
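For orientation, one common form of the uniform-convergence bound behind this result (constants and logarithmic factors vary across sources, so treat this as a representative statement rather than the exact theorem) says that with probability at least $$1-\delta$$, every hypothesis $$f \in H$$ satisfies

$$R(f) \le R_{\mathrm{emp}}(f) + \sqrt{\frac{d\left(\ln\tfrac{2n}{d} + 1\right) + \ln\tfrac{4}{\delta}}{n}},$$

where $$R$$ is the true error and $$R_{\mathrm{emp}}$$ the training error; the deviation term is $$O(\sqrt{d/n})$$ up to logarithmic factors.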

Uniform convergence techniques, however, could not be applied to algorithms whose hypothesis spaces have unbounded VC dimension; stability analysis was developed as an alternative route to generalization bounds for such algorithms.