
Error Tolerance (PAC learning)
(Intending to add this as a section of the PAC page rather than as its own, kept separate for now because multiple people are working on that one.)

Classification Noise
In the classification noise model, a machine learning algorithm is provided a set of bit string examples with one-bit labels. The examples themselves are undisturbed, but each label is independently flipped with probability η. The parameter η is called the classification noise rate.
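As a concrete illustration, here is a minimal sketch of a noisy example oracle in Python. The concept `first_bit`, the uniform distribution over 5-bit strings, and the helper name `make_noisy_oracle` are all hypothetical choices for the example, not part of the model's definition:

```python
import random

def make_noisy_oracle(concept, draw, eta, rng):
    """Example oracle with classification noise rate eta.

    concept: the target function c, mapping a bit string to a 0/1 label
    draw:    a zero-argument function sampling a bit string from D
    eta:     probability that the returned label is flipped
    """
    def oracle():
        x = draw()
        label = concept(x)
        if rng.random() < eta:   # with probability eta, report the wrong label
            label ^= 1
        return x, label
    return oracle

# Hypothetical target concept: the first bit of the example.
first_bit = lambda x: x[0]

rng = random.Random(0)
draw = lambda: tuple(rng.randint(0, 1) for _ in range(5))
oracle = make_noisy_oracle(first_bit, draw, eta=0.2, rng=rng)

samples = [oracle() for _ in range(10000)]
flipped = sum(1 for x, y in samples if y != first_bit(x))
# flipped / 10000 concentrates around eta = 0.2
```

Note that the examples `x` are drawn from the true distribution; only the labels are corrupted.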

Let η_b be a known upper bound on the noise rate, with η ≤ η_b < 1/2. If a PAC learning algorithm L receives samples of length n labeled according to a concept in class C (with labels corrupted at rate η), outputs with probability at least 1-δ a concept in class H with error at most ε, and runs in time polynomial in n, 1/ε, 1/δ, and 1/(1-2η_b), then C is efficiently PAC learnable using H in the presence of classification noise.

Statistical Query Learning
Statistical Query (SQ) learning is a query-based learning model in which the learning algorithm, rather than seeing labeled examples directly, can request the probability that a binary predicate χ is satisfied on a labeled example and receive an answer accurate to within a tolerance τ. A concept class C is efficiently learnable from statistical queries using H if there exists a learning algorithm L such that for any sample distribution, any concept c in C that is most succinctly expressed in size(c) bits, and any error rate 0<ε<1/2: every query χ can be evaluated in time polynomial in 1/ε, n, and size(c); the inverse tolerance 1/τ of every query is bounded by a polynomial in 1/ε, n, and size(c); the run time of L is bounded by a polynomial in 1/ε, n, and size(c); and L outputs a hypothesis in H with error at most ε.
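One way to see why SQ answers are realizable is that the oracle can be simulated by sampling: a Hoeffding bound says that averaging χ over roughly ln(2/δ)/(2τ²) labeled examples estimates the query probability to within τ with high probability. The sketch below assumes a hypothetical setup (uniform 5-bit strings, target concept = first bit, failure probability delta=0.01); the function names are illustrative, not standard API:

```python
import math
import random

def sq_oracle(chi, concept, draw, tau, delta=0.01):
    """Estimate P[chi(x, c(x)) = 1] to within tolerance tau, w.p. >= 1 - delta.

    Hoeffding's inequality gives a sufficient sample size
    m = ceil(ln(2/delta) / (2 * tau^2)).
    """
    m = math.ceil(math.log(2 / delta) / (2 * tau ** 2))
    hits = 0
    for _ in range(m):
        x = draw()
        hits += chi(x, concept(x))
    return hits / m

# Hypothetical setup: uniform 5-bit strings, target concept = first bit.
rng = random.Random(0)
draw = lambda: tuple(rng.randint(0, 1) for _ in range(5))
concept = lambda x: x[0]

# Query: "is the example labeled 1?"  The true answer is P[x_0 = 1] = 0.5.
chi = lambda x, label: label
answer = sq_oracle(chi, concept, draw, tau=0.05)
# answer lies within tau of 0.5 with probability at least 1 - delta
```

The polynomial bound on 1/τ in the definition is what keeps this simulation efficient: the sample size grows as 1/τ².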

The statistical query model is strictly weaker than the PAC model: any efficiently SQ-learnable class is efficiently PAC learnable in the presence of classification noise, but there exist efficiently PAC-learnable classes, such as parity functions, that are not efficiently SQ-learnable.
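To make the parity example concrete, here is a sketch of how parity functions are PAC-learnable from noise-free examples: each labeled example is a linear equation over GF(2), and Gaussian elimination recovers a consistent parity. The helper name `gf2_solve` and the parameter choices (n=8 bits, 4n examples) are illustrative assumptions:

```python
import random

def gf2_solve(rows, labels):
    """Solve A s = b over GF(2) by Gaussian elimination.

    rows:   list of length-n 0/1 lists (the examples)
    labels: the corresponding parity labels
    Returns one consistent parity s, or None if the system is inconsistent.
    """
    n = len(rows[0])
    aug = [row[:] + [lab] for row, lab in zip(rows, labels)]
    pivots, r = [], 0
    for col in range(n):
        pivot = next((i for i in range(r, len(aug)) if aug[i][col]), None)
        if pivot is None:
            continue
        aug[r], aug[pivot] = aug[pivot], aug[r]
        for i in range(len(aug)):          # clear col from every other row
            if i != r and aug[i][col]:
                aug[i] = [a ^ b for a, b in zip(aug[i], aug[r])]
        pivots.append(col)
        r += 1
    if any(row[-1] for row in aug[r:]):    # 0 = 1 row: no parity fits
        return None
    s = [0] * n                            # free variables default to 0
    for i, col in enumerate(pivots):
        s[col] = aug[i][-1]
    return s

rng = random.Random(1)
n = 8
secret = [rng.randint(0, 1) for _ in range(n)]
parity = lambda x: sum(a & b for a, b in zip(secret, x)) % 2

examples = [[rng.randint(0, 1) for _ in range(n)] for _ in range(4 * n)]
labels = [parity(x) for x in examples]
s = gf2_solve(examples, labels)
# s labels every training example correctly; with this many random examples
# it almost surely equals the hidden parity
```

Under classification noise, however, a constant fraction of these equations are wrong, which defeats Gaussian elimination; this gap between the clean and noisy settings is exactly what the SQ lower bound for parity captures.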

Other Noise Models
The PAC learning model has also been extended to other error models, including malicious errors, noise on inputs but not labels, errors in online learning, and more.