
In machine learning, the sample complexity of a machine learning algorithm is, roughly speaking, the number of training samples needed for the algorithm to successfully learn a target function. More specifically, the sample complexity is the number of samples needed for the function returned by the algorithm to be within an arbitrarily small error of the best possible function, with probability arbitrarily close to 1.

There are two variants of sample complexity. The weak variant fixes a particular input-output distribution, and the strong variant takes the worst-case sample complexity over all input-output distributions. A natural question in statistical learning is to ask, for a given hypothesis space, whether the sample complexity is finite in the strong sense, that is, there is a bound on the number of samples needed so that an algorithm can approximately solve all possible learning problems on a particular input-output space no matter the distribution of data over that space. The No Free Lunch Theorem, discussed below, says that this is always impossible if the hypothesis space is not constrained.

Definition
Let X be a space which we call the input space, and Y be a space which we call the output space, and let Z denote the product $$X\times Y$$. For example, in the setting of binary classification, X is typically a finite-dimensional vector space and Y is the set $$\{-1,1\}$$.

Fix a hypothesis space $$\mathcal H$$ of functions $$f\colon X\to Y$$. A learning algorithm over $$\mathcal H$$ is a computable map from $$Z^*$$ to $$\mathcal H$$. In other words, it is an algorithm that takes as input a finite sequence of training samples and outputs a function from X to Y. Typical learning algorithms include empirical risk minimization and empirical risk minimization with Tikhonov regularization.
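
As an illustration of empirical risk minimization in this abstract setting, the following is a minimal sketch; the finite hypothesis space of threshold classifiers, the data-generating rule, and the 0-1 loss are hypothetical choices made for the example, not part of the definition.

```python
import numpy as np

def erm(samples, hypotheses, loss):
    """Empirical risk minimization: return the hypothesis with the
    smallest average loss on the training samples."""
    risks = [np.mean([loss(h(x), y) for x, y in samples]) for h in hypotheses]
    return hypotheses[int(np.argmin(risks))]

# Hypothetical finite hypothesis space: threshold classifiers on X = R, Y = {-1, 1}.
hypotheses = [lambda x, t=t: 1 if x >= t else -1 for t in np.linspace(-1, 1, 21)]
zero_one_loss = lambda y_pred, y: float(y_pred != y)

rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, size=100)
ys = np.where(xs >= 0.3, 1, -1)        # labels determined by a "true" threshold at 0.3
f_hat = erm(list(zip(xs, ys)), hypotheses, zero_one_loss)
```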

Fix a loss function $$V\colon Y\times Y\to\R_{\geq 0}$$, for example, the square loss $$ V(y,y')=(y-y')^2 $$. For a given distribution ρ on $$X\times Y$$, the expected risk of a function $$f\in\mathcal H$$ is


$$\mathcal E(f) := \mathbb E_\rho[V(f(x),y)] = \int_{X\times Y} V(f(x),y)\,d\rho(x,y).$$
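
For example, for the square loss the expected risk is minimized, over all measurable functions, by the regression function $$ f_\rho(x)=\int_Y y\,d\rho(y\mid x), $$ where $$ \rho(y\mid x) $$ denotes the conditional distribution of y given x.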

In our setting, we have $$f_n=A(S_n)$$, where A is a learning algorithm and $$S_n = ((x_1,y_1),\ldots,(x_n,y_n))\sim \rho^n$$ is a sequence of n samples drawn independently from ρ. Define the optimal risk $$ \mathcal E^*_\mathcal{H} = \underset{f \in \mathcal H}{\inf}\mathcal E(f). $$ Note that $$f_n$$ is a random variable, since it depends on the sample $$S_n$$ drawn from the distribution $$\rho^n$$. The algorithm A is called consistent if $$ \mathcal E(f_n) $$ converges in probability to $$ \mathcal E_\mathcal H^* $$; in other words, for all ε, δ > 0, there exists a positive integer N such that for all n ≥ N, we have $$ \Pr_{\rho^n}[\mathcal E(f_n) - \mathcal E^*_\mathcal{H}\geq\varepsilon]<\delta. $$ The sample complexity of A is then the minimum N for which this holds, as a function of ρ, ε, and δ; we write it as N(ρ, ε, δ) to emphasize this dependence. If A is not consistent, then we set N(ρ, ε, δ) = ∞. If there exists an algorithm for which N(ρ, ε, δ) is finite for all ρ and all ε, δ > 0, then we say that the hypothesis space $$ \mathcal H $$ is learnable.
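
This definition can be illustrated with a small simulation (a hypothetical setup chosen for concreteness, not part of the definition): ρ is the uniform distribution on [-1, 1] with labels determined by a fixed threshold, $$\mathcal H$$ is a finite family of threshold classifiers so that $$ \mathcal E^*_\mathcal{H} $$ is known exactly, and $$ \Pr_{\rho^n}[\mathcal E(f_n) - \mathcal E^*_\mathcal{H}\geq\varepsilon] $$ is estimated by repeated sampling.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: x is uniform on [-1, 1], y = 1 if x >= 0.3 else -1,
# and H is a finite set of threshold classifiers, so E*_H is known exactly.
thresholds = np.linspace(-1, 1, 21)
true_t = 0.3

def expected_risk(t):
    # For the 0-1 loss, E(f_t) is the probability that x falls between t and
    # the true threshold; the uniform density on [-1, 1] is 1/2.
    return abs(t - true_t) / 2.0

best_risk = min(expected_risk(t) for t in thresholds)   # E*_H

def erm_threshold(xs, ys):
    # Empirical risk minimization over the threshold classifiers.
    emp = [np.mean(np.where(xs >= t, 1, -1) != ys) for t in thresholds]
    return thresholds[int(np.argmin(emp))]

eps, trials = 0.05, 500
for n in [10, 50, 200]:
    failures = 0
    for _ in range(trials):
        xs = rng.uniform(-1, 1, size=n)
        ys = np.where(xs >= true_t, 1, -1)
        if expected_risk(erm_threshold(xs, ys)) - best_risk >= eps:
            failures += 1
    print(n, failures / trials)   # estimate of Pr[E(f_n) - E*_H >= eps]
```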

In words, the sample complexity N(ρ, ε, δ) defines the rate of consistency of the algorithm: given a desired accuracy ε and confidence δ, one needs to sample N(ρ, ε, δ) data points to guarantee that the risk of the output function is within ε of the best possible, with probability at least 1 - δ.

In probably approximately correct (PAC) learning, one is concerned with whether the sample complexity is polynomial, that is, whether N(ρ, ε, δ) is bounded by a polynomial in 1/ε and 1/δ. If N(ρ, ε, δ) is polynomial for some learning algorithm, then one says that the hypothesis space $$ \mathcal H $$ is PAC-learnable. Note that this is a stronger notion than being learnable.
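
For example, the following standard bound (stated here for illustration) shows that every finite hypothesis space is PAC-learnable: if $$ \mathcal H $$ is finite and the loss function takes values in [0, 1], then Hoeffding's inequality combined with a union bound shows that empirical risk minimization satisfies $$ N(\rho,\varepsilon,\delta)\leq\left\lceil\frac{2}{\varepsilon^2}\ln\frac{2|\mathcal H|}{\delta}\right\rceil $$ for every distribution ρ, which is polynomial in 1/ε and 1/δ.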

No Free Lunch Theorem
One can ask whether there exists a learning algorithm so that the sample complexity is finite in the strong sense, that is, there is a bound on the number of samples needed so that the algorithm can learn any distribution over the input-output space with a specified target error. More formally, one asks whether there exists a learning algorithm A such that, for all ε, δ > 0, there exists a positive integer N such that for all n ≥ N, we have $$ \sup_\rho\left(\Pr_{\rho^n}[\mathcal E(f_n) - \mathcal E^*_\mathcal{H}\geq\varepsilon]\right)<\delta, $$ where $$f_n=A(S_n)$$, with $$S_n = ((x_1,y_1),\ldots,(x_n,y_n))\sim \rho^n$$ as above. The No Free Lunch Theorem says that without restrictions on the hypothesis space $$\mathcal H$$, this is not the case: there always exist "bad" distributions for which the sample complexity is arbitrarily large. Thus, in order to make statements about the rate of convergence of the quantity $$ \sup_\rho\left(\Pr_{\rho^n}[\mathcal E(f_n) - \mathcal E^*_\mathcal{H}\geq\varepsilon]\right), $$ one must either
 * constrain the space of probability distributions $$\rho$$, e.g. via a parametric approach, or
 * constrain the space of hypotheses $$\mathcal H$$, as in distribution-free approaches.

The latter approach leads to concepts such as VC dimension and Rademacher complexity, which control the complexity of the space $$\mathcal H$$. A smaller hypothesis space introduces more bias into the inference process, meaning that $$\mathcal E^*_\mathcal{H}$$ may be greater than the best possible risk in a larger space. However, by restricting the complexity of the hypothesis space it becomes possible for an algorithm to produce more uniformly consistent functions. This trade-off leads to the concept of regularization.

It is a theorem from VC theory that the following three statements are equivalent for a hypothesis space $$\mathcal H$$:
 * 1) $$\mathcal H$$ is PAC-learnable.
 * 2) The VC dimension of $$\mathcal H$$ is finite.
 * 3) $$\mathcal H$$ is a uniform Glivenko-Cantelli class.

This gives a nice way to prove that certain hypothesis spaces are PAC-learnable, and by extension, learnable.
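
A quantitative version of the equivalence between 1) and 2), stated here up to universal constants: if $$\mathcal H$$ has finite VC dimension d and the loss is the 0-1 loss, then empirical risk minimization achieves $$ N(\rho,\varepsilon,\delta)=O\!\left(\frac{d+\ln(1/\delta)}{\varepsilon^2}\right) $$ uniformly over all distributions ρ, which is polynomial in 1/ε and 1/δ.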

An example of a PAC-learnable hypothesis space
Let $$X = \R^d$$, $$Y = \{-1, 1\}$$, and let $$\mathcal H$$ be the space of affine classifiers on X, that is, functions of the form $$x\mapsto \operatorname{sign}(\langle w,x\rangle+b)$$ for some $$w\in\R^d, b\in\R$$. This is the problem of linear classification with offset. For intuition, consider the case d = 2: four points at the corners of a square cannot be shattered by any affine classifier, since no affine function can be positive on two diagonally opposite vertices and negative on the remaining two; in fact, by Radon's theorem, no set of four points in the plane can be shattered, so the VC dimension of $$\mathcal H$$ is 3 when d = 2. In general, the VC dimension of $$\mathcal H$$ is d + 1, in particular finite. It follows by the above characterization of PAC-learnable classes that $$\mathcal H$$ is PAC-learnable, and by extension, learnable.
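
The shattering argument in the plane can be checked numerically. The following is a minimal sketch (not part of the original argument): for a given finite set of points in $$\R^2$$, it tests whether every labeling can be realized by an affine classifier, deciding linear separability of each labeling with a linear-programming feasibility test.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def shatterable(points):
    """Check by brute force whether every labeling of the given points
    can be realized by an affine classifier x -> sign(<w, x> + b)."""
    points = np.asarray(points, dtype=float)
    dim = points.shape[1] + 1          # decision variables: (w, b)
    for labels in itertools.product([-1.0, 1.0], repeat=len(points)):
        y = np.array(labels)
        # Feasibility of y_i * (<w, x_i> + b) >= 1 is a linear program in (w, b),
        # rewritten as -y_i * [x_i, 1] . (w, b) <= -1.
        A_ub = -y[:, None] * np.hstack([points, np.ones((len(points), 1))])
        b_ub = -np.ones(len(points))
        res = linprog(c=np.zeros(dim), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * dim, method="highs")
        if not res.success:            # this labeling cannot be realized
            return False
    return True

print(shatterable([(0, 0), (1, 0), (0, 1)]))           # True: three points in general position
print(shatterable([(0, 0), (1, 0), (0, 1), (1, 1)]))   # False: the corners of a square
```

Note that the failure of one particular four-point set does not by itself bound the VC dimension, which requires that no four-point set can be shattered; the check above simply illustrates the square example from the text.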

Other settings
In addition to the supervised learning setting, sample complexity is relevant to semi-supervised learning problems including active learning, where the algorithm can request labels for specifically chosen inputs in order to reduce the cost of obtaining many labels. The concept of sample complexity also shows up in reinforcement learning, online learning, and unsupervised learning, e.g. in dictionary learning.