
Linear SVM
We're given a training dataset of $$n$$ points, which are of the form
 * $$ (\vec{x}_1, y_1),\, \ldots ,\, (\vec{x}_n, y_n)$$

where the $$y_i$$ are either 1 or −1, each indicating the class to which the point $$\vec{x}_i $$ belongs. Each $$ \vec{x}_i $$ is a $$p$$-dimensional real vector. We want to find the "maximum-margin hyperplane" that divides the points $$\vec{x}_i$$ for which $$y_i=1$$, and those for which $$y_i=-1$$, which is defined so that the distance between the hyperplane and the nearest point $$\vec{x}_i$$ is maximized.

Any hyperplane can be written as the set of points $$\vec{x}$$ satisfying
 * $$\vec{w}\cdot\vec{x} - b=0,\,$$

[Figure: the maximum-margin separating hyperplane and its margin]

where $${\vec{w}}$$ is the (not necessarily normalized) normal vector to the hyperplane. The parameter $$\tfrac{b}{\|\vec{w}\|}$$ determines the offset of the hyperplane from the origin along the normal vector $${\vec{w}}$$.

Hard-margin
If the training data are linearly separable, we select two hyperplanes that separate the two classes of data, so that the distance between them is as large as possible. The region bounded by the two hyperplanes is called "the margin". These hyperplanes can be described by the equations
 * $$\vec{w}\cdot\vec{x} - b=1\,$$

and
 * $$\vec{w}\cdot\vec{x} - b=-1.\,$$

Geometrically, the distance between these two hyperplanes is $$\tfrac{2}{\|\vec{w}\|}$$, so to maximize the distance between the planes we want to minimize $$\|\vec{w}\|$$. As we also have to prevent data points from falling into the margin, we add the following constraint: for each $$i$$ either
 * $$\vec{w}\cdot\vec{x}_i - b \ge 1, $$          if $$y_i = 1  $$

or
 * $$\vec{w}\cdot\vec{x}_i - b \le -1, $$     if $$y_i = -1. $$

These constraints state that each data point must lie on the correct side of the margin.

This can be rewritten as:
 * $$y_i(\vec{w}\cdot\vec{x}_i - b) \ge 1, \quad \text{ for all } 1 \le i \le n.\qquad\qquad(1)$$
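
To see why the distance between the two hyperplanes is $$\tfrac{2}{\|\vec{w}\|}$$, take points $$\vec{x}_+$$ and $$\vec{x}_-$$ lying on the two hyperplanes, so that $$\vec{w}\cdot\vec{x}_+ - b = 1$$ and $$\vec{w}\cdot\vec{x}_- - b = -1$$. Subtracting the two equations and projecting onto the unit normal gives

$$\frac{\vec{w}}{\|\vec{w}\|}\cdot(\vec{x}_+ - \vec{x}_-) = \frac{2}{\|\vec{w}\|}.$$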

We can put this together to get the optimization problem: minimize $$\|\vec{w}\|$$ subject to $$y_i(\vec{w}\cdot\vec{x}_i - b) \ge 1$$ for $$i = 1,\,\ldots,\,n$$. The $$\vec w$$ and $$b$$ that solve this problem determine our classifier, $$ \vec{x} \mapsto \sgn(\vec{w} \cdot \vec{x} - b)$$.

An easy-to-see but important consequence of this geometric description is that the max-margin hyperplane is completely determined by those $$ \vec{x}_i$$ that lie nearest to it. These $$ \vec{x}_i$$ are called support vectors.

Soft-margin
To extend SVM to cases in which the data are not linearly separable, we introduce the hinge loss function,

$$\max\left(0, 1-y_i(\vec{w}\cdot\vec{x}_i + b)\right).$$

This function is zero if the constraint in (1) is satisfied, in other words, if $$\vec{x}_i$$ lies on the correct side of the margin. For data on the wrong side of the margin, the function's value is proportional to the distance from the margin.

We then wish to minimize

$$\left[\frac 1 n \sum_{i=1}^n \max\left(0, 1 - y_i(w\cdot x_i + b)\right) \right] + \lambda\lVert w \rVert^2,$$

where the parameter $$\lambda$$ determines the tradeoff between increasing the margin-size and ensuring that the $$\vec{x}_i$$ lie on the correct side of the margin. Thus, for sufficiently small values of $$\lambda$$, the soft-margin SVM will behave identically to the hard-margin SVM if the input data are linearly classifiable, but will still learn a viable classification rule if not.
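
As a concrete illustration, the objective above is straightforward to evaluate; here is a minimal NumPy sketch (the function name and toy inputs are ours, not part of the original presentation):

```python
import numpy as np

def soft_margin_objective(w, b, X, y, lam):
    """Mean hinge loss plus L2 penalty: the soft-margin SVM objective."""
    margins = y * (X @ w + b)                # y_i (w . x_i + b), one entry per point
    hinge = np.maximum(0.0, 1.0 - margins)   # zero for points on the correct side of the margin
    return hinge.mean() + lam * np.dot(w, w)

# tiny usage example with two points, one per class
X = np.array([[2.0, 1.0], [-1.0, -2.0]])
y = np.array([1.0, -1.0])
print(soft_margin_objective(np.array([0.5, 0.5]), 0.0, X, y, lam=0.1))
```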

Nonlinear Classification
The original maximum-margin hyperplane algorithm proposed by Vapnik in 1963 constructed a linear classifier. However, in 1992, Bernhard E. Boser, Isabelle M. Guyon and Vladimir N. Vapnik suggested a way to create nonlinear classifiers by applying the kernel trick (originally proposed by Aizerman et al.) to maximum-margin hyperplanes. The resulting algorithm is formally similar, except that every dot product is replaced by a nonlinear kernel function. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. The transformation may be nonlinear and the transformed space high-dimensional; although the classifier is a hyperplane in the transformed feature space, it may be nonlinear in the original input space.

It is noteworthy that working in a higher-dimensional feature space increases the generalization error of support vector machines, although given enough samples the algorithm still performs well.

The kernel is related to the transform $$\varphi(\vec{x}_i)$$ by the equation $$k(\vec{x}_i, \vec{x}_j) = \varphi(\vec{x}_i)\cdot \varphi(\vec{x}_j)$$. The vector $$\vec{w}$$ also lives in the transformed space, with $$\textstyle\vec{w} = \sum_i \alpha_i y_i \varphi(\vec{x}_i)$$. Dot products with $$\vec{w}$$ for classification can again be computed by the kernel trick, i.e. $$\textstyle \vec{w}\cdot\varphi(\vec{x}) = \sum_i \alpha_i y_i k(\vec{x}_i, \vec{x})$$. Some common kernels include the following (implemented in the sketch after the list):
 * Polynomial (homogeneous): $$k(\vec{x_i},\vec{x_j})=(\vec{x_i} \cdot \vec{x_j})^d$$
 * Polynomial (inhomogeneous): $$k(\vec{x_i},\vec{x_j})=(\vec{x_i} \cdot \vec{x_j} + 1)^d$$
 * Gaussian radial basis function: $$k(\vec{x_i},\vec{x_j})=\exp(-\gamma \|\vec{x_i} - \vec{x_j}\|^2)$$, for $$\gamma > 0$$. Sometimes parametrized using $$\gamma=1/{2 \sigma^2}$$
 * Hyperbolic tangent: $$k(\vec{x_i},\vec{x_j})=\tanh(\kappa \vec{x_i} \cdot \vec{x_j}+c)$$, for some (not every) $$\kappa > 0 $$ and $$ c < 0 $$

Computing the SVM Classifier
Computing the (soft-margin) SVM classifier amounts to minimizing an expression of the form

$$\left[\frac 1 n \sum_{i=1}^n \max\left(0, 1 - y_i(w\cdot x_i + b)\right) \right] + \lambda\lVert w \rVert^2. \qquad(2)$$

We focus on the soft-margin classifier since, as noted above, choosing a sufficiently small value for $$\lambda$$ yields the hard-margin classifier for linearly classifiable input data. The classical approach, which involves reducing (2) to a quadratic programming problem, is detailed below. Then, more recent approaches such as sub-gradient descent and coordinate descent will be discussed.

Primal
Minimizing (2) can be rewritten as a constrained optimization problem with a differentiable objective function in the following way.

For each $$i \in \{1,\,\ldots,\,n\}$$ we introduce the variable $$\zeta_i$$, and note that $$ \zeta_i = \max\left(0, 1 - y_i(w\cdot x_i + b)\right)$$ if and only if $$ \zeta_i$$ is the smallest nonnegative number satisfying $$ y_i(w\cdot x_i + b) \geq 1- \zeta_i.$$

Thus we can rewrite the optimization problem as follows:

$$\text{minimize } \frac 1 n \sum_{i=1}^n \zeta_i + \lambda\lVert w \rVert^2,$$

$$\text{subject to } y_i(x_i \cdot w + b) \geq 1 - \zeta_i \,\text{ and }\,\zeta_i \geq 0,\,\text{for all }i.$$

This is called the primal problem.

Dual
By solving for the Lagrangian dual of the above problem, one obtains the simplified problem

$$\text{maximize}\,\, f(c_1 \ldots c_n) = \sum_{i=1}^n c_i - \frac 1 2 \sum_{i=1}^n\sum_{j=1}^n y_ic_i(x_i \cdot x_j)y_jc_j,$$

$$\text{subject to } \sum_{i=1}^n c_iy_i = 0,\,\text{and } 0 \leq c_i \leq \frac{1}{2n\lambda}\;\text{for all }i.$$

This is called the dual problem. Since the dual problem is a quadratic function of the $$ c_i$$ subject to linear constraints, it is efficiently solvable by quadratic programming algorithms.

Here, the variables $$ c_i$$ are defined such that

$$\vec w = \sum_{i=1}^n c_iy_i \vec x_i.$$

Moreover, $$ c_i = 0$$ exactly when $$ \vec x_i$$ lies on the correct side of the margin, and $$ 0 < c_i <(2n\lambda)^{-1}$$ when $$ \vec x_i$$ lies on the margin's boundary. It follows that $$ \vec w$$ can be written as a linear combination of the support vectors.

The offset, $$ b$$, can be recovered by finding an $$ \vec x_i$$ on the margin's boundary and solving

$$y_i(\vec w \cdot \vec x_i + b) = 1 \iff b = y_i - \vec w \cdot \vec x_i.$$

(Here we use $$y_i^{-1} = y_i$$, since $$y_i \in \{-1, 1\}$$.)
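
Putting the dual together, here is a hedged sketch of the quadratic-programming route using the cvxpy modeling library (the function name, tolerance, and ridge term are our choices; production solvers use specialized methods instead):

```python
import numpy as np
import cvxpy as cp

def svm_dual_qp(X, y, lam, tol=1e-6):
    """Solve the soft-margin SVM dual by quadratic programming, then recover w and b."""
    n = X.shape[0]
    ub = 1.0 / (2 * n * lam)
    K = X @ X.T + 1e-9 * np.eye(n)           # Gram matrix; tiny ridge keeps it numerically PSD
    c = cp.Variable(n)
    # maximize sum_i c_i - (1/2) (y o c)^T K (y o c)
    objective = cp.Maximize(cp.sum(c) - 0.5 * cp.quad_form(cp.multiply(y, c), K))
    constraints = [y @ c == 0, c >= 0, c <= ub]
    cp.Problem(objective, constraints).solve()
    cv = c.value
    w = (cv * y) @ X                          # w = sum_i c_i y_i x_i
    # recover b from a point strictly inside the box: 0 < c_i < 1/(2 n lam)
    on_boundary = (cv > tol) & (cv < ub - tol)
    i = int(np.argmax(on_boundary))           # assumes at least one such point exists
    b = y[i] - w @ X[i]
    return w, b
```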

Kernel Trick
Suppose now that we would like to learn a nonlinear classification rule which corresponds to a linear classification rule for the transformed data points $$ \varphi(\vec x_i).$$ Moreover, we are given a kernel function $$ k$$ which satisfies $$ k(\vec x_i, \vec x_j) = \varphi(\vec x_i) \cdot \varphi(\vec x_j)$$.

We know the classification vector $$ \vec w$$ in the transformed space satisfies

$$\vec w = \sum_{i=1}^n c_iy_i\varphi( \vec x_i),$$

where the $$ c_i$$ are obtained by solving the optimization problem

$$ \begin{align} \text{maximize}\,\, f(c_1 \ldots c_n) &= \sum_{i=1}^n c_i - \frac 1 2 \sum_{i=1}^n\sum_{j=1}^n y_ic_i(\varphi(\vec x_i) \cdot \varphi(\vec x_j))y_jc_j \\ &= \sum_{i=1}^n c_i - \frac 1 2 \sum_{i=1}^n\sum_{j=1}^n y_ic_ik(\vec x_i,\vec x_j)y_jc_j \end{align} $$

$$\text{subject to } \sum_{i=1}^n c_iy_i = 0,\,\text{and } 0 \leq c_i \leq \frac{1}{2n\lambda}\;\text{for all }i.$$

The coefficients $$ c_i$$ can be solved for using quadratic programming, as before. Again, we can find some index $$ i$$ such that $$ 0 < c_i <(2n\lambda)^{-1}$$, so that $$ \varphi(\vec x_i)$$ lies on the boundary of the margin in the transformed space, and then solve

$$ \begin{align} b = y_i - \vec w \cdot \varphi(\vec x_i) &= y_i - \left[\sum_{k=1}^n c_ky_k\varphi(\vec x_k)\cdot\varphi(\vec x_i)\right] \\ &= y_i - \left[\sum_{k=1}^n c_ky_kk(\vec x_k, \vec x_i)\right]. \end{align}$$

Finally, new points can be classified by computing

$$\vec z \mapsto \sgn(\vec w \cdot \varphi(\vec z) + b) = \sgn\left(\left[\sum_{i=1}^n c_iy_ik(\vec x_i, \vec z)\right] + b\right).$$
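
In code, both the offset and the decision rule need only kernel evaluations; a minimal sketch (assuming coefficients `c` from the dual above and a `kernel` function like those sketched earlier):

```python
import numpy as np

def kernel_svm_offset(X, y, c, kernel, i):
    """Recover b from a margin-boundary point x_i, i.e. one with 0 < c_i < 1/(2 n lam)."""
    return y[i] - sum(c[k] * y[k] * kernel(X[k], X[i]) for k in range(len(c)))

def kernel_svm_classify(z, X, y, c, b, kernel):
    """Classify a new point z as sgn( sum_i c_i y_i k(x_i, z) + b )."""
    score = sum(c[i] * y[i] * kernel(X[i], z) for i in range(len(c)))
    return np.sign(score + b)
```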

Modern methods
Recent algorithms for finding the SVM classifier include sub-gradient descent and coordinate descent. Both techniques have proven to offer significant advantages over the traditional approach when dealing with large, sparse datasets—sub-gradient methods are especially efficient when there are many training examples, and coordinate descent when the dimension of the feature space is high.

Sub-gradient descent
Sub-gradient descent algorithms for the SVM work directly with the expression

$$f(\vec w, b) = \left[\frac 1 n \sum_{i=1}^n \max\left(0, 1 - y_i(w\cdot x_i + b)\right) \right] + \lambda\lVert w \rVert^2.$$

Note that $$f$$ is a convex function of $$\vec w$$ and $$b$$. As such, traditional gradient descent (or SGD) methods can be adapted, where instead of taking a step in the direction of the function's gradient, a step is taken in the direction of a vector selected from the function's sub-gradient. This approach has the advantage that, for certain implementations, the number of iterations does not scale with $$n$$, the number of data points.
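
A minimal sketch of this idea (the fixed step schedule and iteration count are arbitrary choices of ours; Pegasos-style methods refine both):

```python
import numpy as np

def svm_subgradient_descent(X, y, lam, steps=1000, eta=1.0):
    """Minimize f(w, b) by stepping along one element of its sub-gradient."""
    n, p = X.shape
    w, b = np.zeros(p), 0.0
    for t in range(1, steps + 1):
        viol = y * (X @ w + b) < 1                    # points where the hinge term is active
        g_w = 2 * lam * w - (y[viol] @ X[viol]) / n   # sub-gradient in w
        g_b = -y[viol].sum() / n                      # sub-gradient in b
        step = eta / t                                # decaying step size
        w -= step * g_w
        b -= step * g_b
    return w, b
```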

Coordinate descent
Coordinate descent algorithms for the SVM work from the dual problem

$$ \text{maximize}\,\, f(c_1 \ldots c_n) = \sum_{i=1}^n c_i - \frac 1 2 \sum_{i=1}^n\sum_{j=1}^n y_ic_i(x_i \cdot x_j)y_jc_j,$$

$$\text{subject to } \sum_{i=1}^n c_iy_i = 0,\,\text{and } 0 \leq c_i \leq \frac{1}{2n\lambda}\;\text{for all }i.$$

For each $$ i \in \{1,\, \ldots,\, n\}$$, iteratively, the coefficient $$ c_i$$ is adjusted in the direction of $$ \partial f/ \partial c_i$$. Then, the resulting vector of coefficients $$ (c_1',\,\ldots,\,c_n')$$ is projected onto the nearest vector of coefficients that satisfies the given constraints. (Typically Euclidean distances are used.) The process is then repeated until a near-optimal vector of coefficients is obtained. The resulting algorithm is extremely fast in practice, although few performance guarantees have been proved.
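
The following schematic sketch illustrates the idea; for simplicity it alternates two easy projections (onto the hyperplane $$\sum_i c_iy_i = 0$$ and onto the box) rather than projecting exactly onto their intersection, which requires more care in practice:

```python
import numpy as np

def svm_coordinate_ascent(X, y, lam, sweeps=100, eta=0.1):
    """Schematic coordinate ascent on the SVM dual with an approximate projection step."""
    n = X.shape[0]
    K = X @ X.T                                      # Gram matrix of dot products
    ub = 1.0 / (2 * n * lam)
    c = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            grad_i = 1.0 - y[i] * (K[i] @ (c * y))   # df/dc_i for the dual objective
            c[i] += eta * grad_i
        # project onto the hyperplane sum_i c_i y_i = 0 (note ||y||^2 = n) ...
        c -= ((c @ y) / n) * y
        # ... then clip into the box 0 <= c_i <= 1/(2 n lam)
        c = np.clip(c, 0.0, ub)
    return c
```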

Empirical Risk Minimization
The soft-margin Support Vector Machine described above is an example of an empirical risk minimization (ERM) algorithm for the hinge loss. Seen this way, Support Vector Machines belong to a natural class of algorithms for statistical inference, and many of their unique features are due to the behavior of the hinge loss. This perspective can provide further insight into how and why SVMs work, and allow us to better analyze their statistical properties.

Risk Minimization
In supervised learning, one is given a set of training examples $$X_1 \ldots X_n$$ with labels $$y_1 \ldots y_n$$, and wishes to predict $$y_{n+1}$$ given $$X_{n+1}$$. To do so one forms a hypothesis, $$f$$, such that $$f(X_{n+1})$$ is a "good" approximation of $$y_{n+1}$$. A "good" approximation is usually defined with the help of a loss function, $$\ell(y,z)$$, which characterizes how bad $$z$$ is as a prediction of $$y$$. We would then like to choose a hypothesis that minimizes the expected risk:

$$\varepsilon(f) = \mathbb{E}\left[\ell(y_{n+1}, f(X_{n+1})) \right]$$

In most cases, we don't know the joint distribution of $$X_{n+1},\,y_{n+1}$$ outright. In these cases, a common strategy is to choose the hypothesis that minimizes the empirical risk:

$$\hat \varepsilon(f) = \frac 1 n \sum_{k=1}^n \ell(y_k, f(X_k))$$
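
In code, the empirical risk is simply an average of per-example losses; a minimal generic sketch (the names are ours):

```python
def empirical_risk(loss, f, X, y):
    """hat-epsilon(f): the average loss of hypothesis f over the training sample."""
    return sum(loss(yk, f(xk)) for xk, yk in zip(X, y)) / len(y)
```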

Under certain assumptions about the sequence of random variables $$X_k,\, y_k$$ (for example, that they are generated by a finite Markov process), if the set of hypotheses being considered is small enough, the minimizer of the empirical risk will closely approximate the minimizer of the expected risk as $$n$$ grows large. This approach is called empirical risk minimization, or ERM.

Regularization and Stability
In order for the minimization problem to have a well-defined solution, we have to place constraints on the set $$\mathcal{H}$$ of hypotheses being considered. If $$\mathcal{H}$$ is a normed space (as is the case for SVM), a particularly effective technique is to consider only those hypotheses $$ f$$ for which $$\lVert f \rVert_{\mathcal H} < k$$. This is equivalent to imposing a regularization penalty $$\mathcal R(f) = \lambda_k\lVert f \rVert_{\mathcal H}$$, and solving the new optimization problem

$$\hat f = \mathrm{arg}\min_{f \in \mathcal{H}} \hat \varepsilon(f) + \mathcal{R}(f)$$.

This approach is called Tikhonov regularization.

More generally, $$\mathcal{R}(f)$$ can be some measure of the complexity of the hypothesis $$f$$, so that simpler hypotheses are preferred.

SVM and the Hinge Loss
Recall that the (soft-margin) SVM classifier $$x \mapsto \sgn(\hat w \cdot x + \hat b)$$ is chosen by taking $$\hat w,\, \hat b$$ to minimize the following expression.

$$\left[\frac 1 n \sum_{i=1}^n \max\left(0, 1 - y_i(w\cdot x_i + b)\right) \right] + \lambda\lVert w \rVert^2$$

In light of the above discussion, we see that the SVM technique is equivalent to empirical risk minimization with Tikhonov regularization, where in this case the loss function is the hinge loss

$$\ell(y,z) = \max\left(0, 1 - yz \right).$$

From this perspective, SVM is closely related to other fundamental classification algorithms such as regularized least-squares and logistic regression. The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with the square-loss, $$\ell_{sq}(y,z) = (y-z)^2$$; logistic regression employs the log-loss,  $$\ell_{log}(y,z) = \ln(1 + e^{-yz})$$.
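
The three losses are easy to compare numerically; a small sketch (the evaluation point is an arbitrary example of ours):

```python
import numpy as np

def hinge_loss(y, z):
    return np.maximum(0.0, 1.0 - y * z)

def square_loss(y, z):
    return (y - z) ** 2

def log_loss(y, z):
    return np.log1p(np.exp(-y * z))   # ln(1 + e^{-yz}), computed stably

# A point classified correctly with room to spare (y*z > 1) incurs zero hinge
# loss, but the square and log losses continue to penalize it:
print(hinge_loss(1, 2.0), square_loss(1, 2.0), log_loss(1, 2.0))
```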

Target Functions
The difference between the hinge loss and these other loss functions is best stated in terms of target functions: the target function for a loss is the function that minimizes expected risk for a given pair of random variables $$X,\,y$$.

In particular, let $$y_x$$ denote $$y$$ conditional on the event that $$X = x$$. In the classification setting, we have:

$$y_x = \begin{cases} 1 & \text{with probability } p_x \\ -1 & \text{with probability } 1-p_x \end{cases}$$

The optimal classifier is therefore:

$$f^*(x) = \begin{cases}1 & \text{if }p_x \geq 1/2 \\ -1 & \text{otherwise}\end{cases} $$

For the square-loss, the target function is the conditional expectation function, $$f_{sq}(x) = \mathbb{E}\left[y_x\right]$$; for the logistic loss, it's the logit function, $$f_{log}(x) = \ln\left(p_x / ({1-p_x})\right)$$. While both of these target functions yield the correct classifier, as $$\sgn(f_{sq}) = \sgn(f_{log}) = f^*$$, they give us more information than we need. In fact, they give us enough information to completely describe the distribution of $$ y_x$$.

On the other hand, one can check that the target function for the hinge loss is exactly $$f^*$$. Thus, in a sufficiently rich hypothesis space—or equivalently, for an appropriately chosen kernel—the SVM classifier will converge to the simplest function (in terms of $$\mathcal{R}$$) that correctly classifies the data. This extends the geometric interpretation of SVM—for linear classification, the empirical risk is minimized by any function whose margins lie between the support vectors, and the simplest of these is the max-margin classifier.
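
To verify the claim about the hinge-loss target function, fix $$x$$ and minimize the conditional risk over predictions $$z$$:

$$\mathbb{E}\left[\max(0, 1 - y_x z)\right] = p_x \max(0, 1-z) + (1-p_x)\max(0, 1+z).$$

This is piecewise linear in $$z$$; on $$[-1, 1]$$ it equals $$1 + z(1 - 2p_x)$$, and it increases as $$z$$ moves beyond either endpoint, so it is minimized at $$z = 1$$ when $$p_x \geq 1/2$$ and at $$z = -1$$ otherwise, which is exactly $$f^*(x)$$.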