User:EpochFail/Wikignome/test

Header 2

 * item 1
 * item 2
 * tabbed item

 * itemed tab

 * 1) numbered 1
 * 2) numbered 2

Math
Maximize (in $$\alpha_i$$)
 * $$\tilde{L}(\mathbf{\alpha})=\sum_{i=1}^n \alpha_i - \frac{1}{2}\sum_{i, j} \alpha_i \alpha_j y_i y_j k(\mathbf{x}_i, \mathbf{x}_j)$$

subject to (for any $$i = 1, \dots, n$$)
 * $$0 \leq \alpha_i \leq C,\,$$

and
 * $$ \sum_{i=1}^n \alpha_i y_i = 0.$$

The key advantage of a linear penalty function is that the slack variables vanish from the dual problem, with the constant C appearing only as a box constraint on the Lagrange multipliers. For this formulation and its huge practical impact, Cortes and Vapnik received the 2008 ACM Paris Kanellakis Award. Nonlinear penalty functions have been used, particularly to reduce the effect of outliers on the classifier, but unless care is taken the problem becomes non-convex, and finding a global solution is then considerably more difficult.
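The dual above can be checked numerically on a toy problem. The sketch below is illustrative, not part of the original text: it assumes NumPy, a hypothetical two-point dataset, and a linear kernel. It evaluates $$\tilde{L}(\mathbf{\alpha})$$ directly; note that no slack variables appear and C enters only through the box constraint. For two points with opposite labels, the equality constraint $$\sum_{i=1}^n \alpha_i y_i = 0$$ forces $$\alpha_1 = \alpha_2$$, so maximizing reduces to a one-dimensional search over $$[0, C]$$.

```python
import numpy as np

# Toy dataset (illustrative): two linearly separable points with opposite labels.
X = np.array([[1.0, 1.0],
              [-1.0, -1.0]])
y = np.array([1.0, -1.0])
C = 1.0  # box-constraint bound on each alpha_i

# Linear kernel matrix: K[i, j] = x_i . x_j
K = X @ X.T

def dual_objective(alpha):
    """L~(alpha) = sum_i alpha_i - 1/2 sum_ij alpha_i alpha_j y_i y_j k(x_i, x_j).

    The slack variables do not appear; C enters only via 0 <= alpha_i <= C.
    """
    v = alpha * y
    return alpha.sum() - 0.5 * v @ K @ v

# With y = (+1, -1), the constraint sum_i alpha_i y_i = 0 forces
# alpha_1 = alpha_2 = a, so the feasible set is one-dimensional and a
# grid search over [0, C] suffices for this tiny example.
a_grid = np.linspace(0.0, C, 1001)
vals = np.array([dual_objective(np.array([a, a])) for a in a_grid])
best_a = a_grid[np.argmax(vals)]
# Here L~ reduces to 2a - 4a^2, maximized at a = 0.25 with value 0.25.
```

On this data the maximizer sits strictly inside the box (0.25 < C), so the corresponding points are support vectors that lie exactly on the margin; had the optimum hit the bound a = C, they would be margin violators instead.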