User: H.m.s.aljohani

Hello
The elastic net penalty was proposed by Zou and Hastie as a new regularisation and variable selection method. Real-world data and a simulation study show that the elastic net often outperforms the LASSO, while enjoying a similar sparsity of representation.

Interpretation
The elastic net penalty contains a LASSO (least absolute shrinkage and selection operator) part, which is defined as
 * $$\|\beta\|_1 = \textstyle \sum_{j=1}^p |\beta_j|.$$

Use of this penalty function has several limitations. For example, in the "large p, small n" case (high-dimensional data with few examples), the LASSO selects at most n variables before it saturates. Also, if there is a group of highly correlated variables, the LASSO tends to select one variable from the group and ignore the others. To overcome these limitations, the elastic net adds a quadratic part to the penalty ($$\|\beta\|^2$$), which, when used alone, gives ridge regression. The estimates from the elastic net method are defined by


 * $$ \hat{\beta} = \underset{\beta}{\operatorname{argmin}} (\| y-X \beta \|^2 + \lambda_2 \|\beta\|^2 + \lambda_1 \|\beta\|_1) .$$
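As a concrete sketch (not the authors' reference implementation), the objective above can be minimised by cyclic coordinate descent with soft-thresholding; the function names `elastic_net_cd` and `soft_threshold` and the toy data are introduced here purely for illustration:

```python
import numpy as np

def soft_threshold(z, t):
    # S(z, t) = sign(z) * max(|z| - t, 0)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def elastic_net_cd(X, y, lam1, lam2, n_iter=200):
    """Cyclic coordinate descent for
       min_b ||y - X b||^2 + lam2 ||b||^2 + lam1 ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove variable j's current contribution.
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j
            # Stationarity of the coordinate-wise objective gives a
            # soft-thresholded update, shrunk further by the ridge term.
            beta[j] = soft_threshold(rho, lam1 / 2.0) / (X[:, j] @ X[:, j] + lam2)
    return beta

# Tiny example with orthogonal columns (converges in one sweep):
X = np.array([[1., 0.], [0., 1.], [0., 0.]])
y = np.array([3., -1., 0.])
beta_hat = elastic_net_cd(X, y, lam1=2.0, lam2=1.0)
# beta_hat is approximately [1.0, 0.0]: the L1 part zeroes the weak
# coefficient, while the L2 part shrinks the surviving one.
```

With orthogonal columns each update decouples, so the result can be checked by hand: $\hat\beta_j = S(x_j^\top y,\, \lambda_1/2)/(1 + \lambda_2)$.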

The quadratic penalty term makes the loss function strictly convex, so it has a unique minimum. The elastic net method includes the LASSO ($$\lambda_2 = 0$$) and ridge regression ($$\lambda_1 = 0$$) as special cases.
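To make one special case concrete: with $$\lambda_1 = 0$$ the objective reduces to ridge regression, which has the closed-form solution $$\hat{\beta} = (X^\top X + \lambda_2 I)^{-1} X^\top y$$. A minimal numpy check (the function name and data below are invented for illustration):

```python
import numpy as np

def ridge_closed_form(X, y, lam2):
    """Minimiser of ||y - X b||^2 + lam2 ||b||^2 (elastic net with lam1 = 0)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam2 * np.eye(p), X.T @ y)

X = np.array([[1., 0.], [0., 1.], [1., 1.]])
y = np.array([1., 2., 3.])
beta_ridge = ridge_closed_form(X, y, lam2=1.0)
# Here X^T X + I = [[3, 1], [1, 3]] and X^T y = [4, 5],
# so beta_ridge = [0.875, 1.375].
```

Setting `lam2` to zero recovers ordinary least squares (when $$X^\top X$$ is invertible), which illustrates how the ridge term stabilises the solve in ill-conditioned problems.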

Improvement
Robert G. Aykroyd