Abess

abess (Adaptive Best Subset Selection, also ABESS) is a machine learning method for best subset selection: given a dataset and a prediction task, it determines which features are essential for optimal model performance. abess was introduced by Zhu et al. in 2020; it selects the appropriate model size adaptively, eliminating the need to tune regularization parameters.

abess is applicable to various statistical and machine learning tasks, including linear regression, the single-index model, and other common predictive models. abess can also be applied in biostatistics.

Basic Form
The basic form of abess addresses the best subset selection problem in general linear regression. As an $$l_0$$ method, abess has polynomial time complexity and yields unbiased and consistent estimates.

In the context of linear regression, suppose we observe $$n$$ independent samples $$(x_i, y_i), i=1, \ldots, n$$, where $$x_i \in \mathbb{R}^{p \times 1}$$ and $$ y_i \in \mathbb{R} $$, and define $$X = (x_1, \ldots, x_n)^{\top}$$ and $$y = (y_1, \ldots, y_n)^{\top}$$. The following equation represents the general linear regression model:

$$   y = X\beta + \varepsilon. $$

To obtain appropriate parameters $$\beta$$, one can consider the loss function for linear regression:

$$   \mathcal{L}_n^{\text{LR}}(\beta; X, y) = \frac{1}{2n}\|y - X\beta\|_2^2. $$

In abess, the initial focus is on optimizing the loss function under the $$l_0$$ constraint. That is, we consider the following problem:

$$   \min_{\beta \in \mathbb{R}^{p \times 1}} \mathcal{L}_n^{\text{LR}}(\beta; X, y), \text{ subject to } \|\beta\|_0 \leq s, $$

where $$s$$ represents the desired size of the support set, and $$\|\beta\|_0 = \sum_{i=1}^p \mathcal{I}_{(\beta_i \neq 0)}$$ is the $$l_0$$ norm of the vector.

To solve the optimization problem above, abess iteratively exchanges an equal number of variables between the active set and the inactive set. In each iteration, a sacrifice $$\xi_j$$ is defined for each variable:
 * For $$j$$ in the active set ($$ j\in\hat{\mathcal A} $$):

$$ \xi_j = \mathcal L_n^{\text{LR}}\left(\hat{\boldsymbol{\beta}}^{\mathcal{A} \backslash\{j\}}\right) - \mathcal L_n^{\text{LR}}\left(\hat{\boldsymbol{\beta}}^{\mathcal{A}}\right) = \frac{\boldsymbol{X}_j^{\top} \boldsymbol{X}_j}{2 n}\left(\hat{\beta}_j\right)^2 $$

 * For $$j$$ in the inactive set ($$ j\notin\hat{\mathcal A} $$):

$$ \xi_j = \mathcal L_n^{\text{LR}}\left(\hat{\boldsymbol{\beta}}^{\mathcal{A}}\right) - \mathcal L_n^{\text{LR}}\left(\hat{\boldsymbol{\beta}}^{\mathcal{A}}+\hat{\boldsymbol{t}}^{\{j\}}\right) = \frac{\boldsymbol{X}_j^{\top} \boldsymbol{X}_j}{2 n}\left(\frac{\hat{d}_j}{\boldsymbol{X}_j^{\top} \boldsymbol{X}_j / n}\right)^2 $$

Here are the key elements in the above equations:
 * $$\hat{\beta}^{\mathcal A}$$: This represents the estimate of $$\beta$$ obtained in the previous iteration.
 * $$\hat{\mathcal A}$$: It denotes the estimated active set from the previous iteration.
 * $$\hat{\boldsymbol{\beta}}^{\mathcal{A} \backslash\{j\}}$$: This is a vector where the j-th element is set to 0, while the other elements are the same as $$\hat{\beta}^{\mathcal A}$$.
 * $$\hat{\boldsymbol{t}}^{\{j\}}=\arg \min _t \mathcal L_n^{\text{LR}}\left(\hat{\boldsymbol{\beta}}^{\mathcal{A}}+\boldsymbol{t}^{\{j\}}\right)$$: Here, $$t^{\{j\}}$$ represents a vector where all elements are 0 except the j-th element.
 * $$\hat{d}_j=\boldsymbol{X}_j^{\top}(\boldsymbol{y}-\boldsymbol{X} \hat{\boldsymbol{\beta}}) / n$$: the scaled correlation between the $$j$$-th feature and the current residual.

In each iteration, the variables in the active set with the smallest sacrifices are exchanged for the variables in the inactive set with the largest sacrifices, and the exchange is kept when it decreases the loss. This splicing step allows abess to search efficiently for the optimal feature subset.
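Using the sacrifice formulas above, one exchange step can be sketched in a few lines of NumPy. This is a minimal illustration, not the library's actual implementation; the function and variable names are made up here.

```python
import numpy as np

def splicing_step(X, y, active, k=1):
    """One sacrifice-based exchange for linear regression (sketch).

    `active` is a boolean mask over the p features; k variables are swapped
    between the active and inactive sets.
    """
    n, p = X.shape
    A = np.flatnonzero(active)
    I = np.flatnonzero(~active)
    # Least-squares fit restricted to the current active set
    beta = np.zeros(p)
    beta[A], *_ = np.linalg.lstsq(X[:, A], y, rcond=None)
    d = X.T @ (y - X @ beta) / n          # hat d_j: residual correlations
    c = np.einsum('ij,ij->j', X, X) / n   # X_j^T X_j / n
    xi = np.empty(p)
    xi[A] = c[A] * beta[A] ** 2 / 2       # backward sacrifice (active j)
    xi[I] = d[I] ** 2 / (2 * c[I])        # forward sacrifice (inactive j)
    # Swap: drop the k smallest sacrifices in A, add the k largest in I
    drop = A[np.argsort(xi[A])[:k]]
    add = I[np.argsort(xi[I])[-k:]]
    new_active = active.copy()
    new_active[drop] = False
    new_active[add] = True
    return new_active
```

In the full algorithm this exchange is accepted only when it lowers the loss, and the iteration stops once no improving swap exists.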

In abess, one selects an appropriate maximum size $$s_{\max}$$ and solves the above problem for each support size $$s = 1, \ldots, s_{\max}$$. The generalized information criterion $$ \text{GIC} = n \log \mathcal{L}_n^{\text{LR}} + s \log p \log \log n $$ is then used to choose the support size $$s$$ adaptively and obtain the corresponding abess estimator.
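To illustrate the adaptive choice of $$s$$, the sketch below compares GIC values across candidate sizes. For brevity the size-$$s$$ subproblem is solved by naive forward selection instead of splicing, and all names are illustrative.

```python
import numpy as np

def select_size_by_gic(X, y, s_max):
    """Grow a support greedily and return the support whose GIC
    (n * log(loss) + s * log(p) * log(log(n))) is smallest."""
    n, p = X.shape
    active, best_gic, best_support = [], np.inf, []
    for s in range(1, s_max + 1):
        # Forward step: add the variable that reduces the RSS the most
        best_rss, best_j = np.inf, None
        for j in range(p):
            if j in active:
                continue
            cols = active + [j]
            coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = np.sum((y - X[:, cols] @ coef) ** 2)
            if rss < best_rss:
                best_rss, best_j = rss, j
        active.append(best_j)
        loss = best_rss / (2 * n)
        gic = n * np.log(loss) + s * np.log(p) * np.log(np.log(n))
        if gic < best_gic:
            best_gic, best_support = gic, list(active)
    return best_support
```

The penalty term $$s \log p \log \log n$$ grows with the support size, so adding variables that only fit noise eventually increases the GIC and the smaller support is kept.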

Generalizations
The splicing algorithm in abess can be employed for subset selection in other models.

Distribution-Free Location-Scale Regression
In 2023, Siegfried et al. extended abess to distribution-free location-scale regression. Specifically, they consider the optimization problem

$$\max_{\boldsymbol{\vartheta} \in \mathbb{R}^P, \boldsymbol{\beta} \in \mathbb{R}^J, \boldsymbol{\gamma} \in \mathbb{R}^J} \sum_{i=1}^N \ell_i\left(\boldsymbol{\vartheta}, \boldsymbol{x}_i^{\top} \boldsymbol{\beta},{\sqrt{\exp \left(\boldsymbol{x}_i^{\top} \boldsymbol{\gamma}\right)}}^{-1}\right),$$

subject to $$\left\|\left(\boldsymbol{\beta}^{\top}, \boldsymbol{\gamma}^{\top}\right)^{\top}\right\|_0 \leq s,$$ where $$\ell_i$$ is a loss function, $$\boldsymbol{\vartheta}$$ is a parameter vector, $$\boldsymbol{\beta}$$ and $$\boldsymbol{\gamma}$$ are vectors, and $$\boldsymbol{x}_i$$ is a data vector.

This approach, demonstrated across various applications, enables parsimonious regression models for arbitrary outcomes while preserving interpretability.

Group Selection
In 2023, Zhang applied the splicing algorithm to group selection, optimizing the following model:

$$ \min_{\boldsymbol{\beta} \in \mathbb{R}^p} \mathcal L_n^{\text{LR}}(\beta;X,y) \text{ subject to } \sum_{j=1}^J I\left(\|\boldsymbol{\beta}_{G_j}\|_2 \neq 0\right) \leq s $$

Here are the symbols involved:

 * $$J$$: Total number of feature groups, representing the existence of $$J$$ non-overlapping feature groups in the dataset.
 * $$G_j$$: Index set for the $$j$$-th feature group, where $$j$$ ranges from 1 to $$J$$, representing the feature grouping structure in the data.
 * $$s$$: Model size, a positive integer determined from the data, limiting the number of selected feature groups.
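The group-$$l_0$$ constraint counts coefficient blocks with a nonzero $$l_2$$ norm. A small illustrative helper (not part of any package):

```python
import numpy as np

def group_l0(beta, groups):
    """Number of active groups: sum_j I(||beta_{G_j}||_2 != 0)."""
    return sum(int(np.linalg.norm(beta[g]) != 0) for g in groups)
```

For example, with `beta = [1, 0, 0, 2, 0]` and groups `{0,1}, {2}, {3,4}`, two groups are active, so the constraint would require `s >= 2`.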

Regression with Corrupted Data
Zhang applied the splicing algorithm to handle corrupted data. Corrupted data refers to observations that were disturbed or recorded with errors during data collection, for example through sensor inaccuracies, recording errors, communication problems, or other external disturbances, leading to inaccurate or distorted values in the dataset.

Single Index Models
In 2023, Tang applied the splicing algorithm to best subset selection in the single-index model.

The form of the Single Index Model (SIM) is given by $$ y_i = g(\boldsymbol{b}^{\top} \boldsymbol{x}_i, e_i), \quad i = 1, \ldots, n, $$

where $$\boldsymbol{b}$$ is the parameter vector, $$e_i$$ is the error term.

The corresponding loss function is defined as $$ l_n(\boldsymbol{\beta}) = \sum_{i=1}^n \left(\frac{r_i}{n} - \frac{1}{2} - \boldsymbol{x}_i^{\top} \boldsymbol{\beta}\right)^2, $$

where $$\boldsymbol{r}$$ is the rank vector, $$r_i$$ is the rank of $$y_i$$ in $$\boldsymbol{y}$$.

The Estimation Problem addressed by this algorithm is $$ \min_{\boldsymbol{\beta}} l_n(\boldsymbol{\beta}), \text { s.t. } \|\boldsymbol{\beta}\|_0 \leq s. $$
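The rank-based loss above can be transcribed directly. This is a sketch: ties in $$y$$ are ignored and the function name is ours.

```python
import numpy as np

def sim_rank_loss(beta, X, y):
    """l_n(beta) = sum_i (r_i/n - 1/2 - x_i^T beta)^2, where r_i is the
    1-based rank of y_i in y (assuming no ties)."""
    n = len(y)
    r = np.argsort(np.argsort(y)) + 1   # ranks of y_1, ..., y_n
    resid = r / n - 0.5 - X @ beta
    return float(resid @ resid)
```

Because the loss depends on $$y$$ only through its ranks, the estimator is invariant to monotone transformations of the response, which is what makes it suitable for the unknown link $$g$$.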

Geographically Weighted Regression Model
In 2023, Wu applied the splicing algorithm to geographically weighted regression (GWR), a spatial analysis method. Wu's research improves GWR for geographical data through an $$l_0$$-norm adaptive variable selection method that performs model selection and coefficient optimization simultaneously, enhancing the accuracy of regression modeling for geospatial data.

Distributed Systems
In 2023, Chen addressed the challenges of best subset selection in high-dimensional distributed systems by proposing an efficient abess algorithm.

A distributed system is a computational model that distributes computing tasks across multiple independent nodes to achieve more efficient, reliable, and scalable data processing. In a distributed system, individual computing nodes can work simultaneously, collaboratively completing the overall tasks, thereby enhancing system performance and processing capabilities.

However, distributed systems previously lacked efficient algorithms for best subset selection. To fill this gap, Chen introduced a communication-efficient approach to best subset selection in distributed systems.

Software Package
The abess library (version 0.4.5) is an R and Python package built on C++ implementations of the algorithms. It is open-source on GitHub. The library supports best subset selection for linear regression, (multi-)classification, and censored-response models, and allows predictors to be selected in groups. Information and tutorials are available on the abess homepage.

Applications
abess can be applied in biostatistics, for example to robustly assess the severity of COVID-19 in patients, to study antibiotic resistance in Mycobacterium tuberculosis, to explore prognostic factors in neck pain, and to develop prediction models for severe pain after percutaneous nephrolithotomy. abess can also be applied to gene selection. In data-driven partial differential equation (PDE) discovery, Thanasutives applied abess to automatically identify parsimonious governing PDEs.