Knockoffs (statistics)

In statistics, the knockoff filter, or simply knockoffs, is a framework for variable selection. It was originally introduced for linear regression by Rina Barber and Emmanuel Candès, and later generalized to other regression models in the random design setting. Knockoffs have found application in many practical areas, notably in genome-wide association studies.

Fixed-X knockoffs
Consider a linear regression model with response vector $$\mathbf y$$ and feature matrix $$\mathbf X$$, which is treated as deterministic. A matrix $$\tilde{\mathbf X}$$ is said to be knockoffs of $$\mathbf X$$ if it does not depend on $$\mathbf y$$ and satisfies $$\mathbf X_i^\top\mathbf X_j=\mathbf X_i^\top\tilde{\mathbf X}_j=\tilde{\mathbf X}_i^\top\mathbf X_j=\tilde{\mathbf X}_i^\top\tilde{\mathbf X}_j$$ for all $$i\ne j$$, together with $$\tilde{\mathbf X}_i^\top\tilde{\mathbf X}_i=\mathbf X_i^\top\mathbf X_i$$ for every $$i$$. The inner product $$\mathbf X_i^\top\tilde{\mathbf X}_i$$ is left unconstrained, and is typically made as small as possible so that the procedure retains power. Barber and Candès showed that, equipped with a suitable feature importance statistic, fixed-X knockoffs can be used for variable selection while controlling the false discovery rate (FDR).
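These Gram conditions can be met constructively. As an illustration, the following numpy sketch implements one standard construction (the equi-correlated choice of the free inner products), assuming $$n\ge 2p$$ and a full-rank $$\mathbf X$$; the function name and the 0.9 shrinkage factor are illustrative choices, not part of the original method.

```python
import numpy as np

def fixed_x_knockoffs(X):
    """Construct equi-correlated fixed-X knockoffs (sketch; needs n >= 2p).

    Returns (X, Xtilde) with columns of X normalized, satisfying
    Xtilde.T @ Xtilde == X.T @ X and matching off-diagonal cross products.
    """
    n, p = X.shape
    assert n >= 2 * p, "this construction needs n >= 2p"
    # Normalize columns so the Gram matrix has unit diagonal.
    X = X / np.linalg.norm(X, axis=0)
    Sigma = X.T @ X
    Sigma_inv = np.linalg.inv(Sigma)
    lam_min = np.linalg.eigvalsh(Sigma)[0]
    # Equi-correlated choice, shrunk slightly to keep matrices positive definite.
    s = 0.9 * min(1.0, 2.0 * lam_min)
    D = s * np.eye(p)
    # Gram matrix of the component of Xtilde orthogonal to span(X).
    A = 2.0 * D - D @ Sigma_inv @ D
    C = np.linalg.cholesky(A).T              # C.T @ C == A
    # Orthonormal basis U for a p-dimensional space orthogonal to span(X).
    Q = np.linalg.qr(X, mode="complete")[0]
    U = Q[:, p:2 * p]
    Xtilde = X @ (np.eye(p) - Sigma_inv @ D) + U @ C
    return X, Xtilde
```

Expanding $$\tilde{\mathbf X}^\top\tilde{\mathbf X}$$ for this construction recovers exactly $$\mathbf X^\top\mathbf X$$, while $$\mathbf X^\top\tilde{\mathbf X}=\mathbf X^\top\mathbf X-\operatorname{diag}(s)$$, so all required equalities hold.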

Model-X knockoffs
Consider a general regression model with response vector $$\mathbf y$$ and random feature matrix $$\mathbf X$$. A matrix $$\tilde{\mathbf X}$$ is said to be knockoffs of $$\mathbf X$$ if it is conditionally independent of $$\mathbf y$$ given $$\mathbf X$$ and satisfies a pairwise exchangeability condition: for any $$j$$, the joint distribution of the random matrix $$[\mathbf X,\tilde{\mathbf X}]$$ does not change if its $$j$$th and $$(j+p)$$th columns are swapped, where $$p$$ is the number of features. Although constructing model-X knockoffs is less straightforward than constructing their fixed-X counterparts, various algorithms have been proposed for doing so. Once constructed, model-X knockoffs can be used for variable selection following the same procedure as fixed-X knockoffs, and they likewise control the FDR.
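One well-known case where exact model-X knockoffs can be sampled is when each row of $$\mathbf X$$ is multivariate Gaussian, $$\mathcal N(\mu,\Sigma)$$, with known parameters: the knockoff row is drawn from the Gaussian conditional $$\tilde x\mid x\sim\mathcal N\!\big(x-D\Sigma^{-1}(x-\mu),\,2D-D\Sigma^{-1}D\big)$$ with $$D=\operatorname{diag}(s)$$, which makes the joint covariance of $$[x,\tilde x]$$ pairwise exchangeable. A numpy sketch under these assumptions (the equi-correlated $$s$$ and the 0.9 shrinkage are illustrative choices):

```python
import numpy as np

def gaussian_knockoffs(X, mu, Sigma, rng):
    """Sample exact model-X knockoffs when rows of X are N(mu, Sigma).

    Draws each knockoff row from the Gaussian conditional
    x_tilde | x ~ N(x - D Sigma^{-1} (x - mu), 2D - D Sigma^{-1} D).
    """
    n, p = X.shape
    lam_min = np.linalg.eigvalsh(Sigma)[0]
    s = 0.9 * min(1.0, 2.0 * lam_min)   # keep the conditional covariance PD
    D = s * np.eye(p)
    Sigma_inv = np.linalg.inv(Sigma)
    cond_mean = X - (X - mu) @ Sigma_inv @ D   # row-wise conditional means
    cond_cov = 2.0 * D - D @ Sigma_inv @ D
    L = np.linalg.cholesky(cond_cov)
    return cond_mean + rng.standard_normal((n, p)) @ L.T
```

By construction, the knockoff columns reproduce the covariance $$\Sigma$$ of the originals, while the cross covariance is $$\Sigma-D$$, so swapping any original/knockoff pair leaves the joint distribution unchanged.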

Properties
The knockoffs $$\tilde{\mathbf X}$$ can be understood as negative controls. Informally speaking, knockoffs have the property that no method can statistically distinguish the original matrix from its knockoffs without looking at $$\mathbf y$$. Mathematically, the exchangeability conditions translate into a symmetry between original and knockoff features that allows the type I error to be estimated (e.g., if one chooses the FDR as the type I error rate, the false discovery proportion is estimated), which in turn leads to exact type I error control.
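The estimation of the false discovery proportion can be made concrete. Given an antisymmetric feature statistic $$W_j$$ for each feature (positive when the original appears more important than its knockoff), the knockoff+ filter estimates the FDP at threshold $$t$$ by $$(1+\#\{j:W_j\le -t\})/\max(\#\{j:W_j\ge t\},1)$$ and selects at the smallest threshold where this estimate drops to the target level $$q$$. A minimal sketch:

```python
import numpy as np

def knockoff_threshold(W, q):
    """Knockoff+ selection threshold from feature statistics W.

    Scans candidate thresholds t = |W_j|; at each, estimates the FDP by
    (1 + #{j : W_j <= -t}) / max(#{j : W_j >= t}, 1) and returns the
    smallest t whose estimate is at most q (np.inf if none qualifies).
    """
    for t in np.sort(np.abs(W[W != 0])):
        fdp_hat = (1 + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp_hat <= q:
            return t
    return np.inf

# Illustrative statistics: features with large positive W get selected.
W = np.array([5.0, 4.0, 3.0, -0.5, 2.5, -1.0, 0.2])
tau = knockoff_threshold(W, q=0.3)
selected = np.where(W >= tau)[0]
```

The "+1" in the numerator is what yields exact (rather than approximate) FDR control in the knockoff+ variant.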

Model-X knockoffs provide valid type I error control regardless of the unknown conditional distribution of $$\mathbf y$$ given $$\mathbf X$$, and they can work with black-box variable importance statistics, including ones derived from complicated machine learning methods. The most significant challenge in implementing model-X knockoffs is that it requires nontrivial knowledge of the distribution of $$\mathbf X$$, which is usually high-dimensional. This knowledge can be gained with the help of unlabeled data.
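Any importance measure can be turned into a valid feature statistic by comparing each original feature with its knockoff. In practice the difference of lasso coefficient magnitudes fit on $$[\mathbf X,\tilde{\mathbf X}]$$ is a common choice; a numpy-only sketch using marginal correlations as a stand-in importance measure (the function name is illustrative):

```python
import numpy as np

def marginal_corr_statistics(X, Xtilde, y):
    """Antisymmetric feature statistics W_j = |X_j' y| - |Xtilde_j' y|.

    Swapping feature j with its knockoff flips the sign of W_j, the
    symmetry the knockoff filter needs; any black-box importance
    measure could replace the marginal inner products here.
    """
    return np.abs(X.T @ y) - np.abs(Xtilde.T @ y)
```

A feature that truly drives $$\mathbf y$$ tends to produce a large positive $$W_j$$, whereas a null feature is equally likely to produce a positive or negative value, which is exactly the symmetry the FDP estimate above exploits.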