abess
(Adaptive BEst Subset Selection) library aims to solve the general best subset selection problem, i.e.,
to find a small subset of predictors such that the resulting model is expected to have the highest accuracy.
Best subset selection is of great value in scientific research and practical applications.
For example, clinicians want to know whether a patient is healthy or not based on the expression levels of a few important genes.
This library implements a generic algorithm framework that finds the optimal solution extremely fast. The framework supports detecting the best subset under: linear regression, classification (binary or multi-class), counting-response modeling, censored-response modeling, multi-response modeling (multi-task learning), etc. It also supports variants of best subset selection such as group best subset selection and nuisance penalized regression. Notably, the time complexity of (group) best subset selection for linear regression is certifiably polynomial.
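To make the problem concrete, the sketch below solves a tiny instance by exhaustive search. This is only an illustration of the objective being optimized — abess itself uses a far faster splicing algorithm, and the function name here is hypothetical, not part of abess's API:

```python
# Illustration only: brute-force best subset selection for a tiny problem.
# Enumerating all size-k subsets is exponential in p; abess's splicing
# algorithm avoids this, but the objective minimized is the same.
from itertools import combinations

import numpy as np

def best_subset_ols(X, y, k):
    """Return the index set of size k minimizing the residual sum of squares."""
    best_rss, best_set = np.inf, None
    for subset in combinations(range(X.shape[1]), k):
        Xs = X[:, list(subset)]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = float(np.sum((y - Xs @ beta) ** 2))
        if rss < best_rss:
            best_rss, best_set = rss, subset
    return best_set

# Synthetic data: only predictors 1 and 4 carry signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
y = X[:, [1, 4]] @ np.array([3.0, -2.0]) + 0.1 * rng.standard_normal(100)
print(best_subset_ols(X, y, 2))
```

With this strong a signal-to-noise ratio, the exhaustive search recovers the true support; the point of abess is to reach the same kind of solution without enumerating all subsets.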
The abess
software provides both Python and R interfaces. A quick start is given here; for more details, please view: Installation.
Install the stable version of the Python package from PyPI:
$ pip install abess
or conda-forge:
$ conda install -c conda-forge abess
Best subset selection for linear regression on a simulated dataset in Python:
from abess.linear import LinearRegression
from abess.datasets import make_glm_data

# simulate n = 300 samples with p = 1000 predictors,
# of which k = 10 are truly relevant
sim_dat = make_glm_data(n = 300, p = 1000, k = 10, family = "gaussian")

model = LinearRegression()
model.fit(sim_dat.x, sim_dat.y)
See more examples analyzed with Python in the Python tutorials.
Install the stable version of the R package from CRAN with:
install.packages("abess")
Best subset selection for linear regression on a simulated dataset in R:
library(abess)
sim_dat <- generate.data(n = 300, p = 1000)
abess(x = sim_dat[["x"]], y = sim_dat[["y"]])
See more examples analyzed with R in the R tutorials.
To show the computational power of abess, we assess its CPU execution time (in seconds) on synthetic datasets and compare it with state-of-the-art variable selection methods. The variable selection and estimation results are deferred to Python performance and R performance. All computations are conducted on an Ubuntu platform with an Intel(R) Core(TM) i9-9940X CPU @ 3.30GHz and 48 GB RAM.
We compare the abess Python package with scikit-learn on linear regression and logistic regression. Results are presented in the figure below:
It can be seen that abess finds the solution in the least runtime. These results can be reproduced by running the following command in a shell:
$ python abess/docs/simulation/Python/timings.py
We compare the abess R package with three widely used R packages: glmnet, ncvreg, and L0Learn.
We get the runtime comparison results:
Compared with the other packages, abess shows competitive computational efficiency, and it performs best when the variables are highly correlated.
Running the following command in a shell reproduces the above results in R:
$ Rscript abess/docs/simulation/R/timings.R
abess
is free software, and its source code is publicly available on GitHub. The core framework is programmed in C++, with user-friendly R and Python interfaces. You can redistribute it and/or modify it under the terms of the GPL-v3 license. We welcome contributions to abess, especially ones extending abess to other best subset selection problems.
New features in version 0.4.7:
- Support restricting the coefficients to a range by a clipping method. One application is to perform non-negative fitting.
- Support the AUC criterion for logistic and multinomial regression.
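The clipping idea behind non-negative fitting can be illustrated with a projected-gradient loop on a least squares objective. This is a plain-numpy sketch with a hypothetical helper name, not abess's implementation:

```python
# Sketch of range-restricted least squares: after each gradient step the
# coefficients are clipped into [lower, upper]. With lower=0 this yields
# a non-negative fit. Illustration only; not abess's algorithm.
import numpy as np

def clipped_least_squares(X, y, lower=0.0, upper=np.inf, n_iter=1000):
    """Projected gradient descent for min ||y - X b||^2 s.t. lower <= b <= upper."""
    beta = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        beta = np.clip(beta - step * grad, lower, upper)  # the clipping step
    return beta

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
true_beta = np.array([2.0, 0.0, 1.5, 0.0, 3.0])
y = X @ true_beta + 0.05 * rng.standard_normal(200)
beta = clipped_least_squares(X, y)  # every entry is >= 0 by construction
```

The projection (clipping) keeps every iterate feasible, so the returned coefficients always respect the requested range.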
New features in version 0.4.6:
- Support no-intercept models for most regressors in abess.linear with the argument fit_intercept=False. We assume that the data have been centered for these models. (Python)
- abess can be used via mlr3extralearners as the learners regr.abess and classif.abess. (R)
- Use CMake on compiling to increase scalability.
- Support score functions for all GLM models. (Python)
- Rearrange some arguments in the Python package to improve legibility. Please check the latest API document. (Python)
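For intuition, a score function for a Gaussian GLM is typically a goodness-of-fit measure such as the coefficient of determination. The sketch below computes R² by hand for an ordinary least squares fit; it is a generic illustration in plain numpy, and abess's exact score definition should be checked in the API document:

```python
# Generic illustration of a regression score (R^2) for a linear model fit by
# ordinary least squares. This mimics the sklearn-style `score` convention;
# treat it as a sketch, not abess's implementation.
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - RSS / TSS."""
    rss = np.sum((y_true - y_pred) ** 2)
    tss = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - rss / tss

rng = np.random.default_rng(2)
X = rng.standard_normal((150, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + 0.1 * rng.standard_normal(150)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(r2_score(y, X @ beta_hat))  # close to 1 for a well-specified model
```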
If you use abess
or reference our tutorials in a presentation or publication, we would appreciate citations of our library.
Zhu Jin, Xueqin Wang, Liyuan Hu, Junhao Huang, Kangkang Jiang, Yanhang Zhang, Shiyun Lin, and Junxian Zhu. "abess: A Fast Best-Subset Selection Library in Python and R." Journal of Machine Learning Research 23, no. 202 (2022): 1-7.
The corresponding BibTeX entry:
@article{JMLR:v23:21-1060,
author = {Jin Zhu and Xueqin Wang and Liyuan Hu and Junhao Huang and Kangkang Jiang and Yanhang Zhang and Shiyun Lin and Junxian Zhu},
title = {abess: A Fast Best-Subset Selection Library in Python and R},
journal = {Journal of Machine Learning Research},
year = {2022},
volume = {23},
number = {202},
pages = {1--7},
url = {http://jmlr.org/papers/v23/21-1060.html}
}
- Junxian Zhu, Canhong Wen, Jin Zhu, Heping Zhang, and Xueqin Wang (2020). A polynomial algorithm for best-subset selection problem. Proceedings of the National Academy of Sciences, 117(52):33117-33123.
- Pölsterl, S (2020). scikit-survival: A Library for Time-to-Event Analysis Built on Top of scikit-learn. J. Mach. Learn. Res., 21(212), 1-6.
- Yanhang Zhang, Junxian Zhu, Jin Zhu, and Xueqin Wang. A splicing approach to best subset of groups selection. INFORMS Journal on Computing, 35(1):104–119, 2023. doi: 10.1287/ijoc.2022.1241.
- Qiang Sun and Heping Zhang (2020). Targeted Inference Involving High-Dimensional Data Using Nuisance Penalized Regression, Journal of the American Statistical Association, DOI: 10.1080/01621459.2020.1737079.