Subset Selection with Shrinkage: Sparse Linear Modeling When the SNR Is Low


Bibliographic Details
Main Authors: Mazumder, Rahul; Radchenko, Peter; Dedieu, Antoine
Other Authors: Sloan School of Management
Format: Article
Language: English
Published: Institute for Operations Research and the Management Sciences (INFORMS), 2022
Online Access: https://hdl.handle.net/1721.1/144220
Description
Abstract: Learning Compact High-Dimensional Models in Noisy Environments. Building compact, interpretable statistical models in which the output depends on a small number of input features is a well-known problem in modern analytics applications. A fundamental tool used in this context is the prominent best subset selection (BSS) procedure, which seeks the best linear fit to the data subject to a constraint on the number of nonzero features. Whereas the BSS procedure works exceptionally well in some regimes, its out-of-sample predictive performance can deteriorate markedly when the underlying data are noisy, which is quite common in practice. In this paper, we explore this relatively less-understood overfitting behavior of BSS in low-signal noisy environments and propose alternatives that appear to mitigate such shortcomings. We study the theoretical statistical properties of our proposed regularized BSS procedure and show promising computational results on various data sets, using tools from integer programming and first-order methods.
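
For reference, a sketch of the formulations the abstract alludes to (generic notation, not drawn from the record itself): best subset selection is commonly posed as a cardinality-constrained least-squares problem, and a shrinkage-regularized variant of the kind described here adds an explicit penalty term. Here y, X, \beta, k, \lambda, and q are assumed standard symbols for the response, design matrix, coefficients, subset size, shrinkage strength, and penalty exponent.

\[
\min_{\beta \in \mathbb{R}^p} \; \tfrac{1}{2}\,\| y - X\beta \|_2^2
\quad \text{subject to} \quad \|\beta\|_0 \le k
\]

\[
\min_{\beta \in \mathbb{R}^p} \; \tfrac{1}{2}\,\| y - X\beta \|_2^2 + \lambda \|\beta\|_q^q
\quad \text{subject to} \quad \|\beta\|_0 \le k, \qquad q \in \{1, 2\},
\]

where \(\|\beta\|_0\) counts the nonzero coefficients, \(k\) bounds the model size, and \(\lambda \ge 0\) controls the amount of shrinkage intended to curb overfitting when the signal-to-noise ratio is low. The first problem is the classical BSS formulation; the second illustrates the regularized version the abstract refers to, which is amenable to integer-programming and first-order solution methods.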