Fast OSCAR and OWL with Safe Screening Rules

ICML 2020 · Runxue Bao, Bin Gu, Heng Huang

Ordered Weighted $L_{1}$ (OWL) norms are a family of regularizers for high-dimensional sparse regression. However, because the penalty is non-separable, existing algorithms are either inapplicable or inefficient when the number of features or samples is large. To address this challenge, we propose the first safe screening rule for OWL-regularized regression, which avoids updating parameters whose coefficients must be zero. Moreover, we prove that the proposed screening rule can be safely applied to standard proximal gradient methods. More importantly, it can also be safely applied to stochastic proximal gradient methods for large-scale learning, making it the first safe screening rule in the stochastic setting. Experimental results on a variety of datasets show that the screening rule yields significant computational gains without any loss of accuracy, compared to existing competitive algorithms.
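To make the setting concrete, below is a minimal sketch (not the authors' code) of the baseline the paper accelerates: plain proximal gradient descent for OWL-regularized least squares. The OWL proximal operator follows the standard sort-plus-isotonic-regression (pool adjacent violators) recipe from the SLOPE/OWL literature; the names `prox_owl` and `prox_grad_owl`, the fixed step size, and the OSCAR-style weights in the usage snippet are illustrative assumptions, and the paper's screening rule itself is not reproduced here.

```python
import numpy as np
from sklearn.isotonic import isotonic_regression

def prox_owl(v, w):
    # Proximal operator of the OWL norm Omega_w(x) = sum_i w_i * |x|_[i],
    # where w is nonincreasing and |x|_[i] denotes |x| sorted descending.
    # Standard recipe: sort magnitudes, shift by w, project onto the
    # nonincreasing cone via isotonic regression, clip at zero.
    abs_v = np.abs(v)
    order = np.argsort(abs_v)[::-1]          # sort |v| in descending order
    z = isotonic_regression(abs_v[order] - w, increasing=False)
    z = np.maximum(z, 0.0)                   # clip negatives to zero
    out = np.empty_like(v, dtype=float)
    out[order] = z
    return np.sign(v) * out                  # restore signs and positions

def prox_grad_owl(X, y, w, n_iter=500):
    # Proximal gradient for min_b 0.5*||y - X b||^2 + Omega_w(b),
    # with fixed step 1/L, L = ||X||_2^2 (Lipschitz constant of the gradient).
    step = 1.0 / np.linalg.norm(X, 2) ** 2
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)
        b = prox_owl(b - step * grad, step * w)
    return b

# Illustrative usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
b_true = np.zeros(20)
b_true[:3] = 2.0
y = X @ b_true + 0.1 * rng.standard_normal(100)
# OSCAR is the special case w_i = lambda1 + lambda2 * (p - 1 - i).
w = 0.1 + 0.05 * np.arange(X.shape[1] - 1, -1, -1)
b_hat = prox_grad_owl(X, y, w)
```

In the method of the paper, a safe screening test would be interleaved with these iterations to permanently eliminate features whose coefficients are certified to be zero, shrinking the effective problem as the solver runs; that test is specific to the paper and omitted from this sketch.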

No code implementations yet.
