Sparse Classification and Phase Transitions: A Discrete Optimization Perspective

3 Oct 2017 · Dimitris Bertsimas, Jean Pauphilet, Bart Van Parys

In this paper, we formulate the sparse classification problem of $n$ samples with $p$ features as a binary convex optimization problem and propose a cutting-plane algorithm to solve it exactly. For sparse logistic regression and sparse SVM, our algorithm finds optimal solutions for $n$ and $p$ in the $10,000$s within minutes. On synthetic data, our algorithm exhibits a phase transition phenomenon: there exists an $n_0$ such that for $n < n_0$ the algorithm takes a long time to find the optimal solution and does not recover the correct support, while for $n \geqslant n_0$ the algorithm is very fast, accurately detects all the true features, and does not return any false features. When the data is generated by $y_i = \mathrm{sign}\left(x_i^T w^{\star} + \varepsilon_i\right)$, with $w^\star \in \{0,1\}^p$, $|\mathrm{supp}(w^{\star})| = k$, and $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$, we prove that $n_0 > 6 \pi^2 \left(2 + \sigma^2\right) k \log(p-k)$. In contrast, while Lasso accurately detects all the true features, it persistently returns incorrect features, even as the number of observations increases. Finally, we apply our method to classifying the type of cancer using gene expression data from the Cancer Genome Atlas Research Network, with $n=1,145$ lung cancer patients and $p=14,858$ genes. Sparse classification using logistic regression returns a classifier based on $50$ genes versus $171$ for Lasso, and using SVM, $30$ genes versus $172$ for Lasso, with similar predictive accuracy.
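To make the generative model and the sample-size bound concrete, here is a minimal Python sketch that draws a synthetic instance $y_i = \mathrm{sign}(x_i^T w^\star + \varepsilon_i)$ and evaluates the phase-transition threshold from the abstract. It is illustrative only: the i.i.d. standard-normal design, the function names, and the random seed are assumptions not specified in the abstract, and the sketch does not implement the paper's cutting-plane algorithm.

```python
import numpy as np

def generate_data(n, p, k, sigma, seed=0):
    """Draw a synthetic sparse classification instance following
    y_i = sign(x_i^T w* + eps_i), with w* in {0,1}^p supported on
    k coordinates and eps_i ~ N(0, sigma^2). The i.i.d. N(0,1)
    design is an assumption; the abstract does not specify it."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, p))        # assumed standard-normal features
    w_star = np.zeros(p)
    support = rng.choice(p, size=k, replace=False)
    w_star[support] = 1.0                  # w* in {0,1}^p, |supp(w*)| = k
    eps = rng.normal(0.0, sigma, size=n)   # eps_i ~ N(0, sigma^2)
    y = np.sign(X @ w_star + eps)          # labels in {-1, +1} a.s.
    return X, y, support

def sample_threshold(p, k, sigma):
    """Lower bound on the phase-transition sample size proved in
    the paper: n_0 > 6 pi^2 (2 + sigma^2) k log(p - k)."""
    return 6 * np.pi**2 * (2 + sigma**2) * k * np.log(p - k)

if __name__ == "__main__":
    p, k, sigma = 1000, 10, 0.5
    n0 = sample_threshold(p, k, sigma)
    print(f"phase-transition bound: n_0 > {n0:.0f} samples")
    # Sampling above the bound puts us in the regime where the paper
    # reports fast solves and exact support recovery.
    X, y, support = generate_data(n=int(2 * n0), p=p, k=k, sigma=sigma)
```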
