Sparse Linear Regression with Constraints: A Flexible Entropy-based Framework

14 Nov 2023  ·  Amber Srivastava, Alisina Bayati, Srinivasa Salapaka

This work presents a new approach to the sparse linear regression problem, i.e., determining a k-sparse vector w in R^d that minimizes the cost ||y - Aw||_2^2. In contrast to existing methods, our approach splits this k-sparse vector into two parts: (a) a column-stochastic binary matrix V, and (b) a vector x in R^k. The binary matrix V encodes the locations of the k non-zero entries in w; equivalently, it encodes the subset of k columns of A that map w to y. We demonstrate that this enables modeling several non-trivial, application-specific structural constraints on w as constraints on V. The vector x comprises the actual non-zero values in w. We use the Maximum Entropy Principle (MEP) to solve the resulting optimization problem. In particular, we ascribe a probability distribution to the set of all feasible binary matrices V, and iteratively determine this distribution and the vector x such that the associated Shannon entropy is maximized while the regression cost attains a pre-specified value. The resulting algorithm employs a homotopy from the convex entropy function to the non-convex cost function to avoid poor local minima. We demonstrate the efficacy and flexibility of the proposed approach in incorporating a variety of practical constraints that are otherwise difficult to model with existing benchmark methods.
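To make the reparameterization concrete, the following is a minimal NumPy sketch (not the authors' implementation; the synthetic problem, the fixed support, and helper names such as `selection_matrix` and `fit_given_support` are illustrative assumptions). It shows how a chosen support defines the column-stochastic binary matrix V, how w = Vx reassembles the k-sparse vector, and that, once V is fixed, the non-zero values x follow from an ordinary least-squares fit on the selected columns of A.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small synthetic problem: y = A w_true + noise, with w_true k-sparse.
d, n, k = 20, 50, 3
A = rng.standard_normal((n, d))
support_true = rng.choice(d, size=k, replace=False)
w_true = np.zeros(d)
w_true[support_true] = rng.standard_normal(k)
y = A @ w_true + 0.01 * rng.standard_normal(n)


def selection_matrix(support, d):
    """Column-stochastic binary matrix V (d x k): V[j, i] = 1 iff the
    i-th non-zero entry of w sits at coordinate j."""
    k = len(support)
    V = np.zeros((d, k))
    V[np.asarray(support), np.arange(k)] = 1.0
    return V


def fit_given_support(A, y, support):
    """With the support (i.e., V) fixed, the non-zero values x solve an
    ordinary least-squares problem on the selected columns of A."""
    V = selection_matrix(support, A.shape[1])
    A_sel = A @ V                       # same columns as A[:, support]
    x, *_ = np.linalg.lstsq(A_sel, y, rcond=None)
    w = V @ x                           # reassemble the k-sparse vector
    cost = np.sum((y - A @ w) ** 2)
    return w, x, cost


# Evaluate the reparameterization w = V x on the (here known) true support.
w_hat, x_hat, cost = fit_given_support(A, y, support_true)
print("regression cost on true support:", cost)
```

The paper's actual contribution lies in determining V itself: MEP ascribes a probability distribution over feasible matrices V and the algorithm anneals from the entropy term toward the regression cost, with structural constraints on w imposed as constraints on the columns of V. The sketch above only illustrates the decomposition for a given support.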
