Variable Selection
127 papers with code • 0 benchmarks • 0 datasets
Benchmarks
These leaderboards are used to track progress in Variable Selection. No benchmarks are currently available for this task.
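As a concrete illustration of the task itself, a minimal sketch of one of the most common variable selection methods, the lasso, which shrinks the coefficients of uninformative features exactly to zero. The synthetic data, feature counts, and `alpha` value below are illustrative assumptions, not drawn from any listed paper:

```python
# Illustrative sketch: variable selection via the lasso (L1-penalized
# least squares). The data, true support, and alpha are arbitrary choices.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[[0, 3, 7]] = [3.0, -2.0, 1.5]           # true support: features 0, 3, 7
y = X @ beta + 0.1 * rng.standard_normal(n)

model = Lasso(alpha=0.1).fit(X, y)
# Features with nonzero coefficients are the selected variables.
selected = np.flatnonzero(np.abs(model.coef_) > 1e-6)
print(selected.tolist())
```

With a strong signal and moderate `alpha`, the nonzero coefficients recover the true support; in practice `alpha` is chosen by cross-validation (e.g. `LassoCV`).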
Libraries
Use these libraries to find Variable Selection models and implementations.

Latest papers
Structured Learning in Time-dependent Cox Models
We propose a flexible framework for variable selection in time-dependent Cox models, accommodating complex selection rules.
TreeDQN: Learning to minimize Branch-and-Bound tree
Combinatorial optimization problems require, in the worst case, an exhaustive search to find the optimal solution.
Sparsifying Bayesian neural networks with latent binary variables and normalizing flows
In this paper, we consider two extensions to the LBBNN method. First, by using the local reparametrization trick (LRT) to sample the hidden units directly, we obtain a more computationally efficient algorithm.
System Identification with Copula Entropy
In this paper we propose a method for identifying differential equation of dynamical systems with CE.
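Since copula entropy equals negative mutual information, CE-based feature ranking can be sketched by rank-transforming each variable to its empirical copula and estimating mutual information with the target. The estimator below (sklearn's k-NN MI estimator) and the synthetic data are stand-ins for illustration, not the paper's implementation:

```python
# Sketch of copula-entropy-style feature ranking. Uses the identity
# CE(x, y) = -MI(x, y): ranking by MI on rank-transformed (copula) data
# approximates ranking by negative copula entropy. Estimator choice and
# data are illustrative assumptions.
import numpy as np
from scipy.stats import rankdata
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
n, p = 500, 6
X = rng.standard_normal((n, p))
# Only features 2 and 4 drive the response (one nonlinearly).
y = np.sin(X[:, 2]) + 0.5 * X[:, 4] + 0.1 * rng.standard_normal(n)

# Empirical copula: map each marginal to (0, 1) via normalized ranks.
U = rankdata(X, axis=0) / (n + 1)
v = rankdata(y) / (n + 1)

mi = mutual_info_regression(U, v, random_state=0)
ranking = np.argsort(mi)[::-1]
print(ranking[:2].tolist())   # the informative features should rank first
```

Because the copula transform discards the marginals, this ranking is invariant to monotone transformations of each feature, which is the usual argument for CE-based selection.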
Synthesize High-dimensional Longitudinal Electronic Health Records via Hierarchical Autoregressive Language Model
In this paper, we propose the Hierarchical Autoregressive Language mOdel (HALO) for generating longitudinal high-dimensional EHR, which preserves the statistical properties of real EHR and can be used to train accurate ML models without privacy concerns.
Dual-sPLS: a family of Dual Sparse Partial Least Squares regressions for feature selection and prediction with tunable sparsity; evaluation on simulated and near-infrared (NIR) data
A quantitative prediction objective can be enriched by qualitative data interpretation, for instance by locating the most influential features.
Learning a Generic Value-Selection Heuristic Inside a Constraint Programming Solver
Important design choices in a solver are the branching heuristics, which are designed to lead the search to the best solutions in a minimum amount of time.
Multi-Task Learning for Sparsity Pattern Heterogeneity: A Discrete Optimization Approach
Allowing the regression coefficients of tasks to have different sparsity patterns (i.e., different supports), we propose a modeling framework for MTL that encourages models to share information across tasks, for a given covariate, by separately 1) shrinking the coefficient supports together and/or 2) shrinking the coefficient values together.
RFFNet: Large-Scale Interpretable Kernel Methods via Random Fourier Features
Kernel methods provide a flexible and theoretically grounded approach to nonlinear and nonparametric learning.
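The random-features idea behind this line of work can be sketched in a few lines. This is the generic random Fourier features construction for an RBF kernel, not the paper's RFFNet model; the dimensions, bandwidth, and test points are arbitrary choices:

```python
# Generic random Fourier features sketch: z(x) = sqrt(2/D) * cos(Wx + b)
# satisfies E[z(x)^T z(y)] = exp(-||x - y||^2 / (2 sigma^2)), the RBF
# kernel, so an explicit D-dimensional feature map replaces the kernel.
import numpy as np

rng = np.random.default_rng(42)
d, D, sigma = 5, 4000, 1.0             # input dim, feature count, bandwidth

W = rng.standard_normal((D, d)) / sigma          # frequencies ~ N(0, sigma^-2 I)
b = rng.uniform(0.0, 2.0 * np.pi, size=D)        # random phases

def z(x):
    """Map x to D random Fourier features approximating the RBF kernel."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
exact = np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))
approx = z(x) @ z(y)
print(abs(exact - approx))             # Monte Carlo error shrinks as O(1/sqrt(D))
```

A linear model fit on `z(X)` then scales linearly in the number of samples, which is what makes such kernel approximations attractive at large scale.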
Near-optimal multiple testing in Bayesian linear models with finite-sample FDR control
In this paper, we develop near-optimal multiple testing procedures for high dimensional Bayesian linear models with isotropic covariates.