no code implementations • 10 Jan 2022 • Martin Slawski, Bodhisattva Sen
We study permutation recovery in the permuted regression setting and develop a computationally efficient and easy-to-use algorithm for denoising based on the Kiefer-Wolfowitz nonparametric maximum likelihood estimator.
no code implementations • 2 Nov 2021 • Zhenbang Wang, Emanuel Ben-David, Martin Slawski
In the analysis of data sets consisting of (X, Y)-pairs, a tacit assumption is that each pair corresponds to the same observation unit.
no code implementations • 5 Nov 2019 • Yujing Chen, Yue Ning, Martin Slawski, Huzefa Rangwala
In this paper, we present an Asynchronous Online Federated Learning (ASO-Fed) framework, where the edge devices perform online learning with continuous streaming local data and a central server aggregates model parameters from clients.
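The asynchronous aggregation idea can be illustrated with a minimal sketch in which the server merges each arriving client model immediately, weighted by a mixing rate, rather than waiting for a synchronized round. The class and parameter names (`AsyncFedServer`, `mix`) are illustrative, not the ASO-Fed API:

```python
import numpy as np

class AsyncFedServer:
    """Minimal sketch of asynchronous aggregation: each client update is
    merged into the global model as soon as it arrives."""
    def __init__(self, dim, mix=0.5):
        self.w = np.zeros(dim)  # global model parameters
        self.mix = mix          # mixing rate for incoming client models

    def receive(self, client_w):
        # blend the current global model with the arriving client model
        self.w = (1 - self.mix) * self.w + self.mix * np.asarray(client_w)
        return self.w.copy()
```

In a synchronous scheme the server would instead average a full cohort of client updates per round; the per-arrival merge above is what lets stragglers with streaming local data contribute without blocking the others.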
no code implementations • 3 Oct 2019 • Martin Slawski, Guoqing Diao, Emanuel Ben-David
In this paper, we present a method to adjust for such mismatches under "partial shuffling" in which a sufficiently large fraction of (predictors, response)-pairs are observed in their correct correspondence.
no code implementations • 5 Sep 2019 • Hang Zhang, Martin Slawski, Ping Li
For the case in which both the signal and permutation are unknown, the problem is reformulated as a bi-convex optimization problem with an auxiliary variable, which can be solved by the Alternating Direction Method of Multipliers (ADMM).
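A simplified variant of this idea alternates between a least-squares step for the signal and a linear-assignment step for the permutation; the sketch below omits the auxiliary variable and the ADMM updates of the paper, and the function name is illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def alternating_permuted_ls(X, y, n_iter=20):
    """Alternate between the signal estimate (least squares) and the
    permutation estimate (linear assignment) for the model y ~ P X b."""
    n = X.shape[0]
    perm = np.arange(n)  # start from the identity alignment
    for _ in range(n_iter):
        # signal step: ordinary least squares under the current alignment
        b, *_ = np.linalg.lstsq(X, y[perm], rcond=None)
        fitted = X @ b
        # permutation step: match each response y[i] to a fitted value
        cost = (y[:, None] - fitted[None, :]) ** 2
        row, col = linear_sum_assignment(cost)
        perm = np.empty(n, dtype=int)
        perm[col] = row  # y[perm[j]] is paired with fitted[j]
    return b, perm
```

Each assignment step can only decrease the misfit for the current signal estimate, so the scheme monotonically improves the joint objective, though (as with ADMM on a bi-convex problem) it may stop at a local solution.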
no code implementations • 16 Jul 2019 • Martin Slawski, Emanuel Ben-David, Ping Li
A tacit assumption in linear regression is that (response, predictor)-pairs correspond to identical observational units.
no code implementations • 17 May 2018 • Felicitas J. Detmer, Martin Slawski
Categorical regressor variables are usually handled by introducing a set of indicator variables, and imposing a linear constraint to ensure identifiability in the presence of an intercept, or equivalently, using one of various coding schemes.
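Reference (dummy) coding with a dropped first level is the most common instance of such a scheme; a minimal sketch, with an illustrative function name:

```python
import numpy as np

def dummy_code(values, drop_first=True):
    """Indicator-variable encoding of a categorical variable.
    Dropping the first level keeps the design identifiable when an
    intercept column is also present (reference coding)."""
    levels = sorted(set(values))
    used = levels[1:] if drop_first else levels
    Z = np.array([[1.0 if v == u else 0.0 for u in used] for v in values])
    return Z, used
```

Keeping all indicator columns and instead constraining their coefficients to sum to zero yields an equivalent, identifiable parameterization; the two differ only in how the fitted effects are reported.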
no code implementations • NeurIPS 2017 • Ping Li, Martin Slawski
Random projections have been increasingly adopted for a diverse set of tasks in machine learning involving dimensionality reduction.
no code implementations • 16 Oct 2017 • Martin Slawski, Emanuel Ben-David
In this paper, we consider the situation of "permuted data" in which this basic correspondence has been lost.
no code implementations • 23 Sep 2017 • Martin Slawski
In this paper, we present an analysis showing that for random projections satisfying a Johnson-Lindenstrauss embedding property, the prediction error in subsequent regression is close to that of PCR, at the expense of requiring a slightly larger number of random projections than principal components.
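The compared procedure can be sketched as least squares on a Gaussian random projection of the design (a JL-type embedding), with the fitted coefficients mapped back through the projection; the function name is illustrative:

```python
import numpy as np

def compressed_ls(X, y, k, seed=0):
    """Least squares on a k-dimensional Gaussian random projection of X,
    an alternative to regressing on the top-k principal components."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)  # JL-type map
    w, *_ = np.linalg.lstsq(X @ R, y, rcond=None)
    return R @ w  # coefficients expressed in the original feature space
```

Unlike PCR, no eigendecomposition of the design is needed; the price, per the analysis above, is that `k` must be chosen slightly larger than the number of principal components one would retain.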
no code implementations • NeurIPS 2016 • Ping Li, Michael Mitzenmacher, Martin Slawski
Random projections constitute a simple, yet effective technique for dimensionality reduction with applications in learning and search problems.
no code implementations • 2 May 2016 • Ping Li, Syama Sundar Rangapuram, Martin Slawski
The de facto standard approach of promoting sparsity by means of $\ell_1$-regularization becomes ineffective in the presence of simplex constraints, i.e., the target is known to have non-negative entries summing up to a given constant.
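Algorithms in this setting typically rely on the Euclidean projection onto the simplex; the standard sorting-based projection (a general-purpose routine, not specific to this paper) can be written as:

```python
import numpy as np

def project_simplex(v, s=1.0):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = s}."""
    u = np.sort(v)[::-1]                      # sort in decreasing order
    css = np.cumsum(u)
    k = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * k > css - s)[0][-1]  # last index kept positive
    theta = (css[rho] - s) / (rho + 1.0)      # shared shift
    return np.maximum(v - theta, 0.0)
```

The projection subtracts a common threshold from all entries and clips at zero, which is exactly why $\ell_1$-regularization is vacuous here: every feasible point already has the same $\ell_1$-norm $s$.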
no code implementations • NeurIPS 2015 • Martin Slawski, Ping Li
We consider the problem of sparse signal recovery from $m$ linear measurements quantized to $b$ bits.
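The measurement model can be illustrated with a uniform b-bit scalar quantizer mapping each measurement to a bin midpoint (a sketch under simple assumptions; the paper's quantization scheme may differ):

```python
import numpy as np

def quantize(z, b, lo, hi):
    """Map measurements z to the midpoints of 2**b uniform bins on [lo, hi]."""
    levels = 2 ** b
    step = (hi - lo) / levels
    # clip to the range, then assign each value to its bin index
    idx = np.minimum(((np.clip(z, lo, hi) - lo) / step).astype(int), levels - 1)
    return lo + (idx + 0.5) * step
```

With b = 1 this reduces to sign-type (one-bit) measurements up to scaling, the extreme case of the bit-budget trade-off studied above.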
no code implementations • NeurIPS 2015 • Martin Slawski, Ping Li, Matthias Hein
Over the past few years, trace regression models have received considerable attention in the context of matrix completion, quantum state tomography, and compressed sensing.
no code implementations • 26 Apr 2014 • Martin Slawski, Matthias Hein
Consider a random vector with finite second moments.
no code implementations • NeurIPS 2013 • Martin Slawski, Matthias Hein, Pavlo Lutsik
Motivated by an application in computational biology, we consider low-rank matrix factorization with $\{0, 1\}$-constraints on one of the factors and optionally convex constraints on the second one.
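For small inner dimension r, the $\{0,1\}$-constrained factor can be updated exactly by enumerating all $2^r$ candidate rows; a sketch of that combinatorial step (names illustrative):

```python
import numpy as np
from itertools import product

def binary_factor_step(D, A):
    """Given data D (m x n) and the real factor A (r x n), choose each
    row of the binary factor T in {0,1}^r by exhaustive enumeration."""
    cands = np.array(list(product([0.0, 1.0], repeat=A.shape[0])))  # (2^r, r)
    recon = cands @ A                                               # (2^r, n)
    # squared residual of every data row against every candidate row
    resid = ((D[:, None, :] - recon[None, :, :]) ** 2).sum(axis=2)  # (m, 2^r)
    return cands[np.argmin(resid, axis=1)]
```

Alternating this exact binary update with a (possibly constrained) convex update of the second factor yields a simple block-coordinate scheme for the factorization.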
no code implementations • 4 May 2012 • Martin Slawski, Matthias Hein
We show that for these designs, the performance of NNLS with regard to prediction and estimation is comparable to that of the lasso.
no code implementations • NeurIPS 2011 • Martin Slawski, Matthias Hein
Non-negative data are commonly encountered in numerous fields, making non-negative least squares regression (NNLS) a frequently used tool.
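A minimal usage sketch with scipy's NNLS solver; the data here are synthetic and illustrative:

```python
import numpy as np
from scipy.optimize import nnls

# synthetic non-negative design with a sparse non-negative signal
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(100, 4))
b_true = np.array([2.0, 0.0, 1.5, 0.0])
y = X @ b_true

# solves min ||X b - y||_2 subject to b >= 0; rnorm is the residual norm
b_hat, rnorm = nnls(X, y)
```

Note that no explicit regularizer is used: for designs of this kind, the non-negativity constraint alone can act as a sparsity-promoting mechanism, which is the phenomenon the line of work above compares against the lasso.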