Search Results for author: Wojciech Kotłowski

Found 15 papers, 2 papers with code

Noise misleads rotation invariant algorithms on sparse targets

no code implementations • 5 Mar 2024 • Manfred K. Warmuth, Wojciech Kotłowski, Matt Jones, Ehsan Amid

It is well known that the class of rotation-invariant algorithms is suboptimal even for learning sparse linear problems when the number of examples is below the "dimension" of the problem.

Learning from Randomly Initialized Neural Network Features

no code implementations • 13 Feb 2022 • Ehsan Amid, Rohan Anil, Wojciech Kotłowski, Manfred K. Warmuth

We present the surprising result that randomly initialized neural networks are good feature extractors in expectation.
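
As a rough illustration of the idea (a minimal sketch, not the paper's experimental setup; the ReLU hidden layer, the width, and the ridge readout are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 10 dimensions with a nonlinear labeling rule.
X = rng.standard_normal((200, 10))
y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1).astype(float)

# A randomly initialized, never-trained hidden layer used purely as a
# fixed feature extractor.
W = rng.standard_normal((10, 512)) / np.sqrt(10)
features = np.maximum(X @ W, 0.0)  # ReLU features

# Only a linear readout on top of the frozen random features is fit,
# here by ridge regression via the normal equations.
lam = 1e-2
w = np.linalg.solve(features.T @ features + lam * np.eye(512), features.T @ y)
print("train MSE:", np.mean((features @ w - y) ** 2))
```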

Robust Online Convex Optimization in the Presence of Outliers

no code implementations • 5 Jul 2021 • Tim van Erven, Sarah Sachs, Wouter M. Koolen, Wojciech Kotłowski

If the outliers are chosen adversarially, we show that a simple filtering strategy on extreme gradients incurs $O(k)$ additive overhead compared to the usual regret bounds, and that this is unimprovable, which means that $k$ needs to be sublinear in the number of rounds.
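
A minimal sketch of the filtering idea (the norm threshold and step size are illustrative choices, not the paper's algorithm or tuning):

```python
import numpy as np

def filtered_ogd(gradients, eta=0.1, threshold=10.0, dim=2):
    """Online gradient descent that skips rounds with extreme gradients."""
    w = np.zeros(dim)
    for g in gradients:
        if np.linalg.norm(g) > threshold:
            continue  # treat this round as an outlier: take no step
        w = w - eta * np.asarray(g)
    return w

rng = np.random.default_rng(1)
grads = [rng.standard_normal(2) for _ in range(100)]
grads[10] = np.array([1e6, -1e6])  # one adversarially corrupted round
print(filtered_ogd(grads))
```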

A case where a spindly two-layer linear network whips any neural network with a fully connected input layer

no code implementations • 16 Oct 2020 • Manfred K. Warmuth, Wojciech Kotłowski, Ehsan Amid

It was conjectured that a neural network of any structure, with arbitrary differentiable transfer functions at the nodes, cannot learn the following problem sample-efficiently when trained with gradient descent: the instances are the rows of a $d$-dimensional Hadamard matrix and the target is one of the features, i.e., very sparse.
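
The hard instance is easy to write down. Below is a sketch of it together with a reading of the "spindly" network from the title, assuming (as an illustration, with SciPy for the Hadamard construction) that each input feeds the output through a chain of two weights, so the effective weight vector is the elementwise product u * v:

```python
import numpy as np
from scipy.linalg import hadamard

d = 64                          # must be a power of two
X = hadamard(d).astype(float)   # instances: rows of a Hadamard matrix
y = X[:, 0]                     # target: a single feature (1-sparse)

rng = np.random.default_rng(0)
u = rng.normal(0, 0.1, d)
v = rng.normal(0, 0.1, d)
eta = 0.01
for _ in range(2000):
    w = u * v                         # effective linear weights
    g = (X @ w - y) @ X / d           # (scaled) squared-loss gradient in w
    u, v = u - eta * g * v, v - eta * g * u   # chain rule through w = u * v
print("index of largest effective weight:", np.argmax(np.abs(u * v)))
```

In this toy run, gradient descent on the product parameterization behaves multiplicatively and concentrates the effective weight on the target coordinate, which is the contrast the title draws with fully connected input layers.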

Adaptive scale-invariant online algorithms for learning linear models

no code implementations • 20 Feb 2019 • Michał Kempka, Wojciech Kotłowski, Manfred K. Warmuth

We consider online learning with linear models, where the algorithm predicts on sequentially revealed instances (feature vectors), and is compared against the best linear function (comparator) in hindsight.
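
A minimal sketch of this protocol (plain online gradient descent with a fixed step size stands in for the paper's adaptive, scale-invariant updates):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 500, 5
w_star = rng.standard_normal(d)

w, eta = np.zeros(d), 0.05
alg_loss, X_hist, y_hist = 0.0, [], []
for t in range(T):
    x = rng.standard_normal(d)                    # instance revealed
    y = x @ w_star + 0.1 * rng.standard_normal()
    pred = w @ x                                  # predict, then see the label
    alg_loss += (pred - y) ** 2
    w -= eta * 2 * (pred - y) * x                 # gradient step on squared loss
    X_hist.append(x); y_hist.append(y)

# Regret is measured against the best fixed linear comparator in hindsight.
X_hist, y_hist = np.array(X_hist), np.array(y_hist)
w_hat = np.linalg.lstsq(X_hist, y_hist, rcond=None)[0]
print("regret:", alg_loss - np.sum((X_hist @ w_hat - y_hist) ** 2))
```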

Bandit Principal Component Analysis

no code implementations • 8 Feb 2019 • Wojciech Kotłowski, Gergely Neu

We consider a partial-feedback variant of the well-studied online PCA problem where a learner attempts to predict a sequence of $d$-dimensional vectors in terms of a quadratic loss, while only having limited feedback about the environment's choices.

Decision Making
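
In online PCA the learner plays a rank-$k$ projection and pays the quadratic compression loss of the revealed vector; in the bandit variant only that scalar loss is observed. A sketch of the loss computation (the protocol details here are read off the abstract, not taken from the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 2

B = np.linalg.qr(rng.standard_normal((d, k)))[0]  # orthonormal basis
P = B @ B.T                                       # rank-k projection played

x = rng.standard_normal(d)     # environment's hidden choice
loss = x @ x - x @ P @ x       # ||x - Px||^2, since P is a projection
print("the only feedback the learner sees:", loss)
```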

The Many Faces of Exponential Weights in Online Learning

no code implementations • 21 Feb 2018 • Dirk van der Hoeven, Tim van Erven, Wojciech Kotłowski

A standard introduction to online learning might place Online Gradient Descent at its center and then proceed to develop generalizations and extensions like Online Mirror Descent and second-order methods.

Second-order methods
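
For reference, the exponential weights (Hedge) update that the paper takes as its unifying object, in a minimal self-contained form (the learning rate here is an arbitrary choice):

```python
import numpy as np

def exponential_weights(loss_matrix, eta=0.5):
    """Hedge over K experts; row t of loss_matrix holds round-t losses."""
    T, K = loss_matrix.shape
    cum = np.zeros(K)
    played = []
    for t in range(T):
        w = np.exp(-eta * cum)          # weights ∝ exp(-eta * cumulative loss)
        played.append(w / w.sum())      # distribution played in round t
        cum += loss_matrix[t]
    return played

rng = np.random.default_rng(0)
losses = rng.random((100, 4))
losses[:, 2] *= 0.3                     # expert 2 is best on average
print(exponential_weights(losses)[-1])  # mass concentrates on expert 2
```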

Scale-invariant unconstrained online learning

no code implementations • 23 Aug 2017 • Wojciech Kotłowski

We first give a negative result, showing that no algorithm can achieve a meaningful bound in terms of the scale-invariant norm of the comparator in the worst case.

Consistency Analysis for Binary Classification Revisited

no code implementations • ICML 2017 • Krzysztof Dembczyński, Wojciech Kotłowski, Oluwasanmi Koyejo, Nagarajan Natarajan

Statistical learning theory is at an inflection point enabled by recent advances in understanding and optimizing a wide range of metrics.

Binary Classification, Classification

Online Isotonic Regression

no code implementations • 14 Mar 2016 • Wojciech Kotłowski, Wouter M. Koolen, Alan Malek

We then prove that the Exponential Weights algorithm played over a covering net of isotonic functions has a regret bounded by $O\big(T^{1/3} \log^{2/3}(T)\big)$ and present a matching $\Omega(T^{1/3})$ lower bound on regret.

Regression
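
The offline comparator in that bound is the best nondecreasing fit in hindsight, computable by pool-adjacent-violators; a sketch using scikit-learn on synthetic data:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
t = np.arange(100, dtype=float)
y = np.log1p(t) + rng.normal(0, 0.5, size=100)  # noisy increasing signal

iso_fit = IsotonicRegression().fit_transform(t, y)
print("loss of the best isotonic function in hindsight:",
      np.sum((iso_fit - y) ** 2))
```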

PCA with Gaussian perturbations

no code implementations • 16 Jun 2015 • Wojciech Kotłowski, Manfred K. Warmuth

We develop a simple algorithm that needs $O(kn^2)$ time per trial and whose regret is off by a small factor of $O(n^{1/4})$.
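
A follow-the-perturbed-leader reading of the title, sketched below: perturb the cumulative data matrix with a symmetric Gaussian matrix and play the top-$k$ eigenvectors. The noise scale and the per-trial update are illustrative assumptions; a full eigendecomposition costs $O(n^3)$, so the paper's $O(kn^2)$ bound implies a cheaper update than this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, sigma = 10, 3, 1.0         # sigma: illustrative noise scale

S = np.zeros((n, n))             # cumulative sum of x_t x_t^T
for t in range(50):
    G = rng.standard_normal((n, n))
    noise = sigma * (G + G.T) / 2           # symmetric Gaussian perturbation
    _, evecs = np.linalg.eigh(S + noise)    # eigenvalues ascending
    U = evecs[:, -k:]                       # top-k directions
    P = U @ U.T                             # projection played this trial
    x = rng.standard_normal(n)              # environment's instance
    S += np.outer(x, x)
print("rank of final projection:", round(np.trace(P)))
```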

Surrogate regret bounds for generalized classification performance metrics

no code implementations • 27 Apr 2015 • Wojciech Kotłowski, Krzysztof Dembczyński

We show that the regret of the resulting classifier (obtained from thresholding $f$ on $\widehat{\theta}$) measured with respect to the target metric is upper-bounded by the regret of $f$ measured with respect to the surrogate loss.

Binary Classification, Classification
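
The recipe is short enough to sketch end to end for one target metric; here the F-measure stands in for the generalized metric, and the surrogate is logistic loss (scikit-learn defaults, illustrative throughout):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Step 1: learn a real-valued scorer f by minimizing the logistic loss.
f = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = f.predict_proba(X_val)[:, 1]

# Step 2: threshold f at the value that optimizes the target metric.
thresholds = np.linspace(0.1, 0.9, 17)
theta_hat = max(thresholds, key=lambda th: f1_score(y_val, scores >= th))
print("tuned threshold:", theta_hat)
```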

Consistent optimization of AMS by logistic loss minimization

no code implementations • 5 Dec 2014 • Wojciech Kotłowski

First, a real-valued function is learned by minimizing a surrogate loss for binary classification, such as logistic loss, on the training sample.

Binary Classification, General Classification
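
The same two-step pattern, specialized to AMS. The formula below is the approximate median significance from the HiggsML challenge, $\sqrt{2\big((s + b + b_{reg})\ln(1 + s/(b + b_{reg})) - s\big)}$; in the challenge $s$ and $b$ are importance-weighted counts, whereas this sketch uses raw counts and synthetic scores.

```python
import numpy as np

def ams(s, b, b_reg=10.0):
    """Approximate median significance for s signal and b background counts."""
    return np.sqrt(2.0 * ((s + b + b_reg) * np.log1p(s / (b + b_reg)) - s))

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 5000)
scores = y + rng.normal(0, 1.0, 5000)  # stand-in for a logistic-loss-trained f

# Step 2: sweep the threshold on the real-valued scores to maximize AMS.
best_th = max(
    np.linspace(-1.0, 2.0, 31),
    key=lambda th: ams(np.sum((scores >= th) & (y == 1)),
                       np.sum((scores >= th) & (y == 0))),
)
print("threshold maximizing AMS:", round(best_th, 2))
```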
