Search Results for author: Nicolò Campolongo

Found 3 papers, 1 paper with code

Minimax Optimal Quantile and Semi-Adversarial Regret via Root-Logarithmic Regularizers

1 code implementation • NeurIPS 2021 • Jeffrey Negrea, Blair Bilodeau, Nicolò Campolongo, Francesco Orabona, Daniel M. Roy

Quantile (and, more generally, KL) regret bounds, such as those achieved by NormalHedge (Chaudhuri, Freund, and Hsu 2009) and its variants, relax the goal of competing against the best individual expert to only competing against a majority of experts on adversarial data.
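As a minimal sketch of the quantile-regret notion described in the abstract (the symbols below are illustrative, not the paper's exact notation): with N experts, learner weights p_t, and loss vectors ℓ_t, ε-quantile regret compares the learner's cumulative loss to that of the ⌈εN⌉-th best expert rather than the single best one.

```latex
% Hedged sketch: N, p_t, \ell_t, \varepsilon are illustrative notation,
% not quoted from the paper.
\[
  \mathrm{Regret}_T(\varepsilon)
    = \sum_{t=1}^{T} \langle p_t, \ell_t \rangle
      \;-\; L_T^{(\lceil \varepsilon N \rceil)},
\]
% where L_T^{(k)} denotes the k-th smallest cumulative expert loss
% \sum_{t=1}^{T} \ell_{t,i} over experts i.
```

Guaranteeing this bound simultaneously for all ε (rather than a fixed quantile) is what NormalHedge-style algorithms aim for.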

A closer look at temporal variability in dynamic online learning

no code implementations • 15 Feb 2021 • Nicolò Campolongo, Francesco Orabona

Our proposed algorithm is adaptive not only to the temporal variability of the loss functions, but also to the path length of the sequence of comparators when an upper bound is known.
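A hedged sketch of the two quantities this sentence refers to, in standard dynamic-online-learning notation (assumed here, not quoted from the paper): for predictions x_t and a comparator sequence u_1, …, u_T, the dynamic regret and the path length are typically written as

```latex
% Illustrative notation; see the paper for the precise definitions.
\[
  \mathrm{D\text{-}Regret}_T(u_{1:T})
    = \sum_{t=1}^{T} \ell_t(x_t) - \sum_{t=1}^{T} \ell_t(u_t),
  \qquad
  P_T = \sum_{t=2}^{T} \lVert u_t - u_{t-1} \rVert .
\]
```

The abstract's claim is that the algorithm adapts to both the variability of the losses ℓ_t and to P_T when an upper bound on the latter is available.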

Temporal Variability in Implicit Online Learning

no code implementations • NeurIPS 2020 • Nicolò Campolongo, Francesco Orabona

We prove a novel static regret bound that depends on the temporal variability of the sequence of loss functions, a quantity which is often encountered when considering dynamic competitors.
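A common way to formalize the temporal variability mentioned above is the following (an illustrative form; the paper's exact definition may differ in the norm or domain used):

```latex
% Hedged sketch of the temporal variability of the loss sequence
% over a decision set \mathcal{X}.
\[
  V_T = \sum_{t=2}^{T} \sup_{x \in \mathcal{X}}
        \bigl| \ell_t(x) - \ell_{t-1}(x) \bigr| .
\]
```

Intuitively, V_T is small when consecutive losses change slowly, which is why the same quantity also appears in bounds against dynamic comparators.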
