no code implementations • 7 Jun 2023 • Xavier Fontaine, Félix Gaschi, Parisa Rastin, Yannick Toussaint
However, other methods leveraging translation models can be used to perform NER without annotated data in the target language, by translating either the training set or the test set.
no code implementations • NeurIPS 2020 • Valentin De Bortoli, Alain Durmus, Xavier Fontaine, Umut Simsekli
In contrast to previous work on the subject, we consider settings in which the sequence of step sizes in SGD can depend on both the number of neurons and the iteration count.
no code implementations • 8 Apr 2020 • Xavier Fontaine, Valentin De Bortoli, Alain Durmus
This paper proposes a thorough theoretical analysis of Stochastic Gradient Descent (SGD) with non-increasing step sizes.
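To make the setting concrete, here is a minimal sketch (an illustration, not the paper's actual analysis or code) of SGD with non-increasing step sizes $\gamma_k = \gamma_0 / k^\alpha$ on a toy quadratic objective with noisy gradients; the objective, noise level, and schedule parameters are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd(x0, gamma0=1.0, alpha=0.6, n_iter=5000):
    """SGD on f(x) = 0.5 * ||x||^2 with noisy gradient estimates,
    using the non-increasing step-size schedule gamma_k = gamma0 / k**alpha."""
    x = np.array(x0, dtype=float)
    for k in range(1, n_iter + 1):
        grad = x + 0.1 * rng.standard_normal(x.shape)  # unbiased noisy gradient
        x -= (gamma0 / k**alpha) * grad                # non-increasing step size
    return x

x_final = sgd([5.0, -3.0])
print(np.linalg.norm(x_final))  # small: the iterates concentrate near the minimizer at 0
```

With $\alpha \in (1/2, 1)$ the step sizes decay slowly enough to keep making progress but fast enough to average out the gradient noise, which is the regime such analyses typically target.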
no code implementations • 20 Jun 2019 • Xavier Fontaine, Pierre Perrault, Michal Valko, Vianney Perchet
By minimizing the $\ell^2$-loss $\mathbb{E} [\lVert\hat{\beta}-\beta^{\star}\rVert^2]$, the decision maker is actually minimizing the trace of the covariance matrix of the problem, which then corresponds to online A-optimal design.
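The identity underlying this connection is that, for an unbiased estimator, the expected $\ell^2$-loss equals the trace of the estimator's covariance matrix. A small Monte Carlo check (an illustration under assumed Gaussian noise, not the paper's code) for the ordinary least-squares estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, sigma = 3, 50, 0.5
X = rng.standard_normal((n, d))          # fixed design matrix
beta_star = rng.standard_normal(d)
cov = sigma**2 * np.linalg.inv(X.T @ X)  # covariance of the OLS estimator

errors = []
for _ in range(20000):
    y = X @ beta_star + sigma * rng.standard_normal(n)
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    errors.append(np.sum((beta_hat - beta_star) ** 2))

# E[||beta_hat - beta_star||^2] matches tr(Cov(beta_hat)), so minimizing
# the l2-loss is the same objective as A-optimal design.
print(np.mean(errors), np.trace(cov))
```

Choosing which samples to query so as to shrink this trace is exactly the A-optimality criterion from experimental design, done online here.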
no code implementations • 12 Feb 2019 • Xavier Fontaine, Shie Mannor, Vianney Perchet
This can be recast as a specific stochastic optimization problem where the objective is to maximize the cumulative reward, or equivalently to minimize the regret.
no code implementations • 11 Oct 2018 • Xavier Fontaine, Quentin Berthet, Vianney Perchet
We consider the stochastic contextual bandit problem with additional regularization.