Search Results for author: Eugene Golikov

Found 7 papers, 3 papers with code

Neural Tangent Kernel: A Survey

no code implementations • 29 Aug 2022 • Eugene Golikov, Eduard Pokonechnyy, Vladimir Korviakov

A seminal work [Jacot et al., 2018] demonstrated that training a neural network under a specific parameterization is equivalent to performing a particular kernel method as the width goes to infinity.
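As a rough illustration of the kernel in question (a minimal sketch, not code from the survey), the snippet below computes a single entry of the empirical neural tangent kernel, Theta(x1, x2) = <grad_theta f(x1), grad_theta f(x2)>, for a small MLP at initialization. The architecture, layer sizes, and inputs are arbitrary choices made here for illustration.

```python
# Sketch: one entry of the empirical NTK of a toy MLP at initialization.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

def param_grad(x):
    # Gradient of the scalar network output w.r.t. all parameters, flattened.
    out = net(x.unsqueeze(0)).squeeze()
    grads = torch.autograd.grad(out, list(net.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

x1, x2 = torch.randn(2), torch.randn(2)
g1, g2 = param_grad(x1), param_grad(x2)
print("empirical NTK entry Theta(x1, x2):", torch.dot(g1, g2).item())
```

In the infinite-width limit studied in the NTK literature, this kernel becomes deterministic at initialization and stays constant during training, which is what makes the kernel-method correspondence possible.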

Feature Learning in $L_{2}$-regularized DNNs: Attraction/Repulsion and Sparsity

no code implementations • 31 May 2022 • Arthur Jacot, Eugene Golikov, Clément Hongler, Franck Gabriel

This second reformulation allows us to prove a sparsity result for homogeneous DNNs: any local minimum of the $L_{2}$-regularized loss can be achieved with at most $N(N+1)$ neurons in each hidden layer (where $N$ is the size of the training set).
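To make the scale of the bound concrete (a worked instance of the stated result, with $N$ chosen arbitrarily here):

$$N(N+1)\Big|_{N=100} = 100 \cdot 101 = 10\,100 \ \text{neurons per hidden layer,}$$

i.e. the guaranteed width grows quadratically in the size of the training set.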

Dynamically Stable Infinite-Width Limits of Neural Classifiers

no code implementations • 28 Sep 2020 • Eugene Golikov

Existing mean-field (MF) and NTK limit models, as well as one novel limit model, satisfy most of the properties demonstrated by finite-width models.

Binary Classification

An Essay on Optimization Mystery of Deep Learning

no code implementations • 17 May 2019 • Eugene Golikov

Despite the huge empirical success of deep learning, theoretical understanding of the learning process of neural networks is still lacking.

Differentiable lower bound for expected BLEU score

2 code implementations • 13 Dec 2017 • Vlad Zhukov, Eugene Golikov, Maksim Kretov

In natural language processing tasks, model performance is often measured with a non-differentiable metric, such as the BLEU score.

Reinforcement Learning (RL)
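To illustrate the underlying issue (an illustrative sketch only, not the lower bound derived in the paper): BLEU is built from hard, clipped n-gram counts, which block gradients. One standard way to obtain a differentiable surrogate is to replace hard counts with expected counts under the model's token distribution. The vocabulary size, toy reference, and logits below are arbitrary stand-ins.

```python
# Sketch: a soft, differentiable analogue of BLEU's clipped unigram precision.
import torch

vocab_size = 5
ref = torch.tensor([1, 3, 3])                            # toy reference token ids
logits = torch.randn(3, vocab_size, requires_grad=True)  # toy model scores
probs = torch.softmax(logits, dim=-1)                    # (positions, vocab)

# Reference unigram counts as a dense vector over the vocabulary.
ref_counts = torch.zeros(vocab_size)
ref_counts.scatter_add_(0, ref, torch.ones_like(ref, dtype=torch.float))

# Expected candidate counts per token, clipped by reference counts
# (a soft version of BLEU's clipped n-gram matching for n = 1).
expected_counts = probs.sum(dim=0)
matched = torch.minimum(expected_counts, ref_counts).sum()

matched.backward()                 # gradients now flow back to the logits
print(matched.item(), logits.grad.shape)
```

Because the surrogate is an expectation rather than a count over a decoded sequence, it admits gradients and can be optimized directly, which is the general motivation shared by this line of work and by the RL-based alternatives the task tag above refers to.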
