Search Results for author: Arip Asadulaev

Found 13 papers, 1 paper with code

Unbalanced and Light Optimal Transport

no code implementations14 Mar 2023 Milena Gazdieva, Arip Asadulaev, Alexander Korotin, Evgeny Burnaev

We address this challenge and propose a novel theoretically-justified and lightweight unbalanced EOT solver.
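The paper's own lightweight solver is not public, but the entropic unbalanced OT problem it addresses can be illustrated with classic Sinkhorn-style scaling iterations, where the marginal constraints are relaxed by a KL penalty of strength `rho` (all names and parameter values below are illustrative; this is a didactic sketch, not the proposed method):

```python
import numpy as np

def unbalanced_sinkhorn(a, b, C, eps=0.05, rho=100.0, n_iters=500):
    """Entropic unbalanced OT via Sinkhorn-like scaling iterations.

    a, b : source/target mass vectors; C : cost matrix;
    eps  : entropic regularization strength;
    rho  : KL marginal-relaxation strength (rho -> inf recovers balanced OT).
    """
    K = np.exp(-C / eps)            # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    fi = rho / (rho + eps)          # damping exponent from the KL penalty
    for _ in range(n_iters):
        u = (a / (K @ v)) ** fi
        v = (b / (K.T @ u)) ** fi
    return u[:, None] * K * v[None, :]   # transport plan

# toy example: two uniform 1-D point clouds, quadratic cost
x = np.linspace(0, 1, 5)
y = np.linspace(0, 1, 5)
C = (x[:, None] - y[None, :]) ** 2
a = np.full(5, 0.2)
b = np.full(5, 0.2)
P = unbalanced_sinkhorn(a, b, C)
print(P.sum(axis=1))   # row marginals close to a when rho is large
```

With a large `rho` the plan's marginals nearly match `a` and `b`; shrinking `rho` lets the solver create or destroy mass, which is the "unbalanced" regime the paper targets.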

Easy Batch Normalization

no code implementations18 Jul 2022 Arip Asadulaev, Alexander Panfilov, Andrey Filchenkov

Prior work has shown that adversarial examples can improve object recognition.

Object Recognition
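The adversarial examples the abstract refers to are typically generated by gradient-based attacks such as FGSM. As background (the paper itself is about batch normalization, and the logistic model below is purely hypothetical), a minimal FGSM sketch:

```python
import numpy as np

def fgsm(x, w, b, y, eps=0.1):
    """Fast Gradient Sign Method for a toy logistic-regression model.

    Perturbs input x by eps in the sign of the loss gradient,
    i.e. the direction that (locally) increases the loss.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid prediction
    grad_x = (p - y) * w           # d(cross-entropy)/dx for a linear logit
    return x + eps * np.sign(grad_x)

# toy check: the perturbed point incurs a higher loss than the original
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)
y = 1.0

def loss(x):
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

x_adv = fgsm(x, w, b, y)
print(loss(x_adv) > loss(x))   # True
```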

Adversarial Training Improves Joint Energy-Based Generative Modelling

no code implementations18 Jul 2022 Rostislav Korst, Arip Asadulaev

We propose a novel framework for generative modelling using hybrid energy-based models.

Connecting adversarial attacks and optimal transport for domain adaptation

no code implementations30 May 2022 Arip Asadulaev, Vitaly Shutov, Alexander Korotin, Alexander Panfilov, Andrey Filchenkov

In domain adaptation, the goal is to adapt a classifier trained on the source domain samples to the target domain.

Domain Adaptation

Neural Optimal Transport with General Cost Functionals

no code implementations30 May 2022 Arip Asadulaev, Alexander Korotin, Vage Egiazarian, Petr Mokrov, Evgeny Burnaev

We introduce a novel neural network-based algorithm to compute optimal transport (OT) plans for general cost functionals.

Cycle monotonicity of adversarial attacks for optimal domain adaptation

no code implementations29 Sep 2021 Arip Asadulaev, Vitaly Shutov, Alexander Korotin, Alexander Panfilov, Andrey Filchenkov

In our algorithm, instead of mapping from the target to the source domain, optimal transport maps target samples to the set of adversarial examples.

Domain Adaptation, Semi-supervised Domain Adaptation

Stabilizing Transformer-Based Action Sequence Generation For Q-Learning

no code implementations23 Oct 2020 Gideon Stein, Andrey Filchenkov, Arip Asadulaev

To support these findings, this paper provides an additional example of a Transformer-based RL method.

Q-Learning, Reinforcement Learning (RL)

Wasserstein-2 Generative Networks

4 code implementations ICLR 2021 Alexander Korotin, Vage Egiazarian, Arip Asadulaev, Alexander Safin, Evgeny Burnaev

We propose a novel end-to-end non-minimax algorithm for training optimal transport mappings for the quadratic cost (Wasserstein-2 distance).

Domain Adaptation, Style Transfer
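The quadratic-cost (Wasserstein-2) transport mappings this paper learns with networks have a classical closed form in one dimension: the optimal map is the monotone rearrangement, which between two Gaussians is affine. A sanity sketch of that textbook fact (not the paper's algorithm; the distributions below are arbitrary choices):

```python
import numpy as np

# In 1-D, the Wasserstein-2 optimal (Monge) map is T = F_target^{-1} o F_source.
# Between N(m1, s1^2) and N(m2, s2^2) this reduces to the affine map
#   T(x) = m2 + (s2 / s1) * (x - m1).
rng = np.random.default_rng(0)
m1, s1 = 0.0, 1.0
m2, s2 = 3.0, 2.0
x = rng.normal(m1, s1, size=50_000)    # source samples

T = lambda x: m2 + (s2 / s1) * (x - m1)
y = T(x)                               # pushed-forward samples

# the pushforward matches the target distribution's moments
print(round(y.mean(), 1), round(y.std(), 1))   # ≈ 3.0 2.0
```

The networks in the paper generalize this idea to high dimensions by parameterizing the map as the gradient of a learned convex potential, avoiding the minimax training of adversarial approaches.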

Backronym

no code implementations5 Aug 2019 Arip Asadulaev

The problem is that the amount of information is growing as well, and many methods remain unknown, buried in a large number of papers.

BIG-bench Machine Learning

Conditioning of Reinforcement Learning Agents and its Policy Regularization Application

no code implementations13 Jun 2019 Arip Asadulaev, Igor Kuznetsov, Gideon Stein, Andrey Filchenkov

In this paper, we try to answer the following question: Can information about policy conditioning help to shape a more stable and general policy of reinforcement learning agents?

Continuous Control, reinforcement-learning +1

Interpretable Few-Shot Learning via Linear Distillation

no code implementations13 Jun 2019 Arip Asadulaev, Igor Kuznetsov, Andrey Filchenkov

It is important to develop mathematically tractable models that can interpret knowledge extracted from the data and provide reasonable predictions.

Few-Shot Learning, regression
