Search Results for author: Wojciech Masarczyk

Found 10 papers, 2 papers with code

Reinforcement learning with experience replay and adaptation of action dispersion

no code implementations30 Jul 2022 Paweł Wawrzyński, Wojciech Masarczyk, Mateusz Ostaszewski

To that end, the dispersion should be tuned to ensure a sufficiently high density of both the actions in the replay buffer and the modes of the distributions that generated them, yet it should not be any higher than that.

Reinforcement Learning (RL)
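The idea in the snippet above — keep the policy's action dispersion just large enough that replayed actions remain sufficiently probable — can be sketched as follows. All names (`min_sigma_for_density`, the log-density threshold) are illustrative assumptions, not the paper's notation or algorithm:

```python
import math

def gaussian_log_density(x, mu, sigma):
    """Log-density of a 1-D Gaussian policy; sigma is the action dispersion."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def min_sigma_for_density(actions, mu, log_p_min, sigmas):
    """Pick the smallest candidate dispersion that keeps every replayed
    action at log-density >= log_p_min under the current policy mean,
    i.e. high enough for replay to stay valid, but no higher."""
    for sigma in sorted(sigmas):
        if all(gaussian_log_density(a, mu, sigma) >= log_p_min for a in actions):
            return sigma
    return max(sigmas)

# Replayed actions close to the current mode need only modest dispersion.
print(min_sigma_for_density([0.1, -0.2, 0.15], mu=0.0,
                            log_p_min=-2.0, sigmas=[0.1, 0.2, 0.5, 1.0]))  # 0.1
```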

Logarithmic Continual Learning

no code implementations17 Jan 2022 Wojciech Masarczyk, Paweł Wawrzyński, Daniel Marczak, Kamil Deja, Tomasz Trzciński

Our approach leverages allocation of past data in a set of generative models such that most of them do not require retraining after a task.

Continual Learning
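One way such logarithmic growth can arise is a binary-counter consolidation schedule: each task adds a fresh model, and two models of equal "level" merge into one. This is a toy sketch of the scaling behavior only, not the paper's actual algorithm:

```python
def consolidate(num_tasks):
    """Binary-counter consolidation of generative models: after each task
    a level-0 model is added; whenever two models share a level, they are
    merged (retrained) into a single model one level up. The number of
    models retained after n tasks equals popcount(n), i.e. O(log n)."""
    levels = []  # levels[i] == number of models at level i (0 or 1)
    for _ in range(num_tasks):
        level = 0
        while level < len(levels) and levels[level] == 1:
            levels[level] = 0  # merge the pair; only this merge retrains
            level += 1
        if level == len(levels):
            levels.append(0)
        levels[level] = 1
    return sum(levels)

print(consolidate(100))  # 3 models retained (100 = 0b1100100)
```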

On robustness of generative representations against catastrophic forgetting

no code implementations4 Sep 2021 Wojciech Masarczyk, Kamil Deja, Tomasz Trzciński

Catastrophic forgetting of previously learned knowledge while learning new tasks is a widely observed limitation of contemporary neural networks.

Continual Learning, Specificity

Multiband VAE: Latent Space Alignment for Knowledge Consolidation in Continual Learning

1 code implementation23 Jun 2021 Kamil Deja, Paweł Wawrzyński, Wojciech Masarczyk, Daniel Marczak, Tomasz Trzciński

We propose a new method for unsupervised generative continual learning through realignment of the Variational Autoencoder's latent space.

Continual Learning, Disentanglement, +1
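The "latent space alignment" idea above can be illustrated with a minimal moment-matching sketch: map one task's latent "band" into a shared global space by standardizing and rescaling. The paper uses a learned translator network; this stand-in and its names are assumptions for illustration only:

```python
def align_band(z_local, z_global_mean=0.0, z_global_std=1.0):
    """Align one task's 1-D latent codes to shared global statistics:
    standardize to zero mean / unit variance, then rescale. A toy
    stand-in for a learned latent-space translator."""
    n = len(z_local)
    mu = sum(z_local) / n
    var = sum((z - mu) ** 2 for z in z_local) / n
    std = var ** 0.5 or 1.0  # guard against a degenerate band
    return [(z - mu) / std * z_global_std + z_global_mean for z in z_local]

# A band centered at 4 is re-centered onto the shared space around 0.
print(align_band([2.0, 4.0, 6.0]))
```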

Reinforcement learning for optimization of variational quantum circuit architectures

no code implementations NeurIPS 2021 Mateusz Ostaszewski, Lea M. Trenkwalder, Wojciech Masarczyk, Eleanor Scerri, Vedran Dunjko

The study of Variational Quantum Eigensolvers (VQEs) has been in the spotlight in recent times as they may lead to real-world applications of near-term quantum devices.

Reinforcement Learning (RL)
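A VQE's inner loop — evaluate the energy of a parameterized state and minimize it — can be sketched for a single qubit with Hamiltonian H = Z and ansatz |ψ(θ)⟩ = RY(θ)|0⟩, where ⟨Z⟩ = cos θ. The paper's contribution is RL-driven search over circuit architectures; this toy replaces that with a plain parameter scan to show only the energy objective:

```python
import math

def vqe_energy(theta):
    """Energy <psi(theta)|Z|psi(theta)> for |psi> = RY(theta)|0>
    = [cos(theta/2), sin(theta/2)], which equals cos(theta)."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return c * c - s * s

def grid_vqe(steps=360):
    """Minimize the single-parameter energy over a grid (toy optimizer)."""
    return min(vqe_energy(2 * math.pi * k / steps) for k in range(steps))

print(round(grid_vqe(), 6))  # ground-state energy of Z is -1
```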

BinPlay: A Binary Latent Autoencoder for Generative Replay Continual Learning

1 code implementation25 Nov 2020 Kamil Deja, Paweł Wawrzyński, Daniel Marczak, Wojciech Masarczyk, Tomasz Trzciński

We introduce a binary latent space autoencoder architecture to rehearse training samples for the continual learning of neural networks.

Continual Learning
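The core trick described above — storing compact binary latent codes instead of raw samples for rehearsal — can be sketched as a sign-quantization plus bit-packing step. The encoder/decoder in BinPlay are learned networks; the functions and threshold below are illustrative assumptions:

```python
def binarize(z):
    """Quantize a real-valued latent vector to {-1, +1}: the compact
    per-sample code a binary-latent rehearsal scheme would store."""
    return [1 if v >= 0 else -1 for v in z]

def pack(code):
    """Pack a +/-1 code into an integer, one bit per latent dimension,
    so each rehearsed sample costs only len(code) bits of memory."""
    bits = 0
    for i, b in enumerate(code):
        if b == 1:
            bits |= 1 << i
    return bits

code = binarize([0.3, -1.2, 0.0, -0.5])
print(code, pack(code))  # [1, -1, 1, -1] 5
```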

Reducing catastrophic forgetting with learning on synthetic data

no code implementations29 Apr 2020 Wojciech Masarczyk, Ivona Tautkute

Our experimental results on Split-MNIST dataset show that training a model on such synthetic data in sequence does not result in catastrophic forgetting.

Split-MNIST
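For context, Split-MNIST is the standard benchmark in which the ten MNIST classes are partitioned into sequential tasks (typically five tasks of two classes each), trained one after another. A minimal sketch of that split, on a toy label list:

```python
def split_mnist_tasks(labels, classes_per_task=2):
    """Partition sample indices by class into sequential Split-MNIST
    tasks: classes {0,1} form task 1, {2,3} task 2, and so on."""
    tasks = []
    for start in range(0, 10, classes_per_task):
        task_classes = set(range(start, start + classes_per_task))
        tasks.append([i for i, y in enumerate(labels) if y in task_classes])
    return tasks

# Indices of samples per task for a toy label list.
print(split_mnist_tasks([0, 1, 5, 9, 2, 3]))  # [[0, 1], [4, 5], [2], [], [3]]
```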
