no code implementations • 30 Oct 2023 • Wojciech Masarczyk, Tomasz Trzciński, Mateusz Ostaszewski
In the era of transfer learning, training neural networks from scratch is becoming obsolete.
no code implementations • 25 Oct 2022 • Bartosz Grabowski, Przemysław Głomb, Wojciech Masarczyk, Paweł Pławiak, Özal Yıldırım, U Rajendra Acharya, Ru-San Tan
Machine learning tasks can be divided into regression and classification.
no code implementations • 30 Jul 2022 • Paweł Wawrzyński, Wojciech Masarczyk, Mateusz Ostaszewski
To that end, the dispersion should be tuned so that both the actions in the replay buffer and the modes of the distributions that generated them retain sufficiently high probability densities, yet the dispersion should be no higher than that.
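The "sufficiently high, but no higher" trade-off can be illustrated with a toy 1-D Gaussian policy (an assumed example, not the paper's code): for a stored action at distance d from the policy mode, the density of that action is maximized when the standard deviation equals d, and widening the dispersion further only lowers it again.

```python
import numpy as np

# Toy illustration (not the paper's code): for a 1-D Gaussian policy
# N(mu, sigma), the density of a replayed action a at distance
# d = |a - mu| from the mode peaks at sigma = d; any wider dispersion
# lowers the density again.
def gaussian_density(a, mu, sigma):
    return np.exp(-((a - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

mu, a = 0.0, 0.5                     # policy mode and a replayed action
sigmas = np.linspace(0.05, 2.0, 400)
densities = gaussian_density(a, mu, sigmas)
best_sigma = sigmas[np.argmax(densities)]
print(round(best_sigma, 2))          # close to |a - mu| = 0.5
```

Setting the derivative of the log-density to zero confirms the optimum analytically: d/dσ [−log σ − d²/(2σ²)] = 0 gives σ = d.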
no code implementations • 17 Jan 2022 • Wojciech Masarczyk, Paweł Wawrzyński, Daniel Marczak, Kamil Deja, Tomasz Trzciński
Our approach leverages allocation of past data in a set of generative models such that most of them do not require retraining after a task.
no code implementations • 4 Sep 2021 • Wojciech Masarczyk, Kamil Deja, Tomasz Trzciński
Catastrophic forgetting of previously learned knowledge while learning new tasks is a widely observed limitation of contemporary neural networks.
1 code implementation • 23 Jun 2021 • Kamil Deja, Paweł Wawrzyński, Wojciech Masarczyk, Daniel Marczak, Tomasz Trzciński
We propose a new method for unsupervised generative continual learning through realignment of Variational Autoencoder's latent space.
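A minimal sketch of the realignment idea, under assumed simplifications (the paper operates on a Variational Autoencoder's latent space; here a plain least-squares linear map stands in for the realignment step): given latent codes before and after the space drifts during further training, fit a map that carries old codes into the new space so they remain decodable.

```python
import numpy as np

# Toy sketch of latent-space realignment (illustrative only): Z_old are
# latent codes before the new task, Z_new their drifted counterparts.
# Fit a linear map W by least squares so that Z_old @ W ~= Z_new,
# letting stored codes be replayed in the updated latent space.
rng = np.random.default_rng(0)
Z_old = rng.normal(size=(100, 8))      # latent codes before the new task
R = rng.normal(size=(8, 8))            # unknown drift of the latent space
Z_new = Z_old @ R                      # codes after drift

W, *_ = np.linalg.lstsq(Z_old, Z_new, rcond=None)
err = np.linalg.norm(Z_old @ W - Z_new)
print(err)                             # near zero: the drift is recovered
```

In this synthetic case the drift is exactly linear, so the residual vanishes; a real VAE's drift is only approximately linear, which is why realignment is a non-trivial problem.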
no code implementations • NeurIPS 2021 • Mateusz Ostaszewski, Lea M. Trenkwalder, Wojciech Masarczyk, Eleanor Scerri, Vedran Dunjko
The study of Variational Quantum Eigensolvers (VQEs) has been in the spotlight in recent times as they may lead to real-world applications of near-term quantum devices.
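The VQE loop can be sketched classically with a single-qubit toy Hamiltonian (an assumed example, not the paper's setup): minimize the expected energy of a one-parameter Ry ansatz and compare against exact diagonalization.

```python
import numpy as np

# Minimal VQE-style sketch: classically minimize <psi(theta)|H|psi(theta)>
# for a toy single-qubit Hamiltonian over an Ry ansatz applied to |0>.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])            # toy Hermitian Hamiltonian

def ansatz(theta):
    # Ry(theta)|0> = [cos(theta/2), sin(theta/2)]
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    psi = ansatz(theta)
    return psi @ H @ psi

thetas = np.linspace(0, 2 * np.pi, 2000)
e_min = min(energy(t) for t in thetas)
exact = np.linalg.eigvalsh(H).min()    # exact ground-state energy
print(abs(e_min - exact) < 1e-3)
```

On near-term hardware the energy would be estimated from measurement shots and the minimization done by a classical optimizer rather than a grid search, but the structure of the loop is the same.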
1 code implementation • 25 Nov 2020 • Kamil Deja, Paweł Wawrzyński, Daniel Marczak, Wojciech Masarczyk, Tomasz Trzciński
We introduce a binary latent space autoencoder architecture to rehearse training samples for the continual learning of neural networks.
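A hypothetical toy version of the binary-latent idea (not the paper's architecture) can be sketched with a fixed random encoder and a linear decoder: samples are compressed into ±1 codes, which are cheap to store for rehearsal, and a decoder fitted by least squares reconstructs them.

```python
import numpy as np

# Hypothetical sketch (not the paper's model): encode samples through a
# fixed random projection, binarize with sign() to get compact +/-1
# latent codes, and fit a linear decoder by least squares to
# reconstruct rehearsal samples from those binary codes.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))                  # samples to rehearse later
W_enc = rng.normal(size=(16, 64))               # fixed random "encoder"

Z = np.sign(X @ W_enc)                          # binary latent codes (+/-1 bits)
W_dec, *_ = np.linalg.lstsq(Z, X, rcond=None)   # linear "decoder"
X_rec = Z @ W_dec                               # reconstructed samples

rel_err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
print(rel_err)                                  # clearly below the trivial baseline of 1.0
```

Even this untrained binarization preserves a useful fraction of the signal; a trained binary-latent autoencoder, as in the paper, reconstructs far more faithfully while keeping the same storage advantage.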
no code implementations • 29 Apr 2020 • Wojciech Masarczyk, Ivona Tautkute
Our experimental results on the Split-MNIST dataset show that training a model on such synthetic data in sequence does not result in catastrophic forgetting.
no code implementations • 12 Sep 2019 • Wojciech Masarczyk, Przemysław Głomb, Bartosz Grabowski, Mateusz Ostaszewski
This approach can be applied to many hyperspectral classification problems.