no code implementations • 26 Mar 2024 • Maja Karwowska, Łukasz Graczykowski, Kamil Deja, Miłosz Kasak, Małgorzata Janik
We also present the integration of the ML project with the ALICE analysis software, and we discuss domain adaptation, an ML technique needed to transfer knowledge between simulated and real experimental data.
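For readers unfamiliar with domain adaptation, below is a minimal sketch of one standard approach, a DANN-style gradient reversal layer, which trains features to be useful for the main task but uninformative about whether an example is simulated or real. The network, feature sizes, and names (`PIDNet`, `lam`) are hypothetical illustrations, not the ALICE implementation.

```python
# Minimal DANN-style domain adaptation sketch (assumed setup, not the paper's code).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing back into the feature extractor,
        # pushing it toward domain-invariant representations.
        return -ctx.lam * grad_output, None

class PIDNet(nn.Module):
    def __init__(self, n_features=16, lam=1.0):
        super().__init__()
        self.lam = lam
        self.features = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.pid_head = nn.Linear(64, 5)      # particle-species classifier
        self.domain_head = nn.Linear(64, 2)   # simulated vs. real discriminator

    def forward(self, x):
        h = self.features(x)
        return self.pid_head(h), self.domain_head(GradReverse.apply(h, self.lam))
```

Training minimizes the PID loss plus the domain loss; the reversal makes the feature extractor maximize domain confusion while both heads are trained normally.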
1 code implementation • 6 Mar 2024 • Bartosz Cywiński, Kamil Deja, Tomasz Trzciński, Bartłomiej Twardowski, Łukasz Kuciński
We introduce GUIDE, a novel continual learning approach that directs diffusion models to rehearse samples at risk of being forgotten.
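At a high level, such guidance can be implemented as a classifier-gradient term added during the reverse diffusion step, steering generation toward examples the continually trained classifier is starting to forget. The sketch below is only a schematic of classifier guidance; `diffusion.p_mean_variance`, `old_class`, and `scale` are hypothetical placeholders and not GUIDE's actual interface.

```python
# Schematic classifier-guided reverse step (assumed API, for illustration only).
import torch

def guided_step(x_t, t, diffusion, classifier, old_class, scale=1.0):
    x_t = x_t.detach().requires_grad_(True)
    logp = torch.log_softmax(classifier(x_t), dim=-1)[:, old_class].sum()
    grad = torch.autograd.grad(logp, x_t)[0]       # pull samples toward the old class
    mean, var = diffusion.p_mean_variance(x_t, t)  # standard reverse-diffusion moments
    # Shift the denoising mean by the classifier gradient (noise term omitted at t=0).
    return mean + scale * var * grad + var.sqrt() * torch.randn_like(x_t)
```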
no code implementations • 21 Dec 2023 • Miłosz Kasak, Kamil Deja, Maja Karwowska, Monika Jakubowska, Łukasz Graczykowski, Małgorzata Janik
In this work, we propose the first method for PID that can be trained with all of the available data examples, including incomplete ones.
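One generic way to train on incomplete examples is to impute missing signals and pass an explicit availability mask alongside them, so no example has to be discarded. The sketch below illustrates that common pattern under assumed shapes; it is not necessarily the exact mechanism of the cited paper.

```python
# Generic masking pattern for incomplete inputs (illustrative, assumed dimensions).
import torch
import torch.nn as nn

def with_mask(x):
    mask = ~torch.isnan(x)                       # which detector signals are present
    return torch.cat([torch.nan_to_num(x, nan=0.0), mask.float()], dim=-1)

model = nn.Sequential(nn.Linear(2 * 8, 64), nn.ReLU(), nn.Linear(64, 5))
x = torch.randn(32, 8)
x[torch.rand_like(x) < 0.3] = float("nan")       # simulate missing detector hits
logits = model(with_mask(x))                     # every example is usable
```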
1 code implementation • 21 Dec 2023 • Kamil Deja, Bartosz Cywiński, Jan Rybarczyk, Tomasz Trzciński
In this work, we introduce Adapt & Align, a method for continual learning of neural networks by aligning latent representations in generative models.
no code implementations • 18 Oct 2023 • Mateusz Pyla, Kamil Deja, Bartłomiej Twardowski, Tomasz Trzciński
Bayesian Flow Networks (BFNs) have recently been proposed as one of the most promising directions towards universal generative modelling, with the ability to learn any data type.
1 code implementation • 18 Sep 2023 • Valeriya Khan, Sebastian Cygert, Kamil Deja, Tomasz Trzciński, Bartłomiej Twardowski
We notice that in VAE-based generative replay, this could be attributed to the fact that the generated features are far from the original ones when mapped to the latent space.
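This observation can be checked with a simple diagnostic: encode the original features, pass them through the replay generator, re-encode the result, and measure the latent gap. The `encoder`/`decoder` below stand in for a trained VAE and are assumptions for illustration, not the paper's evaluation code.

```python
# Hypothetical diagnostic for the latent mismatch described above.
import torch

@torch.no_grad()
def replay_latent_gap(encoder, decoder, x):
    z = encoder(x)                   # latent code of the original features
    z_replay = encoder(decoder(z))   # latent code of the replayed sample
    return (z - z_replay).norm(dim=-1).mean()  # large gap = drifted replay
```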
no code implementations • 23 Jun 2023 • Jan Dubiński, Kamil Deja, Sandro Wenzel, Przemysław Rokita, Tomasz Trzciński
In particular, we examine the performance of variational autoencoders and generative adversarial networks, extending the GAN architecture with an additional regularisation network and a simple yet effective postprocessing step.
no code implementations • 27 Mar 2023 • Michał Zając, Kamil Deja, Anna Kuzina, Jakub M. Tomczak, Tomasz Trzciński, Florian Shkurti, Piotr Miłoś
Diffusion models have achieved remarkable success in generating high-quality images thanks to their novel training procedures applied to unprecedented amounts of data.
1 code implementation • 31 Jan 2023 • Kamil Deja, Tomasz Trzciński, Jakub M. Tomczak
Joint machine learning models that allow synthesizing and classifying data often offer uneven performance between those tasks or are unstable to train.
no code implementations • 11 Jan 2023 • Georgi Tinchev, Marta Czarnowska, Kamil Deja, Kayoko Yanagisawa, Marius Cotescu
Prior work on modelling accents assumes a phonetic transcription is available for the target accent, which might not be the case for low-resource, regional accents.
no code implementations • 4 Jul 2022 • Jan Dubiński, Kamil Deja, Sandro Wenzel, Przemysław Rokita, Tomasz Trzciński
Especially prone to mode collapse are conditional GANs, which tend to ignore the input noise vector and focus on the conditional information.
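A common countermeasure for this failure mode is a mode-seeking regularizer (Mao et al., 2019), which penalizes the generator when distinct noise vectors produce near-identical outputs. The sketch below illustrates that regularizer, not the method of the cited paper, and assumes a generator with signature `G(cond, z)`.

```python
# Mode-seeking regularization sketch (assumed generator signature).
import torch

def mode_seeking_penalty(G, cond, z1, z2, eps=1e-5):
    d_out = (G(cond, z1) - G(cond, z2)).abs().mean()  # output diversity
    d_z = (z1 - z2).abs().mean()                      # noise diversity
    return 1.0 / (d_out / d_z + eps)   # large when the generator ignores the noise
```

Adding this penalty to the generator loss encourages outputs to vary with the noise vector instead of collapsing onto the conditional information.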
1 code implementation • 31 May 2022 • Kamil Deja, Anna Kuzina, Tomasz Trzciński, Jakub M. Tomczak
Their main strength comes from their unique setup in which a model (the backward diffusion process) is trained to reverse the forward diffusion process, which gradually adds noise to the input signal.
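Concretely, the forward process admits a closed form: with a variance schedule beta_t and alpha_bar_t the cumulative product of (1 - beta_s), a noised sample is x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise. A minimal sketch with a standard linear schedule (hyperparameters illustrative):

```python
# Closed-form forward diffusion q(x_t | x_0) with a linear beta schedule.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t):
    """Sample x_t by gradually adding Gaussian noise to the input signal."""
    noise = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, *[1] * (x0.dim() - 1))  # broadcast over batch dims
    return ab.sqrt() * x0 + (1 - ab).sqrt() * noise
```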
no code implementations • 17 Jan 2022 • Wojciech Masarczyk, Paweł Wawrzyński, Daniel Marczak, Kamil Deja, Tomasz Trzciński
Our approach leverages allocation of past data in a set of generative models such that most of them do not require retraining after a task.
no code implementations • 4 Sep 2021 • Wojciech Masarczyk, Kamil Deja, Tomasz Trzciński
Catastrophic forgetting of previously learned knowledge while learning new tasks is a widely observed limitation of contemporary neural networks.
1 code implementation • 23 Jun 2021 • Kamil Deja, Paweł Wawrzyński, Wojciech Masarczyk, Daniel Marczak, Tomasz Trzciński
We propose a new method for unsupervised generative continual learning through realignment of the Variational Autoencoder's latent space.
1 code implementation • 25 Nov 2020 • Kamil Deja, Paweł Wawrzyński, Daniel Marczak, Wojciech Masarczyk, Tomasz Trzciński
We introduce a binary latent space autoencoder architecture to rehearse training samples for the continual learning of neural networks.
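A binary latent bottleneck is typically trained with a straight-through estimator: binarize in the forward pass, but let gradients flow through unchanged. The minimal sketch below shows that trick; the architecture details are illustrative, not the exact model of the paper.

```python
# Binary latent autoencoder sketch with a straight-through estimator.
import torch
import torch.nn as nn

class BinaryLatentAE(nn.Module):
    def __init__(self, dim=784, latent=64):
        super().__init__()
        self.enc = nn.Linear(dim, latent)
        self.dec = nn.Linear(latent, dim)

    def forward(self, x):
        h = self.enc(x)
        b = (h > 0).float()              # hard binary code, no gradient
        z = h + (b - h).detach()         # forward: b; backward: gradient of h
        return self.dec(z)
```

Binary codes make stored rehearsal samples compact, which is what makes them attractive for replay-based continual learning.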
1 code implementation • 11 Jun 2020 • Kamil Deja, Jan Dubiński, Piotr Nowak, Sandro Wenzel, Tomasz Trzciński
To address these shortcomings, we introduce a novel method dubbed the end-to-end Sinkhorn Autoencoder, which leverages the Sinkhorn algorithm to explicitly align the distributions of encoded real data examples and generated noise.
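For reference, the Sinkhorn algorithm alternately rescales a Gibbs kernel of the cost matrix until the transport plan's marginals match, yielding a differentiable distribution-alignment loss. A compact sketch between a batch of encoded examples and noise samples (hyperparameters illustrative, not the paper's settings):

```python
# Entropy-regularized optimal transport via Sinkhorn iterations (illustrative).
import torch

def sinkhorn_loss(x, y, eps=0.1, iters=50):
    C = torch.cdist(x, y) ** 2                    # pairwise transport costs
    K = torch.exp(-C / eps)                       # Gibbs kernel
    u = torch.full((x.size(0),), 1.0 / x.size(0))
    v = torch.full((y.size(0),), 1.0 / y.size(0))
    a, b = u.clone(), v.clone()
    for _ in range(iters):                        # Sinkhorn fixed-point updates
        a = u / (K @ b)
        b = v / (K.t() @ a)
    P = a.unsqueeze(1) * K * b.unsqueeze(0)       # approximate transport plan
    return (P * C).sum()                          # transport cost as a loss
```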