Search Results for author: Kamil Deja

Found 17 papers, 8 papers with code

Particle identification with machine learning from incomplete data in the ALICE experiment

no code implementations • 26 Mar 2024 • Maja Karwowska, Łukasz Graczykowski, Kamil Deja, Miłosz Kasak, Małgorzata Janik

We also present the integration of the ML project with the ALICE analysis software, and we discuss domain adaptation, the ML technique needed to transfer knowledge between simulated and real experimental data.

Domain Adaptation
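
The snippet above names domain adaptation as the bridge between simulation and experiment. As a hedged illustration of one common recipe (not the paper's implementation), the sketch below aligns simulated and real feature distributions with an MMD penalty while training a classifier on simulated labels only; all names (`encoder`, `head`) are hypothetical.

```python
import torch
import torch.nn.functional as F

def rbf_mmd(x, y, sigma=1.0):
    """Maximum mean discrepancy with an RBF kernel between two feature batches."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def train_step(encoder, head, opt, sim_x, sim_y, real_x, lam=0.1):
    """Labels exist only for simulation; the MMD term pulls the feature
    distributions of simulated and real data together."""
    f_sim, f_real = encoder(sim_x), encoder(real_x)
    loss = F.cross_entropy(head(f_sim), sim_y) + lam * rbf_mmd(f_sim, f_real)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```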

GUIDE: Guidance-based Incremental Learning with Diffusion Models

1 code implementation • 6 Mar 2024 • Bartosz Cywiński, Kamil Deja, Tomasz Trzciński, Bartłomiej Twardowski, Łukasz Kuciński

We introduce GUIDE, a novel continual learning approach that directs diffusion models to rehearse samples at risk of being forgotten.

Continual Learning, Incremental Learning
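
GUIDE builds on classifier guidance for diffusion samplers. The sketch below shows only the generic mechanics of biasing a denoising step with a classifier gradient toward a previously learned class; the function names, and the choice of what to guide toward, are illustrative rather than the paper's exact objective.

```python
import torch

def guided_noise_estimate(unet, classifier, x_t, t, target_class, scale=2.0):
    """One reverse-diffusion step's noise estimate, nudged by the gradient of
    a classifier so sampled rehearsal images drift toward `target_class`.
    Generic classifier guidance; GUIDE's actual criterion specifically
    targets samples at risk of being forgotten."""
    eps = unet(x_t, t)  # unconditional noise prediction
    with torch.enable_grad():
        x = x_t.detach().requires_grad_(True)
        log_p = torch.log_softmax(classifier(x), dim=-1)[:, target_class].sum()
        grad = torch.autograd.grad(log_p, x)[0]
    return eps - scale * grad  # shift the estimate along the class gradient
```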

Machine-learning-based particle identification with missing data

no code implementations • 21 Dec 2023 • Miłosz Kasak, Kamil Deja, Maja Karwowska, Monika Jakubowska, Łukasz Graczykowski, Małgorzata Janik

In this work, we propose the first method for PID that can be trained with all of the available data examples, including incomplete ones.
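
A standard way to train on incomplete examples, in the spirit of the snippet though not necessarily the paper's architecture, is to zero-fill absent detector features and append an availability mask so the network can distinguish "missing" from "measured zero":

```python
import torch

def featurize(x, present):
    """x: detector features, undefined where a detector gave no signal.
    present: boolean mask, True where the measurement exists.
    Returns zero-filled features concatenated with the mask, so every
    example, complete or not, can enter the same training batch."""
    return torch.cat([torch.where(present, x, torch.zeros_like(x)),
                      present.float()], dim=-1)
```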

Adapt & Align: Continual Learning with Generative Models Latent Space Alignment

1 code implementation • 21 Dec 2023 • Kamil Deja, Bartosz Cywiński, Jan Rybarczyk, Tomasz Trzciński

In this work, we introduce Adapt & Align, a method for continual learning of neural networks by aligning latent representations in generative models.

Continual Learning
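
The alignment idea can be sketched as a small translator network that maps latents of a freshly adapted task-local model into the latent space of a shared global decoder. The class and loss below are a hypothetical reading of "aligning latent representations", not the paper's exact procedure.

```python
import torch
import torch.nn as nn

class Translator(nn.Module):
    """Maps a task-local latent into the shared latent space so one global
    decoder can render data from every task seen so far."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, z_local):
        return self.net(z_local)

def align_loss(global_decoder, translator, z_local, x_generated):
    # Train the translator so the (frozen) global decoder reproduces the
    # samples that the task-local model generates from z_local.
    return ((global_decoder(translator(z_local)) - x_generated) ** 2).mean()
```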

Bayesian Flow Networks in Continual Learning

no code implementations • 18 Oct 2023 • Mateusz Pyla, Kamil Deja, Bartłomiej Twardowski, Tomasz Trzciński

Bayesian Flow Networks (BFNs) have recently been proposed as one of the most promising directions towards universal generative modelling, with the ability to learn any data type.

Bayesian Inference, Continual Learning

Looking through the past: better knowledge retention for generative replay in continual learning

1 code implementation • 18 Sep 2023 • Valeriya Khan, Sebastian Cygert, Kamil Deja, Tomasz Trzciński, Bartłomiej Twardowski

We notice that in VAE-based generative replay, this could be attributed to the fact that the generated features are far from the original ones when mapped to the latent space.

Continual Learning
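
The observation above can be made quantitative by encoding originals and their replayed counterparts and measuring how far apart they land in latent space. A minimal sketch, assuming paired batches and an `encoder` that returns posterior means:

```python
import torch

@torch.no_grad()
def replay_latent_drift(encoder, x_real, x_replayed):
    """Mean latent distance between originals and their replayed versions,
    assuming x_replayed[i] is the replay of x_real[i]."""
    return torch.norm(encoder(x_real) - encoder(x_replayed), dim=-1).mean()
```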

Machine Learning methods for simulating particle response in the Zero Degree Calorimeter at the ALICE experiment, CERN

no code implementations • 23 Jun 2023 • Jan Dubiński, Kamil Deja, Sandro Wenzel, Przemysław Rokita, Tomasz Trzciński

In particular, we examine the performance of variational autoencoders and generative adversarial networks, extending the GAN architecture with an additional regularisation network and a simple yet effective postprocessing step.

Exploring Continual Learning of Diffusion Models

no code implementations • 27 Mar 2023 • Michał Zając, Kamil Deja, Anna Kuzina, Jakub M. Tomczak, Tomasz Trzciński, Florian Shkurti, Piotr Miłoś

Diffusion models have achieved remarkable success in generating high-quality images thanks to their novel training procedures applied to unprecedented amounts of data.

Benchmarking, Continual Learning +1

Learning Data Representations with Joint Diffusion Models

1 code implementation • 31 Jan 2023 • Kamil Deja, Tomasz Trzciński, Jakub M. Tomczak

Joint machine learning models that allow synthesizing and classifying data often offer uneven performance between those tasks or are unstable to train.

Counterfactual, Domain Adaptation
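
A joint model in this spirit ties a classification head to the same features the denoiser uses and optimises one summed loss. The split into `unet_encoder`/`unet_decoder` and the feature hand-off below are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

def joint_loss(unet_encoder, unet_decoder, clf_head, x0, y, alphas_bar):
    """One training step of a joint diffusion model: shared encoder features
    feed both the noise-prediction objective and a classifier."""
    t = torch.randint(0, len(alphas_bar), (x0.shape[0],))
    a = alphas_bar[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise        # forward diffusion
    h = unet_encoder(x_t, t)                            # shared representation
    denoise = F.mse_loss(unet_decoder(h, t), noise)     # generative objective
    classify = F.cross_entropy(clf_head(h), y)          # discriminative objective
    return denoise + classify
```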

Modelling low-resource accents without accent-specific TTS frontend

no code implementations • 11 Jan 2023 • Georgi Tinchev, Marta Czarnowska, Kamil Deja, Kayoko Yanagisawa, Marius Cotescu

Prior work on modelling accents assumes a phonetic transcription is available for the target accent, which might not be the case for low-resource, regional accents.

Voice Conversion

Selectively increasing the diversity of GAN-generated samples

no code implementations • 4 Jul 2022 • Jan Dubiński, Kamil Deja, Sandro Wenzel, Przemysław Rokita, Tomasz Trzciński

Especially prone to mode collapse are conditional GANs, which tend to ignore the input noise vector and focus on the conditional information.
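
One generic countermeasure, in the spirit of the snippet though distinct from the paper's selective scheme, is a mode-seeking regulariser that rewards the conditional generator for reacting to the noise vector:

```python
import torch

def diversity_penalty(G, c, z1, z2, eps=1e-8):
    """Penalise noise-ignoring behaviour: the more the output changes per
    unit change of noise, the lower the penalty. Added to the GAN loss."""
    dx = (G(z1, c) - G(z2, c)).flatten(1).norm(dim=1)
    dz = (z1 - z2).flatten(1).norm(dim=1) + eps
    return -(dx / dz).mean()
```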

On Analyzing Generative and Denoising Capabilities of Diffusion-based Deep Generative Models

1 code implementation • 31 May 2022 • Kamil Deja, Anna Kuzina, Tomasz Trzciński, Jakub M. Tomczak

Their main strength comes from their unique setup in which a model (the backward diffusion process) is trained to reverse the forward diffusion process, which gradually adds noise to the input signal.

Denoising
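
The forward/backward setup described above fits in a few lines: corrupt clean data with a known amount of Gaussian noise, then train the backward model to predict that noise. A minimal DDPM-style sketch:

```python
import torch

betas = torch.linspace(1e-4, 0.02, 1000)         # forward noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative signal retention

def ddpm_loss(model, x0):
    t = torch.randint(0, len(alphas_bar), (x0.shape[0],))
    a = alphas_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise    # forward: gradually add noise
    return ((model(x_t, t) - noise) ** 2).mean()    # backward: learn to undo it
```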

Logarithmic Continual Learning

no code implementations • 17 Jan 2022 • Wojciech Masarczyk, Paweł Wawrzyński, Daniel Marczak, Kamil Deja, Tomasz Trzciński

Our approach leverages allocation of past data in a set of generative models such that most of them do not require retraining after a task.

Continual Learning
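
One allocation rule with the stated property (not necessarily the paper's exact scheme) is a binary-counter layout: tasks live in buckets of size 1, 2, 4, ..., and adding a task only merges, i.e. retrains, the few trailing buckets, so each model is retrained O(log n) times over n tasks:

```python
def add_task(buckets, task):
    """Merge trailing buckets like a binary-counter increment; each merge
    stands for retraining one generative model on the merged task data."""
    carry = [task]
    while buckets and len(buckets[-1]) == len(carry):
        carry = buckets.pop() + carry   # this model gets retrained
    buckets.append(carry)

buckets = []
for t in range(7):
    add_task(buckets, t)
print([len(b) for b in buckets])  # [4, 2, 1]: sizes follow the binary digits of 7
```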

On robustness of generative representations against catastrophic forgetting

no code implementations • 4 Sep 2021 • Wojciech Masarczyk, Kamil Deja, Tomasz Trzciński

Catastrophic forgetting of previously learned knowledge while learning new tasks is a widely observed limitation of contemporary neural networks.

Continual Learning, Specificity

Multiband VAE: Latent Space Alignment for Knowledge Consolidation in Continual Learning

1 code implementation • 23 Jun 2021 • Kamil Deja, Paweł Wawrzyński, Wojciech Masarczyk, Daniel Marczak, Tomasz Trzciński

We propose a new method for unsupervised generative continual learning through realignment of a Variational Autoencoder's latent space.

Continual Learning, Disentanglement +1

BinPlay: A Binary Latent Autoencoder for Generative Replay Continual Learning

1 code implementation • 25 Nov 2020 • Kamil Deja, Paweł Wawrzyński, Daniel Marczak, Wojciech Masarczyk, Tomasz Trzciński

We introduce a binary latent space autoencoder architecture to rehearse training samples for the continual learning of neural networks.

Continual Learning
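
A hedged sketch of the binary-latent ingredient (BinPlay additionally derives each sample's code from its index, which is omitted here): binarise the code with a straight-through sign so training can still backpropagate through the encoder.

```python
import torch

class BinaryLatentAE(torch.nn.Module):
    """Autoencoder with a binarised code: sign() in the forward pass,
    identity gradient in the backward pass (straight-through estimator)."""
    def __init__(self, enc, dec):
        super().__init__()
        self.enc, self.dec = enc, dec

    def forward(self, x):
        z = self.enc(x)
        b = z + (torch.sign(z) - z).detach()  # binary forward, smooth backward
        return self.dec(b)
```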

End-to-end Sinkhorn Autoencoder with Noise Generator

1 code implementation • 11 Jun 2020 • Kamil Deja, Jan Dubiński, Piotr Nowak, Sandro Wenzel, Tomasz Trzciński

To address these shortcomings, we introduce a novel method dubbed the end-to-end Sinkhorn Autoencoder, which leverages the Sinkhorn algorithm to explicitly align the distributions of encoded real data examples and generated noise.

Astronomy
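
The Sinkhorn algorithm itself is short enough to sketch: entropy-regularised optimal transport between the batch of encoded examples and the batch of generated noise, with uniform marginals. Variable names are illustrative.

```python
import torch

def sinkhorn_cost(cost, n_iters=100, eps=0.05):
    """cost[i, j]: distance between encoded sample i and noise sample j.
    Returns the entropy-regularised transport cost used to align the
    two distributions."""
    n, m = cost.shape
    a, b = torch.full((n,), 1.0 / n), torch.full((m,), 1.0 / m)
    K = torch.exp(-cost / eps)          # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):            # alternating marginal-scaling updates
        v = b / (K.t() @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]  # transport plan with uniform marginals
    return (plan * cost).sum()
```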
