1 code implementation • 18 Jan 2024 • Grzegorz Rypeść, Sebastian Cygert, Valeriya Khan, Tomasz Trzciński, Bartosz Zieliński, Bartłomiej Twardowski
Class-incremental learning is becoming more popular as it helps models widen their applicability while not forgetting what they already know.
no code implementations • 22 Dec 2023 • Piotr Bilinski, Thomas Merritt, Abdelhamid Ezzerg, Kamil Pokora, Sebastian Cygert, Kayoko Yanagisawa, Roberto Barra-Chicote, Daniel Korzekwa
As there is growing interest in synthesizing voices of new speakers, here we investigate the ability of normalizing flows in text-to-speech (TTS) and voice conversion (VC) modes to extrapolate from speakers observed during training to create unseen speaker identities.
no code implementations • 22 Nov 2023 • Daniel Marczak, Sebastian Cygert, Tomasz Trzciński, Bartłomiej Twardowski
In the field of continual learning, models are designed to learn tasks one after the other.
no code implementations • 20 Oct 2023 • Damian Sójka, Yuyang Liu, Dipam Goswami, Sebastian Cygert, Bartłomiej Twardowski, Joost Van de Weijer
Each sequence is composed of 401 images and starts with the source domain, then gradually drifts to a different one (changing weather or time of day) until the middle of the sequence.
no code implementations • 18 Sep 2023 • Damian Sójka, Sebastian Cygert, Bartłomiej Twardowski, Tomasz Trzciński
Test-time adaptation is a promising research direction that allows the source model to adapt itself to changes in data distribution without any supervision.
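A common instantiation of this idea (not necessarily the exact method of this paper) is entropy minimization on the model's own predictions, updating only normalization-layer parameters at test time. A minimal PyTorch sketch under that assumption:

```python
import torch
import torch.nn as nn

def entropy(logits):
    # Mean prediction entropy over the batch; lower entropy = more confident.
    probs = logits.softmax(dim=1)
    return -(probs * probs.log().clamp(min=-100)).sum(dim=1).mean()

def collect_norm_params(model):
    # Only the affine parameters of normalization layers are adapted,
    # so the source model drifts slowly and without any labels.
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.LayerNorm)):
            for p in (m.weight, m.bias):
                if p is not None:
                    params.append(p)
    return params

def adapt_step(model, batch, optimizer):
    """One unsupervised test-time adaptation step (TENT-style sketch)."""
    model.train()  # use statistics of the incoming test batch
    logits = model(batch)
    loss = entropy(logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.detach()
```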
1 code implementation • 18 Sep 2023 • Valeriya Khan, Sebastian Cygert, Kamil Deja, Tomasz Trzciński, Bartłomiej Twardowski
We notice that in VAE-based generative replay, this could be attributed to the fact that the generated features are far from the original ones when mapped to the latent space.
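One way to quantify such a latent-space mismatch (a hypothetical diagnostic, not the paper's exact procedure) is to encode both real and replayed features with the VAE encoder and compare the resulting latent statistics:

```python
import torch

@torch.no_grad()
def latent_gap(encoder, real_feats, generated_feats):
    """Rough diagnostic: distance between latent means of real vs. replayed
    features. `encoder` is assumed to return (mu, logvar) as in a standard VAE.
    """
    mu_real, _ = encoder(real_feats)
    mu_gen, _ = encoder(generated_feats)
    # Compare the centroids of the two latent clouds.
    return torch.norm(mu_real.mean(dim=0) - mu_gen.mean(dim=0)).item()
```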
no code implementations • 15 Sep 2023 • Dariusz Piotrowski, Renard Korzeniowski, Alessio Falai, Sebastian Cygert, Kamil Pokora, Georgi Tinchev, Ziyao Zhang, Kayoko Yanagisawa
In the first two stages, we use a VC model to convert utterances in the target locale to the voice of the target speaker.
no code implementations • 23 Aug 2023 • Daniel Marczak, Grzegorz Rypeść, Sebastian Cygert, Tomasz Trzciński, Bartłomiej Twardowski
However, these settings are not well aligned with real-life scenarios, where a learning agent has access to a vast amount of unlabeled data encompassing both novel (entirely unlabeled) classes and examples from known classes.
1 code implementation • 18 Aug 2023 • Filip Szatkowski, Mateusz Pyla, Marcin Przewięźlikowski, Sebastian Cygert, Bartłomiej Twardowski, Tomasz Trzciński
In this work, we investigate exemplar-free class incremental learning (CIL) with knowledge distillation (KD) as a regularization strategy, aiming to prevent forgetting.
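In this setting, the frozen model from the previous task typically serves as the teacher, and a distillation term is added to the classification loss on new-task data. A minimal sketch of such a combined loss (the temperature and weighting are illustrative, not the paper's values):

```python
import torch
import torch.nn.functional as F

def cil_kd_loss(student_logits, teacher_logits, targets,
                old_classes, temperature=2.0, alpha=1.0):
    """Cross-entropy on current-task labels plus KD on old-class logits.

    `old_classes` is the number of classes seen before the current task; the
    slice keeps only those logits (a no-op if the teacher head already has
    exactly that many outputs).
    """
    ce = F.cross_entropy(student_logits, targets)
    t = temperature
    p_teacher = F.softmax(teacher_logits[:, :old_classes] / t, dim=1)
    log_p_student = F.log_softmax(student_logits[:, :old_classes] / t, dim=1)
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)
    return ce + alpha * kd
```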
no code implementations • 31 Jul 2023 • Guangyan Zhang, Thomas Merritt, Manuel Sam Ribeiro, Biel Tura-Vecino, Kayoko Yanagisawa, Kamil Pokora, Abdelhamid Ezzerg, Sebastian Cygert, Ammar Abbas, Piotr Bilinski, Roberto Barra-Chicote, Daniel Korzekwa, Jaime Lorenzo-Trueba
Neural text-to-speech systems are often optimized on L1/L2 losses, which make strong assumptions about the distributions of the target data space.
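Minimizing an L2 loss is equivalent (up to constants) to maximizing likelihood under a unimodal, fixed-variance Gaussian around each target frame, which is the implicit assumption being criticized. A small illustration of that equivalence (illustrative only, not the paper's method):

```python
import torch
import torch.nn.functional as F

pred = torch.randn(8, 80)    # e.g. predicted mel-spectrogram frames
target = torch.randn(8, 80)

# Plain L2 loss ...
l2 = F.mse_loss(pred, target)

# ... matches the Gaussian negative log-likelihood with unit variance,
# i.e. assuming p(target | pred) = N(pred, I).
var = torch.ones_like(pred)
nll = F.gaussian_nll_loss(pred, target, var)

# nll == 0.5 * l2 (constants dropped by default), exposing the implicit
# unimodal, fixed-variance assumption behind L1/L2 training objectives.
print(l2.item(), nll.item())
```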
no code implementations • 25 Nov 2021 • Sebastian Cygert, Andrzej Czyżewski
In this work, a generalization of the MIMO approach is applied to the task of object detection using the general-purpose Faster R-CNN model.
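MIMO trains a single network to process several inputs at once and emit one independent prediction per input, yielding an implicit ensemble at roughly the cost of one model. A minimal classification-style sketch of the idea (the paper generalizes this to Faster R-CNN detection, which is not reproduced here):

```python
import torch
import torch.nn as nn

class MIMOClassifier(nn.Module):
    """Single backbone shared by M subnetworks.

    The M inputs are concatenated along the channel dimension and the head
    emits M sets of logits, one per input.
    """
    def __init__(self, num_classes, num_members=3, in_channels=3):
        super().__init__()
        self.num_members = num_members
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels * num_members, 64, 3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(64, num_classes * num_members)

    def forward(self, inputs):
        # inputs: list of M tensors, each (B, C, H, W), one per ensemble member.
        x = torch.cat(inputs, dim=1)
        logits = self.head(self.backbone(x))
        # Split into M independent predictions; at test time the same image is
        # fed to every member and their softmax outputs are averaged.
        return logits.chunk(self.num_members, dim=1)
```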
no code implementations • 31 May 2021 • Sebastian Cygert, Bartłomiej Wróblewski, Karol Woźniak, Radosław Słowiński, Andrzej Czyżewski
While recent computer vision algorithms achieve impressive performance on many benchmarks, they lack robustness: when presented with an image from a different distribution (e.g., weather or lighting conditions not considered during training), they may produce an erroneous prediction.
no code implementations • 10 Feb 2021 • Sebastian Cygert, Andrzej Czyżewski
It was further shown that while data imbalance methods brought only a slight increase in accuracy for the baseline model (without compression), their impact was more striking at higher compression rates for structured pruning.
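Structured pruning here means removing whole filters or channels rather than individual weights, which is why the compression rate interacts with class imbalance. A hedged sketch using PyTorch's pruning utilities (not the paper's exact setup):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_conv_filters(model, amount=0.5):
    """Remove `amount` fraction of output filters (dim=0) from each conv layer,
    scored by their L2 norm. Entire filters are dropped, so accuracy on rare
    classes degrades faster as the compression rate grows.
    """
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.ln_structured(module, name="weight", amount=amount, n=2, dim=0)
            prune.remove(module, "weight")  # make the pruning permanent
    return model
```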