Search Results for author: Bartlomiej Twardowski

Found 7 papers, 5 papers with code

FedFNN: Faster Training Convergence Through Update Predictions in Federated Recommender Systems

no code implementations • 14 Sep 2023 • Francesco Fabbri, Xianghang Liu, Jack R. McKenzie, Bartlomiej Twardowski, Tri Kurniawan Wijaya

Federated Learning (FL) has emerged as a key approach for distributed machine learning, enhancing online personalization while ensuring user data privacy.

Federated Learning • Recommendation Systems
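The abstract above refers to the general federated learning setting rather than FedFNN's specific update-prediction mechanism, which is not reproduced here. As context, the following is a minimal sketch of plain FedAvg-style weighted aggregation of client parameters; the function name and data layout are illustrative assumptions.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Weighted average of client model parameters (plain FedAvg baseline,
    not FedFNN's update-prediction scheme).

    client_weights: list of dicts {param_name: np.ndarray}, one per client
    client_sizes:   number of local training samples per client
    """
    total = float(sum(client_sizes))
    agg = {}
    for name in client_weights[0]:
        agg[name] = sum(
            w[name] * (n / total) for w, n in zip(client_weights, client_sizes)
        )
    return agg

# toy usage: two clients sharing a single embedding matrix
clients = [{"embed": np.ones((2, 2))}, {"embed": np.zeros((2, 2))}]
print(fedavg_aggregate(clients, client_sizes=[30, 10])["embed"])
```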

Plasticity-Optimized Complementary Networks for Unsupervised Continual Learning

1 code implementation • 12 Sep 2023 • Alex Gomez-Villa, Bartlomiej Twardowski, Kai Wang, Joost Van de Weijer

In the second phase, we combine this new knowledge with the previous network in an adaptation-retrospection phase to avoid forgetting and initialize a new expert with the knowledge of the old network.

Representation Learning • Self-Supervised Learning +1
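The abstract describes an adaptation-retrospection phase that combines new knowledge with the previous network. The sketch below is a hedged illustration, assuming a generic feature-distillation term as the retrospection component; the cosine objective and the `alpha` trade-off are assumptions, not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def adaptation_retrospection_loss(new_feats, old_feats, ssl_loss, alpha=0.5):
    """Combine a self-supervised loss on the current task (adaptation /
    plasticity) with a feature-distillation term that pulls the new network
    toward the frozen previous network (retrospection).

    new_feats, old_feats: (batch, dim) features from current / frozen network
    ssl_loss: scalar self-supervised loss already computed on new data
    alpha: plasticity-vs-forgetting trade-off (assumed hyperparameter)
    """
    retro = 1.0 - F.cosine_similarity(new_feats, old_feats, dim=-1).mean()
    return ssl_loss + alpha * retro
```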

Continually Learning Self-Supervised Representations with Projected Functional Regularization

1 code implementation • 30 Dec 2021 • Alex Gomez-Villa, Bartlomiej Twardowski, Lu Yu, Andrew D. Bagdanov, Joost Van de Weijer

Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised approaches.

Continual Learning • Incremental Learning +1
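The snippet above is motivational, but the title suggests regularizing the current encoder's function toward the previous one through a projection. The following is only a hedged sketch of that idea, assuming a learnable projection head and a cosine objective; the module name, dimensions, and loss form are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectedFunctionalRegularizer(nn.Module):
    """Illustrative functional-regularization term: project current features
    and penalize divergence from the frozen previous encoder's features,
    so old behavior is preserved while the new encoder keeps learning.
    """
    def __init__(self, dim=256):
        super().__init__()
        self.projector = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, new_feats, old_feats):
        projected = self.projector(new_feats)
        # stop gradients through the old (frozen) features
        return 1.0 - F.cosine_similarity(projected, old_feats.detach(), dim=-1).mean()
```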

Reducing Label Effort: Self-Supervised meets Active Learning

no code implementations • 25 Aug 2021 • Javad Zolfaghari Bengar, Joost Van de Weijer, Bartlomiej Twardowski, Bogdan Raducanu

Our experiments reveal that self-training is remarkably more efficient than active learning at reducing the labeling effort, that for a low labeling budget, active learning offers no benefit to self-training, and finally that the combination of active learning and self-training is fruitful when the labeling budget is high.

Active Learning • Object Recognition
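The abstract compares self-training with active learning under different labeling budgets. Below is a minimal sketch of how an unlabeled pool might be split between annotator queries (uncertainty-based active learning) and pseudo-labels (self-training); the entropy acquisition function, confidence threshold, and function name are assumptions for illustration, not the paper's exact protocol.

```python
import numpy as np

def split_pool(probs, budget, conf_threshold=0.95):
    """Split an unlabeled pool between active-learning queries and
    self-training pseudo-labels.

    probs: model class probabilities of shape (n_samples, n_classes)
    Returns: indices to send to annotators, and (index, pseudo-label) pairs.
    """
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    to_label = np.argsort(-entropy)[:budget]           # most uncertain -> annotate
    confident = np.where(probs.max(axis=1) >= conf_threshold)[0]
    confident = np.setdiff1d(confident, to_label)       # don't pseudo-label queried items
    pseudo = list(zip(confident, probs[confident].argmax(axis=1)))
    return to_label, pseudo
```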

Class-incremental learning: survey and performance evaluation on image classification

1 code implementation • 28 Oct 2020 • Marc Masana, Xialei Liu, Bartlomiej Twardowski, Mikel Menta, Andrew D. Bagdanov, Joost Van de Weijer

For future learning systems, incremental learning is desirable because it allows for: efficient resource usage by eliminating the need to retrain from scratch at the arrival of new data; reduced memory usage by preventing or limiting the amount of data required to be stored -- also important when privacy limitations are imposed; and learning that more closely resembles human learning.

Class Incremental Learning • General Classification +2
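The abstract motivates incremental learning partly by limits on how much past data can be stored. As a small illustration of that constraint, here is a hedged sketch of a fixed-capacity exemplar buffer of the kind rehearsal-based class-incremental methods (one family such surveys cover) rely on; random per-class sampling is an assumption, and selection schemes like herding are common alternatives.

```python
import random

class ExemplarMemory:
    """Fixed-size rehearsal buffer: keeps a bounded number of exemplars per
    seen class so old classes can be replayed without storing all past data."""

    def __init__(self, capacity=2000):
        self.capacity = capacity
        self.per_class = {}  # class id -> list of stored samples

    def update(self, samples_by_class):
        # add new classes, then rebalance the per-class quota to fit capacity
        self.per_class.update({c: list(s) for c, s in samples_by_class.items()})
        quota = max(1, self.capacity // max(1, len(self.per_class)))
        for c, s in self.per_class.items():
            self.per_class[c] = random.sample(s, min(quota, len(s)))

    def replay_batch(self, k=32):
        pool = [x for s in self.per_class.values() for x in s]
        return random.sample(pool, min(k, len(pool)))
```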
