no code implementations • 19 Sep 2023 • Chang Liu, Giulia Rizzoli, Francesco Barbato, Andrea Maracani, Marco Toldo, Umberto Michieli, Yi Niu, Pietro Zanuttigh
Catastrophic forgetting of previously acquired knowledge is a critical issue in continual learning, typically handled through various regularization strategies.
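As a minimal sketch of one such regularization strategy (an EWC-style quadratic penalty, not necessarily the method used in this paper), the idea is to discourage important parameters from drifting away from the values learned on earlier tasks. All names here are illustrative:

```python
def forgetting_penalty(params, old_params, importance, strength=1.0):
    """Quadratic penalty keeping important weights near the values
    they had after training on previous tasks."""
    return strength * sum(
        w * (p - p_old) ** 2
        for p, p_old, w in zip(params, old_params, importance)
    )

# A weight with high importance (w=2.0) that drifted by 1.0 dominates
# the penalty, while an unchanged weight contributes nothing.
penalty = forgetting_penalty([1.0, 2.0], [1.0, 1.0], [0.5, 2.0])
```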
1 code implementation • 7 Apr 2023 • Donald Shenaj, Marco Toldo, Alberto Rigon, Pietro Zanuttigh
We introduce a novel federated learning setting (AFCL) where the continual learning of multiple tasks happens at each client with different orderings and in asynchronous time slots.
no code implementations • 13 Oct 2022 • Marco Toldo, Umberto Michieli, Pietro Zanuttigh
We then address the proposed setup with style-transfer techniques, which extend knowledge across domains when learning incremental tasks, and with a robust distillation framework, which recollects task knowledge under incremental domain shift.
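A distillation framework of this kind typically adds a soft-target cross-entropy term between the current model and a frozen copy trained on earlier tasks. The sketch below is a generic version of that term, not the paper's specific formulation:

```python
import math

def distillation_loss(student_probs, teacher_probs, eps=1e-8):
    """Cross-entropy of the student's predictions against the (frozen)
    teacher's soft targets; low when the student matches the teacher,
    which is how earlier-task knowledge is retained."""
    return -sum(
        t * math.log(s + eps)
        for t, s in zip(teacher_probs, student_probs)
    )
```

A student that closely tracks the teacher's distribution incurs a smaller loss than one that drifts, so minimizing this term alongside the new-task loss counteracts forgetting.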
1 code implementation • 5 Oct 2022 • Donald Shenaj, Eros Fanì, Marco Toldo, Debora Caldarola, Antonio Tavera, Umberto Michieli, Marco Ciccone, Pietro Zanuttigh, Barbara Caputo
Federated Learning (FL) has recently emerged as a possible way to tackle the domain shift in real-world Semantic Segmentation (SS) without compromising the private nature of the collected data.
no code implementations • CVPR 2022 • Marco Toldo, Mete Ozay
In Class Incremental Learning (CIL), a classification model is progressively trained at each incremental step on an evolving dataset of new classes, while at the same time, it is required to preserve knowledge of all the classes observed so far.
1 code implementation • ICCV 2021 • Andrea Maracani, Umberto Michieli, Marco Toldo, Pietro Zanuttigh
Replay data are then blended with new samples during the incremental steps.
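Blending replay data with new samples usually means interleaving a fraction of stored (or generated) exemplars into each training batch. A minimal sketch of that batching scheme, with illustrative names and an assumed replay fraction:

```python
import random

def blended_batches(new_data, replay_data, batch_size,
                    replay_frac=0.25, seed=0):
    """Yield batches that mix new-class samples with replayed
    exemplars from earlier incremental steps."""
    rng = random.Random(seed)
    n_replay = int(batch_size * replay_frac)
    n_new = batch_size - n_replay
    for i in range(0, len(new_data), n_new):
        batch = list(new_data[i:i + n_new])
        if replay_data:
            batch += rng.sample(replay_data, min(n_replay, len(replay_data)))
        rng.shuffle(batch)
        yield batch
```

With `batch_size=4` and `replay_frac=0.25`, each batch carries three new samples and one replayed exemplar, so old classes keep appearing in the gradient signal during the incremental step.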
1 code implementation • 6 Aug 2021 • Francesco Barbato, Umberto Michieli, Marco Toldo, Pietro Zanuttigh
Deep learning models achieve impressive accuracy in road scene understanding; however, they require a large quantity of labeled samples for training.
1 code implementation • 6 Apr 2021 • Francesco Barbato, Marco Toldo, Umberto Michieli, Pietro Zanuttigh
Deep convolutional neural networks for semantic segmentation achieve outstanding accuracy; however, they suffer from two major drawbacks: first, they do not generalize well to distributions even slightly different from that of the training data; second, they require a huge amount of labeled data for optimization.
1 code implementation • 25 Nov 2020 • Marco Toldo, Umberto Michieli, Pietro Zanuttigh
Deep learning frameworks allowed for a remarkable advancement in semantic segmentation, but the data-hungry nature of convolutional networks has rapidly raised the demand for adaptation techniques able to transfer learned knowledge from label-abundant domains to unlabeled ones.
no code implementations • 21 May 2020 • Marco Toldo, Andrea Maracani, Umberto Michieli, Pietro Zanuttigh
The aim of this paper is to give an overview of the recent advancements in the Unsupervised Domain Adaptation (UDA) of deep networks for semantic segmentation.
no code implementations • 27 Apr 2020 • Teo Spadotto, Marco Toldo, Umberto Michieli, Pietro Zanuttigh
We introduce a novel UDA framework where a standard supervised loss on labeled synthetic data is supported by an adversarial module and a self-training strategy aimed at aligning the two domain distributions.
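The self-training branch of such a framework commonly converts confident predictions on unlabeled target data into pseudo-labels, discarding uncertain ones. A minimal sketch of that confidence-threshold selection, with an assumed threshold value:

```python
def select_pseudo_labels(probs, threshold=0.9):
    """For each target prediction (a per-class probability list),
    keep the argmax as a pseudo-label only if its confidence clears
    the threshold; otherwise mark the sample as ignored (None)."""
    pseudo = []
    for p in probs:
        conf = max(p)
        pseudo.append(p.index(conf) if conf >= threshold else None)
    return pseudo
```

Only the retained pseudo-labels then feed the supervised loss on the target domain, while the adversarial module aligns feature distributions regardless of confidence.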
no code implementations • 14 Jan 2020 • Marco Toldo, Umberto Michieli, Gianluca Agresti, Pietro Zanuttigh
The supervised training of deep networks for semantic segmentation requires a huge amount of labeled real world data.