1 code implementation • 29 Jan 2024 • Hamed Hemati, Damian Borth
This is done by first estimating a weight for each sample in the mini-batch and then updating the model with the adapted sample weights.
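As an illustration, here is a minimal PyTorch sketch of one mini-batch step with adaptive per-sample weights; the `weight_net` estimator and its input are hypothetical placeholders, not the paper's actual weight-estimation procedure.

```python
import torch
import torch.nn.functional as F

def weighted_step(model, weight_net, optimizer, x, y):
    """One mini-batch update with adaptive per-sample weights (illustrative sketch)."""
    logits = model(x)
    losses = F.cross_entropy(logits, y, reduction="none")  # one loss per sample
    # Hypothetical estimator: maps each sample's loss to a weight in (0, 1).
    weights = torch.sigmoid(weight_net(losses.detach().unsqueeze(1))).squeeze(1)
    loss = (weights * losses).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here `weight_net` could be as simple as `torch.nn.Linear(1, 1)`; detaching the losses keeps the weight estimate from feeding gradients back through the loss itself.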
2 code implementations • 20 Aug 2023 • Albin Soutif--Cormerais, Antonio Carta, Andrea Cossu, Julio Hurtado, Hamed Hemati, Vincenzo Lomonaco, Joost Van de Weijer
Online continual learning aims to get closer to a live learning experience by learning directly on a stream of data with a temporally shifting distribution, while storing only a minimal amount of data from that stream.
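A minimal sketch of what "storing a minimal amount of data" commonly looks like in practice: a fixed-capacity reservoir buffer filled alongside the live stream. The buffer policy and names are illustrative, not a prescription from this survey.

```python
import random

class ReservoirBuffer:
    """Fixed-size buffer; every stream example is retained with equal probability."""
    def __init__(self, capacity):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Reservoir sampling: replace a stored item uniformly at random.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))
```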
1 code implementation • 19 Jun 2023 • Hamed Hemati, Vincenzo Lomonaco, Davide Bacciu, Damian Borth
Inspired by latent replay methods in CL, we propose partial weight generation for the final layers of a model using hypernetworks while freezing the initial layers.
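A minimal sketch of partial weight generation, assuming a task-conditioned hypernetwork that emits only the final classifier's parameters while a frozen backbone supplies features; the dimensions and module names are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PartialHyperNet(nn.Module):
    """Generates weights for the final layer only; earlier layers stay frozen."""
    def __init__(self, feat_dim, n_classes, n_tasks=10, emb_dim=32):
        super().__init__()
        self.task_emb = nn.Embedding(n_tasks, emb_dim)   # one embedding per task
        self.gen = nn.Linear(emb_dim, feat_dim * n_classes + n_classes)
        self.feat_dim, self.n_classes = feat_dim, n_classes

    def forward(self, features, task_id):
        params = self.gen(self.task_emb(task_id))
        W = params[: self.feat_dim * self.n_classes].view(self.n_classes, self.feat_dim)
        b = params[self.feat_dim * self.n_classes:]
        return features @ W.t() + b

# The backbone is frozen; only the hypernetwork's parameters are trained.
backbone = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
for p in backbone.parameters():
    p.requires_grad_(False)

hnet = PartialHyperNet(feat_dim=256, n_classes=10)
logits = hnet(backbone(torch.randn(4, 784)), torch.tensor(0))
```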
1 code implementation • 2 Feb 2023 • Antonio Carta, Lorenzo Pellegrini, Andrea Cossu, Hamed Hemati, Vincenzo Lomonaco
Continual learning is the problem of learning from a nonstationary stream of data, a fundamental issue for sustainable and efficient training of deep neural networks over time.
1 code implementation • 26 Jan 2023 • Hamed Hemati, Andrea Cossu, Antonio Carta, Julio Hurtado, Lorenzo Pellegrini, Davide Bacciu, Vincenzo Lomonaco, Damian Borth
We propose two stochastic stream generators that produce a wide range of class-incremental-with-repetition (CIR) streams starting from a single dataset and a few interpretable control parameters.
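A minimal sketch of such a generator, assuming two illustrative control parameters (the number of experiences and a repetition probability); the paper's generators expose their own parameterization, which differs in detail.

```python
import random

def cir_stream(dataset_by_class, n_experiences, p_repeat, classes_per_exp=2, seed=0):
    """Yield experiences of class subsets; seen classes repeat with prob. p_repeat."""
    rng = random.Random(seed)
    unseen = list(dataset_by_class)
    rng.shuffle(unseen)
    seen = []
    for _ in range(n_experiences):
        exp_classes = []
        for _ in range(classes_per_exp):
            if seen and (not unseen or rng.random() < p_repeat):
                exp_classes.append(rng.choice(seen))  # repetition of an old class
            else:
                c = unseen.pop()                      # introduce a new class
                seen.append(c)
                exp_classes.append(c)
        yield {c: dataset_by_class[c] for c in exp_classes}
```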
no code implementations • 26 Oct 2022 • Marco Schreyer, Hamed Hemati, Damian Borth, Miklos A. Vasarhelyi
Our empirical results, using real-world datasets and combined federated continual learning strategies, demonstrate the learned model's ability to detect anomalies in audit settings under data distribution shifts.
1 code implementation • AAAI Workshop on AI in Financial Services: Adaptiveness, Resilience & Governance 2021 • Hamed Hemati, Marco Schreyer, Damian Borth
This work proposes a continual anomaly detection framework that overcomes both challenges and is designed to learn from a stream of journal entry data experiences.
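For intuition, here is a toy sketch of reconstruction-based anomaly scoring, a common choice for journal entry data in this line of work; the framework's actual detector and training loop are not shown here, and all names are illustrative.

```python
import torch
import torch.nn as nn

class JournalAE(nn.Module):
    """Toy autoencoder; the anomaly score of a journal entry is its reconstruction error."""
    def __init__(self, dim, hidden=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_score(model, x):
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)  # higher = more anomalous
```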
no code implementations • 26 Mar 2021 • Hamed Hemati, Damian Borth
The naive solution of sequentially fine-tuning a model for new speakers can lead to poor performance on older speakers.
no code implementations • 12 Nov 2020 • Hamed Hemati, Damian Borth
Recent neural Text-to-Speech (TTS) models have been shown to perform very well when enough data is available.