no code implementations • 30 Jun 2023 • Fahad Sarfraz, Elahe Arani, Bahram Zonooz
As our understanding of the mechanisms of brain function deepens, the insights that neuroscience can offer for the development of AI algorithms deserve further consideration.
1 code implementation • 13 Apr 2023 • Fahad Sarfraz, Elahe Arani, Bahram Zonooz
Humans excel at continually acquiring, consolidating, and retaining information from an ever-changing environment, whereas artificial neural networks (ANNs) exhibit catastrophic forgetting.
1 code implementation • 14 Feb 2023 • Fahad Sarfraz, Elahe Arani, Bahram Zonooz
To this end, we propose ESMER, which employs a principled mechanism to modulate error sensitivity in a dual-memory rehearsal-based system.
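A rough sketch of this idea, assuming a PyTorch-style training loop: incoming samples whose loss greatly exceeds a running loss estimate are down-weighted, rehearsal draws from an episodic buffer, and a slow exponential-moving-average copy of the model acts as the second memory. The threshold rule, hyperparameters, and the `buffer.sample`/`buffer.add` helpers are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def esmer_step(model, ema_model, batch, buffer, optimizer,
               loss_stats, beta=0.9, margin=1.0, ema_decay=0.999):
    # Hypothetical sketch: error-sensitivity modulation in a dual-memory
    # rehearsal setup. All constants and buffer helpers are assumptions.
    x, y = batch
    x_mem, y_mem = buffer.sample(len(x))          # episodic (rehearsal) memory

    # Per-sample loss on the incoming batch.
    logits = model(x)
    per_sample = F.cross_entropy(logits, y, reduction="none")

    # Down-weight samples whose error greatly exceeds the running mean,
    # so sudden large errors do not dominate the update.
    weights = (per_sample <= margin * loss_stats["mean"]).float()
    task_loss = (weights * per_sample).mean()

    # Rehearsal loss on buffered samples, with a consistency term toward
    # the slow EMA ("semantic memory") model.
    mem_logits = model(x_mem)
    with torch.no_grad():
        ema_logits = ema_model(x_mem)
    rehearsal_loss = F.cross_entropy(mem_logits, y_mem) \
        + F.mse_loss(mem_logits, ema_logits)

    loss = task_loss + rehearsal_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Update the running loss statistic and the EMA (stable) model.
    loss_stats["mean"] = beta * loss_stats["mean"] + (1 - beta) * per_sample.mean().item()
    with torch.no_grad():
        for p_ema, p in zip(ema_model.parameters(), model.parameters()):
            p_ema.mul_(ema_decay).add_(p, alpha=1 - ema_decay)

    buffer.add(x, y)
```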
1 code implementation • 28 Dec 2022 • Fahad Sarfraz, Elahe Arani, Bahram Zonooz
Efficient continual learning in humans is enabled by a rich set of neurophysiological mechanisms and interactions between multiple memory systems.
1 code implementation • 8 Jun 2022 • Fahad Sarfraz, Elahe Arani, Bahram Zonooz
Continual learning (CL) in the brain is facilitated by a complex set of mechanisms.
1 code implementation • ICLR 2022 • Elahe Arani, Fahad Sarfraz, Bahram Zonooz
Humans excel at continually learning from an ever-changing environment, whereas it remains a challenge for deep neural networks, which exhibit catastrophic forgetting.
2 code implementations • 17 Sep 2020 • Fahad Sarfraz, Elahe Arani, Bahram Zonooz
Thus, we propose Noisy Concurrent Training (NCT), which leverages collaborative learning to use the consensus between two models as an additional source of supervision.
Ranked #34 on Image Classification on mini WebVision 1.0
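A minimal sketch of the consensus term, assuming two PyTorch models trained concurrently: each network is supervised by the labels and by the other network's softened predictions. The temperature, mixing weight, and symmetric KL formulation are illustrative assumptions, and the noise-injection component of NCT is omitted here.

```python
import torch
import torch.nn.functional as F

def nct_losses(model_a, model_b, x, y, alpha=0.5, T=2.0):
    # Consensus-based co-training sketch; alpha and T are assumed defaults.
    logits_a, logits_b = model_a(x), model_b(x)

    # Standard supervised loss for each model.
    ce_a = F.cross_entropy(logits_a, y)
    ce_b = F.cross_entropy(logits_b, y)

    # Consensus: each model is also supervised by the other's softened
    # predictions, treated as fixed targets via detach().
    p_b = F.softmax(logits_b.detach() / T, dim=1)
    p_a = F.softmax(logits_a.detach() / T, dim=1)
    kd_a = F.kl_div(F.log_softmax(logits_a / T, dim=1), p_b,
                    reduction="batchmean") * T * T
    kd_b = F.kl_div(F.log_softmax(logits_b / T, dim=1), p_a,
                    reduction="batchmean") * T * T

    loss_a = (1 - alpha) * ce_a + alpha * kd_a
    loss_b = (1 - alpha) * ce_b + alpha * kd_b
    return loss_a, loss_b
```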
1 code implementation • 16 Aug 2020 • Elahe Arani, Fahad Sarfraz, Bahram Zonooz
Adversarial training has been proven to be an effective technique for improving the adversarial robustness of models.
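For context, a generic PGD-based adversarial training step (Madry-style) looks like the sketch below; this is the standard technique the sentence refers to, not this paper's specific contribution, and the epsilon, step size, and iteration count are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    # Craft a worst-case perturbation within an L-infinity ball of radius eps.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adv_train_step(model, x, y, optimizer):
    model.eval()
    x_adv = pgd_attack(model, x, y)           # generate adversarial examples
    model.train()
    loss = F.cross_entropy(model(x_adv), y)   # train on the perturbed inputs
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```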
no code implementations • 3 Jul 2020 • Fahad Sarfraz, Elahe Arani, Bahram Zonooz
Knowledge distillation (KD) is widely regarded as an effective model compression technique in which a compact model (student) is trained under the supervision of a larger pretrained model or an ensemble of models (teacher).
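The standard distillation objective this describes can be sketched as follows in PyTorch; the temperature and mixing weight are typical illustrative values rather than ones taken from the paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Student matches softened teacher outputs plus the true labels.
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    distill = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                       soft_targets, reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1 - alpha) * hard
```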
no code implementations • 11 Oct 2019 • Elahe Arani, Fahad Sarfraz, Bahram Zonooz
In doing so, we propose three different methods that target common challenges in deep neural networks: minimizing the performance gap between a compact model and a large model (Fickle Teacher), training high-performance, compact, adversarially robust models (Soft Randomization), and training models efficiently under label noise (Messy Collaboration).