Search Results for author: Fahad Sarfraz

Found 10 papers, 7 papers with code

Towards Brain Inspired Design for Addressing the Shortcomings of ANNs

no code implementations30 Jun 2023 Fahad Sarfraz, Elahe Arani, Bahram Zonooz

As our understanding of the mechanisms of brain function improves, the value of insights from neuroscience for the development of AI algorithms deserves further consideration.

A Study of Biologically Plausible Neural Network: The Role and Interactions of Brain-Inspired Mechanisms in Continual Learning

1 code implementation13 Apr 2023 Fahad Sarfraz, Elahe Arani, Bahram Zonooz

Humans excel at continually acquiring, consolidating, and retaining information from an ever-changing environment, whereas artificial neural networks (ANNs) exhibit catastrophic forgetting.

Continual Learning

Error Sensitivity Modulation based Experience Replay: Mitigating Abrupt Representation Drift in Continual Learning

1 code implementation14 Feb 2023 Fahad Sarfraz, Elahe Arani, Bahram Zonooz

To this end, we propose ESMER, which employs a principled mechanism to modulate error sensitivity in a dual-memory rehearsal-based system.
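A minimal sketch of what error-sensitivity-modulated replay in a dual-memory setup could look like is given below. It is an illustration under stated assumptions only, not the authors' implementation; the names (working_model, stable_model, buffer.sample, margin, beta) are hypothetical placeholders.

    # Illustrative sketch: down-weight abnormally large errors relative to a
    # running loss estimate, rehearse from an episodic buffer, and keep a slow
    # "stable" copy of the weights via an exponential moving average.
    import torch
    import torch.nn.functional as F

    def esmer_like_step(working_model, stable_model, optimizer, x, y, buffer,
                        loss_ema, beta=0.99, margin=1.0, ema_decay=0.999):
        logits = working_model(x)
        per_sample = F.cross_entropy(logits, y, reduction="none")
        # Error-sensitivity modulation: shrink the contribution of samples whose
        # loss greatly exceeds the running estimate (e.g. right after a task switch).
        weights = torch.where(per_sample > margin * loss_ema,
                              loss_ema / per_sample.detach(),
                              torch.ones_like(per_sample))
        loss = (weights * per_sample).mean()
        # Rehearsal from episodic memory (buffer.sample/add are assumed helpers).
        if len(buffer) > 0:
            xb, yb = buffer.sample(x.size(0))
            loss = loss + F.cross_entropy(working_model(xb), yb)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Slow consolidation into the stable model, plus the loss-EMA update.
        with torch.no_grad():
            for ps, pw in zip(stable_model.parameters(), working_model.parameters()):
                ps.mul_(ema_decay).add_(pw, alpha=1 - ema_decay)
        loss_ema = beta * loss_ema + (1 - beta) * per_sample.mean().detach()
        buffer.add(x, y)
        return loss_ema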

Continual Learning

Sparse Coding in a Dual Memory System for Lifelong Learning

1 code implementation28 Dec 2022 Fahad Sarfraz, Elahe Arani, Bahram Zonooz

Efficient continual learning in humans is enabled by a rich set of neurophysiological mechanisms and interactions between multiple memory systems.

Continual Learning

Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System

1 code implementation ICLR 2022 Elahe Arani, Fahad Sarfraz, Bahram Zonooz

Humans excel at continually learning from an ever-changing environment, whereas this remains a challenge for deep neural networks, which exhibit catastrophic forgetting.

Continual Learning

Noisy Concurrent Training for Efficient Learning under Label Noise

2 code implementations17 Sep 2020 Fahad Sarfraz, Elahe Arani, Bahram Zonooz

Thus, we propose Noisy Concurrent Training (NCT), which leverages collaborative learning to use the consensus between two models as an additional source of supervision.
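As a rough illustration of consensus-based co-training (not the exact NCT recipe; the temperature, weighting, and any noise-injection details are assumed placeholders), each peer network can be trained on the labels plus a KL term toward the other peer's softened predictions:

    # Sketch of a consensus loss between two peer networks; hyperparameters
    # (alpha, temperature) are illustrative, not taken from the paper.
    import torch
    import torch.nn.functional as F

    def consensus_loss(logits_a, logits_b, targets, alpha=0.5, temperature=2.0):
        # Supervised term on (possibly noisy) labels for both peers.
        ce = F.cross_entropy(logits_a, targets) + F.cross_entropy(logits_b, targets)
        pa = F.log_softmax(logits_a / temperature, dim=1)
        pb = F.log_softmax(logits_b / temperature, dim=1)
        # Symmetric agreement term; each peer matches the other's (detached) output.
        consensus = (F.kl_div(pa, pb.exp().detach(), reduction="batchmean")
                     + F.kl_div(pb, pa.exp().detach(), reduction="batchmean"))
        return (1 - alpha) * ce + alpha * (temperature ** 2) * consensus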

Image Classification, Memorization

Adversarial Concurrent Training: Optimizing Robustness and Accuracy Trade-off of Deep Neural Networks

1 code implementation16 Aug 2020 Elahe Arani, Fahad Sarfraz, Bahram Zonooz

Adversarial training has been proven to be an effective technique for improving the adversarial robustness of models.
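For reference, the standard adversarial-training baseline referred to here can be sketched as a PGD inner maximization followed by training on the perturbed inputs. The sketch below shows only that baseline, not ACT's concurrent-training scheme, and the attack radius and step sizes are illustrative.

    # Minimal PGD adversarial training loop (standard formulation, not ACT).
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Random start inside the L-infinity ball, then iterative sign-gradient ascent.
        x_adv = torch.clamp(x + torch.empty_like(x).uniform_(-eps, eps), 0, 1).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            # Project back into the eps-ball and the valid pixel range.
            x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
        return x_adv.detach()

    def adv_train_step(model, optimizer, x, y):
        model.train()
        x_adv = pgd_attack(model, x, y)           # craft adversarial examples
        loss = F.cross_entropy(model(x_adv), y)   # train on the perturbed inputs
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()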

Adversarial Robustness

Knowledge Distillation Beyond Model Compression

no code implementations3 Jul 2020 Fahad Sarfraz, Elahe Arani, Bahram Zonooz

Knowledge distillation (KD) is commonly deemed as an effective model compression technique in which a compact model (student) is trained under the supervision of a larger pretrained model or an ensemble of models (teacher).
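The standard distillation objective described in this snippet pairs a temperature-softened KL term against the teacher's predictions with the usual cross-entropy on ground-truth labels; the weighting and temperature below are illustrative defaults, not values from the paper.

    # Classic knowledge distillation loss: soft teacher targets + hard labels.
    import torch
    import torch.nn.functional as F

    def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
        soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                        F.softmax(teacher_logits / T, dim=1),
                        reduction="batchmean") * (T * T)
        hard = F.cross_entropy(student_logits, targets)
        return alpha * soft + (1 - alpha) * hard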

Knowledge Distillation, Model Compression +1

Noise as a Resource for Learning in Knowledge Distillation

no code implementations11 Oct 2019 Elahe Arani, Fahad Sarfraz, Bahram Zonooz

In doing so, we propose three different methods that target the common challenges in deep neural networks: minimizing the performance gap between a compact model and large model (Fickle Teacher), training high performance compact adversarially robust models (Soft Randomization), and training models efficiently under label noise (Messy Collaboration).

Knowledge Distillation
