Search Results for author: Arslan Chaudhry

Found 12 papers, 6 papers with code

Is forgetting less a good inductive bias for forward transfer?

no code implementations · 14 Mar 2023 · Jiefeng Chen, Timothy Nguyen, Dilan Gorur, Arslan Chaudhry

We argue that the measure of forward transfer to a task should not be affected by the restrictions placed on the continual learner in order to preserve knowledge of previous tasks.

Continual Learning · Image Classification +1

When does mixup promote local linearity in learned representations?

no code implementations · 28 Oct 2022 · Arslan Chaudhry, Aditya Krishna Menon, Andreas Veit, Sadeep Jayasumana, Srikumar Ramalingam, Sanjiv Kumar

Towards this, we study two questions: (1) how does the Mixup loss that enforces linearity in the last network layer propagate the linearity to the earlier layers?

Representation Learning
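The question above concerns the standard Mixup construction, which trains on convex combinations of input pairs and their labels. A minimal sketch of that construction (array shapes, the Beta concentration alpha, and the helper name are illustrative assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Return a convex combination of two inputs and their one-hot labels.

    The mixing coefficient lam is drawn from Beta(alpha, alpha), as in the
    standard Mixup recipe; alpha=0.2 is an assumed, illustrative value.
    """
    lam = float(rng.beta(alpha, alpha))
    x = lam * x1 + (1.0 - lam) * x2  # mixed input
    y = lam * y1 + (1.0 - lam) * y2  # mixed (soft) label
    return x, y, lam
```

Training on such pairs pushes the model toward behaving linearly between examples at the output layer; the paper asks how far that linearity propagates back into earlier representations.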

Architecture Matters in Continual Learning

no code implementations · 1 Feb 2022 · Seyed Iman Mirzadeh, Arslan Chaudhry, Dong Yin, Timothy Nguyen, Razvan Pascanu, Dilan Gorur, Mehrdad Farajtabar

However, in this work, we show that the choice of architecture can significantly impact the continual learning performance, and different architectures lead to different trade-offs between the ability to remember previous tasks and learning new ones.

Continual Learning

Wide Neural Networks Forget Less Catastrophically

no code implementations · 21 Oct 2021 · Seyed Iman Mirzadeh, Arslan Chaudhry, Dong Yin, Huiyi Hu, Razvan Pascanu, Dilan Gorur, Mehrdad Farajtabar

A primary focus area in continual learning research is alleviating the "catastrophic forgetting" problem in neural networks by designing new algorithms that are more robust to distribution shifts.

Continual Learning

Multilevel Knowledge Transfer for Cross-Domain Object Detection

no code implementations · 2 Aug 2021 · Botos Csaba, Xiaojuan Qi, Arslan Chaudhry, Puneet Dokania, Philip Torr

The key ingredients of our approach are: (a) mapping the source to the target domain at the pixel level; (b) training a teacher network on the mapped source and the unannotated target domain using adversarial feature alignment; and (c) finally, training a student network using the pseudo-labels obtained from the teacher.

Object Detection +2
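Step (c) above is the standard pseudo-labeling pattern: the teacher predicts on unlabeled target data, and only its confident predictions are kept as training targets for the student. A hedged sketch of that selection step (the function name and the 0.9 confidence threshold are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def select_pseudo_labels(teacher_probs, threshold=0.9):
    """Keep only high-confidence teacher predictions as pseudo-labels.

    teacher_probs: (N, C) array of per-class probabilities from the teacher
    on unlabeled target-domain examples. Returns the pseudo-labels for the
    retained examples and a boolean mask saying which examples were kept.
    The threshold is an assumed knob; real detectors filter per-box scores.
    """
    confidence = teacher_probs.max(axis=1)   # top-class probability per example
    keep = confidence >= threshold           # retain only confident predictions
    labels = teacher_probs.argmax(axis=1)    # teacher's predicted class
    return labels[keep], keep
```

The student is then trained on the kept examples as if their pseudo-labels were ground truth, which distills the teacher's domain-adapted knowledge.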

Continual Learning in Low-rank Orthogonal Subspaces

1 code implementation · NeurIPS 2020 · Arslan Chaudhry, Naeemullah Khan, Puneet K. Dokania, Philip H. S. Torr

In continual learning (CL), a learner is faced with a sequence of tasks, arriving one after the other, and the goal is to remember all the tasks once the continual learning experience is finished.

Continual Learning

Efficient Lifelong Learning with A-GEM

2 code implementations · ICLR 2019 · Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, Mohamed Elhoseiny

In lifelong learning, the learner is presented with a sequence of tasks, incrementally building a data-driven prior which may be leveraged to speed up learning of a new task.

Class Incremental Learning
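The core mechanism of A-GEM is a cheap gradient projection: if the current-task gradient would increase the average loss on an episodic memory of past tasks (negative dot product with the memory gradient), the conflicting component is removed. A minimal NumPy sketch under that reading (variable names are assumptions; the paper's code operates on flattened network gradients):

```python
import numpy as np

def agem_project(g, g_ref):
    """A-GEM-style projection of the current-task gradient g.

    g_ref is the gradient of the average loss over the episodic memory.
    If g and g_ref conflict (g . g_ref < 0), subtract the component of g
    along g_ref so the memory loss does not increase to first order;
    otherwise take the plain gradient step.
    """
    dot = float(g @ g_ref)
    if dot >= 0.0:
        return g  # no interference with past tasks
    return g - (dot / float(g_ref @ g_ref)) * g_ref
```

Because only a single reference gradient is used (rather than one constraint per past task, as in GEM), the projection costs one dot product and one vector update per step.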

Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence

2 code implementations · ECCV 2018 · Arslan Chaudhry, Puneet K. Dokania, Thalaiyasingam Ajanthan, Philip H. S. Torr

We observe that, in addition to forgetting, a known issue when preserving knowledge, IL also suffers from a problem we call intransigence: the inability of a model to update its knowledge.

Incremental Learning
