no code implementations • 19 Feb 2024 • Akash Guna R. T, Arnav Chavan, Deepak Gupta
Our method is flexible towards skip connections, a mainstay of modern vision transformers.
1 code implementation • 2 Feb 2024 • Arnav Chavan, Raghav Magazine, Shubham Kushwaha, Mérouane Debbah, Deepak Gupta
Despite the impressive performance of LLMs, their widespread adoption faces challenges due to substantial computational and memory requirements during inference.
1 code implementation • 12 Dec 2023 • Arnav Chavan, Nahush Lele, Deepak Gupta
Due to the substantial scale of Large Language Models (LLMs), the direct application of conventional compression methodologies proves impractical.
1 code implementation • 13 Jun 2023 • Arnav Chavan, Zhuang Liu, Deepak Gupta, Eric Xing, Zhiqiang Shen
We present Generalized LoRA (GLoRA), an advanced approach for universal parameter-efficient fine-tuning tasks.
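GLoRA generalizes the low-rank adaptation family of parameter-efficient fine-tuning methods. As a rough illustration of that family (not GLoRA's actual formulation), a minimal LoRA-style adapter freezes the pretrained weight and learns a low-rank correction; the `rank` and `alpha` values below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA-style adapter: keep the pretrained weight frozen and learn a
    low-rank update scaled by alpha / rank. A generic sketch of the PEFT family
    that GLoRA extends, not GLoRA itself."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # pretrained weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # frozen path plus trainable low-rank correction
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# usage: wrap an existing projection layer and fine-tune only the adapter parameters
layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(4, 768))
```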
no code implementations • 31 Jan 2023 • Deepak K. Gupta, Gowreesh Mago, Arnav Chavan, Dilip K. Prasad
Traditional CNN models are trained and tested on relatively low-resolution images (<300 px) and cannot be applied directly to large-scale images due to compute and memory constraints.
1 code implementation • 24 Nov 2022 • Saksham Aggarwal, Taneesh Gupta, Pawan Kumar Sahu, Arnav Chavan, Rishabh Tiwari, Dilip K. Prasad, Deepak K. Gupta
A comparison of SOTA trackers using CNNs, transformers, and combinations of the two is presented to study their stability at various compression ratios.
1 code implementation • CVPR 2022 • Arnav Chavan, Rishabh Tiwari, Udbhav Bamba, Deepak K. Gupta
MetaDOCK compresses both the meta-model and the task-specific inner models, significantly reducing model size for each task; by constraining the number of active kernels for every task, it also implicitly mitigates meta-overfitting.
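The excerpt describes the kernel constraint only at a high level. As a loose illustration of the underlying idea (each task using only a sparse subset of convolution kernels), the sketch below applies a fixed per-task binary mask over a conv layer's output kernels; the random mask construction and budget are illustrative assumptions, not MetaDOCK's actual selection procedure.

```python
import torch
import torch.nn as nn

class TaskKernelMaskedConv(nn.Module):
    """Illustrative only: a conv layer whose output kernels are switched on or
    off per task, loosely mimicking a per-task budget of active kernels (not
    MetaDOCK's actual procedure, which learns the selection)."""

    def __init__(self, in_ch: int, out_ch: int, num_tasks: int, active_per_task: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # fixed random kernel subsets per task; a real method would learn these
        masks = torch.zeros(num_tasks, out_ch)
        for t in range(num_tasks):
            idx = torch.randperm(out_ch)[:active_per_task]
            masks[t, idx] = 1.0
        self.register_buffer("masks", masks)

    def forward(self, x, task_id: int):
        y = self.conv(x)
        return y * self.masks[task_id].view(1, -1, 1, 1)  # zero out inactive kernels

# usage: each task sees only its own subset of kernels
layer = TaskKernelMaskedConv(in_ch=3, out_ch=32, num_tasks=5, active_per_task=16)
feat = layer(torch.randn(2, 3, 32, 32), task_id=0)
```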
1 code implementation • CVPR 2022 • Arnav Chavan, Zhiqiang Shen, Zhuang Liu, Zechun Liu, Kwang-Ting Cheng, Eric Xing
This paper explores the feasibility of finding an optimal sub-model from a vision transformer and introduces a pure vision transformer slimming (ViT-Slim) framework.
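The slimming framework itself is not spelled out in this excerpt. A common ingredient of such approaches is a learnable importance gate per dimension, trained with a sparsity penalty so that low-scoring dimensions can later be dropped; the sketch below shows that generic pattern for a transformer MLP hidden dimension, with arbitrary hyperparameters, and is not the ViT-Slim algorithm.

```python
import torch
import torch.nn as nn

class GatedMLP(nn.Module):
    """Generic 'slimming' ingredient: a learnable soft gate on each hidden unit
    of a transformer MLP block, trained with an L1 sparsity penalty so that
    unimportant units shrink toward zero and can be pruned afterwards. A sketch
    of the general idea, not the ViT-Slim framework itself."""

    def __init__(self, dim: int = 384, hidden: int = 1536):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)
        self.gate = nn.Parameter(torch.ones(hidden))  # per-unit importance score

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)) * self.gate)

    def sparsity_loss(self):
        return self.gate.abs().sum()  # added to the task loss with a small weight

mlp = GatedMLP()
x = torch.randn(8, 197, 384)               # (batch, tokens, dim)
loss = mlp(x).pow(2).mean() + 1e-4 * mlp.sparsity_loss()
loss.backward()
```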
no code implementations • 9 Aug 2021 • Oliver Rippel, Arnav Chavan, Chucai Lei, Dorit Merhof
In our work, we propose a new method to overcome catastrophic forgetting and thus successfully fine-tune pre-trained representations for anomaly detection (AD) in the transfer learning setting.
1 code implementation • ICLR 2021 • Rishabh Tiwari, Udbhav Bamba, Arnav Chavan, Deepak K. Gupta
Structured pruning methods are among the effective strategies for extracting small, resource-efficient convolutional neural networks from their dense counterparts with minimal loss in accuracy.
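As a concrete (if simplistic) picture of structured pruning, the sketch below ranks a convolution's output channels by filter L1 norm and keeps only the top fraction. The ranking criterion and keep ratio are illustrative assumptions, not the budget-aware procedure proposed in the paper.

```python
import torch
import torch.nn as nn

def prune_conv_channels(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    """Simplistic structured pruning: rank output channels by filter L1 norm and
    keep the strongest ones. Illustrative only; the paper's method uses a
    different, budget-aware selection criterion."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    # L1 norm of each output filter, computed over (in_ch * k * k) weights
    scores = conv.weight.detach().abs().flatten(1).sum(dim=1)
    keep = torch.topk(scores, n_keep).indices.sort().values

    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(conv.weight[keep])
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[keep])
    return pruned

# usage: downstream layers must also be adjusted to the reduced channel count
small = prune_conv_channels(nn.Conv2d(64, 128, 3, padding=1), keep_ratio=0.25)
```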
1 code implementation • 14 Jan 2021 • Arnav Chavan, Udbhav Bamba, Rishabh Tiwari, Deepak Gupta
We show that small base networks, when rescaled, can deliver performance comparable to deeper networks with as little as 6% of the optimization parameters of the deeper ones.
no code implementations • 12 Oct 2020 • Sharib Ali, Mariia Dmitrieva, Noha Ghatwary, Sophia Bano, Gorkem Polat, Alptekin Temizel, Adrian Krenzer, Amar Hekalo, Yun Bo Guo, Bogdan Matuszewski, Mourad Gridach, Irina Voiculescu, Vishnusai Yoganand, Arnav Chavan, Aryan Raj, Nhan T. Nguyen, Dat Q. Tran, Le Duy Huynh, Nicolas Boutry, Shahadate Rezvy, Haijian Chen, Yoon Ho Choi, Anand Subramanian, Velmurugan Balasubramanian, Xiaohong W. Gao, Hongyu Hu, Yusheng Liao, Danail Stoyanov, Christian Daul, Stefano Realdon, Renato Cannizzaro, Dominique Lamarque, Terry Tran-Nguyen, Adam Bailey, Barbara Braden, James East, Jens Rittscher
The Endoscopy Computer Vision Challenge (EndoCV) is a crowd-sourcing initiative to address prominent problems in developing reliable computer-aided detection and diagnosis systems for endoscopy, and to suggest a pathway for the clinical translation of these technologies.
1 code implementation • 23 Mar 2020 • Suyog Jadhav, Udbhav Bamba, Arnav Chavan, Rishabh Tiwari, Aryan Raj
The endoscopic artefact detection challenge consists of 1) artefact detection, 2) semantic segmentation, and 3) out-of-sample generalisation.