no code implementations • 21 Mar 2024 • Jinyung Hong, Eun Som Jeon, Changhoon Kim, Keun Hee Park, Utkarsh Nath, Yezhou Yang, Pavan Turaga, Theodore P. Pavlic
Biased attributes, spuriously correlated with target labels in a dataset, can lead neural networks to learn improper shortcuts for classification, limiting their capacity for out-of-distribution (OOD) generalization.
no code implementations • 14 Feb 2024 • Rajeev Goel, Utkarsh Nath, Yancheng Wang, Alvin C. Silva, Teresa Wu, Yingzhen Yang
To address this challenge, we propose a novel Low-Rank Feature Learning (LRFL) method in this paper, which is universally applicable to the training of all neural networks.
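The paper's LRFL method itself is not reproduced here; as a rough illustration of the underlying idea of restricting features to a low-rank subspace, the following numpy sketch truncates a feature matrix to its top singular directions. The function name and rank parameter are illustrative assumptions, not the paper's API.

```python
import numpy as np

def low_rank_features(X, rank):
    """Illustrative sketch: keep only the top-`rank` singular directions
    of a feature matrix X, discarding the low-energy tail."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    # reconstruct X from its `rank` largest singular components
    return (U[:, :rank] * S[:rank]) @ Vt[:rank]

# a rank-1 matrix is recovered exactly by a rank-1 truncation
X = np.outer([1.0, 2.0, 3.0], [4.0, 5.0])
X_lr = low_rank_features(X, 1)
```

The design point is that the truncation is a projection: applying it to features that are already low-rank changes nothing, while high-rank noise is suppressed.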
no code implementations • 19 Jan 2023 • Utkarsh Nath, Yancheng Wang, Yingzhen Yang
In this paper, we propose Robust Neural Architecture Search by Cross-Layer Knowledge Distillation (RNAS-CL), a novel NAS algorithm that improves the robustness of NAS by learning from a robust teacher through cross-layer knowledge distillation.
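The core of cross-layer distillation is that each student layer may learn from whichever teacher layer matches it best, rather than the layer at the same depth. The numpy sketch below shows only that matching idea under the simplifying assumption that student and teacher features share a common shape (the actual RNAS-CL method, including how layer correspondence is searched, is not shown, and the function name is hypothetical).

```python
import numpy as np

def cross_layer_kd_loss(student_feats, teacher_feats):
    """Illustrative sketch: for each student feature map, distill from the
    teacher layer with the smallest mean-squared distance to it."""
    total = 0.0
    for s in student_feats:
        # compare against every teacher layer, keep the best match
        dists = [np.mean((s - t) ** 2) for t in teacher_feats]
        total += min(dists)
    return total / len(student_feats)

t_shallow = np.zeros((2, 2))
t_deep = np.ones((2, 2))
# this student layer matches the deep teacher layer exactly
loss_matched = cross_layer_kd_loss([np.ones((2, 2))], [t_shallow, t_deep])
```

A student layer that replicates some teacher layer contributes zero loss, regardless of which depth that teacher layer sits at.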
no code implementations • 1 Jan 2021 • Utkarsh Nath, Shrinu Kushagra
Our one-shot learning paradigm trains both the original and the smaller networks together.
no code implementations • 19 Nov 2020 • Utkarsh Nath, Shikha Asrani, Rahul Katarya
Many clustering algorithms are used to group categorical data, among which the k-modes algorithm has so far given the most significant results.
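For reference, k-modes replaces k-means' Euclidean distance with a simple mismatch count and replaces cluster means with per-attribute modes. The following is a minimal self-contained sketch of that standard algorithm (not the code from this paper), with distinct-first initialization assumed for simplicity.

```python
from collections import Counter

def hamming(a, b):
    # dissimilarity = number of mismatched categorical attributes
    return sum(x != y for x, y in zip(a, b))

def k_modes(data, k, n_iter=10):
    """Minimal k-modes: alternate nearest-mode assignment with
    per-attribute mode updates, as in k-means but for categories."""
    # initialize modes with the first k distinct records
    modes = []
    for x in data:
        if x not in modes:
            modes.append(x)
        if len(modes) == k:
            break
    labels = [0] * len(data)
    for _ in range(n_iter):
        # assignment step: nearest mode under Hamming dissimilarity
        labels = [min(range(k), key=lambda j: hamming(x, modes[j]))
                  for x in data]
        # update step: most frequent category per attribute in each cluster
        for j in range(k):
            members = [x for x, l in zip(data, labels) if l == j]
            if members:
                modes[j] = tuple(Counter(col).most_common(1)[0][0]
                                 for col in zip(*members))
    return labels, modes

records = [("red", "small"), ("red", "small"),
           ("blue", "large"), ("blue", "large")]
labels, modes = k_modes(records, k=2)
```

On this toy data the two duplicated record types separate cleanly into two clusters whose modes are the records themselves.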
1 code implementation • 10 Jun 2020 • Utkarsh Nath, Shrinu Kushagra, Yingzhen Yang
In this paper, we introduce Adjoined Networks, or AN, a learning paradigm that trains both the original base network and the smaller compressed network together.
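One way to train a base network and a compressed network together, as both this paper and the 2021 entry above describe, is a joint objective that supervises each network with the labels while pulling the small network's predictions toward the base network's. The numpy sketch below shows one plausible such loss (cross-entropy for both plus a KL consistency term); the function name, weighting, and exact form are assumptions for illustration, not the paper's definition of Adjoined Networks.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def adjoined_loss(logits_base, logits_small, labels, alpha=0.5):
    """Illustrative joint objective: label supervision for both networks,
    plus a KL term aligning the small network with the base network."""
    p_base = softmax(logits_base)
    p_small = softmax(logits_small)
    n = len(labels)
    ce_base = -np.log(p_base[np.arange(n), labels]).mean()
    ce_small = -np.log(p_small[np.arange(n), labels]).mean()
    # KL(base || small): penalize the small net for diverging from the base
    kl = (p_base * (np.log(p_base) - np.log(p_small))).sum(axis=1).mean()
    return ce_base + ce_small + alpha * kl

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
labels = np.array([0, 1, 2, 0])
# identical networks incur no consistency penalty
loss_same = adjoined_loss(logits, logits, labels)
```

When the two networks agree exactly, the KL term vanishes and the loss reduces to the two supervised terms, so `alpha` only matters while the networks disagree.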