Search Results for author: Utkarsh Nath

Found 6 papers, 1 paper with code

Learning Decomposable and Debiased Representations via Attribute-Centric Information Bottlenecks

no code implementations21 Mar 2024 Jinyung Hong, Eun Som Jeon, Changhoon Kim, Keun Hee Park, Utkarsh Nath, Yezhou Yang, Pavan Turaga, Theodore P. Pavlic

Biased attributes that are spuriously correlated with target labels in a dataset can lead neural networks to learn improper shortcuts for classification and limit their out-of-distribution (OOD) generalization.

Attribute Representation Learning

Learning Low-Rank Feature for Thorax Disease Classification

no code implementations14 Feb 2024 Rajeev Goel, Utkarsh Nath, Yancheng Wang, Alvin C. Silva, Teresa Wu, Yingzhen Yang

To address this challenge, we propose a novel Low-Rank Feature Learning (LRFL) method in this paper, which is universally applicable to the training of all neural networks.
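The listing does not describe the mechanism of LRFL, but one common way to impose low-rank structure on learned features is to pass backbone features through a rank-k linear bottleneck before classification. The sketch below illustrates that generic idea in PyTorch; the module names, the rank, and the 14-class thorax example are assumptions for illustration, not the authors' LRFL implementation.

```python
import torch
import torch.nn as nn

class LowRankHead(nn.Module):
    """Generic rank-k bottleneck head: features -> rank-k subspace -> logits.

    Illustrative sketch only; structure and hyperparameters are assumptions,
    not the LRFL method from the paper.
    """
    def __init__(self, feat_dim: int, num_classes: int, rank: int = 32):
        super().__init__()
        self.down = nn.Linear(feat_dim, rank, bias=False)  # project features to a rank-k subspace
        self.up = nn.Linear(rank, num_classes)             # classify from the low-rank features

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(features))

# Usage: attach to any backbone that outputs a flat feature vector.
backbone_dim, num_classes = 2048, 14   # e.g. a ResNet-50 feature and 14 thorax findings (assumed)
head = LowRankHead(backbone_dim, num_classes, rank=32)
logits = head(torch.randn(8, backbone_dim))  # batch of 8 feature vectors
```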

Classification · Self-Supervised Learning

RNAS-CL: Robust Neural Architecture Search by Cross-Layer Knowledge Distillation

no code implementations19 Jan 2023 Utkarsh Nath, Yancheng Wang, Yingzhen Yang

In this paper, we propose Robust Neural Architecture Search by Cross-Layer Knowledge Distillation (RNAS-CL), a novel NAS algorithm that improves the robustness of NAS by learning from a robust teacher through cross-layer knowledge distillation.
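The snippet states the idea (learn from a robust teacher across layers) without the exact objective, so the sketch below shows one generic form of cross-layer distillation: each student layer is aligned with the teacher layer whose features it matches best. The minimum-distance pairing rule and equal feature widths are assumptions for illustration, not the RNAS-CL loss.

```python
import torch
import torch.nn.functional as F

def cross_layer_distill_loss(student_feats, teacher_feats):
    """Align each student layer with its closest teacher layer (assumed pairing rule).

    student_feats, teacher_feats: lists of tensors of shape (batch, dim),
    e.g. globally pooled activations. Illustrative sketch only.
    """
    loss = 0.0
    for s in student_feats:
        # distance from this student layer to every teacher layer
        dists = torch.stack([F.mse_loss(s, t) for t in teacher_feats])
        loss = loss + dists.min()  # distill toward the best-matching teacher layer
    return loss / len(student_feats)

# Usage with dummy features (same width assumed for simplicity)
student = [torch.randn(4, 64) for _ in range(3)]
teacher = [torch.randn(4, 64) for _ in range(5)]
print(cross_layer_distill_loss(student, teacher))
```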

Knowledge Distillation · Neural Architecture Search

Similarity-based Distance for Categorical Clustering using Space Structure

no code implementations19 Nov 2020 Utkarsh Nath, Shikha Asrani, Rahul Katarya

Many clustering algorithms are used to group categorical data, among which the k-modes algorithm has so far given the most significant results.
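For context, the k-modes baseline mentioned here measures dissimilarity between categorical records by counting mismatched attributes and represents each cluster by its attribute-wise modes. Below is a minimal sketch of that simple matching dissimilarity, not the similarity-based, space-structure distance proposed in the paper.

```python
def matching_dissimilarity(x, y):
    """k-modes style distance: number of attributes on which two
    categorical records disagree (simple matching dissimilarity)."""
    return sum(a != b for a, b in zip(x, y))

# Example: two records described by three categorical attributes
print(matching_dissimilarity(("red", "small", "round"),
                             ("red", "large", "round")))  # -> 1
```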

Clustering

Adjoined Networks: A Training Paradigm with Applications to Network Compression

1 code implementation10 Jun 2020 Utkarsh Nath, Shrinu Kushagra, Yingzhen Yang

In this paper, we introduce Adjoined Networks, or AN, a learning paradigm that trains both the original base network and the smaller compressed network together.
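As described, AN trains the base network and the compressed network together. The sketch below shows one plausible joint objective: supervised cross-entropy on both networks plus a distillation term that pulls the compressed network's predictions toward the base network's. The weighting, temperature, and detached-teacher choice are assumptions for illustration, not necessarily the AN objective.

```python
import torch
import torch.nn.functional as F

def adjoined_loss(base_logits, small_logits, labels, alpha=0.5, temp=4.0):
    """Joint loss for a base network and its compressed counterpart (illustrative).

    - cross-entropy supervises both networks on the true labels
    - a KL term distills the base network's soft predictions into the small one
    alpha and temp are assumed hyperparameters.
    """
    ce = F.cross_entropy(base_logits, labels) + F.cross_entropy(small_logits, labels)
    kl = F.kl_div(
        F.log_softmax(small_logits / temp, dim=1),
        F.softmax(base_logits.detach() / temp, dim=1),  # detach: assumed choice, keeps targets fixed
        reduction="batchmean",
    ) * (temp ** 2)
    return ce + alpha * kl

# Usage with dummy outputs for a 10-class problem
labels = torch.randint(0, 10, (8,))
loss = adjoined_loss(torch.randn(8, 10), torch.randn(8, 10), labels)
```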

Knowledge Distillation · Neural Architecture Search · +1
