no code implementations • 28 Feb 2023 • Sudeep Kumar Sahoo, Sathish Chalasani, Abhishek Joshi, Kiran Nanjunda Iyer
We additionally introduce a novel Cross Attention on Multi-level queries with Prior (CAMP) block, which reduces error propagation from coarse to finer levels, a common problem in hierarchical classifiers.
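The CAMP block is not specified in detail here, but the idea of cross-attention in which a coarse-level prior biases the attention over finer-level queries can be sketched roughly as follows. All names, shapes, and the additive-bias formulation are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_with_prior(queries, keys, values, prior_logits):
    """Scaled dot-product cross-attention with a coarse-level prior.

    queries      : (Q, d) fine-level query vectors (hypothetical)
    keys, values : (K, d) feature vectors attended over (hypothetical)
    prior_logits : (Q, K) coarse-level prior added as an attention bias,
                   so confident coarse predictions steer finer queries
                   instead of being hard-committed (one plausible way to
                   limit coarse-to-fine error propagation)
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d) + prior_logits
    attn = softmax(scores, axis=-1)   # rows sum to 1
    return attn @ values              # (Q, d) attended output

# Tiny usage example with random data.
rng = np.random.default_rng(0)
q = rng.standard_normal((3, 8))
k = rng.standard_normal((5, 8))
v = rng.standard_normal((5, 8))
prior = np.zeros((3, 5))              # uninformative prior
out = cross_attention_with_prior(q, k, v, prior)
```

Because the prior enters as a soft additive bias rather than a hard gating decision, a wrong coarse prediction can still be overridden by strong fine-level evidence, which is one way such a design can dampen error propagation.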
no code implementations • 23 Aug 2022 • Abhishek Joshi, Sathish Chalasani, Kiran Nanjunda Iyer
We achieve this by minimizing the energy of in-distribution samples while learning class representations that pull them closer to their respective classes, and by maximizing the energy of out-of-distribution samples while pushing their representations away from the known class representations.
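The energy objective described above can be sketched with the standard free-energy score and a pair of margin hinges, one pulling in-distribution energy down and one pushing out-of-distribution energy up. The margin values and the squared-hinge form are assumptions for illustration, not the paper's exact loss:

```python
import numpy as np

def energy(logits):
    # Free-energy score of a logit vector: E(x) = -log sum_j exp(f_j(x)),
    # computed stably via the log-sum-exp trick.
    m = logits.max()
    return -(m + np.log(np.exp(logits - m).sum()))

def energy_margin_loss(logits_in, logits_out, m_in=-7.0, m_out=-1.0):
    """Two-sided energy margin loss (hypothetical formulation).

    Penalizes in-distribution samples whose energy exceeds m_in and
    out-of-distribution samples whose energy falls below m_out, so
    training simultaneously lowers in-distribution energy and raises
    out-of-distribution energy.
    """
    e_in = energy(logits_in)
    e_out = energy(logits_out)
    return max(0.0, e_in - m_in) ** 2 + max(0.0, m_out - e_out) ** 2

# A confident in-distribution prediction has low (very negative) energy;
# near-uniform logits on an unknown input have comparatively high energy.
confident = np.array([10.0, 0.0, 0.0])
uniform = np.array([0.0, 0.0, 0.0])
loss = energy_margin_loss(confident, uniform)
```

Separating the two margins lets the representation-shaping effect follow directly from the gradients: lowering energy sharpens the known-class logits (tightening class representations), while raising it flattens them (pushing unknowns away from every known class).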