no code implementations • ICCV 2023 • Jiexi Yan, Zhihui Yin, Erkun Yang, Yanhua Yang, Heng Huang
Most existing deep metric learning (DML) methods focus on improving model robustness to category shift so as to maintain performance on unseen categories.
no code implementations • 4 Jun 2022 • Yingbin Bai, Erkun Yang, Zhaoqing Wang, Yuxuan Du, Bo Han, Cheng Deng, Dadong Wang, Tongliang Liu
As training progresses, the model begins to overfit noisy pairs.
no code implementations • CVPR 2022 • Erkun Yang, Dongren Yao, Tongliang Liu, Cheng Deng
More specifically, we propose a proxy-based contrastive (PC) loss to mitigate the gap between different modalities, and we jointly train the modality-specific networks on small-loss samples selected with the PC loss and a mutual quantization loss.
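For intuition only, here is a minimal PyTorch sketch of a generic proxy-based contrastive objective; the function name, the temperature value, and the softmax-over-proxies form are assumptions, not the paper's exact PC loss.

```python
import torch
import torch.nn.functional as F

def proxy_contrastive_loss(features, labels, proxies, temperature=0.1):
    # normalize embeddings and class proxies onto the unit sphere
    features = F.normalize(features, dim=1)   # (N, D)
    proxies = F.normalize(proxies, dim=1)     # (C, D)
    # softmax over proxy similarities pulls each sample toward its
    # class proxy and pushes it away from all other proxies
    logits = features @ proxies.t() / temperature  # (N, C)
    return F.cross_entropy(logits, labels)
```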
1 code implementation • NeurIPS 2021 • Yingbin Bai, Erkun Yang, Bo Han, Yanhua Yang, Jiatong Li, Yinian Mao, Gang Niu, Tongliang Liu
Instead of early stopping, which trains the whole DNN all at once, we initially train the earlier DNN layers by optimizing the DNN for a relatively large number of epochs.
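As a rough sketch of this progressive scheme (an assumed reconstruction, not the authors' released code), each stage freezes the already-trained earlier blocks and optimizes the remaining ones for progressively fewer epochs:

```python
import torch

def progressive_training(model_parts, loader, criterion, epochs_per_stage):
    # Stage i freezes blocks 0..i-1 and trains the rest; epochs_per_stage
    # is assumed decreasing, so later (more noise-sensitive) layers are
    # optimized for fewer epochs.
    for stage, num_epochs in enumerate(epochs_per_stage):
        for j, part in enumerate(model_parts):
            for p in part.parameters():
                p.requires_grad = (j >= stage)
        trainable = [p for part in model_parts[stage:]
                     for p in part.parameters()]
        opt = torch.optim.SGD(trainable, lr=0.01, momentum=0.9)
        for _ in range(num_epochs):
            for x, y in loader:
                out = x
                for part in model_parts:
                    out = part(out)
                loss = criterion(out, y)
                opt.zero_grad()
                loss.backward()
                opt.step()
```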
Ranked #8 on Learning with noisy labels on CIFAR-10N-Aggregate
no code implementations • 27 May 2021 • Shuo Yang, Erkun Yang, Bo Han, Yang Liu, Min Xu, Gang Niu, Tongliang Liu
Motivated by the observation that classifiers mostly output Bayes optimal labels for prediction, in this paper we propose to directly model the transition from Bayes optimal labels to noisy labels (i.e., the Bayes-label transition matrix, BLTM) and learn a classifier to predict Bayes optimal labels.
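To make the transition-matrix idea concrete, a tiny NumPy illustration (the matrix values are invented for the example): row i of the BLTM gives the noisy-label distribution conditioned on Bayes optimal label i.

```python
import numpy as np

def noisy_label_distribution(bayes_label, T):
    # T[i, j] = P(noisy label = j | Bayes optimal label = i),
    # so the noisy-label distribution is just row `bayes_label`
    return T[bayes_label]

# toy 3-class transition matrix (hypothetical numbers)
T = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.1, 0.8]])
print(noisy_label_distribution(1, T))  # -> [0.2 0.7 0.1]
```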
no code implementations • ICCV 2021 • Yuru Song, Zan Lou, Shan You, Erkun Yang, Fei Wang, Chen Qian, ChangShui Zhang, Xiaogang Wang
Concretely, we introduce a privileged parameter so that the optimization direction does not necessarily follow the gradient of the privileged tasks but instead concentrates more on the target tasks.
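The excerpt does not give the exact form of the privileged parameter; one plausible reading, shown purely as an assumption, is a learnable weight that scales the privileged-task loss so its gradient need not dominate the update:

```python
import torch

# assumed sketch: a learnable privileged weight blends the
# privileged-task loss into the objective
lam = torch.nn.Parameter(torch.tensor(0.0))

def combined_loss(target_loss, privileged_loss):
    # sigmoid keeps the effective privileged weight in (0, 1)
    return target_loss + torch.sigmoid(lam) * privileged_loss
```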
no code implementations • CVPR 2019 • Erkun Yang, Tongliang Liu, Cheng Deng, Wei Liu, DaCheng Tao
To address this issue, we propose a novel deep unsupervised hashing model, dubbed DistillHash, which learns a distilled data set consisting of data pairs with confident similarity signals.
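As a loose illustration of the distilled-pairs idea (the cosine-similarity criterion and thresholds are assumptions, not DistillHash's actual distillation rule), ambiguous pairs are dropped and only confidently similar or dissimilar pairs are kept:

```python
import numpy as np

def distill_pairs(features, low=0.3, high=0.9):
    # normalize rows, then keep only pairs whose cosine similarity is
    # confidently high (similar) or confidently low (dissimilar)
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    i, j = np.triu_indices(len(f), k=1)
    s = sim[i, j]
    keep = (s >= high) | (s <= low)
    labels = (s[keep] >= high).astype(int)  # 1 = similar, 0 = dissimilar
    return i[keep], j[keep], labels
```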
no code implementations • 16 Apr 2019 • Erkun Yang, Cheng Deng, Chao Li, Wei Liu, Jie Li, DaCheng Tao
In this paper, we propose a deep quantization approach, which is among the early attempts to leverage deep neural networks for quantization-based cross-modal similarity search.
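For intuition, a deliberately simplified single-codebook sketch of quantization-based search with asymmetric distances; real systems, including this paper's, use learned multi-codebook quantizers, and in the cross-modal setting the query may come from a different modality embedded in the same space:

```python
import numpy as np

def quantize(x, codebook):
    # assign each database vector to its nearest codeword
    d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
    return d.argmin(1)  # (N,) code indices

def asymmetric_search(query, codes, codebook, topk=5):
    # precompute query-to-codeword distances once, then look them up
    table = ((query[None, :] - codebook) ** 2).sum(-1)  # (K,)
    return np.argsort(table[codes])[:topk]
```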
1 code implementation • IJCAI 2018 • Erkun Yang, Cheng Deng, Tongliang Liu, Wei Liu, DaCheng Tao
Hashing is becoming increasingly popular for approximate nearest neighbor search in massive databases due to its storage and search efficiency.
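The efficiency argument is easy to see in code; here is a minimal NumPy sketch of Hamming-distance search over binary codes (names and the toy data are illustrative):

```python
import numpy as np

def hamming_search(query_bits, db_bits, topk=5):
    # compare compact binary codes by Hamming distance
    dist = (query_bits[None, :] != db_bits).sum(1)
    return np.argsort(dist)[:topk]

# toy usage with random 32-bit codes
rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(1000, 32), dtype=np.uint8)
q = rng.integers(0, 2, size=32, dtype=np.uint8)
print(hamming_search(q, db))  # indices of the 5 nearest codes
```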