1 code implementation • 16 Nov 2022 • Xinyao Shu, ShiYang Yan, Zhenyu Lu, Xinshao Wang, Yuan Xie
Unsupervised domain adaptation (UDA) is a transfer learning task in which the data and annotations of the source domain are available, but only unlabeled target data can be accessed during training.
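The UDA setting above can be sketched as a two-part objective: a supervised loss on labeled source data plus an unsupervised alignment penalty on unlabeled target data. The mean-feature-matching penalty and the weight `lam` below are illustrative stand-ins, not the paper's actual alignment loss.

```python
import numpy as np

def uda_objective(src_feats, src_loss, tgt_feats, lam=0.1):
    # Supervised source loss plus an unsupervised alignment term on
    # unlabeled target features. Mean-feature matching is a simple,
    # hypothetical choice of alignment penalty for illustration.
    align = np.sum((src_feats.mean(axis=0) - tgt_feats.mean(axis=0)) ** 2)
    return src_loss + lam * align
```

When source and target feature statistics already match, the penalty vanishes and only the supervised source loss remains.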
1 code implementation • 11 Oct 2022 • Zhaoqiang Liu, Xinshao Wang, Jiulong Liu
In this paper, we study phase retrieval under model misspecification and generative priors.
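As background for this entry, the standard (noiseless) phase retrieval model observes only measurement magnitudes, which is what makes recovery hard; model misspecification and a generative prior on the signal are the paper's additional ingredients, not shown here.

```python
import numpy as np

def phase_retrieval_measurements(A, x):
    # Standard phase retrieval: observe only the magnitudes y = |Ax|.
    # The sign (phase) of each measurement is lost, so x and -x are
    # indistinguishable from the measurements alone.
    return np.abs(A @ x)
```

A quick check of the inherent sign ambiguity: `x` and `-x` produce identical measurements.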
1 code implementation • 30 Jun 2022 • Xinshao Wang, Yang Hua, Elyor Kodirov, Sankha Subhra Mukherjee, David A. Clifton, Neil M. Robertson
For issue (2), the effectiveness of ProSelfLC provides a defence of entropy minimisation.
no code implementations • 2 Jun 2021 • Ziyun Li, Xinshao Wang, Di Hu, Neil M. Robertson, David A. Clifton, Christoph Meinel, Haojin Yang
Additionally, CMD covers two special cases: zero-knowledge and all knowledge, leading to a unified MKD framework.
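The two special cases named above can be sketched as the endpoints of a peer-count parameter: with no peers selected there is no distilled knowledge (zero-knowledge), and with every peer selected all knowledge is used. Averaging the first `c` peer predictions is an illustrative selection rule, not the paper's exact criterion.

```python
import numpy as np

def cmd_target(peer_preds, c):
    # c = 0: zero-knowledge special case, train on labels only.
    # c = len(peer_preds): all-knowledge special case, every peer
    # contributes to the distillation target. Intermediate c values
    # interpolate between the two (hypothetical selection rule).
    if c == 0:
        return None
    return np.mean(np.asarray(peer_preds[:c], dtype=float), axis=0)
```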
5 code implementations • CVPR 2021 • Xinshao Wang, Yang Hua, Elyor Kodirov, David A. Clifton, Neil M. Robertson
Keywords: entropy minimisation, maximum entropy, confidence penalty, self knowledge distillation, label correction, label noise, semi-supervised learning, output regularisation
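The label-correction and self-knowledge-distillation keywords above suggest a soft target that mixes the annotated label with the model's own prediction. Below is a minimal sketch of that mixing step; in the paper the trust coefficient adapts with training progress and prediction confidence, whereas here its schedule is left to the caller.

```python
import numpy as np

def proselflc_target(one_hot, pred, epsilon):
    # Progressively correct the annotated target toward the model's
    # own prediction. epsilon in [0, 1] is the trust placed in the
    # prediction; epsilon = 0 recovers plain one-hot training.
    return (1.0 - epsilon) * one_hot + epsilon * pred
```

Because both inputs are probability distributions, the corrected target remains a valid distribution for any `epsilon` in [0, 1].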
no code implementations • 22 Nov 2019 • Xinshao Wang, Elyor Kodirov, Yang Hua, Neil Robertson
Loss functions play a crucial role in deep metric learning; thus, a variety of them have been proposed.
1 code implementation • 20 Nov 2019 • Xinshao Wang, Elyor Kodirov, Yang Hua, Neil M. Robertson
This way, it can prevent overfitting to trivial images and alleviate the influence of outliers.
no code implementations • 25 Sep 2019 • Xinshao Wang, Yang Hua, Elyor Kodirov, Neil M. Robertson
It is fundamental and challenging to train robust and accurate Deep Neural Networks (DNNs) when semantically abnormal examples exist.
3 code implementations • 27 May 2019 • Xinshao Wang, Elyor Kodirov, Yang Hua, Neil M. Robertson
By DM, we connect the design of loss functions with example weighting.
Ranked #30 on Image Classification on Clothing1M (using extra training data)
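The connection above rests on the idea that, instead of deriving example weights from a fixed loss, one can specify an emphasis function of the true-class probability directly and normalise it over the batch. The exponential form and `lam` below are one illustrative choice, not the paper's exact emphasis density.

```python
import numpy as np

def dm_weights(p_true, lam=1.0):
    # Derivative-manipulation-style weighting sketch: weight each
    # example by an emphasis function of its true-class probability,
    # then normalise so the weights sum to 1 over the batch.
    w = np.exp(lam * np.asarray(p_true, dtype=float))
    return w / w.sum()
```

With `lam > 0` this emphasises confidently-fitted examples; a negative `lam` would instead emphasise hard ones.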
3 code implementations • 28 Mar 2019 • Xinshao Wang, Yang Hua, Elyor Kodirov, Neil M. Robertson
In this work, we study robust deep learning against abnormal training data from the perspective of example weighting built into empirical loss functions, i.e., gradient magnitude with respect to logits, an angle that has not been thoroughly studied so far.
Ranked #33 on Image Classification on Clothing1M (using extra training data)
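The gradient-magnitude view above can be made concrete for softmax cross-entropy, where the gradient with respect to the logits of one example is (p - y): its norm acts as an implicit example weight, small for confidently-correct examples and large for hard or abnormal ones. The sketch below computes that per-example L1 norm.

```python
import numpy as np

def ce_logit_gradient_magnitude(logits, labels):
    # For softmax cross-entropy, d(loss)/d(logits) = p - y per example.
    # Its L1 norm is the implicit example weight discussed above.
    z = logits - logits.max(axis=1, keepdims=True)  # numerically stable
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    y = np.eye(logits.shape[1])[labels]
    return np.abs(p - y).sum(axis=1)
```

Correctly and confidently classified examples receive near-zero magnitude, while mislabeled examples receive magnitudes close to the maximum of 2.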
2 code implementations • CVPR 2019 • Xinshao Wang, Yang Hua, Elyor Kodirov, Neil M. Robertson
To address this, we propose to build a set-based similarity structure by exploiting all instances in the gallery.
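One way to read the set-based idea above: rather than comparing a query to a single gallery instance, aggregate its similarity over every gallery instance. The softmax-weighted aggregation below is an illustrative formulation, not necessarily the paper's exact one.

```python
import numpy as np

def set_similarity(query, gallery, temperature=1.0):
    # Cosine similarity of the query to every gallery instance,
    # aggregated with a softmax weighting so closer instances
    # dominate (hypothetical aggregation rule).
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q
    w = np.exp(sims / temperature)
    return float((w / w.sum()) @ sims)
```

The result is a convex combination of per-instance cosine similarities, so it always lies within their range.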
3 code implementations • 4 Nov 2018 • Xinshao Wang, Yang Hua, Elyor Kodirov, Guosheng Hu, Neil M. Robertson
Therefore, we propose a novel sample mining method, called Online Soft Mining (OSM), which assigns one continuous score to each sample to make use of all samples in the mini-batch.
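The soft-mining idea above can be sketched as a continuous score in (0, 1] per mini-batch sample that decays with its squared distance to an anchor, so every sample contributes with a soft weight rather than being hard-selected. The Gaussian form and `sigma` below are illustrative assumptions.

```python
import numpy as np

def online_soft_mining_scores(embeddings, anchor, sigma=1.0):
    # Continuous per-sample score: 1.0 at the anchor, smoothly
    # decaying with squared Euclidean distance, so no sample is
    # discarded outright (soft rather than hard mining).
    d2 = ((embeddings - anchor) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))
```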