no code implementations • 26 Dec 2023 • Chenyi Jiang, Haofeng Zhang
Building upon this insight, we incorporate the visual bias caused by compositions into the classifier's training and inference by estimating it as a proximate class prior.
1 code implementation • 20 Sep 2023 • Yazhou Zhu, Shidong Wang, Tong Xin, Zheng Zhang, Haofeng Zhang
In this work, we present an approach to extract multiple representative sub-regions from a given support medical image, enabling fine-grained selection over the generated image regions.
1 code implementation • 9 Sep 2023 • Yazhou Zhu, Shidong Wang, Tong Xin, Haofeng Zhang
First, a subdivision strategy is introduced to produce a collection of regional prototypes from the foreground of the support prototype.
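One way to picture such a subdivision (a minimal illustrative sketch, not the paper's exact strategy): instead of collapsing the support foreground into a single masked-average prototype, cluster the foreground feature vectors into several regional prototypes with a tiny k-means. The function name, feature layout, and use of k-means here are all assumptions for illustration.

```python
import numpy as np

def regional_prototypes(feat, mask, k=3, iters=10, seed=0):
    """Illustrative sketch (not the paper's exact method): cluster the
    foreground feature vectors of a support image into k regional
    prototypes via a small k-means, instead of one global masked-average
    prototype.  `feat` is (H, W, C); `mask` is a binary (H, W) foreground mask.
    """
    fg = feat[mask.astype(bool)]              # (N, C) foreground feature vectors
    rng = np.random.default_rng(seed)
    centers = fg[rng.choice(len(fg), k, replace=False)]  # random init
    for _ in range(iters):
        # assign each foreground vector to its nearest prototype
        d = ((fg[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():           # skip empty clusters
                centers[j] = fg[assign == j].mean(0)
    return centers                            # (k, C) regional prototypes
```

Matching a query pixel against the nearest of the k prototypes, rather than a single global one, is what enables the finer-grained foreground comparison the entry describes.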
no code implementations • 2 Aug 2023 • Ziyi Huang, Hongshan Liu, Haofeng Zhang, Xueshen Li, Haozhe Liu, Fuyong Xing, Andrew Laine, Elsa Angelini, Christine Hendon, Yu Gan
One key advantage of our model is that it trains deep networks on SAM-generated pseudo labels, attaining good segmentation performance without requiring a set of expert-level annotations.
1 code implementation • 13 Apr 2023 • Adam N. Elmachtoub, Henry Lam, Haofeng Zhang, Yunfan Zhao
In this paper, we show that a reverse behavior appears when the model class is well-specified and there is sufficient data.
no code implementations • 23 Nov 2022 • Dubing Chen, Haofeng Zhang, Yuming Shen, Yang Long, Ling Shao
In this work, we propose a novel Evolutionary Generalized Zero-Shot Learning setting, which (i) avoids the domain shift problem in inductive GZSL, and (ii) is more in line with the needs of real-world deployments than transductive GZSL.
no code implementations • 19 Nov 2022 • Chenyi Jiang, Dubing Chen, Shidong Wang, Yuming Shen, Haofeng Zhang, Ling Shao
Compositional Zero-Shot Learning (CZSL) aims to recognize unseen compositions from seen states and objects.
1 code implementation • 28 Sep 2022 • Jiaguo Yu, Huming Qiu, Dubing Chen, Haofeng Zhang
The development of unsupervised hashing has been advanced by the recently popular contrastive learning paradigm.
no code implementations • 9 Jun 2022 • Ziyi Huang, Yu Gan, Theresa Lye, Yanchen Liu, Haofeng Zhang, Andrew Laine, Elsa Angelini, Christine Hendon
To lessen the need for pixel-wise labeling, we develop a two-stage deep learning framework for cardiac adipose tissue segmentation using image-level annotations on OCT images of human cardiac substrates.
no code implementations • 9 Jun 2022 • Ziyi Huang, Henry Lam, Haofeng Zhang
To overcome these restrictions, we study conditional generative models for aleatoric uncertainty estimation.
1 code implementation • 25 Apr 2022 • Dubing Chen, Yuming Shen, Haofeng Zhang, Philip H. S. Torr
As a consequence of our derivation, the aforementioned two properties are incorporated into the classifier training as seen-unseen priors via logit adjustment.
Ranked #1 on Generalized Zero-Shot Learning on AwA2 (Accuracy Unseen metric)
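The logit-adjustment idea mentioned above can be sketched in its generic form (this is the standard additive logit adjustment, not necessarily the paper's exact derivation; the function name and the encoding of the seen-unseen prior as a probability vector are assumptions):

```python
import numpy as np

def adjust_logits(logits, class_priors, tau=1.0):
    """Generic logit adjustment: shift each class logit by tau * log(prior),
    so that classes with a low estimated prior (e.g. unseen classes in GZSL)
    are not systematically suppressed by the classifier.
    `class_priors` is a probability vector over all classes (an assumption
    about how the seen-unseen prior is represented here).
    """
    return logits + tau * np.log(np.asarray(class_priors))
```

With equal logits, the adjustment alone decides the prediction in favor of the higher-prior class; applied during training, it calibrates the classifier against the seen-unseen imbalance.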
1 code implementation • 24 Apr 2022 • Dubing Chen, Yuming Shen, Haofeng Zhang, Philip H. S. Torr
Recent research on Generalized Zero-Shot Learning (GZSL) has focused primarily on generation-based methods.
no code implementations • 31 Jan 2022 • Jiaguo Yu, Yuming Shen, Menghan Wang, Haofeng Zhang, Philip H. S. Torr
In this paper, we tackle this problem by introducing Naturally-Sorted Hashing (NSH).
1 code implementation • 23 Dec 2021 • Xiaojie Zhao, Yuming Shen, Shidong Wang, Haofeng Zhang
Most generative ZSL methods use category semantic attributes plus a Gaussian noise to generate visual features.
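That standard recipe can be sketched as follows (purely illustrative: a random linear map stands in for the trained generator network, and all names and dimensions are assumptions):

```python
import numpy as np

def generate_visual_features(attr, n, noise_dim=16, feat_dim=32, seed=0):
    """Minimal sketch of the generative-ZSL recipe described above:
    concatenate a class's semantic attribute vector with per-sample
    Gaussian noise and map the result to visual-feature space.
    An untrained random linear map stands in for the generator.
    """
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n, noise_dim))               # per-sample Gaussian noise
    cond = np.concatenate([np.tile(attr, (n, 1)), z], axis=1)
    W = rng.normal(size=(cond.shape[1], feat_dim))    # stand-in for the generator
    return cond @ W                                   # (n, feat_dim) synthetic features
```

The synthesized features for unseen classes are then used to train an ordinary classifier, which is what makes zero-shot recognition reducible to supervised learning.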
no code implementations • 23 Oct 2021 • Ziyi Huang, Henry Lam, Haofeng Zhang
Uncertainty quantification is at the core of the reliability and robustness of machine learning.
no code implementations • 26 Feb 2021 • Haoxian Chen, Ziyi Huang, Henry Lam, Huajie Qian, Haofeng Zhang
We study the generation of prediction intervals in regression for uncertainty quantification.
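For context, one standard baseline construction of regression prediction intervals is split conformal prediction (shown here as a hedged sketch; it is not necessarily the construction this paper studies):

```python
import numpy as np

def split_conformal_interval(resid_cal, y_pred_test, alpha=0.1):
    """Split conformal prediction intervals: take a finite-sample-corrected
    (1 - alpha) empirical quantile of absolute calibration residuals and
    pad each test prediction by that margin.
    """
    n = len(resid_cal)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # corrected quantile level
    q = np.quantile(np.abs(resid_cal), level)
    return y_pred_test - q, y_pred_test + q               # (lower, upper) bounds
```

Under exchangeability of calibration and test points, intervals built this way cover the true response with probability at least 1 - alpha.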
no code implementations • 31 Jan 2021 • Ziyi Huang, Haofeng Zhang, Andrew Laine, Elsa Angelini, Christine Hendon, Yu Gan
Supervised deep learning performance is heavily tied to the availability of high-quality labels for training.
no code implementations • 1 Jan 2021 • Ziyi Huang, Henry Lam, Haofeng Zhang
Deep learning has achieved state-of-the-art performance in generating high-quality prediction intervals (PIs) for uncertainty quantification in regression tasks.