no code implementations • 4 Jul 2023 • Yunqing Zhao, Keshigeyan Chandrasegaran, Milad Abdollahzadeh, Chao Du, Tianyu Pang, Ruoteng Li, Henghui Ding, Ngai-Man Cheung
However, a major limitation of existing methods is that their knowledge-preservation criteria consider only the source domain/task and fail to account for the target domain/adaptation task when selecting source knowledge, casting doubt on their suitability for setups with different degrees of proximity between the source and target domains.
1 code implementation • NeurIPS 2023 • Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Chongxuan Li, Ngai-Man Cheung, Min Lin
Large vision-language models (VLMs) such as GPT-4 have achieved unprecedented performance in response generation, especially with visual inputs, enabling more creative and adaptable interaction than large language models such as ChatGPT.
1 code implementation • CVPR 2023 • Yunqing Zhao, Chao Du, Milad Abdollahzadeh, Tianyu Pang, Min Lin, Shuicheng Yan, Ngai-Man Cheung
To this end, we propose knowledge truncation to mitigate this issue in FSIG; it is a complementary operation to knowledge preservation and is implemented by a lightweight pruning-based method.
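The "lightweight pruning-based" knowledge truncation above can be illustrated with a minimal NumPy sketch. This is an assumption-laden stand-in, not the paper's implementation: the per-filter `importance` score is hypothetical (the paper derives its own criterion), and masking weights to zero stands in for pruning inside a full generator.

```python
import numpy as np

def truncate_knowledge(weights, importance, prune_ratio=0.1):
    """Zero out the filters judged least useful for the target domain.

    `importance` is a hypothetical per-filter score (e.g. estimated from
    target-domain statistics); the paper's actual criterion may differ.
    """
    k = int(len(importance) * prune_ratio)
    if k == 0:
        return weights.copy()
    drop = np.argsort(importance)[:k]   # indices of lowest-importance filters
    pruned = weights.copy()
    pruned[drop] = 0.0                  # lightweight pruning: mask, don't remove
    return pruned

# Toy example: 8 filters of size 4; prune the 2 least important.
w = np.ones((8, 4))
scores = np.arange(8, dtype=float)      # filter 0 is least important
pw = truncate_knowledge(w, scores, prune_ratio=0.25)
```

Masking (rather than structurally removing) filters keeps the network shape intact, so the preserved source knowledge and the adaptation loop are untouched.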
1 code implementation • 17 Mar 2023 • Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Ngai-Man Cheung, Min Lin
Diffusion models (DMs) have demonstrated strong potential on generative tasks.
2 code implementations • 29 Oct 2022 • Yunqing Zhao, Keshigeyan Chandrasegaran, Milad Abdollahzadeh, Ngai-Man Cheung
However, a major limitation of existing methods is that their knowledge-preservation criteria consider only the source domain/source task and fail to consider the target domain/adaptation task when selecting the source model's knowledge, casting doubt on their suitability for setups with different degrees of proximity between the source and target domains.
Ranked #1 on 10-shot image generation on Babies
1 code implementation • 23 Aug 2022 • Yunqing Zhao, Ngai-Man Cheung
This improved generalization motivates us to study BAN for DG-FSC, and we show that BAN is promising for addressing the domain shift encountered in DG-FSC.
1 code implementation • 29 Jun 2022 • Keshigeyan Chandrasegaran, Ngoc-Trung Tran, Yunqing Zhao, Ngai-Man Cheung
Critically, there has been no effort to understand and resolve these contradictory findings, leaving open the central question: to smooth or not to smooth a teacher network?
no code implementations • CVPR 2022 • Yunqing Zhao, Henghui Ding, Houjing Huang, Ngai-Man Cheung
Informed by our analysis, and to slow the diversity degradation of the target generator during adaptation, our second contribution applies mutual information (MI) maximization to retain the source domain's rich multi-level diversity information in the target generator.
Ranked #2 on 10-shot image generation on Babies
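The MI-maximization idea above can be sketched with a generic contrastive (InfoNCE) lower bound on the mutual information between matched source and target features. This is a stand-in under assumptions — the paper's exact MI estimator, feature levels, and temperature are not specified here.

```python
import numpy as np

def infonce_lower_bound(src_feats, tgt_feats, temperature=0.1):
    """InfoNCE lower bound on MI between row-matched source/target features.

    Maximizing this bound encourages each target feature to stay closest to
    its own source counterpart, retaining source-domain diversity. The value
    is always <= log(N) for N matched pairs.
    """
    s = src_feats / np.linalg.norm(src_feats, axis=1, keepdims=True)
    t = tgt_feats / np.linalg.norm(tgt_feats, axis=1, keepdims=True)
    logits = s @ t.T / temperature                      # (N, N) similarities
    logits = logits - logits.max(axis=1, keepdims=True) # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return log_probs.diagonal().mean() + np.log(len(s))

# Toy check: perfectly matched pairs give a positive MI estimate.
rng = np.random.default_rng(0)
f = rng.standard_normal((16, 32))
mi_est = infonce_lower_bound(f, f)
```

In practice such a bound would be evaluated on intermediate activations at several network depths ("multi-level") and added to the adaptation loss.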
no code implementations • 29 Sep 2021 • Keshigeyan Chandrasegaran, Ngoc-Trung Tran, Yunqing Zhao, Ngai-Man Cheung
On the contrary, Shen et al. [2] claim that LS enlarges the distance between semantically similar classes; therefore, an LS-trained teacher is compatible with KD.
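For concreteness, label smoothing (LS) itself is the simple operation debated above: the one-hot training target is mixed with the uniform distribution over classes. A minimal sketch of the standard formulation:

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: (1 - eps) * one_hot + eps / K over K classes.

    Softened targets change the teacher's output geometry, which is exactly
    what the contradictory claims about KD compatibility hinge on.
    """
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

# Toy example: class 2 of 5, smoothed with eps = 0.1.
y = np.eye(5)[2]
ys = smooth_labels(y, eps=0.1)
```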
1 code implementation • 17 Jul 2020 • Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Yunqing Zhao, Ngai-Man Cheung, Alexander Binder
It leverages the explanation scores, obtained by applying existing explanation methods to the predictions of FSC models, computed for the models' intermediate feature maps.
Ranked #8 on Cross-Domain Few-Shot on ISIC2018
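One way to read "explanation scores computed for intermediate feature maps" is as a relevance-based reweighting of those maps. The sketch below is hypothetical: the helper name, the normalization, and the use of a plain per-map score all stand in for the paper's actual explanation method (e.g. LRP-style relevance).

```python
import numpy as np

def reweight_feature_maps(feature_maps, explanation_scores):
    """Scale each intermediate feature map by its normalized explanation score.

    `feature_maps` has shape (C, H, W); `explanation_scores` has shape (C,).
    Maps deemed more relevant by the explanation method contribute more.
    """
    w = explanation_scores / (explanation_scores.sum() + 1e-8)
    return feature_maps * w[:, None, None]

# Toy example: 3 feature maps of size 4x4; the middle map is twice as relevant.
fmaps = np.ones((3, 4, 4))
scores = np.array([1.0, 2.0, 1.0])
weighted = reweight_feature_maps(fmaps, scores)
```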