no code implementations • 30 Jul 2023 • Elvis Han Cui, Bingbin Li, Yanan Li, Weng Kee Wong, Donghui Wang
Many existing methods generate new samples from a parametric distribution, such as the Gaussian, with little attention to generating samples along the data manifold in either the input or feature space.
no code implementations • 23 Dec 2021 • Bingbin Li, Elvis Han Cui, Yanan Li, Donghui Wang, Weng Kee Wong
Learning novel classes from very few labeled samples has attracted increasing attention in the machine learning community.
no code implementations • 17 May 2021 • Pengyang Li, Yanan Li, Han Cui, Donghui Wang
To tackle this problem, we propose a novel method LEAST, which can transfer with Less forgetting, fEwer training resources, And Stronger Transfer capability.
no code implementations • 12 Jan 2021 • Pengyang Li, Donghui Wang
With the mask as a prior, the model in this paper is constrained so that the generated images conform to human visual perception, reducing the unexpected diversity of samples generated by the generative adversarial network.
no code implementations • 2 Oct 2020 • Shengyu Zhang, Donghui Wang, Zhou Zhao, Siliang Tang, Di Xie, Fei Wu
In this paper, we investigate the problem of text-to-pedestrian synthesis, which has many potential applications in art, design, and video surveillance.
no code implementations • 26 May 2017 • Yanan Li, Donghui Wang
Zero-shot learning, which studies the problem of classifying objects from categories for which we have no training examples, is gaining increasing attention from the community.
no code implementations • 26 May 2017 • Yanan Li, Donghui Wang
In this paper, we propose a new method to learn non-linear robust features by taking advantage of the data manifold structure.
no code implementations • CVPR 2017 • Yanan Li, Donghui Wang, Huanhang Hu, Yuetan Lin, Yueting Zhuang
This mapping is learned on training data of seen classes and is expected to have transfer ability to unseen classes.
no code implementations • 22 Feb 2017 • Yuetan Lin, Zhangyang Pang, Donghui Wang, Yueting Zhuang
Visual question answering (VQA) has witnessed great progress since May 2015 as a classic problem unifying visual and textual data into a single system.