no code implementations • 8 Dec 2023 • Yue Jiang, Yueming Lyu, Tianxiang Ma, Bo Peng, Jing Dong
Extensive empirical evaluations demonstrate that the introduced model effectively corrects the racial stereotypes of the well-trained Stable Diffusion model while leaving the original model unchanged.
1 code implementation • 12 Oct 2023 • Yueming Lyu, Kang Zhao, Bo Peng, Yue Jiang, Yingya Zhang, Jing Dong
Based on DeltaSpace, we propose a novel framework called DeltaEdit, which maps the CLIP visual feature differences to the latent space directions of a generative model during the training phase, and predicts the latent space directions from the CLIP textual feature differences during the inference phase.
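The DeltaEdit idea above can be sketched as a single learned mapper used in two modes: trained on CLIP *visual* feature differences, then fed CLIP *textual* feature differences at inference. The following is a minimal NumPy sketch, not the paper's implementation; the mapper is a hypothetical linear layer, and the 512-dimensional feature/latent sizes and the `map_delta_to_direction` name are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

CLIP_DIM, LATENT_DIM = 512, 512  # assumed dimensions

# Hypothetical mapper weights; in the paper this is a trained network.
W = rng.standard_normal((LATENT_DIM, CLIP_DIM)) * 0.01

def map_delta_to_direction(delta_feat: np.ndarray) -> np.ndarray:
    """Map a CLIP feature difference to a latent-space editing direction."""
    return W @ delta_feat

# Training phase: difference of CLIP visual features of two images.
delta_visual = rng.standard_normal(CLIP_DIM)
direction_train = map_delta_to_direction(delta_visual)

# Inference phase: difference of CLIP textual features (target - source text).
delta_text = rng.standard_normal(CLIP_DIM)
direction = map_delta_to_direction(delta_text)

# Apply the predicted direction to a source latent code.
w_src = rng.standard_normal(LATENT_DIM)
w_edit = w_src + 1.0 * direction
```

The point of the shared mapper is that, if the visual and textual delta distributions are well aligned, a network fit only on image pairs transfers to text-driven editing with no text supervision at training time.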
no code implementations • 30 Jul 2023 • Yueming Lyu, Yue Jiang, Bo Peng, Jing Dong
InfoStyler formulates the disentanglement representation learning as an information compression problem by eliminating style statistics from the content image and removing the content structure from the style image.
no code implementations • 26 Jun 2023 • Yueming Lyu, Yue Jiang, Ziwen He, Bo Peng, Yunfan Liu, Jing Dong
The privacy and security of face data on social media face unprecedented challenges, as such data are vulnerable to unauthorized access and identification.
1 code implementation • 28 Apr 2023 • Jing Li, Yuangang Pan, Yueming Lyu, Yinghua Yao, Yulei Sui, Ivor W. Tsang
Unlike existing model tuning methods, where the target data is always available for calculating model gradients, the model providers in EXPECTED only see feedback, which could be as simple as scalars such as inference accuracy or usage rate.
no code implementations • 5 Apr 2023 • Kim Yong Tan, Yueming Lyu, Yew Soon Ong, Ivor W. Tsang
This need requires the ANN search algorithm to support fast online data deletion and insertion.
no code implementations • 2 Apr 2023 • Cheng Chen, Yueming Lyu, Ivor W. Tsang
However, conventional partial-label learning (PLL) methods are still vulnerable to the high ratio of noisy partial labels, especially in a large labelling space.
1 code implementation • CVPR 2023 • Yueming Lyu, Tianwei Lin, Fu Li, Dongliang He, Jing Dong, Tieniu Tan
Our key idea is to investigate and identify a space, namely the delta image-and-text space, in which the distribution of CLIP visual feature differences between two images is well aligned with that of CLIP textual embedding differences between source and target texts.
1 code implementation • 29 Sep 2021 • Yueming Lyu, Peibin Chen, Jingna Sun, Bo Peng, Xu Wang, Jing Dong
To evaluate the effectiveness and show the general use of our method, we conduct a set of experiments on makeup transfer and semantic image synthesis.
no code implementations • 29 Sep 2021 • Jing Li, Yuangang Pan, Yueming Lyu, Yinghua Yao, Ivor Tsang
Instead of learning from scratch, fine-tuning a pre-trained model to fit a related target dataset of interest or downstream tasks has been a handy trick to achieve the desired performance.
no code implementations • 11 Jun 2021 • Yueming Lyu, Ivor Tsang
We further establish a new generalization bound of our deep structured approximated NOK architecture.
no code implementations • 21 Apr 2021 • Yueming Lyu, Jing Dong, Bo Peng, Wei Wang, Tieniu Tan
Since human faces are symmetrical in the UV space, we can conveniently remove the undesired shadow and occlusion from the reference image by carefully designing a Flip Attention Module (FAM).
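The flip-and-fill intuition behind the Flip Attention Module can be illustrated with a hard (non-learned) version: because faces are left-right symmetric in UV space, an occluded texel can be replaced by its horizontally mirrored counterpart. This is only a simplified sketch of the idea, not the actual FAM, which uses learned attention; the `flip_fill` function and the toy 4x4 texture are illustrative assumptions.

```python
import numpy as np

def flip_fill(uv_texture: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Replace masked (occluded/shadowed) texels with their horizontally
    mirrored counterparts, exploiting left-right symmetry in UV space."""
    flipped = uv_texture[:, ::-1]          # mirror across the vertical axis
    out = uv_texture.copy()
    out[mask] = flipped[mask]              # copy only where the mask is set
    return out

tex = np.arange(16.0).reshape(4, 4)        # toy 4x4 UV texture
mask = np.zeros((4, 4), dtype=bool)
mask[1, 0] = True                          # mark one texel as occluded
filled = flip_fill(tex, mask)
print(filled[1, 0])                        # 7.0, mirrored from column 3
```

A learned attention module generalizes this by softly weighting the mirrored features instead of copying them verbatim.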
no code implementations • 1 Jan 2021 • Xingrui Yu, Yueming Lyu, Ivor Tsang
Our method learns useful planning computations with a meaningful reward function that focuses on the resulting region of an agent executing an action.
no code implementations • 1 Jan 2021 • Yueming Lyu, Xingrui Yu, Ivor Tsang
In this work, we take an initial step to designing a simple robust layer as a lightweight plug-in for vanilla deep models.
no code implementations • NeurIPS 2020 • Yueming Lyu, Yuan Yuan, Ivor W. Tsang
We theoretically prove a lower and an upper bound of the minimum pairwise distance of any non-degenerate rank-1 lattice.
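The quantity bounded above, the minimum pairwise distance of a rank-1 lattice, can be computed directly for small lattices. Below is a brute-force NumPy sketch (the paper's construction is more sophisticated); the Fibonacci generating vector `z = (1, 34)` with `n = 55` is a standard illustrative choice, not taken from the paper.

```python
import itertools
import numpy as np

def rank1_lattice(n: int, z: np.ndarray) -> np.ndarray:
    """Rank-1 lattice: x_i = frac(i * z / n) for i = 0..n-1."""
    i = np.arange(n)[:, None]
    return (i * z[None, :] % n) / n

def min_pairwise_toroidal_distance(points: np.ndarray) -> float:
    """Brute-force minimum pairwise distance under the torus metric."""
    best = np.inf
    for a, b in itertools.combinations(points, 2):
        d = np.abs(a - b)
        d = np.minimum(d, 1.0 - d)         # wrap-around distance per coordinate
        best = min(best, float(np.linalg.norm(d)))
    return best

n, z = 55, np.array([1, 34])               # 2-D Fibonacci lattice
pts = rank1_lattice(n, z)
d_min = min_pairwise_toroidal_distance(pts)
```

Because the difference of any two rank-1 lattice points is itself a lattice point, the minimum distance is attained against the origin, which is what makes analytic bounds on it tractable.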
1 code implementation • ICML 2020 • Xingrui Yu, Yueming Lyu, Ivor W. Tsang
Thus, our module provides the imitation agent with both the intrinsic intention of the demonstrator and a better exploration ability, which is critical for the agent to outperform the demonstrator.
no code implementations • 9 Oct 2019 • Yueming Lyu, Ivor W. Tsang
Empirically, our method with full matrix update achieves competitive performance with the state-of-the-art method CMA-ES on benchmark test problems.
no code implementations • 24 May 2019 • Yueming Lyu, Yuan Yuan, Ivor W. Tsang
In this work, we investigate black-box optimization from the perspective of frequentist kernel methods.
no code implementations • ICLR 2020 • Yueming Lyu, Ivor W. Tsang
Although the 0-1 loss has some robust properties, it is difficult to optimize.
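Why the 0-1 loss is hard to optimize can be seen directly: it is piecewise constant in the margin, so its gradient is zero almost everywhere. A standard workaround (not specific to this paper) is a convex surrogate such as the hinge loss, which upper-bounds the 0-1 loss:

```python
import numpy as np

def zero_one_loss(margin: np.ndarray) -> np.ndarray:
    """0-1 loss on the margin y * f(x): 1 if misclassified, else 0."""
    return (margin <= 0).astype(float)

def hinge_loss(margin: np.ndarray) -> np.ndarray:
    """Convex surrogate: max(0, 1 - margin) upper-bounds the 0-1 loss."""
    return np.maximum(0.0, 1.0 - margin)

margins = np.array([-2.0, -0.5, 0.5, 2.0])
print(zero_one_loss(margins))   # [1. 1. 0. 0.]
print(hinge_loss(margins))      # [3.  1.5 0.5 0. ]
```

The surrogate trades exactness for optimizability; robust properties of the 0-1 loss (e.g. bounded penalty on outliers) are lost, which is the tension the paper addresses.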
no code implementations • ICLR 2019 • Yuan Yuan, Yueming Lyu, Xi Shen, Ivor W. Tsang, Dit-yan Yeung
The MAAN employs a novel marginalized average aggregation (MAA) module and learns a set of latent discriminative probabilities in an end-to-end fashion.
Ranked #11 on Weakly Supervised Action Localization on ActivityNet-1.3 (mAP@0.5 metric)
no code implementations • ICML 2017 • Yueming Lyu
According to (Brauchart & Grabner, 2015), optimizing the discrete Riesz s-energy can generate asymptotically uniformly distributed point sets on $\mathbb{S}^{d-1}$.
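The discrete Riesz s-energy of a point set $\{x_1,\dots,x_n\} \subset \mathbb{S}^{d-1}$ sums $1/\|x_i - x_j\|^s$ over pairs; minimizing it spreads points uniformly. A minimal NumPy sketch of evaluating this energy (summing each unordered pair once; conventions summing ordered pairs differ by a factor of 2):

```python
import numpy as np

def riesz_s_energy(points: np.ndarray, s: float = 1.0) -> float:
    """Discrete Riesz s-energy: sum over distinct unordered pairs
    of 1 / ||x_i - x_j||^s."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(points), k=1)     # each unordered pair once
    return float(np.sum(1.0 / dists[iu] ** s))

rng = np.random.default_rng(0)

def random_sphere_points(n: int, d: int) -> np.ndarray:
    """Uniform random points on S^{d-1} via normalized Gaussians."""
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

pts = random_sphere_points(20, 3)              # 20 points on S^2
energy = riesz_s_energy(pts, s=1.0)
```

As a sanity check, two antipodal points on the sphere are at distance 2, so their s=1 energy is exactly 0.5.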