Search Results for author: Jingfeng Zhang

Found 38 papers, 16 papers with code

StyleBooth: Image Style Editing with Multimodal Instruction

1 code implementation • 18 Apr 2024 • Zhen Han, Chaojie Mao, Zeyinzi Jiang, Yulin Pan, Jingfeng Zhang

We integrate an encoded textual instruction and an image exemplar as a unified condition for the diffusion model, enabling editing of the original image following multimodal instructions.
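
A minimal sketch of this kind of unified conditioning, with hypothetical module names and dimensions: the text-instruction tokens and image-exemplar tokens are projected to a shared width and concatenated into one sequence that a diffusion model's cross-attention can consume.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: fuse text-instruction and image-exemplar embeddings
# into one conditioning sequence for a diffusion model's cross-attention.
class UnifiedCondition(nn.Module):
    def __init__(self, text_dim=768, image_dim=1024, cond_dim=768):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, cond_dim)
        self.image_proj = nn.Linear(image_dim, cond_dim)

    def forward(self, text_tokens, image_tokens):
        # text_tokens: (B, T_text, text_dim); image_tokens: (B, T_img, image_dim)
        cond = torch.cat([self.text_proj(text_tokens),
                          self.image_proj(image_tokens)], dim=1)
        return cond  # (B, T_text + T_img, cond_dim)

cond = UnifiedCondition()(torch.randn(2, 77, 768), torch.randn(2, 256, 1024))
print(cond.shape)  # torch.Size([2, 333, 768])
```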

Locate, Assign, Refine: Taming Customized Image Inpainting with Text-Subject Guidance

no code implementations • 28 Mar 2024 • Yulin Pan, Chaojie Mao, Zeyinzi Jiang, Zhen Han, Jingfeng Zhang

The process involves (i) Locate: concatenating the noise with the masked scene image to achieve precise regional editing; (ii) Assign: employing a decoupled cross-attention mechanism to accommodate multimodal guidance; and (iii) Refine: using a novel RefineNet to supplement subject details.

Image Inpainting
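
A small illustration of the "Locate" input construction described above, with made-up latent shapes: the noise latent, the masked scene image, and the mask are concatenated along the channel axis so the denoiser knows which region to edit.

```python
import torch

# Illustrative sketch: build the denoiser input for masked regional editing.
noise = torch.randn(1, 4, 64, 64)       # diffusion noise latent
scene = torch.randn(1, 4, 64, 64)       # encoded scene image
mask = torch.zeros(1, 1, 64, 64)
mask[..., 16:48, 16:48] = 1.0           # region to inpaint
masked_scene = scene * (1 - mask)       # zero out the region to be edited
unet_input = torch.cat([noise, masked_scene, mask], dim=1)
print(unet_input.shape)                 # torch.Size([1, 9, 64, 64])
```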

Make Me Happier: Evoking Emotions Through Image Diffusion Models

no code implementations • 13 Mar 2024 • Qing Lin, Jingfeng Zhang, Yew Soon Ong, Mengmi Zhang

For the first time, we present a novel challenge of emotion-evoked image generation, aiming to synthesize images that evoke target emotions while retaining the semantics and structures of the original scenes.

Image Generation

Privacy-Preserving Low-Rank Adaptation for Latent Diffusion Models

1 code implementation • 19 Feb 2024 • Zihao Luo, Xilie Xu, Feng Liu, Yun Sing Koh, Di Wang, Jingfeng Zhang

To mitigate this issue, we propose Stable PrivateLoRA, which adapts the LDM by minimizing the ratio of the adaptation loss to the MI (membership inference) gain; this implicitly rescales the gradient and thus stabilizes the optimization.

Privacy Preserving
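
A minimal sketch of the ratio objective, assuming the two scalar losses come from elsewhere in training (here they are placeholder constants): dividing the adaptation loss by the MI gain rescales the gradient by 1 / MI gain.

```python
import torch

# Placeholder scalars standing in for the LDM adaptation loss and an MI-gain proxy.
adaptation_loss = torch.tensor(2.3, requires_grad=True)
mi_gain = torch.tensor(0.7, requires_grad=True)

objective = adaptation_loss / mi_gain   # gradient w.r.t. adaptation_loss is 1 / mi_gain
objective.backward()
print(adaptation_loss.grad, mi_gain.grad)
```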

SCEdit: Efficient and Controllable Image Diffusion Generation via Skip Connection Editing

2 code implementations • 18 Dec 2023 • Zeyinzi Jiang, Chaojie Mao, Yulin Pan, Zhen Han, Jingfeng Zhang

Image diffusion models have been utilized in various tasks, such as text-to-image generation and controllable image synthesis.

Text-to-Image Generation

Fair Text-to-Image Diffusion via Fair Mapping

no code implementations • 29 Nov 2023 • Jia Li, Lijie Hu, Jingfeng Zhang, Tianhang Zheng, Hua Zhang, Di Wang

In this paper, we address the limitations of existing text-to-image diffusion models in generating demographically fair results when given human-related descriptions.

Fairness • Text-to-Image Generation

AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework

no code implementations • 3 Oct 2023 • Xilie Xu, Jingfeng Zhang, Mohan Kankanhalli

To mitigate this issue, we propose a low-rank (LoRa) branch that disentangles robust fine-tuning (RFT) into two distinct components: optimizing natural objectives via the LoRa branch and adversarial objectives via the feature extractor (FE).

Adversarial Robustness • Scheduling
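
A hedged sketch of the two-branch idea, with illustrative names and shapes (not the paper's implementation): a low-rank adapter sits beside a base layer of the feature extractor, so the natural objective can flow through the LoRa path while the adversarial objective updates the FE.

```python
import torch
import torch.nn as nn

class LoRaLinear(nn.Module):
    def __init__(self, dim=512, rank=8):
        super().__init__()
        self.base = nn.Linear(dim, dim)              # part of the feature extractor
        self.lora_a = nn.Linear(dim, rank, bias=False)
        self.lora_b = nn.Linear(rank, dim, bias=False)

    def forward(self, x, use_lora):
        out = self.base(x)
        return out + self.lora_b(self.lora_a(x)) if use_lora else out

layer = LoRaLinear()
x_nat, x_adv = torch.randn(4, 512), torch.randn(4, 512)
feat_nat = layer(x_nat, use_lora=True)    # natural objective goes through the LoRa branch
feat_adv = layer(x_adv, use_lora=False)   # adversarial objective goes through the FE alone
```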

BadLabel: A Robust Perspective on Evaluating and Enhancing Label-noise Learning

1 code implementation • 28 May 2023 • Jingfeng Zhang, Bo Song, Haohan Wang, Bo Han, Tongliang Liu, Lei Liu, Masashi Sugiyama

To address the challenge posed by BadLabel, we further propose a robust LNL method that perturbs the labels in an adversarial manner at each epoch to make the loss values of clean and noisy labels again distinguishable.
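A loose illustrative sketch of this idea (not the paper's exact algorithm): at each epoch, the labels of the hardest samples are pushed toward the class the current model scores lowest, so that loss values of clean and noisy labels separate again. The flip fraction is an assumed constant.

```python
import torch
import torch.nn.functional as F

def perturb_labels(logits, labels, flip_frac=0.1):
    losses = F.cross_entropy(logits, labels, reduction="none")
    k = max(1, int(flip_frac * len(labels)))
    idx = losses.topk(k).indices                # hardest samples this epoch
    labels = labels.clone()
    labels[idx] = logits[idx].argmin(dim=1)     # adversarial (least-likely) class
    return labels

logits = torch.randn(32, 10)
labels = torch.randint(0, 10, (32,))
print(perturb_labels(logits, labels)[:8])
```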

Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization

1 code implementation • NeurIPS 2023 • Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli

To improve transferability, the existing work introduced the standard invariant regularization (SIR) to impose the style-independence property on SCL, which can remove the impact of nuisance style factors from the standard representation.

Contrastive Learning

Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset Selection

1 code implementation • NeurIPS 2023 • Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli

Adversarial contrastive learning (ACL) does not require expensive data annotations but outputs a robust representation that withstands adversarial attacks and also generalizes to a wide range of downstream tasks.

Contrastive Learning

GAT: Guided Adversarial Training with Pareto-optimal Auxiliary Tasks

1 code implementation • 6 Feb 2023 • Salah Ghamizi, Jingfeng Zhang, Maxime Cordy, Mike Papadakis, Masashi Sugiyama, Yves Le Traon

While leveraging additional training data is well established as a way to improve adversarial robustness, it incurs the unavoidable cost of data collection and heavy computation to train models.

Adversarial Robustness • Data Augmentation +1

Accelerating Score-based Generative Models for High-Resolution Image Synthesis

no code implementations • 8 Jun 2022 • Hengyuan Ma, Li Zhang, Xiatian Zhu, Jingfeng Zhang, Jianfeng Feng

However, to ensure stable convergence in sampling and high generation quality, this sequential sampling process has to take a small step size and many sampling iterations (e.g., 2000).

Image Generation
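
A minimal annealed-Langevin-style sketch of the sequential sampling the excerpt refers to, with a toy stand-in for the learned score network: a tiny step size iterated many times is exactly what makes high-resolution synthesis slow.

```python
import torch

def score(x):
    # Stand-in for a learned score network: score of a standard Gaussian.
    return -x

x = torch.randn(1, 3, 64, 64)
step, n_iters = 1e-3, 2000
for _ in range(n_iters):
    noise = torch.randn_like(x)
    # Langevin update: x <- x + step * score(x) + sqrt(2 * step) * noise
    x = x + step * score(x) + (2 * step) ** 0.5 * noise
print(x.mean().item(), x.std().item())
```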

Diverse Instance Discovery: Vision-Transformer for Instance-Aware Multi-Label Image Recognition

no code implementations • 22 Apr 2022 • Yunqing Hu, Xuan Jin, Yin Zhang, Haiwen Hong, Jingfeng Zhang, Feihu Yan, Yuan He, Hui Xue

Finally, we propose a weakly supervised object localization-based approach to extract multi-scale local features and form a multi-view pipeline.

Weakly-Supervised Object Localization

On the Effectiveness of Adversarial Training against Backdoor Attacks

no code implementations • 22 Feb 2022 • Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shu-Tao Xia, Gang Niu, Masashi Sugiyama

To explore whether adversarial training could defend against backdoor attacks or not, we conduct extensive experiments across different threat models and perturbation budgets, and find that the threat model in adversarial training matters.

Adversarial Attack and Defense for Non-Parametric Two-Sample Tests

1 code implementation • 7 Feb 2022 • Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli

Furthermore, we theoretically find that the adversary can also degrade the lower bound of a TST's test power, which enables us to iteratively minimize the test criterion in order to search for adversarial pairs.

Adversarial Attack
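
A hedged sketch of what "iteratively minimize the test criterion" can look like: an attacker nudges one sample set by gradient descent on a simple (biased) Gaussian-kernel MMD statistic until the two-sample test can no longer separate the sets. Kernel bandwidth, learning rate, and iteration count are illustrative.

```python
import torch

def mmd(x, y, sigma=1.0):
    # Simple (biased) Gaussian-kernel MMD statistic between sample sets x and y.
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

x = torch.randn(100, 2)                  # natural samples
y = torch.randn(100, 2) + 2.0            # shifted samples, detectable at first
y_adv = y.clone().requires_grad_(True)
opt = torch.optim.Adam([y_adv], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = mmd(x, y_adv)                 # minimize the test criterion
    loss.backward()
    opt.step()
print(mmd(x, y).item(), mmd(x, y_adv).item())   # the statistic drops sharply
```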

Towards Adversarially Robust Deep Image Denoising

no code implementations • 12 Jan 2022 • Hanshu Yan, Jingfeng Zhang, Jiashi Feng, Masashi Sugiyama, Vincent Y. F. Tan

Secondly, to robustify DIDs, we propose an adversarial training strategy, hybrid adversarial training (HAT), that jointly trains DIDs with adversarial and non-adversarial noisy data to ensure that the reconstruction quality is high and the denoisers around non-adversarial data are locally smooth.

Adversarial Attack • Adversarial Robustness +1
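
An illustrative sketch of the hybrid idea, assuming a toy denoiser and a placeholder mixing weight alpha: one loss term fits non-adversarial noisy inputs, the other fits an adversarially perturbed copy crafted with a single FGSM-style step.

```python
import torch
import torch.nn as nn

denoiser = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1))
mse = nn.MSELoss()

clean = torch.rand(4, 3, 32, 32)
noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)

# One FGSM-style step to craft an adversarial noisy input.
noisy_adv = noisy.clone().requires_grad_(True)
mse(denoiser(noisy_adv), clean).backward()
noisy_adv = (noisy_adv + 2 / 255 * noisy_adv.grad.sign()).detach()

denoiser.zero_grad()                     # clear grads left by the attack step
alpha = 0.5                              # hybrid mixing weight (illustrative)
loss = ((1 - alpha) * mse(denoiser(noisy), clean)
        + alpha * mse(denoiser(noisy_adv), clean))
loss.backward()
```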

Collaborate to Defend Against Adversarial Attacks

no code implementations • 29 Sep 2021 • Sen Cui, Jingfeng Zhang, Jian Liang, Masashi Sugiyama, ChangShui Zhang

However, an ensemble still wastes the limited capacity of multiple models.

Does Adversarial Robustness Really Imply Backdoor Vulnerability?

no code implementations • 29 Sep 2021 • Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Shu-Tao Xia, Gang Niu, Masashi Sugiyama

Based on thorough experiments, we find that such a trade-off ignores the interactions between the perturbation budget of adversarial training and the magnitude of the backdoor trigger.

Adversarial Robustness

DRDF: Determining the Importance of Different Multimodal Information with Dual-Router Dynamic Framework

no code implementations • 21 Jul 2021 • Haiwen Hong, Xuan Jin, Yin Zhang, Yunqing Hu, Jingfeng Zhang, Yuan He, Hui Xue

In multimodal tasks, we find that the importance of text and image modal information differs across input cases. Motivated by this, we propose a high-performance and highly general Dual-Router Dynamic Framework (DRDF), consisting of a Dual-Router, an MWF-Layer, experts, and an expert fusion unit.

RAMS-Trans: Recurrent Attention Multi-scale Transformer for Fine-grained Image Recognition

no code implementations • 17 Jul 2021 • Yunqing Hu, Xuan Jin, Yin Zhang, Haiwen Hong, Jingfeng Zhang, Yuan He, Hui Xue

We propose the recurrent attention multi-scale transformer (RAMS-Trans), which uses the transformer's self-attention to recursively learn discriminative region attention in a multi-scale manner.

Fine-Grained Image Classification • Fine-Grained Image Recognition

Reliable Adversarial Distillation with Unreliable Teachers

2 code implementations • ICLR 2022 • Jianing Zhu, Jiangchao Yao, Bo Han, Jingfeng Zhang, Tongliang Liu, Gang Niu, Jingren Zhou, Jianliang Xu, Hongxia Yang

However, when considering adversarial robustness, teachers may become unreliable and adversarial distillation may not work: teachers are pretrained on their own adversarial data, and it is too demanding to require that teachers also perform well on every adversarial example queried by students.

Adversarial Robustness

NoiLIn: Improving Adversarial Training and Correcting Stereotype of Noisy Labels

1 code implementation • 31 May 2021 • Jingfeng Zhang, Xilie Xu, Bo Han, Tongliang Liu, Gang Niu, Lizhen Cui, Masashi Sugiyama

First, we thoroughly investigate noisy-label (NL) injection into AT's inner maximization and outer minimization, respectively, and obtain observations on when NL injection benefits AT.

Adversarial Robustness
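
A simplified sketch of NL injection into AT's outer minimization, with an illustrative constant flip rate: each epoch, a fraction of the training labels is randomly reassigned before the model is fit on adversarial examples.

```python
import torch

def inject_noisy_labels(labels, num_classes=10, noise_rate=0.2):
    labels = labels.clone()
    flip = torch.rand(len(labels)) < noise_rate       # choose samples to corrupt
    labels[flip] = torch.randint(0, num_classes, (int(flip.sum()),))
    return labels

labels = torch.randint(0, 10, (64,))
print((inject_noisy_labels(labels) != labels).float().mean().item())
```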

Guided Interpolation for Adversarial Training

no code implementations • 15 Feb 2021 • Chen Chen, Jingfeng Zhang, Xilie Xu, Tianlei Hu, Gang Niu, Gang Chen, Masashi Sugiyama

To enhance adversarial robustness, adversarial training learns deep neural networks on adversarial variants generated from their natural data.

Adversarial Robustness

CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection

2 code implementations • 10 Feb 2021 • Hanshu Yan, Jingfeng Zhang, Gang Niu, Jiashi Feng, Vincent Y. F. Tan, Masashi Sugiyama

By comparing non-robust (normally trained) and robustified (adversarially trained) models, we observe that adversarial training (AT) robustifies CNNs by aligning the channel-wise activations of adversarial data with those of their natural counterparts.

Adversarial Robustness • Feature Selection
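
A sketch of the observation itself, not of CIFS: compare a layer's per-channel mean activations on natural versus adversarial inputs; adversarial training tends to bring the two profiles into alignment. The feature tensors here are random stand-ins.

```python
import torch
import torch.nn.functional as F

feat_nat = torch.relu(torch.randn(32, 64, 8, 8))   # (batch, channels, H, W)
feat_adv = torch.relu(torch.randn(32, 64, 8, 8))

chan_nat = feat_nat.mean(dim=(0, 2, 3))            # per-channel activation profile
chan_adv = feat_adv.mean(dim=(0, 2, 3))
alignment = F.cosine_similarity(chan_nat, chan_adv, dim=0)
print(alignment.item())                            # ~1.0 means aligned channels
```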

Understanding the Interaction of Adversarial Training with Noisy Labels

no code implementations • 6 Feb 2021 • Jianing Zhu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Hongxia Yang, Mohan Kankanhalli, Masashi Sugiyama

A recent adversarial training (AT) study showed that the number of projected gradient descent (PGD) steps needed to successfully attack a point (i.e., find an adversarial example in its proximity) is an effective measure of the robustness of this point.
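A hedged sketch of this measure, with a placeholder model and illustrative attack budgets: count how many PGD steps it takes to flip a point's prediction; fewer steps means the point sits closer to the decision boundary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_steps_to_flip(model, x, y, eps=8/255, alpha=2/255, max_steps=20):
    x_adv = x.clone()
    for step in range(1, max_steps + 1):
        x_adv = x_adv.detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)   # project to the eps-ball
        if model(x_adv).argmax(dim=1) != y:
            return step
    return max_steps + 1   # never flipped within the budget

model = nn.Linear(10, 3)
x, y = torch.randn(1, 10), torch.tensor([0])
print(pgd_steps_to_flip(model, x, y))
```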

Learning Diverse-Structured Networks for Adversarial Robustness

1 code implementation • 3 Feb 2021 • Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama

In adversarial training (AT), the main focus has been the objective and optimizer while the model has been less studied, so that the models being used are still those classic ones in standard training (ST).

Adversarial Robustness

Maximum Mean Discrepancy Test is Aware of Adversarial Attacks

2 code implementations • 22 Oct 2020 • Ruize Gao, Feng Liu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Masashi Sugiyama

However, it has been shown that the MMD test is unaware of adversarial attacks: the MMD test failed to detect the discrepancy between natural and adversarial data.

Adversarial Attack Detection

Geometry-aware Instance-reweighted Adversarial Training

2 code implementations • ICLR 2021 • Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan Kankanhalli

The common belief that robustness and accuracy hurt each other was challenged by recent studies in which we can maintain the robustness and improve the accuracy.
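
An illustrative sketch of geometry-aware instance reweighting: points that need fewer PGD steps to be attacked sit closer to the decision boundary and receive larger weights in the training loss. The weighting function below is a hypothetical stand-in, not the paper's exact formula.

```python
import torch

def instance_weights(kappa, max_steps=10):
    # kappa: PGD steps each point survived before its prediction flipped.
    return (1.0 - kappa / max_steps).clamp(min=0.0)

kappa = torch.tensor([1., 3., 9., 10.])       # geometric "distance" proxies
losses = torch.tensor([2.1, 1.4, 0.6, 0.5])   # per-instance adversarial losses
w = instance_weights(kappa)
weighted_loss = (w * losses).sum() / w.sum()  # boundary-close points dominate
print(w, weighted_loss)
```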

Robust Federated Recommendation System

no code implementations • 15 Jun 2020 • Chen Chen, Jingfeng Zhang, Anthony K. H. Tung, Mohan Kankanhalli, Gang Chen

We argue that the key to Byzantine detection is monitoring the gradients of clients' model parameters.

Recommendation Systems
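
A minimal sketch of gradient monitoring for Byzantine detection, assuming flattened per-client gradients and an illustrative threshold: flag clients whose submitted gradient lies far from the coordinate-wise median of all client gradients.

```python
import torch

def flag_byzantine(client_grads, threshold=3.0):
    stacked = torch.stack(client_grads)            # (num_clients, dim)
    median = stacked.median(dim=0).values          # coordinate-wise median
    dist = (stacked - median).norm(dim=1)
    return dist > threshold * dist.median()        # outlier clients

grads = [torch.randn(100) for _ in range(9)] + [10.0 * torch.randn(100)]
print(flag_byzantine(grads))   # the poisoned client stands out
```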

Hierarchically Fair Federated Learning

no code implementations • 22 Apr 2020 • Jingfeng Zhang, Cheng Li, Antonio Robles-Kelly, Mohan Kankanhalli

When federated learning is adopted among competitive agents with siloed datasets, agents are self-interested and participate only if they are fairly rewarded.

Fairness • Federated Learning +1

Attacks Which Do Not Kill Training Make Adversarial Learning Stronger

1 code implementation • ICML 2020 • Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, Mohan Kankanhalli

Adversarial training based on the minimax formulation is necessary for obtaining adversarial robustness of trained models.

Adversarial Robustness
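
A standard sketch of the minimax formulation the excerpt refers to, with a placeholder model and illustrative budgets: an inner PGD maximization crafts the perturbation, and the outer minimization fits the model on the perturbed inputs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(10, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(16, 10), torch.randint(0, 3, (16,))

delta = torch.zeros_like(x, requires_grad=True)
for _ in range(7):                                   # inner maximization (PGD)
    F.cross_entropy(model(x + delta), y).backward()
    delta.data = (delta + 0.02 * delta.grad.sign()).clamp(-0.1, 0.1)
    delta.grad.zero_()

opt.zero_grad()
F.cross_entropy(model(x + delta.detach()), y).backward()   # outer minimization
opt.step()
```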

Where is the Bottleneck of Adversarial Learning with Unlabeled Data?

no code implementations • 20 Nov 2019 • Jingfeng Zhang, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama

Deep neural networks (DNNs) are incredibly brittle due to adversarial examples.

Towards Robust ResNet: A Small Step but A Giant Leap

no code implementations • 28 Feb 2019 • Jingfeng Zhang, Bo Han, Laura Wynter, Kian Hsiang Low, Mohan Kankanhalli

Our analytical studies reveal that the step factor h in the Euler method is able to control the robustness of ResNet in both its training and generalization.
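A sketch of the Euler-method view of a residual block behind this claim: the update x_{t+1} = x_t + h * f(x_t), where a small step factor h keeps each layer's update, and hence the propagation of perturbations, small. The choice h = 0.1 is illustrative.

```python
import torch
import torch.nn as nn

class EulerResBlock(nn.Module):
    def __init__(self, dim=64, h=0.1):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.h = h                      # Euler step factor

    def forward(self, x):
        return x + self.h * self.f(x)   # x_{t+1} = x_t + h * f(x_t)

block = EulerResBlock()
x = torch.randn(4, 64)
print((block(x) - x).norm().item())     # small h keeps each update small
```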

Smooth Inter-layer Propagation of Stabilized Neural Networks for Classification

no code implementations • 27 Sep 2018 • Jingfeng Zhang, Laura Wynter

Recent work has studied the reasons for the remarkable performance of deep neural networks in image classification.

Classification • General Classification +1
