Search Results for author: Chenshuang Zhang

Found 18 papers, 2 papers with code

Towards Understanding Dual BN In Hybrid Adversarial Training

no code implementations28 Mar 2024 Chenshuang Zhang, Chaoning Zhang, Kang Zhang, Axi Niu, Junmo Kim, In So Kweon

There is a growing concern about applying batch normalization (BN) in adversarial training (AT), especially when the model is trained on both adversarial samples and clean samples (termed Hybrid-AT).
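The dual-BN idea referenced above routes clean and adversarial mini-batches through separate normalization statistics. A minimal NumPy sketch of that routing (all class and variable names are hypothetical, not from the paper):

```python
import numpy as np

class BatchNorm1D:
    """Minimal batch norm that tracks its own running statistics."""
    def __init__(self, dim, momentum=0.1, eps=1e-5):
        self.running_mean = np.zeros(dim)
        self.running_var = np.ones(dim)
        self.momentum, self.eps = momentum, eps

    def __call__(self, x):
        mean, var = x.mean(axis=0), x.var(axis=0)
        # Update running statistics, then normalize with batch statistics.
        self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean
        self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
        return (x - mean) / np.sqrt(var + self.eps)

class DualBN:
    """Hybrid-AT-style routing: clean and adversarial batches use separate BNs."""
    def __init__(self, dim):
        self.bn_clean, self.bn_adv = BatchNorm1D(dim), BatchNorm1D(dim)

    def __call__(self, x, adversarial=False):
        return (self.bn_adv if adversarial else self.bn_clean)(x)

rng = np.random.default_rng(0)
dual = DualBN(4)
clean = rng.normal(0.0, 1.0, size=(8, 4))
adv = clean + 0.3 * np.sign(rng.normal(size=(8, 4)))  # stand-in for a real perturbation
out_clean = dual(clean, adversarial=False)
out_adv = dual(adv, adversarial=True)
# After these calls, the two BN branches hold distinct running statistics.
```

This only illustrates the batch-routing mechanism; the paper's actual analysis of when dual BN helps in Hybrid-AT is in the full text.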

ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object


1 code implementation27 Mar 2024 Chenshuang Zhang, Fei Pan, Junmo Kim, In So Kweon, Chengzhi Mao

In this work, we introduce generative models as a data source for synthesizing hard images that benchmark deep models' robustness.

Benchmarking

Robustness of SAM: Segment Anything Under Corruptions and Beyond

no code implementations13 Jun 2023 Yu Qiao, Chaoning Zhang, Taegoo Kang, Donghun Kim, Chenshuang Zhang, Choong Seon Hong

Interpreting the effects of synthetic corruption as style changes, we then conduct a comprehensive evaluation of its robustness against 15 types of common corruptions.

Style Transfer

Attack-SAM: Towards Attacking Segment Anything Model With Adversarial Examples

no code implementations1 May 2023 Chenshuang Zhang, Chaoning Zhang, Taegoo Kang, Donghun Kim, Sung-Ho Bae, In So Kweon

Beyond the basic goal of mask removal, we further investigate and find that it is possible to generate any desired mask by the adversarial attack.

Adversarial Attack · Adversarial Robustness

A Survey on Graph Diffusion Models: Generative AI in Science for Molecule, Protein and Material

no code implementations4 Apr 2023 Mengchun Zhang, Maryam Qamar, Taegoo Kang, Yuna Jung, Chenshuang Zhang, Sung-Ho Bae, Chaoning Zhang

Diffusion models have become the new state-of-the-art generative modeling method in various fields, for which multiple survey works already provide an overall picture.

A Survey on Audio Diffusion Models: Text To Speech Synthesis and Enhancement in Generative AI

no code implementations23 Mar 2023 Chenshuang Zhang, Chaoning Zhang, Sheng Zheng, Mengchun Zhang, Maryam Qamar, Sung-Ho Bae, In So Kweon

This work conducts a survey on audio diffusion models, complementary to existing surveys that either lack the recent progress of diffusion-based speech synthesis or give only an overall picture of applying diffusion models across multiple fields.

Speech Enhancement · Speech Synthesis +1

Text-to-image Diffusion Models in Generative AI: A Survey

no code implementations14 Mar 2023 Chenshuang Zhang, Chaoning Zhang, Mengchun Zhang, In So Kweon

This survey reviews text-to-image diffusion models in the context of diffusion models having become popular for a wide range of generative tasks.

text-guided-image-editing

A Survey on Masked Autoencoder for Self-supervised Learning in Vision and Beyond

no code implementations30 Jul 2022 Chaoning Zhang, Chenshuang Zhang, Junha Song, John Seon Keun Yi, Kang Zhang, In So Kweon

"Masked autoencoders are scalable vision learners," as the title of MAE \cite{he2022masked} states, suggests that self-supervised learning (SSL) in vision might follow a trajectory similar to that in NLP.

Contrastive Learning · Denoising +1

Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness

2 code implementations22 Jul 2022 Chaoning Zhang, Kang Zhang, Chenshuang Zhang, Axi Niu, Jiu Feng, Chang D. Yoo, In So Kweon

Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields.

Adversarial Robustness · Contrastive Learning +3

Fast Adversarial Training with Noise Augmentation: A Unified Perspective on RandStart and GradAlign

no code implementations11 Feb 2022 Axi Niu, Kang Zhang, Chaoning Zhang, Chenshuang Zhang, In So Kweon, Chang D. Yoo, Yanning Zhang

The former works only for a relatively small perturbation of 8/255 under the l_\infty constraint; GradAlign extends the perturbation size to 16/255 (also under the l_\infty constraint), but at the cost of being 3 to 4 times slower.
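The perturbation budgets above (8/255, 16/255) are the radius epsilon of the l_\infty ball in single-step attacks such as FGSM, which fast adversarial training builds on. A minimal sketch of one such step, using a random array as a stand-in for a real image and loss gradient (all specifics assumed, not from the paper):

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """One FGSM step: move each pixel by epsilon in the sign of the loss
    gradient, keeping the perturbation inside the l_inf ball of radius epsilon."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # stay in the valid image range [0, 1]

rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=(3, 32, 32))  # toy "image" tensor
grad = rng.normal(size=x.shape)              # stand-in for a loss gradient

for eps in (8 / 255, 16 / 255):
    x_adv = fgsm_perturb(x, grad, eps)
    # The l_inf distance to the original never exceeds the budget.
    assert np.abs(x_adv - x).max() <= eps + 1e-12
```

RandStart-style methods add a random initialization inside the same ball before this step; the paper's unified noise-augmentation view of RandStart and GradAlign is in the full text.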

Data Augmentation

Early Stop And Adversarial Training Yield Better surrogate Model: Very Non-Robust Features Harm Adversarial Transferability

no code implementations29 Sep 2021 Chaoning Zhang, Gyusang Cho, Philipp Benz, Kang Zhang, Chenshuang Zhang, Chan-Hyun Youn, In So Kweon

The transferability of adversarial examples (AE), known as adversarial transferability, has attracted significant attention because it can be exploited for transferable black-box attacks (TBA).

Attribute

EENMF: An End-to-End Neural Matching Framework for E-Commerce Sponsored Search

no code implementations4 Dec 2018 Wenjin Wu, Guojun Liu, Hui Ye, Chenshuang Zhang, Tianshu Wu, Daorui Xiao, Wei Lin, Xiaoyu Zhu

In the real traffic of a large-scale e-commerce sponsored search, the proposed approach significantly outperforms the baseline.

Retrieval
