Search Results for author: Chaoning Zhang

Found 62 papers, 15 papers with code

FedCCL: Federated Dual-Clustered Feature Contrast Under Domain Heterogeneity

no code implementations14 Apr 2024 Yu Qiao, Huy Q. Le, Mengchun Zhang, Apurba Adhikary, Chaoning Zhang, Choong Seon Hong

First, we employ clustering on the local representations of each client, aiming to capture intra-class information based on these local clusters at a high level of granularity.

Clustering Federated Learning +1
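
As a concrete illustration of the first step described in the FedCCL excerpt above, the hedged sketch below clusters one client's local feature representations with k-means; the clustering backend, feature shapes, and cluster count are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch: per-client clustering of local feature representations.
# KMeans, the cluster count, and the feature shapes are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def cluster_client_features(features: np.ndarray, n_clusters: int = 8):
    """features: (num_samples, feature_dim) embeddings from a client's local model."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    assignments = km.fit_predict(features)      # per-sample cluster label
    return km.cluster_centers_, assignments     # local cluster prototypes + labels

# Usage on one client:
local_feats = np.random.randn(512, 128).astype(np.float32)
centroids, labels = cluster_client_features(local_feats)
```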

Logit Calibration and Feature Contrast for Robust Federated Learning on Non-IID Data

no code implementations10 Apr 2024 Yu Qiao, Chaoning Zhang, Apurba Adhikary, Choong Seon Hong

Federated learning (FL) is a privacy-preserving distributed framework for collaborative model training on devices in edge networks.

Adversarial Robustness Federated Learning +1

Towards Understanding Dual BN In Hybrid Adversarial Training

no code implementations28 Mar 2024 Chenshuang Zhang, Chaoning Zhang, Kang Zhang, Axi Niu, Junmo Kim, In So Kweon

There is a growing concern about applying batch normalization (BN) in adversarial training (AT), especially when the model is trained on both adversarial samples and clean samples (termed Hybrid-AT).

Sora as an AGI World Model? A Complete Survey on Text-to-Video Generation

no code implementations8 Mar 2024 Joseph Cho, Fachrina Dewi Puspitasari, Sheng Zheng, Jingyao Zheng, Lik-Hang Lee, Tae-Ho Kim, Choong Seon Hong, Chaoning Zhang

Text-to-video generation marks a significant frontier in the rapidly evolving domain of generative AI, integrating advancements in text-to-image synthesis, video captioning, and text-guided editing.

Hallucination Image Generation +3

Towards Robust Federated Learning via Logits Calibration on Non-IID Data

no code implementations5 Mar 2024 Yu Qiao, Apurba Adhikary, Chaoning Zhang, Choong Seon Hong

Meanwhile, the non-independent and identically distributed (non-IID) challenge of data distribution between edge devices can further degrade the performance of models.

Federated Learning Privacy Preserving

MobileSAMv2: Faster Segment Anything to Everything

1 code implementation15 Dec 2023 Chaoning Zhang, Dongshen Han, Sheng Zheng, Jinwoo Choi, Tae-Ho Kim, Choong Seon Hong

The efficiency bottleneck of SegEvery with SAM, however, lies in its mask decoder because it needs to first generate numerous masks with redundant grid-search prompts and then perform filtering to obtain the final valid masks.

Knowledge Distillation Object Discovery +1
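
To make the bottleneck described above concrete, here is a hedged sketch of a conventional grid-prompt SegEvery loop: a dense grid of point prompts, one mask-decoding call per prompt, then score-based filtering. `decode_mask` is a placeholder standing in for SAM's prompt-guided decoder, not the real API.

```python
# Hedged sketch of grid-prompt SegEvery: many redundant prompts, then filtering.
# `decode_mask` is a placeholder, not SAM's actual interface.
import numpy as np

def grid_prompts(h: int, w: int, step: int = 64):
    """Dense grid of (y, x) point prompts -- the redundant grid search."""
    return [(y, x) for y in range(0, h, step) for x in range(0, w, step)]

def decode_mask(point):
    """Placeholder decoder call: returns (mask, confidence) for one prompt."""
    return np.zeros((256, 256), dtype=bool), float(np.random.rand())

def seg_every(h: int = 1024, w: int = 1024, score_thresh: float = 0.8):
    kept = []
    for p in grid_prompts(h, w):            # one decoder call per grid prompt
        mask, score = decode_mask(p)
        if score > score_thresh:            # filtering to obtain the final valid masks
            kept.append(mask)
    return kept
```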

Single Image Reflection Removal with Reflection Intensity Prior Knowledge

no code implementations6 Dec 2023 Dongshen Han, Seungkyu Lee, Chaoning Zhang, Heechan Yoon, Hyukmin Kwon, HyunCheol Kim, HyonGon Choo

In this paper, we propose a general reflection intensity prior that captures the intensity of the reflection phenomenon and demonstrate its effectiveness.

Reflection Removal

Federated Learning with Diffusion Models for Privacy-Sensitive Vision Tasks

1 code implementation28 Nov 2023 Ye Lin Tun, Chu Myaet Thwal, Ji Su Yoon, Sun Moo Kang, Chaoning Zhang, Choong Seon Hong

We conduct experiments on various FL scenarios, and our findings demonstrate that federated diffusion models have great potential to deliver vision services to privacy-sensitive domains.

Federated Learning Image Generation +1

Segment Anything Meets Universal Adversarial Perturbation

no code implementations19 Oct 2023 Dongshen Han, Sheng Zheng, Chaoning Zhang

On top of the ablation study to understand various components in our proposed method, we shed light on the roles of positive and negative samples in making the generated UAP effective for attacking SAM.

Adversarial Attack Adversarial Robustness +1

Black-box Targeted Adversarial Attack on Segment Anything (SAM)

no code implementations16 Oct 2023 Sheng Zheng, Chaoning Zhang, Xinhong Hao

The task of TAA on SAM has been realized in a recent arXiv work, but only in the white-box setup, which assumes access to the prompt and the model and is thus less practical.


Adversarial Attack

Toward a Deeper Understanding: RetNet Viewed through Convolution

1 code implementation11 Sep 2023 Chenghao Li, Chaoning Zhang

A straightforward way to locally adapt the self-attention matrix can be realized by an element-wise learnable weight mask (ELM), for which our preliminary experiments show promising results.

Language Modelling

FedMEKT: Distillation-based Embedding Knowledge Transfer for Multimodal Federated Learning

no code implementations25 Jul 2023 Huy Q. Le, Minh N. H. Nguyen, Chu Myaet Thwal, Yu Qiao, Chaoning Zhang, Choong Seon Hong

Bringing this concept into a system, we develop a distillation-based multimodal embedding knowledge transfer mechanism, namely FedMEKT, which allows the server and clients to exchange the joint knowledge of their learning models extracted from a small multimodal proxy dataset.

Federated Learning Human Activity Recognition +1
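
The exchange of "joint knowledge" over a small multimodal proxy dataset can be sketched roughly as below: each client submits embeddings of the shared proxy samples, the server aggregates them into a target, and clients then distill toward that target. The function boundaries and the simple averaging rule are assumptions, not FedMEKT's exact protocol.

```python
# Hedged sketch: embedding knowledge exchange over a small shared proxy set.
# The aggregation rule (plain averaging) is an illustrative assumption.
import torch
import torch.nn.functional as F

@torch.no_grad()
def client_proxy_embeddings(encoder, proxy_batch):
    return encoder(proxy_batch)                         # (num_proxy, embed_dim)

def server_aggregate(all_client_embeddings):
    return torch.stack(all_client_embeddings).mean(0)   # joint embedding target

def client_distill_step(encoder, proxy_batch, joint_target, optimizer):
    loss = F.mse_loss(encoder(proxy_batch), joint_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```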

Internal-External Boundary Attention Fusion for Glass Surface Segmentation

no code implementations1 Jul 2023 Dongshen Han, Seungkyu Lee, Chaoning Zhang, Heechan Yoon, Hyukmin Kwon, Hyun-Cheol Kim, Hyon-Gon Choo

Inspired by prior semantic segmentation approaches with challenging image types such as X-ray or CT scans, we propose separated internal-external boundary attention modules that individually learn and selectively integrate visual characteristics of the inside and outside region of glass surface from a single color image.

Semantic Segmentation Transparent objects

Faster Segment Anything: Towards Lightweight SAM for Mobile Applications

2 code implementations25 Jun 2023 Chaoning Zhang, Dongshen Han, Yu Qiao, Jung Uk Kim, Sung-Ho Bae, Seungkyu Lee, Choong Seon Hong

Concretely, we distill the knowledge from the heavy image encoder (ViT-H in the original SAM) to a lightweight image encoder, which can be automatically compatible with the mask decoder in the original SAM.

Image Segmentation Instance Segmentation +1
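
A minimal hedged sketch of the decoupled distillation step described above: regress the lightweight encoder's image embeddings onto the frozen ViT-H encoder's embeddings so the original mask decoder can be reused unchanged. The loss choice and module handles are assumptions.

```python
# Hedged sketch: distill image embeddings from the heavy SAM encoder (teacher)
# into a lightweight encoder (student). MSE is an illustrative loss choice.
import torch
import torch.nn.functional as F

def encoder_distill_step(student, teacher, images, optimizer):
    teacher.eval()
    with torch.no_grad():
        target_emb = teacher(images)          # frozen ViT-H image embeddings
    student_emb = student(images)             # lightweight encoder output
    loss = F.mse_loss(student_emb, target_emb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```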

Robustness of Segment Anything Model (SAM) for Autonomous Driving in Adverse Weather Conditions

no code implementations23 Jun 2023 Xinru Shan, Chaoning Zhang

Given its impressive performance, there is a strong desire to apply SAM in autonomous driving to improve the performance of vision tasks, particularly in challenging scenarios such as driving under adverse weather conditions.

Autonomous Driving

Robustness of SAM: Segment Anything Under Corruptions and Beyond

no code implementations13 Jun 2023 Yu Qiao, Chaoning Zhang, Taegoo Kang, Donghun Kim, Chenshuang Zhang, Choong Seon Hong

By interpreting the effects of synthetic corruption as style changes, we proceed to conduct a comprehensive evaluation of its robustness against 15 types of common corruption.

Style Transfer

Segment Anything Meets Semantic Communication

no code implementations3 Jun 2023 Shehbaz Tariq, Brian Estadimas Arfeto, Chaoning Zhang, Hyundong Shin

In light of the diminishing returns of traditional methods for enhancing transmission rates, the domain of semantic communication presents promising new frontiers.

Image Reconstruction Image Segmentation +3

Generative AI meets 3D: A Survey on Text-to-3D in AIGC Era

no code implementations10 May 2023 Chenghao Li, Chaoning Zhang, Atish Waghwase, Lik-Hang Lee, Francois Rameau, Yang Yang, Sung-Ho Bae, Choong Seon Hong

Generative AI (AIGC, a.k.a. AI-generated content) has made remarkable progress in the past few years, among which text-guided content generation is the most practical, since it enables interaction between human instruction and AIGC.

Scene Generation Text to 3D +1

When ChatGPT for Computer Vision Will Come? From 2D to 3D

no code implementations10 May 2023 Chenghao Li, Chaoning Zhang

On top of that, this work presents an outlook on the development of AIGC in 3D from the data perspective.

Attack-SAM: Towards Attacking Segment Anything Model With Adversarial Examples

no code implementations1 May 2023 Chenshuang Zhang, Chaoning Zhang, Taegoo Kang, Donghun Kim, Sung-Ho Bae, In So Kweon

Beyond the basic goal of mask removal, we further investigate and find that it is possible to generate any desired mask by the adversarial attack.

Adversarial Attack Adversarial Robustness

A Survey on Graph Diffusion Models: Generative AI in Science for Molecule, Protein and Material

no code implementations4 Apr 2023 Mengchun Zhang, Maryam Qamar, Taegoo Kang, Yuna Jung, Chenshuang Zhang, Sung-Ho Bae, Chaoning Zhang

Diffusion models have become a new SOTA generative modeling method in various fields, for which multiple survey works already provide an overall picture.

A Survey on Audio Diffusion Models: Text To Speech Synthesis and Enhancement in Generative AI

no code implementations23 Mar 2023 Chenshuang Zhang, Chaoning Zhang, Sheng Zheng, Mengchun Zhang, Maryam Qamar, Sung-Ho Bae, In So Kweon

This work conducts a survey on audio diffusion models, which is complementary to existing surveys that either lack the recent progress of diffusion-based speech synthesis or only give an overall picture of applying diffusion models in multiple fields.

Speech Enhancement Speech Synthesis +1

Text-to-image Diffusion Models in Generative AI: A Survey

no code implementations14 Mar 2023 Chenshuang Zhang, Chaoning Zhang, Mengchun Zhang, In So Kweon

This survey reviews text-to-image diffusion models in the context that diffusion models have emerged to be popular for a wide range of generative tasks.

text-guided-image-editing

Test-time Adaptation in the Dynamic World with Compound Domain Knowledge Management

no code implementations16 Dec 2022 Junha Song, KwanYong Park, Inkyu Shin, Sanghyun Woo, Chaoning Zhang, In So Kweon

In addition, to prevent overfitting of the TTA model, we devise a novel regularization that modulates the adaptation rates using the domain similarity between the source and the current target domain.

Denoising Image Classification +4
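
The modulation described in the excerpt could be realized, for example, by scaling the adaptation rate with a similarity score between stored source feature statistics and the current target batch; the sketch below is one hedged interpretation, not the paper's actual regularizer.

```python
# Hedged sketch: modulate the test-time adaptation rate by domain similarity.
# The direction and form of the modulation are assumptions for illustration.
import torch
import torch.nn.functional as F

def modulated_lr(base_lr: float, source_mean: torch.Tensor, target_feats: torch.Tensor) -> float:
    """source_mean: (D,) stored source feature mean; target_feats: (B, D) current target batch."""
    target_mean = target_feats.mean(dim=0)
    sim = F.cosine_similarity(source_mean, target_mean, dim=0)     # in [-1, 1]
    return base_lr * float((1.0 - sim).clamp(min=0.0))             # adapt more when domains differ
```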

On the Pros and Cons of Momentum Encoder in Self-Supervised Visual Representation Learning

no code implementations11 Aug 2022 Trung Pham, Chaoning Zhang, Axi Niu, Kang Zhang, Chang D. Yoo

Exponential Moving Average (EMA or momentum) is widely used in modern self-supervised learning (SSL) approaches, such as MoCo, for enhancing performance.

Representation Learning Self-Supervised Learning
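
The EMA (momentum) update discussed above is compact enough to show directly; a hedged PyTorch sketch with an illustrative momentum coefficient.

```python
# Hedged sketch of the EMA / momentum-encoder update used in MoCo-style SSL.
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def ema_update(online: nn.Module, momentum_enc: nn.Module, m: float = 0.999):
    for p_o, p_m in zip(online.parameters(), momentum_enc.parameters()):
        p_m.mul_(m).add_(p_o, alpha=1.0 - m)    # p_m <- m * p_m + (1 - m) * p_o

# Usage: momentum_enc = copy.deepcopy(online); call ema_update(...) after every optimizer step.
```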

A Survey on Masked Autoencoder for Self-supervised Learning in Vision and Beyond

no code implementations30 Jul 2022 Chaoning Zhang, Chenshuang Zhang, Junha Song, John Seon Keun Yi, Kang Zhang, In So Kweon

Masked autoencoders are scalable vision learners, as the title of MAE (He et al., 2022) suggests, which implies that self-supervised learning (SSL) in vision might undertake a similar trajectory as in NLP.

Contrastive Learning Denoising +1

Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness

2 code implementations22 Jul 2022 Chaoning Zhang, Kang Zhang, Chenshuang Zhang, Axi Niu, Jiu Feng, Chang D. Yoo, In So Kweon

Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields.

Adversarial Robustness Contrastive Learning +3

Investigating Top-k White-Box and Transferable Black-box Attack

no code implementations30 Mar 2022 Chaoning Zhang, Philipp Benz, Adil Karjauv, Jae Won Cho, Kang Zhang, In So Kweon

It is widely reported that stronger I-FGSM transfers worse than simple FGSM, leading to a popular belief that transferability is at odds with the white-box attack strength.

Dual Temperature Helps Contrastive Learning Without Many Negative Samples: Towards Understanding and Simplifying MoCo

2 code implementations CVPR 2022 Chaoning Zhang, Kang Zhang, Trung X. Pham, Axi Niu, Zhinan Qiao, Chang D. Yoo, In So Kweon

Contrastive learning (CL) is widely known to require many negative samples, 65536 in MoCo for instance, for which the performance of a dictionary-free framework is often inferior because the negative sample size (NSS) is limited by its mini-batch size (MBS).

Contrastive Learning
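
For background on the negative-sample-size issue raised above, a hedged sketch of a plain single-temperature InfoNCE loss; the paper's dual-temperature mechanism is not reproduced here, and the queue versus mini-batch distinction only changes where `negatives` comes from.

```python
# Hedged sketch of a standard InfoNCE loss: positives from the momentum branch,
# negatives from a queue (65536 in MoCo) or, dictionary-free, from the mini-batch.
import torch
import torch.nn.functional as F

def info_nce(q, k_pos, negatives, tau=0.2):
    q = F.normalize(q, dim=1)                         # (B, D) query embeddings
    k_pos = F.normalize(k_pos, dim=1)                 # (B, D) positive keys
    negatives = F.normalize(negatives, dim=1)         # (N, D) negative keys
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)      # (B, 1)
    l_neg = q @ negatives.t()                         # (B, N)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau   # single temperature shown here
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```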

Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation

no code implementations9 Mar 2022 Qilong Zhang, Chaoning Zhang, CHAOQUN LI, Jingkuan Song, Lianli Gao

In this paper, we move a step forward and show the existence of a training-free adversarial perturbation under the no-box threat model, which can be successfully used to attack different DNNs in real time.

Fast Adversarial Training with Noise Augmentation: A Unified Perspective on RandStart and GradAlign

no code implementations11 Feb 2022 Axi Niu, Kang Zhang, Chaoning Zhang, Chenshuang Zhang, In So Kweon, Chang D. Yoo, Yanning Zhang

The former works only for a relatively small perturbation of 8/255 under the ℓ∞ constraint, while GradAlign extends the perturbation size to 16/255 (again under the ℓ∞ constraint) at the cost of being 3 to 4 times slower.

Data Augmentation
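
As context for the RandStart/GradAlign comparison above, a hedged sketch of single-step adversarial example generation with a random start under an ℓ∞ budget; the budget of 8/255 and step size are illustrative, not the paper's exact settings.

```python
# Hedged sketch: FGSM with a random start (RandStart-style noise augmentation)
# under an l_inf budget. eps = 8/255 and alpha = 10/255 are illustrative.
import torch
import torch.nn.functional as F

def fgsm_random_start(model, x, y, eps=8/255, alpha=10/255):
    delta = torch.empty_like(x).uniform_(-eps, eps)    # random start inside the l_inf ball
    delta.requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    delta = (delta + alpha * delta.grad.sign()).clamp(-eps, eps).detach()
    return (x + delta).clamp(0, 1)                     # keep a valid image range
```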

Investigating Top-k White-Box and Transferable Black-Box Attack

no code implementations CVPR 2022 Chaoning Zhang, Philipp Benz, Adil Karjauv, Jae Won Cho, Kang Zhang, In So Kweon

It is widely reported that stronger I-FGSM transfers worse than simple FGSM, leading to a popular belief that transferability is at odds with the white-box attack strength.

Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs

1 code implementation6 Oct 2021 Philipp Benz, Soomin Ham, Chaoning Zhang, Adil Karjauv, In So Kweon

Thus, it is critical for the community to know whether the newly proposed ViT and MLP-Mixer are also vulnerable to adversarial attacks.

Adversarial Attack Adversarial Robustness

Early Stop And Adversarial Training Yield Better surrogate Model: Very Non-Robust Features Harm Adversarial Transferability

no code implementations29 Sep 2021 Chaoning Zhang, Gyusang Cho, Philipp Benz, Kang Zhang, Chenshuang Zhang, Chan-Hyun Youn, In So Kweon

The transferability of adversarial examples (AE), known as adversarial transferability, has attracted significant attention because it can be exploited for transferable black-box attacks (TBA).

Attribute

Universal Adversarial Head: Practical Protection against Video Data Leakage

no code implementations ICML Workshop AML 2021 Jiawang Bai, Bin Chen, Dongxian Wu, Chaoning Zhang, Shu-Tao Xia

We propose universal adversarial head (UAH), which crafts adversarial query videos by prepending the original videos with a sequence of adversarial frames to perturb the normal hash codes in the Hamming space.

Deep Hashing Video Retrieval

Restoration of Video Frames from a Single Blurred Image with Motion Understanding

no code implementations19 Apr 2021 Dawit Mureja Argaw, Junsik Kim, Francois Rameau, Chaoning Zhang, In So Kweon

We formulate video restoration from a single blurred image as an inverse problem by setting clean image sequence and their respective motion as latent factors, and the blurred image as an observation.

Video Restoration

Universal Adversarial Training with Class-Wise Perturbations

no code implementations7 Apr 2021 Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon

The SOTA universal adversarial training (UAT) method optimizes a single perturbation for all training samples in the mini-batch.

Adversarial Robustness
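
For reference, a hedged sketch of one step of the baseline UAT scheme the excerpt refers to: a single shared perturbation for every sample in the mini-batch, updated alongside the model. Step sizes are illustrative, and the class-wise variant proposed in the paper is not shown.

```python
# Hedged sketch of one universal adversarial training (UAT) step:
# one shared perturbation `uap` for the whole mini-batch.
import torch
import torch.nn.functional as F

def uat_step(model, x, y, uap, optimizer, eps=10/255, step=1/255):
    uap = uap.detach().requires_grad_(True)
    loss = F.cross_entropy(model((x + uap).clamp(0, 1)), y)
    optimizer.zero_grad()
    loss.backward()                                   # gradients for both model and uap
    optimizer.step()                                  # minimize the loss w.r.t. model weights
    with torch.no_grad():
        uap = (uap + step * uap.grad.sign()).clamp(-eps, eps)  # maximize the loss w.r.t. uap
    return uap
```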

A Survey On Universal Adversarial Attack

1 code implementation2 Mar 2021 Chaoning Zhang, Philipp Benz, Chenguo Lin, Adil Karjauv, Jing Wu, In So Kweon

The intriguing phenomenon of adversarial examples has attracted significant attention in machine learning, and what might be more surprising to the community is the existence of universal adversarial perturbations (UAPs), i.e., a single perturbation that fools the target DNN for most images.

Adversarial Attack

Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective

no code implementations12 Feb 2021 Chaoning Zhang, Philipp Benz, Adil Karjauv, In So Kweon

We perform task-specific and joint analysis and reveal that (a) frequency is a key factor influencing their performance, as measured by the proposed entropy metric for quantifying the frequency distribution; and (b) their success can be attributed to a DNN being highly sensitive to high-frequency content.
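
The entropy metric mentioned in point (a) can be sketched as Shannon entropy over a normalized 2D FFT magnitude spectrum; this is a hedged illustration of the idea, not necessarily the paper's exact definition.

```python
# Hedged sketch: Shannon entropy of a normalized 2D FFT magnitude spectrum,
# one way to quantify how spread out an image's frequency content is.
import numpy as np

def frequency_entropy(img: np.ndarray) -> float:
    """img: 2D grayscale array. Returns the entropy of its magnitude spectrum."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    p = spectrum / spectrum.sum()                     # normalize to a distribution
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```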

Data-Free Universal Adversarial Perturbation and Black-Box Attack

no code implementations ICCV 2021 Chaoning Zhang, Philipp Benz, Adil Karjauv, In So Kweon

For a more practical universal attack, our investigation of untargeted UAP focuses on alleviating the dependence on the original training samples, from removing the need for sample labels to limiting the sample size.

Towards Robust Data Hiding Against (JPEG) Compression: A Pseudo-Differentiable Deep Learning Approach

1 code implementation30 Dec 2020 Chaoning Zhang, Adil Karjauv, Philipp Benz, In So Kweon

Recently, deep learning has shown large success in data hiding, while non-differentiability of JPEG makes it challenging to train a deep pipeline for improving robustness against lossy compression.
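
One common way to make a non-differentiable operation like JPEG usable inside a deep pipeline is a straight-through style wrapper: real JPEG compression in the forward pass, identity gradient in the backward pass. The sketch below is an assumption about how "pseudo-differentiable" can be realized, not a claim about the paper's exact mechanism.

```python
# Hedged sketch: straight-through JPEG. Forward applies real (lossy, non-differentiable)
# JPEG; backward passes the gradient through unchanged.
import io
import torch
from PIL import Image
from torchvision.transforms.functional import to_pil_image, to_tensor

def real_jpeg(x: torch.Tensor, quality: int = 75) -> torch.Tensor:
    """x: (C, H, W) in [0, 1]. Round-trips the image through an in-memory JPEG."""
    buf = io.BytesIO()
    to_pil_image(x.clamp(0, 1).cpu()).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return to_tensor(Image.open(buf)).to(x.device)

class PseudoJPEG(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):                               # x: (B, C, H, W)
        return torch.stack([real_jpeg(img) for img in x])
    @staticmethod
    def backward(ctx, grad_output):
        return grad_output                             # identity (straight-through) gradient

# Usage: y = PseudoJPEG.apply(batch_of_images)
```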

UDH: Universal Deep Hiding for Steganography, Watermarking, and Light Field Messaging

1 code implementation NeurIPS 2020 Chaoning Zhang, Philipp Benz, Adil Karjauv, Geng Sun, In So Kweon

This is the first work demonstrating the success of (DNN-based) hiding of a full image for watermarking and LFM.

Robustness May Be at Odds with Fairness: An Empirical Study on Class-wise Accuracy

no code implementations26 Oct 2020 Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon

Adversarial training is the most widely used technique for improving adversarial robustness to strong white-box attacks.

Adversarial Robustness Autonomous Driving +1

CD-UAP: Class Discriminative Universal Adversarial Perturbation

no code implementations7 Oct 2020 Chaoning Zhang, Philipp Benz, Tooba Imtiaz, In So Kweon

Since the proposed attack generates a universal adversarial perturbation that is discriminative to targeted and non-targeted classes, we term it class discriminative universal adversarial perturbation (CD-UAP).

Double Targeted Universal Adversarial Perturbations

1 code implementation7 Oct 2020 Philipp Benz, Chaoning Zhang, Tooba Imtiaz, In So Kweon

This universal perturbation attacks one targeted source class toward a sink class, while having a limited adversarial effect on other non-targeted source classes, to avoid raising suspicion.

Autonomous Driving

Revisiting Batch Normalization for Improving Corruption Robustness

no code implementations7 Oct 2020 Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon

We find that simply estimating and adapting the BN statistics on a few (32 for instance) representation samples, without retraining the model, improves the corruption robustness by a large margin on several benchmark datasets with a wide range of model architectures.
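
A hedged PyTorch sketch of the adaptation described above: re-estimate BatchNorm running statistics on a small batch of corrupted samples without updating any weights. The batch size of 32 follows the excerpt; the sketch re-estimates the statistics from scratch, whereas the paper may combine clean and corrupted statistics differently.

```python
# Hedged sketch: re-estimate BN running statistics on a few corrupted samples
# without retraining any weights (no optimizer involved).
import torch
import torch.nn as nn

@torch.no_grad()
def adapt_bn_stats(model: nn.Module, corrupted_batch: torch.Tensor):
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.reset_running_stats()                    # forget the clean-data statistics
            m.momentum = None                          # use a cumulative moving average
    model.train()                                      # BN updates running stats in train mode
    model(corrupted_batch)                             # one forward pass on e.g. 32 samples
    model.eval()
```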

Data from Model: Extracting Data from Non-robust and Robust Models

no code implementations13 Jul 2020 Philipp Benz, Chaoning Zhang, Tooba Imtiaz, In-So Kweon

We repeat the process of Data to Model (DtM) and Data from Model (DfM) in sequence and explore the loss of feature mapping information by measuring the accuracy drop on the original validation dataset.

Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations

1 code implementation CVPR 2020 Chaoning Zhang, Philipp Benz, Tooba Imtiaz, In-So Kweon

We utilize this vector representation to understand adversarial examples by disentangling the clean images and adversarial perturbations, and analyze their influence on each other.
