no code implementations • 14 Apr 2024 • Yu Qiao, Huy Q. Le, Mengchun Zhang, Apurba Adhikary, Chaoning Zhang, Choong Seon Hong
First, we employ clustering on the local representations of each client, aiming to capture intra-class information based on these local clusters at a high level of granularity.
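The per-client clustering step described above can be sketched with a minimal NumPy k-means; the feature dimensions, cluster count, and function name are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def cluster_local_representations(feats, k, iters=10, seed=0):
    """Toy k-means over one client's local feature vectors.

    feats: (n, d) array of representations extracted by the local model.
    Returns (centroids, assignments); each centroid summarizes one
    intra-class cluster at a finer granularity than a single class mean.
    """
    rng = np.random.default_rng(seed)
    centroids = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        # Assign each representation to its nearest centroid.
        dists = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster went empty.
        for j in range(k):
            if (assign == j).any():
                centroids[j] = feats[assign == j].mean(axis=0)
    return centroids, assign
```

In a federated setting, each client would run this locally and share only the resulting centroids, not the raw representations.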
no code implementations • 10 Apr 2024 • Yu Qiao, Chaoning Zhang, Apurba Adhikary, Choong Seon Hong
Federated learning (FL) is a privacy-preserving distributed framework for collaborative model training on devices in edge networks.
no code implementations • 28 Mar 2024 • Chenshuang Zhang, Chaoning Zhang, Kang Zhang, Axi Niu, Junmo Kim, In So Kweon
There is a growing concern about applying batch normalization (BN) in adversarial training (AT), especially when the model is trained on both adversarial samples and clean samples (termed Hybrid-AT).
no code implementations • 8 Mar 2024 • Joseph Cho, Fachrina Dewi Puspitasari, Sheng Zheng, Jingyao Zheng, Lik-Hang Lee, Tae-Ho Kim, Choong Seon Hong, Chaoning Zhang
Text-to-video generation marks a significant frontier in the rapidly evolving domain of generative AI, integrating advancements in text-to-image synthesis, video captioning, and text-guided editing.
no code implementations • 5 Mar 2024 • Yu Qiao, Apurba Adhikary, Chaoning Zhang, Choong Seon Hong
Meanwhile, the non-independent and identically distributed (non-IID) challenge of data distribution between edge devices can further degrade the performance of models.
1 code implementation • 15 Dec 2023 • Chaoning Zhang, Dongshen Han, Sheng Zheng, Jinwoo Choi, Tae-Ho Kim, Choong Seon Hong
The efficiency bottleneck of SegEvery with SAM, however, lies in its mask decoder because it needs to first generate numerous masks with redundant grid-search prompts and then perform filtering to obtain the final valid masks.
no code implementations • 6 Dec 2023 • Dongshen Han, Seungkyu Lee, Chaoning Zhang, Heechan Yoon, Hyukmin Kwon, HyunCheol Kim, HyonGon Choo
In this paper, we propose a general reflection intensity prior that captures the intensity of the reflection phenomenon and demonstrate its effectiveness.
1 code implementation • 28 Nov 2023 • Ye Lin Tun, Chu Myaet Thwal, Ji Su Yoon, Sun Moo Kang, Chaoning Zhang, Choong Seon Hong
We conduct experiments on various FL scenarios, and our findings demonstrate that federated diffusion models have great potential to deliver vision services to privacy-sensitive domains.
no code implementations • 19 Oct 2023 • Dongshen Han, Sheng Zheng, Chaoning Zhang
On top of the ablation study to understand various components in our proposed method, we shed light on the roles of positive and negative samples in making the generated UAP effective for attacking SAM.
no code implementations • 16 Oct 2023 • Sheng Zheng, Chaoning Zhang, Xinhong Hao
TAA on SAM has been realized in a recent arXiv work under a white-box setup that assumes access to both the prompt and the model, which makes it less practical.
no code implementations • 21 Sep 2023 • Thanh Nguyen, Trung Pham, Chaoning Zhang, Tung Luu, Thang Vu, Chang D. Yoo
Self-supervised learning (SSL) has gained remarkable success, for which contrastive learning (CL) plays a key role.
1 code implementation • 11 Sep 2023 • Chenghao Li, Chaoning Zhang
A straightforward way to locally adapt the self-attention matrix can be realized by an element-wise learnable weight mask (ELM), for which our preliminary experiments show promising results.
no code implementations • 25 Jul 2023 • Huy Q. Le, Minh N. H. Nguyen, Chu Myaet Thwal, Yu Qiao, Chaoning Zhang, Choong Seon Hong
Bringing this concept into a system, we develop a distillation-based multimodal embedding knowledge transfer mechanism, namely FedMEKT, which allows the server and clients to exchange the joint knowledge of their learning models extracted from a small multimodal proxy dataset.
no code implementations • 1 Jul 2023 • Dongshen Han, Seungkyu Lee, Chaoning Zhang, Heechan Yoon, Hyukmin Kwon, Hyun-Cheol Kim, Hyon-Gon Choo
Inspired by prior semantic segmentation approaches with challenging image types such as X-ray or CT scans, we propose separated internal-external boundary attention modules that individually learn and selectively integrate visual characteristics of the inside and outside region of glass surface from a single color image.
2 code implementations • 25 Jun 2023 • Chaoning Zhang, Dongshen Han, Yu Qiao, Jung Uk Kim, Sung-Ho Bae, Seungkyu Lee, Choong Seon Hong
Concretely, we distill the knowledge from the heavy image encoder (ViT-H in the original SAM) to a lightweight image encoder, which can be automatically compatible with the mask decoder in the original SAM.
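The decoupled distillation idea above can be sketched as follows: the student encoder is trained to reproduce the teacher encoder's image embeddings directly, so the frozen mask decoder still works downstream. The linear student, dimensions, and learning rate here are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def distill_step(student_w, teacher_embed, images, lr=0.3):
    """One gradient step of encoder-only distillation with a linear
    student: minimize the MSE between the student's embeddings and the
    (precomputed) teacher embeddings.

    student_w:     (d_in, d_out) linear student "encoder" weights.
    teacher_embed: (n, d_out) embeddings from the heavy teacher encoder.
    images:        (n, d_in) flattened inputs.
    """
    pred = images @ student_w
    err = pred - teacher_embed              # residual to the teacher
    grad = images.T @ err / len(images)     # gradient of 0.5 * MSE
    return student_w - lr * grad, float((err ** 2).mean())
```

Because the mask decoder is never touched, only the image-embedding space has to be matched, which is what makes the training cheap.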
no code implementations • 23 Jun 2023 • Xinru Shan, Chaoning Zhang
Given its impressive performance, there is a strong desire to apply SAM in autonomous driving to improve the performance of vision tasks, particularly in challenging scenarios such as driving under adverse weather conditions.
no code implementations • 13 Jun 2023 • Yu Qiao, Chaoning Zhang, Taegoo Kang, Donghun Kim, Chenshuang Zhang, Choong Seon Hong
Interpreting the effects of synthetic corruption as style changes, we then conduct a comprehensive evaluation of its robustness against 15 types of common corruption.
no code implementations • 3 Jun 2023 • Chaoning Zhang, Yu Qiao, Shehbaz Tariq, Sheng Zheng, Chenshuang Zhang, Chenghao Li, Hyundong Shin, Choong Seon Hong
Different from label-oriented recognition tasks, SAM is trained to predict a mask covering the object shape based on a prompt.
no code implementations • 3 Jun 2023 • Shehbaz Tariq, Brian Estadimas Arfeto, Chaoning Zhang, Hyundong Shin
In light of the diminishing returns of traditional methods for enhancing transmission rates, the domain of semantic communication presents promising new frontiers.
no code implementations • 12 May 2023 • Chaoning Zhang, Fachrina Dewi Puspitasari, Sheng Zheng, Chenghao Li, Yu Qiao, Taegoo Kang, Xinru Shan, Chenshuang Zhang, Caiyan Qin, Francois Rameau, Lik-Hang Lee, Sung-Ho Bae, Choong Seon Hong
This is an ongoing project and we intend to update the manuscript on a regular basis.
no code implementations • 10 May 2023 • Chenghao Li, Chaoning Zhang
On top of that, this work presents an outlook on the development of AIGC in 3D from the data perspective.
no code implementations • 10 May 2023 • Chenghao Li, Chaoning Zhang, Atish Waghwase, Lik-Hang Lee, Francois Rameau, Yang Yang, Sung-Ho Bae, Choong Seon Hong
Generative AI (AIGC, a.k.a. AI-generated content) has made remarkable progress in the past few years, among which text-guided content generation is the most practical one since it enables the interaction between human instruction and AIGC.
no code implementations • 1 May 2023 • Chenshuang Zhang, Chaoning Zhang, Taegoo Kang, Donghun Kim, Sung-Ho Bae, In So Kweon
Beyond the basic goal of mask removal, we further investigate and find that it is possible to generate any desired mask by the adversarial attack.
no code implementations • 29 Apr 2023 • Dongsheng Han, Chaoning Zhang, Yu Qiao, Maryam Qamar, Yuna Jung, Seungkyu Lee, Sung-Ho Bae, Choong Seon Hong
Meta AI Research has recently released SAM (Segment Anything Model) which is trained on a large segmentation dataset of over 1 billion masks.
no code implementations • 4 Apr 2023 • Chaoning Zhang, Chenshuang Zhang, Chenghao Li, Yu Qiao, Sheng Zheng, Sumit Kumar Dam, Mengchun Zhang, Jung Uk Kim, Seong Tae Kim, Jinwoo Choi, Gyeong-Moon Park, Sung-Ho Bae, Lik-Hang Lee, Pan Hui, In So Kweon, Choong Seon Hong
Overall, this work is the first to survey ChatGPT with a comprehensive review of its underlying technology, applications, and challenges.
no code implementations • 4 Apr 2023 • Mengchun Zhang, Maryam Qamar, Taegoo Kang, Yuna Jung, Chenshuang Zhang, Sung-Ho Bae, Chaoning Zhang
Diffusion models have become a new SOTA generative modeling method in various fields, and multiple survey works already provide an overall picture of them.
no code implementations • 1 Apr 2023 • Yu Qiao, Md. Shirajum Munir, Apurba Adhikary, Huy Q. Le, Avi Deb Raha, Chaoning Zhang, Choong Seon Hong
The existing single-prototype strategy represents each class by the mean of its feature vectors.
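The single-prototype baseline the entry refers to can be sketched in a few lines: one mean feature vector per class, with classification by nearest prototype. The function names and toy dimensions are illustrative assumptions:

```python
import numpy as np

def class_prototypes(feats, labels):
    """Single-prototype representation: one mean feature vector per class."""
    classes = np.unique(labels)
    return classes, np.stack([feats[labels == c].mean(axis=0) for c in classes])

def nearest_prototype(x, classes, protos):
    """Classify a feature vector by its nearest class prototype."""
    return classes[np.linalg.norm(protos - x, axis=1).argmin()]
```

A multi-prototype variant would replace the single mean with several cluster centers per class, which is the kind of refinement such prototype-based FL work targets.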
no code implementations • 23 Mar 2023 • Chenshuang Zhang, Chaoning Zhang, Sheng Zheng, Mengchun Zhang, Maryam Qamar, Sung-Ho Bae, In So Kweon
This work conducts a survey on audio diffusion models, which is complementary to existing surveys that either lack the recent progress of diffusion-based speech synthesis or highlight an overall picture of applying diffusion models in multiple fields.
no code implementations • 21 Mar 2023 • Chaoning Zhang, Chenshuang Zhang, Sheng Zheng, Yu Qiao, Chenghao Li, Mengchun Zhang, Sumit Kumar Dam, Chu Myaet Thwal, Ye Lin Tun, Le Luang Huy, Donguk Kim, Sung-Ho Bae, Lik-Hang Lee, Yang Yang, Heng Tao Shen, In So Kweon, Choong Seon Hong
As ChatGPT goes viral, generative AI (AIGC, a.k.a. AI-generated content) has made headlines everywhere because of its ability to analyze and create text, images, and beyond.
no code implementations • 14 Mar 2023 • Chenshuang Zhang, Chaoning Zhang, Mengchun Zhang, In So Kweon
This survey reviews text-to-image diffusion models in the context that diffusion models have emerged to be popular for a wide range of generative tasks.
no code implementations • 16 Dec 2022 • Junha Song, KwanYong Park, Inkyu Shin, Sanghyun Woo, Chaoning Zhang, In So Kweon
In addition, to prevent overfitting of the TTA model, we devise novel regularization which modulates the adaptation rates using domain-similarity between the source and the current target domain.
no code implementations • 11 Aug 2022 • Trung Pham, Chaoning Zhang, Axi Niu, Kang Zhang, Chang D. Yoo
Exponential Moving Average (EMA or momentum) is widely used in modern self-supervised learning (SSL) approaches, such as MoCo, for enhancing performance.
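The EMA mechanism named above is simple enough to state exactly: the target (key) encoder's parameters track an exponential moving average of the online (query) encoder's, as in MoCo. A minimal sketch, with the momentum value as an illustrative default:

```python
def ema_update(online_params, target_params, momentum=0.99):
    """MoCo-style momentum update: each target (key) encoder parameter
    tracks an exponential moving average of its online (query) twin,
    so the key representations evolve slowly and stay consistent."""
    return [momentum * t + (1.0 - momentum) * o
            for o, t in zip(online_params, target_params)]
```

Applied after every optimizer step on the online network, this keeps the target network a smoothed, lagged copy rather than a trained one.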
no code implementations • 30 Jul 2022 • Chaoning Zhang, Chenshuang Zhang, Junha Song, John Seon Keun Yi, Kang Zhang, In So Kweon
Masked autoencoders are scalable vision learners, as the title of MAE \cite{he2022masked}, which suggests that self-supervised learning (SSL) in vision might undertake a similar trajectory as in NLP.
2 code implementations • 22 Jul 2022 • Chaoning Zhang, Kang Zhang, Chenshuang Zhang, Axi Niu, Jiu Feng, Chang D. Yoo, In So Kweon
Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields.
no code implementations • 30 Mar 2022 • Chaoning Zhang, Philipp Benz, Adil Karjauv, Jae Won Cho, Kang Zhang, In So Kweon
It is widely reported that stronger I-FGSM transfers worse than simple FGSM, leading to a popular belief that transferability is at odds with the white-box attack strength.
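The two attacks being contrasted can be sketched side by side; `grad_fn` stands in for the loss gradient w.r.t. the input, and all step sizes are illustrative assumptions:

```python
import numpy as np

def fgsm(x, grad_fn, eps):
    """Single-step FGSM: move eps along the sign of the loss gradient."""
    return x + eps * np.sign(grad_fn(x))

def i_fgsm(x, grad_fn, eps, alpha, steps):
    """Iterative FGSM: small signed steps, projected back into the
    l_inf ball of radius eps around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # l_inf projection
    return x_adv
```

I-FGSM is the stronger white-box attack, yet its perturbations tend to overfit the source model, which is the transferability puzzle this entry examines.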
no code implementations • 30 Mar 2022 • Chaoning Zhang, Kang Zhang, Chenshuang Zhang, Trung X. Pham, Chang D. Yoo, In So Kweon
This yields a unified perspective on how negative samples and SimSiam alleviate collapse.
2 code implementations • CVPR 2022 • Chaoning Zhang, Kang Zhang, Trung X. Pham, Axi Niu, Zhinan Qiao, Chang D. Yoo, In So Kweon
Contrastive learning (CL) is widely known to require many negative samples, 65536 in MoCo for instance, for which the performance of a dictionary-free framework is often inferior because the negative sample size (NSS) is limited by its mini-batch size (MBS).
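The role of the negative sample size can be made concrete with a per-query InfoNCE loss; the temperature and vector shapes below are illustrative assumptions:

```python
import numpy as np

def info_nce(q, k_pos, negatives, tau=0.2):
    """InfoNCE loss for one query q: one positive key vs a bank of
    negative keys. In MoCo the bank holds e.g. 65536 negatives, while a
    dictionary-free method is capped at its mini-batch size."""
    logits = np.concatenate([[q @ k_pos], negatives @ q]) / tau
    logits = logits - logits.max()  # numerical stability
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))
```

The loss is small when the query aligns with its positive key and is well separated from the negatives, and grows as that contrast weakens.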
no code implementations • 9 Mar 2022 • Qilong Zhang, Chaoning Zhang, Chaoqun Li, Jingkuan Song, Lianli Gao
In this paper, we move a step forward and show the existence of a \textbf{training-free} adversarial perturbation under the no-box threat model, which can be successfully used to attack different DNNs in real-time.
no code implementations • 11 Feb 2022 • Axi Niu, Kang Zhang, Chaoning Zhang, Chenshuang Zhang, In So Kweon, Chang D. Yoo, Yanning Zhang
The former works only for a relatively small perturbation size of 8/255 under the l_\infty constraint, and GradAlign improves it by extending the perturbation size to 16/255 (with the l_\infty constraint) but at the cost of being 3 to 4 times slower.
no code implementations • CVPR 2022 • Chaoning Zhang, Philipp Benz, Adil Karjauv, Jae Won Cho, Kang Zhang, In So Kweon
It is widely reported that stronger I-FGSM transfers worse than simple FGSM, leading to a popular belief that transferability is at odds with the white-box attack strength.
1 code implementation • 17 Oct 2021 • Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Jun Zhu, Fangcheng Liu, Chao Zhang, Hongyang Zhang, Yichi Zhang, Shilong Liu, Chang Liu, Wenzhao Xiang, Yajie Wang, Huipeng Zhou, Haoran Lyu, Yidan Xu, Zixuan Xu, Taoyu Zhu, Wenjun Li, Xianfeng Gao, Guoqiu Wang, Huanqian Yan, Ying Guo, Chaoning Zhang, Zheng Fang, Yang Wang, Bingyang Fu, Yunfei Zheng, Yekui Wang, Haorong Luo, Zhen Yang
Many works have investigated the adversarial attacks or defenses under the settings where a bounded and imperceptible perturbation can be added to the input.
1 code implementation • 6 Oct 2021 • Philipp Benz, Soomin Ham, Chaoning Zhang, Adil Karjauv, In So Kweon
Thus, it is critical for the community to know whether the newly proposed ViT and MLP-Mixer are also vulnerable to adversarial attacks.
no code implementations • ICLR 2022 • Chaoning Zhang, Kang Zhang, Chenshuang Zhang, Trung X. Pham, Chang D. Yoo, In So Kweon
Towards avoiding collapse in self-supervised learning (SSL), contrastive loss is widely used but often requires a large number of negative samples.
no code implementations • 29 Sep 2021 • Zhinan Qiao, Xiaohui Yuan, Chaoning Zhang, Jianfang Shi, Jian Xia
Most deep learning backbones are evaluated on ImageNet.
no code implementations • 29 Sep 2021 • Chaoning Zhang, Gyusang Cho, Philipp Benz, Kang Zhang, Chenshuang Zhang, Chan-Hyun Youn, In So Kweon
The transferability of adversarial examples (AE), known as adversarial transferability, has attracted significant attention because it can be exploited for transferable black-box attacks (TBA).
no code implementations • ICML Workshop AML 2021 • Jiawang Bai, Bin Chen, Dongxian Wu, Chaoning Zhang, Shu-Tao Xia
We propose $universal \ adversarial \ head$ (UAH), which crafts adversarial query videos by prepending the original videos with a sequence of adversarial frames to perturb the normal hash codes in the Hamming space.
no code implementations • 19 Apr 2021 • Dawit Mureja Argaw, Junsik Kim, Francois Rameau, Chaoning Zhang, In So Kweon
We formulate video restoration from a single blurred image as an inverse problem by setting clean image sequence and their respective motion as latent factors, and the blurred image as an observation.
no code implementations • 7 Apr 2021 • Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon
The SOTA universal adversarial training (UAT) method optimizes a single perturbation for all training samples in the mini-batch.
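The shared-perturbation update described above can be sketched as follows; `grad_fn` stands in for the per-sample loss gradient, and the step size and radius are illustrative assumptions:

```python
import numpy as np

def uat_perturbation_step(delta, batch, grad_fn, alpha, eps):
    """One UAT-style update: a single universal perturbation delta is
    moved along the batch-averaged signed gradient, then projected back
    into the l_inf ball of radius eps."""
    g = np.mean([grad_fn(x + delta) for x in batch], axis=0)
    delta = delta + alpha * np.sign(g)
    return np.clip(delta, -eps, eps)
```

Because one delta must raise the loss on every sample in the mini-batch at once, the update uses the batch-averaged gradient rather than per-sample gradients.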
1 code implementation • 2 Mar 2021 • Chaoning Zhang, Philipp Benz, Chenguo Lin, Adil Karjauv, Jing Wu, In So Kweon
The intriguing phenomenon of adversarial examples has attracted significant attention in machine learning, and what might be more surprising to the community is the existence of universal adversarial perturbations (UAPs), i.e., a single perturbation to fool the target DNN for most images.
no code implementations • 2 Mar 2021 • Chaoning Zhang, Chenguo Lin, Philipp Benz, Kejiang Chen, Weiming Zhang, In So Kweon
Data hiding is the art of concealing messages with limited perceptual changes.
no code implementations • 12 Feb 2021 • Chaoning Zhang, Philipp Benz, Adil Karjauv, In So Kweon
We perform task-specific and joint analysis and reveal that (a) frequency is a key factor that influences their performance based on the proposed entropy metric for quantifying the frequency distribution; (b) their success can be attributed to a DNN being highly sensitive to high-frequency content.
no code implementations • ICCV 2021 • Chaoning Zhang, Philipp Benz, Adil Karjauv, In So Kweon
For a more practical universal attack, our investigation of untargeted UAP focuses on alleviating the dependence on the original training samples, from removing the need for sample labels to limiting the sample size.
1 code implementation • 30 Dec 2020 • Chaoning Zhang, Adil Karjauv, Philipp Benz, In So Kweon
Recently, deep learning has shown large success in data hiding, while non-differentiability of JPEG makes it challenging to train a deep pipeline for improving robustness against lossy compression.
1 code implementation • NeurIPS 2020 • Chaoning Zhang, Philipp Benz, Adil Karjauv, Geng Sun, In So Kweon
This is the first work demonstrating the success of (DNN-based) hiding a full image for watermarking and LFM.
no code implementations • 26 Oct 2020 • Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon
Adversarial training is the most widely used technique for improving adversarial robustness to strong white-box attacks.
1 code implementation • 23 Oct 2020 • Chaoning Zhang, Philipp Benz, Dawit Mureja Argaw, Seokju Lee, Junsik Kim, Francois Rameau, Jean-Charles Bazin, In So Kweon
ResNet or DenseNet?
1 code implementation • 7 Oct 2020 • Philipp Benz, Chaoning Zhang, Tooba Imtiaz, In So Kweon
This universal perturbation shifts samples from one targeted source class into a sink class, while having a limited adversarial effect on other non-targeted source classes, to avoid raising suspicion.
no code implementations • 7 Oct 2020 • Chaoning Zhang, Philipp Benz, Tooba Imtiaz, In So Kweon
Since the proposed attack generates a universal adversarial perturbation that is discriminative to targeted and non-targeted classes, we term it class discriminative universal adversarial perturbation (CD-UAP).
1 code implementation • ICCV 2021 • Philipp Benz, Chaoning Zhang, In So Kweon
This work attempts to understand the impact of BN on DNNs from a non-robust feature perspective.
no code implementations • 7 Oct 2020 • Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon
We find that simply estimating and adapting the BN statistics on a few (32 for instance) representation samples, without retraining the model, improves the corruption robustness by a large margin on several benchmark datasets with a wide range of model architectures.
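The statistics-adaptation idea can be sketched without any framework: re-estimate the BN mean and variance from a small corrupted batch and normalize with the new statistics, leaving all learned weights untouched. The `momentum` blending knob and function names are illustrative assumptions:

```python
import numpy as np

def adapt_bn_stats(train_mean, train_var, samples, momentum=1.0):
    """Re-estimate BN statistics from a handful (e.g. 32) of samples
    from the corrupted target distribution, without retraining weights.
    momentum=1.0 discards the training-time statistics entirely;
    values in (0, 1) blend the two."""
    m, v = samples.mean(axis=0), samples.var(axis=0)
    new_mean = (1 - momentum) * train_mean + momentum * m
    new_var = (1 - momentum) * train_var + momentum * v
    return new_mean, new_var

def batchnorm(x, mean, var, eps=1e-5):
    """Normalize activations with the given statistics (affine omitted)."""
    return (x - mean) / np.sqrt(var + eps)
```

The corruption shifts the activation statistics away from those stored during training; swapping in statistics measured on the shifted data is what recovers the accuracy.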
1 code implementation • CVPR 2020 • Chaoning Zhang, Philipp Benz, Tooba Imtiaz, In-So Kweon
We utilize this vector representation to understand adversarial examples by disentangling the clean images and adversarial perturbations, and analyze their influence on each other.
no code implementations • 13 Jul 2020 • Philipp Benz, Chaoning Zhang, Tooba Imtiaz, In-So Kweon
We repeat the process of Data to Model (DtM) and Data from Model (DfM) in sequence and explore the loss of feature mapping information by measuring the accuracy drop on the original validation dataset.