no code implementations • ECCV 2020 • Junfeng Guo, Cong Liu
Importantly, we show that the effectiveness of BlackCard can be intuitively guaranteed through analytical reasoning and observations that exploit an essential characteristic of gradient-descent optimization, which is pervasively adopted in DNN models.
no code implementations • 14 Mar 2024 • Chenxi Liu, Zhenyi Wang, Tianyi Xiong, Ruibo Chen, Yihan Wu, Junfeng Guo, Heng Huang
Few-Shot Class-Incremental Learning (FSCIL) models aim to incrementally learn new classes with scarce samples while preserving knowledge of old ones.
1 code implementation • 19 Feb 2024 • Ruibo Chen, Yihan Wu, Lichang Chen, Guodong Liu, Qi He, Tianyi Xiong, Chenxi Liu, Junfeng Guo, Heng Huang
In the first stage, we devise a scoring network, co-trained with the VLM, to evaluate the difficulty of training instructions.
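As a rough illustration of this co-training idea, the sketch below (PyTorch) weights each example's loss by a predicted difficulty score so the scorer and the main model learn from one objective; `ScoringNet`, the `vlm` interface, and the weighting scheme are all hypothetical, not the paper's implementation.

```python
# Hypothetical sketch (not the paper's implementation): a scoring head
# co-trained with the main model, weighting each example's loss by its
# predicted difficulty so both networks learn from one objective.
import torch
import torch.nn as nn

class ScoringNet(nn.Module):
    """Maps an instruction embedding to a scalar difficulty score."""
    def __init__(self, dim):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(dim, dim // 2), nn.ReLU(), nn.Linear(dim // 2, 1))

    def forward(self, emb):
        return self.head(emb).squeeze(-1)  # one score per instruction

def co_train_step(vlm, scorer, batch, optimizer):
    # Assumed interface: the VLM returns per-example embeddings and losses.
    emb, loss_per_example = vlm(batch)
    weights = torch.softmax(scorer(emb), dim=0)
    loss = (weights * loss_per_example).sum()  # difficulty-weighted loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```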
no code implementations • 21 Dec 2023 • Lixu Wang, Chenxi Liu, Junfeng Guo, Jiahua Dong, Xiao Wang, Heng Huang, Qi Zhu
In a privacy-focused era, Federated Learning (FL) has emerged as a promising machine learning technique.
no code implementations • 3 Dec 2023 • Yiming Li, Mingyan Zhu, Junfeng Guo, Tao Wei, Shu-Tao Xia, Zhan Qin
We argue that the intensity constraint of existing SSBAs arises mostly because their trigger patterns are `content-irrelevant' and therefore act as `noise' for both humans and DNNs.
no code implementations • 13 Sep 2023 • Hanqing Guo, Xun Chen, Junfeng Guo, Li Xiao, Qiben Yan
In this work, we propose MASTERKEY, a backdoor attack that compromises speaker verification (SV) models.
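For intuition only, a generic audio-backdoor poisoning step might look like the following sketch (numpy); the trigger overlay, poison rate, and relabeling are illustrative assumptions, not MASTERKEY's actual procedure.

```python
# Illustrative only (not MASTERKEY's actual procedure): a generic audio
# backdoor that overlays a quiet trigger waveform on a small fraction of
# training utterances and relabels them as the adversary's target speaker.
import numpy as np

def poison_utterances(waveforms, labels, trigger, target_label,
                      rate=0.05, alpha=0.1, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(waveforms), int(rate * len(waveforms)),
                     replace=False)
    poisoned = [w.copy() for w in waveforms]
    new_labels = list(labels)
    for i in idx:
        n = min(len(trigger), len(poisoned[i]))
        poisoned[i][:n] += alpha * trigger[:n]  # low-amplitude overlay
        new_labels[i] = target_label            # flip to target speaker
    return poisoned, new_labels
```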
no code implementations • ICCV 2023 • Junfeng Guo, Ang Li, Lixu Wang, Cong Liu
To ensure the security of RL agents against malicious backdoors, we formulate the problem of Backdoor Detection in multi-agent RL systems: detecting Trojan agents and their corresponding potential trigger actions, and further mitigating their adverse impact.
no code implementations • 8 Feb 2022 • Junfeng Guo, Ang Li, Cong Liu
To ensure the security of RL agents against malicious backdoors, we formulate the problem of Backdoor Detection in a multi-agent competitive reinforcement learning system: detecting Trojan agents and their corresponding potential trigger actions, and further mitigating their Trojan behavior.
1 code implementation • ICLR 2022 • Junfeng Guo, Ang Li, Cong Liu
We approach this problem from an optimization perspective and show that the backdoor-detection objective is bounded by an adversarial objective.
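A minimal sketch of this optimization view (PyTorch): for each candidate target label, search for a small universal perturbation that flips a clean batch to that label; a label for which such a perturbation is anomalously easy to find is suspicious. The PGD-style search, budget `eps`, and scoring rule are illustrative assumptions, not the paper's exact algorithm.

```python
# A minimal sketch of the optimization view (hyperparameters and the
# PGD-style search are illustrative, not the paper's exact algorithm):
# labels for which a tiny universal perturbation flips a whole clean
# batch are flagged as likely backdoor targets.
import torch
import torch.nn.functional as F

def adversarial_score(model, x, target, steps=100, eps=0.1, lr=0.01):
    delta = torch.zeros_like(x[0], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    tgt = torch.full((x.size(0),), target, dtype=torch.long)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), tgt)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation small
    with torch.no_grad():
        acc = (model(x + delta).argmax(1) == tgt).float().mean()
    return acc.item()  # unusually high => suspicious target label
```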
1 code implementation • 7 May 2021 • Bangjie Yin, Wenxuan Wang, Taiping Yao, Junfeng Guo, Zelun Kong, Shouhong Ding, Jilin Li, Cong Liu
Deep neural networks, particularly face recognition models, have been shown to be vulnerable to both digital and physical adversarial examples.
no code implementations • CVPR 2022 • Xin Dong, Junfeng Guo, Ang Li, Wei-Te Ting, Cong Liu, H. T. Kung
Based on this observation, we propose a novel metric called Neural Mean Discrepancy (NMD), which compares the neural means of input examples against those of the training data.
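A minimal sketch of the metric, assuming per-neuron activations have already been extracted from some layer; the `(N, C)` feature layout and the L1 aggregation are illustrative choices, not the paper's exact recipe.

```python
# A minimal sketch of the NMD idea, assuming per-neuron activations have
# already been extracted from some layer; the (N, C) layout and the L1
# aggregation are illustrative choices, not the paper's exact recipe.
import torch

@torch.no_grad()
def neural_mean_discrepancy(feats_test, train_means):
    """feats_test: (N, C) activations of a test batch;
    train_means: (C,) per-neuron means precomputed on training data."""
    test_means = feats_test.mean(dim=0)
    return (test_means - train_means).abs().sum().item()

# Large values indicate the batch's neural means deviate from the
# training distribution, e.g. for out-of-distribution inputs.
```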
no code implementations • 4 Feb 2021 • Junfeng Guo, Yaswanth Yadlapalli, Thiele Lothar, Ang Li, Cong Liu
PredCoin poisons the gradient estimation step, an essential component of most query-based hard-label (QBHL) attacks.
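For intuition, the sketch below (numpy) shows the Monte Carlo gradient estimate that HopSkipJump-style QBHL attacks build from hard labels, and a PredCoin-like response that randomizes a fraction of answers so the estimate stops converging; the flip rule and binary toy labels are assumptions, not the paper's mechanism.

```python
# Illustrative sketch: the Monte Carlo gradient estimate that
# HopSkipJump-style QBHL attacks build from hard labels, and a
# PredCoin-like response that randomly flips a fraction of answers so
# the estimate stops converging. The flip rule and binary labels are
# assumptions, not the paper's mechanism.
import numpy as np

def estimate_gradient(query_label, x, target, n=50, sigma=0.01, seed=0):
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x, dtype=float)
    for _ in range(n):
        u = rng.standard_normal(x.shape)
        phi = 1.0 if query_label(x + sigma * u) == target else -1.0
        g += phi * u                   # hard labels vote on a direction
    g /= n
    return g / (np.linalg.norm(g) + 1e-12)

def randomized_label(model_label, x, flip_prob=0.3, rng=None):
    rng = rng or np.random.default_rng()
    y = model_label(x)
    # Flipping some boundary answers corrupts the vote above.
    return y if rng.random() > flip_prob else 1 - y  # binary toy case
```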
no code implementations • 16 Oct 2020 • Sarah E. Gerard, Jacob Herrmann, Yi Xin, Kevin T. Martin, Emanuele Rezoagli, Davide Ippolito, Giacomo Bellani, Maurizio Cereda, Junfeng Guo, Eric A. Hoffman, David W. Kaczka, Joseph M. Reinhardt
Regional lobar analysis was performed using hierarchical clustering to identify radiographic subtypes of COVID-19.
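As an illustrative sketch (SciPy), not the study's exact pipeline, agglomerative clustering of standardized per-lobe feature vectors might look like the following; the features and number of subtypes are invented.

```python
# Illustrative sketch (SciPy), not the study's exact pipeline: Ward
# agglomerative clustering of standardized per-lobe feature vectors
# (e.g. densities, texture scores; the features here are invented).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_subtypes(lobe_features, n_subtypes=3):
    """lobe_features: (n_patients, n_features), standardized beforehand."""
    Z = linkage(lobe_features, method='ward')
    return fcluster(Z, t=n_subtypes, criterion='maxclust')

# Toy usage: three clusters from random standardized features.
subtypes = cluster_subtypes(
    np.random.default_rng(0).standard_normal((20, 6)))
```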
no code implementations • 24 Mar 2020 • Junfeng Guo, Ting Wang, Cong Liu
The ability to detect and mitigate poisoning attacks, typically categorized as backdoor or adversarial poisoning (AP), is critical to the safe adoption of DNNs in many application domains.
no code implementations • CVPR 2020 • Zelun Kong, Junfeng Guo, Ang Li, Cong Liu
We compare PhysGAN with a set of state-of-the-art baseline methods, including several self-designed ones; the results further demonstrate the robustness and efficacy of our approach.