Search Results for author: Yuyuan Zeng

Found 5 papers, 4 papers with code

Towards Effective Image Manipulation Detection with Proposal Contrastive Learning

1 code implementation · 16 Oct 2022 · Yuyuan Zeng, Bowen Zhao, Shanzhao Qiu, Tao Dai, Shu-Tao Xia

Most existing methods focus mainly on extracting global features from tampered images, while neglecting the relationships between local features of tampered and authentic regions within a single tampered image.

Contrastive Learning · Image Manipulation · +1

Improving Adversarial Robustness via Channel-wise Activation Suppressing

1 code implementation · ICLR 2021 · Yang Bai, Yuyuan Zeng, Yong Jiang, Shu-Tao Xia, Xingjun Ma, Yisen Wang

The study of adversarial examples and their activation has attracted significant attention for secure and robust learning with deep neural networks (DNNs).

Adversarial Robustness

Improving Query Efficiency of Black-box Adversarial Attack

1 code implementation · ECCV 2020 · Yang Bai, Yuyuan Zeng, Yong Jiang, Yisen Wang, Shu-Tao Xia, Weiwei Guo

Deep neural networks (DNNs) have demonstrated excellent performance on various tasks; however, they are at risk of adversarial examples, which can be easily generated when the target model is accessible to an attacker (white-box setting).

Adversarial Attack

Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters

1 code implementation · ECCV 2020 · Haoyu Liang, Zhihao Ouyang, Yuyuan Zeng, Hang Su, Zihao He, Shu-Tao Xia, Jun Zhu, Bo Zhang

Most existing works attempt post-hoc interpretation on a pre-trained model, while neglecting to reduce the entanglement underlying the model.

Object Localization

Training Interpretable Convolutional Neural Networks towards Class-specific Filters

no code implementations · 25 Sep 2019 · Haoyu Liang, Zhihao Ouyang, Hang Su, Yuyuan Zeng, Zihao He, Shu-Tao Xia, Jun Zhu, Bo Zhang

Convolutional neural networks (CNNs) have often been treated as "black boxes" yet have been used successfully in a range of tasks.
