Search Results for author: Jingzhi Li

Found 9 papers, 5 papers with code

Logit Standardization in Knowledge Distillation

2 code implementations • 3 Mar 2024 • Shangquan Sun, Wenqi Ren, Jingzhi Li, Rui Wang, Xiaochun Cao

Knowledge distillation involves transferring soft labels from a teacher to a student using a shared temperature-based softmax function.
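The paper's title points at replacing the shared logit scale with a per-sample standardization before the temperature softmax. A minimal sketch of that idea, assuming a z-score standardization and a standard KL distillation loss (the function names and weighting here are illustrative, not the authors' code):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def standardize(logits, eps=1e-7):
    # z-score each sample's logit vector: zero mean, unit variance,
    # so teacher and student soft labels no longer depend on logit scale
    mu = logits.mean(axis=-1, keepdims=True)
    sigma = logits.std(axis=-1, keepdims=True)
    return (logits - mu) / (sigma + eps)

def kd_kl(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened, standardized logits
    s = softmax(standardize(student_logits) / temperature)
    t = softmax(standardize(teacher_logits) / temperature)
    return float(np.mean(np.sum(t * (np.log(t) - np.log(s)), axis=-1)))
```

Because the z-score is invariant to shifting and positive rescaling, a teacher whose logits are an affine transform of the student's incurs (near) zero loss, which is the scale-mismatch problem the standardization is meant to remove.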

Knowledge Distillation

Less is More: Fewer Interpretable Region via Submodular Subset Selection

1 code implementation • 14 Feb 2024 • Ruoyu Chen, Hua Zhang, Siyuan Liang, Jingzhi Li, Xiaochun Cao

For incorrectly predicted samples, our method achieves gains of 81.0% and 18.4% over the HSIC-Attribution algorithm in average highest confidence and Insertion score, respectively.

Interpretability Techniques for Deep Learning

Privacy-Enhancing Face Obfuscation Guided by Semantic-Aware Attribution Maps

no code implementations • IEEE Transactions on Information Forensics and Security 2023 • Jingzhi Li, Hua Zhang, Siyuan Liang, Pengwen Dai, Xiaochun Cao

Within this module, we introduce a pixel-importance estimation model based on the Shapley value to obtain a pixel-level attribution map; the pixels on the attribution map are then aggregated into semantic facial parts, which are used to quantify the importance of different facial parts.
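The aggregation step described above can be sketched as summing per-pixel attributions over each part's mask. This is a generic illustration, not the paper's implementation; the part masks and the sum-to-one normalization are assumptions:

```python
import numpy as np

def part_importance(attribution, part_masks):
    """Aggregate a pixel-level attribution map into per-part scores.

    attribution: (H, W) array of per-pixel importance (e.g. Shapley values)
    part_masks: dict mapping a part name to an (H, W) boolean mask
    Returns scores normalized so their magnitudes sum to 1 (assumed).
    """
    raw = {name: float(attribution[mask].sum())
           for name, mask in part_masks.items()}
    total = sum(abs(v) for v in raw.values()) or 1.0
    return {name: v / total for name, v in raw.items()}
```

A face obfuscation pipeline could then rank parts by these scores and perturb the most identity-relevant regions first.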

Face Recognition

Rethinking Feature-Based Knowledge Distillation for Face Recognition

no code implementations • CVPR 2023 • Jingzhi Li, Zidong Guo, Hui Li, Seungju Han, Ji-won Baek, Min Yang, Ran Yang, Sungjoo Suh

By constraining the teacher's search space with reverse distillation, we narrow the intrinsic gap and unleash the potential of feature-only distillation.

Face Recognition • Knowledge Distillation

Towards Generalized Few-Shot Open-Set Object Detection

2 code implementations • 28 Oct 2022 • Binyi Su, Hua Zhang, Jingzhi Li, Zhong Zhou

In this paper, we seek a solution to generalized few-shot open-set object detection (G-FOOD), which aims to avoid detecting unknown classes as known classes with high confidence while maintaining few-shot detection performance.

Few-Shot Object Detection • Open-Set Object Detection +2

Exploring Inconsistent Knowledge Distillation for Object Detection with Data Augmentation

1 code implementation • 20 Sep 2022 • Jiawei Liang, Siyuan Liang, Aishan Liu, Ke Ma, Jingzhi Li, Xiaochun Cao

Specifically, we propose a sample-specific data augmentation to transfer the teacher model's ability to capture distinct frequency components, and an adversarial feature augmentation to extract the teacher model's perception of non-robust features in the data.
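A frequency-oriented augmentation in this spirit might split an image into low- and high-frequency components via FFT masking. The sketch below is a generic band filter under assumed parameters, not the authors' sample-specific policy:

```python
import numpy as np

def frequency_band_augment(image, keep="low", cutoff=0.25):
    """Keep only the low- or high-frequency components of a grayscale image.

    image: (H, W) float array
    cutoff: fraction of the spectrum radius treated as "low" frequency
    """
    H, W = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    yy, xx = np.ogrid[:H, :W]
    radius = np.sqrt((yy - H / 2) ** 2 + (xx - W / 2) ** 2)
    max_r = np.sqrt((H / 2) ** 2 + (W / 2) ** 2)
    low_mask = radius <= cutoff * max_r
    mask = low_mask if keep == "low" else ~low_mask
    # zero out the rejected band and invert the transform
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)
```

Since the two masks partition the spectrum, the low- and high-pass outputs sum back to the original image, so a distillation loss can weight each band separately.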

Data Augmentation • Knowledge Distillation +2

A Large-scale Multiple-objective Method for Black-box Attack against Object Detection

no code implementations • 16 Sep 2022 • Siyuan Liang, Longkang Li, Yanbo Fan, Xiaojun Jia, Jingzhi Li, Baoyuan Wu, Xiaochun Cao

Recent studies have shown that detectors based on deep models are vulnerable to adversarial examples, even in the black-box scenario where the attacker cannot access the model information.

Object Detection
