Search Results for author: Wenjie Qu

Found 5 papers, 2 papers with code

A Certified Radius-Guided Attack Framework to Image Segmentation Models

1 code implementation • 5 Apr 2023 • Wenjie Qu, Youqi Li, Binghui Wang

We are the first, from the attacker's perspective, to leverage the properties of the certified radius and propose a certified radius-guided attack framework against image segmentation models (a hedged sketch of this idea follows the entry).

Image Classification • Image Segmentation • +2
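
The excerpt only states that the certified radius guides the attack. As a rough illustration of one way such guidance could work (not the paper's algorithm), the sketch below concentrates an L∞ perturbation on the pixels whose precomputed certified radii are smallest, on the assumption that those pixels are easiest to flip; the function names, the budget heuristic, and the `radius_map`/`grad` inputs are all illustrative assumptions.

```python
import numpy as np

def radius_guided_mask(radius_map: np.ndarray, budget_fraction: float = 0.3) -> np.ndarray:
    """Select the fraction of pixels with the smallest certified radii."""
    k = max(1, int(budget_fraction * radius_map.size))
    threshold = np.partition(radius_map.ravel(), k - 1)[k - 1]
    return (radius_map <= threshold).astype(np.float32)

def radius_guided_step(image: np.ndarray, grad: np.ndarray,
                       radius_map: np.ndarray, eps: float = 8 / 255) -> np.ndarray:
    """One signed-gradient step restricted to low-certified-radius pixels.
    `grad` is the loss gradient w.r.t. the image, computed elsewhere."""
    mask = radius_guided_mask(radius_map)            # shape (H, W)
    step = eps * np.sign(grad) * mask[None, ...]     # broadcast over channels
    return np.clip(image + step, 0.0, 1.0)
```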

REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service

no code implementations • 7 Jan 2023 • Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong

We show that the cloud service only needs to provide two carefully designed APIs to enable a client to certify the robustness of its downstream classifier with a minimal number of queries to the APIs (a client-side sketch follows the entry).

Self-Supervised Learning
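
The REaaS excerpt says that two carefully designed APIs let a client certify its downstream classifier with few service queries. The sketch below is a minimal client-side illustration under two assumptions of mine: the downstream classifier is linear, and the second API returns an upper bound on how far the feature vector can move under an input perturbation of a given size. The API names, signatures, and the certification rule are hypothetical, not the paper's interface.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

def linear_feature_radius(W: np.ndarray, b: np.ndarray, feat: np.ndarray) -> float:
    """L2 radius in feature space within which a *linear* downstream
    classifier's top-1 prediction provably cannot change."""
    scores = W @ feat + b
    top = int(np.argmax(scores))
    radii = [
        (scores[top] - scores[j]) / (np.linalg.norm(W[top] - W[j]) + 1e-12)
        for j in range(len(scores)) if j != top
    ]
    return float(min(radii))

@dataclass
class RobustEncoderClient:
    """Client for a hypothetical robust-encoder service exposing two APIs:
    one returns an input's feature vector, the other returns an upper bound
    on the feature displacement caused by an input perturbation of a given
    size. Both callables are stand-ins for illustration."""
    feature_api: Callable[[np.ndarray], np.ndarray]
    feature_perturb_api: Callable[[np.ndarray, float], float]

    def certify(self, x: np.ndarray, W: np.ndarray, b: np.ndarray,
                input_radius: float) -> bool:
        feat = self.feature_api(x)                              # query 1
        feat_bound = self.feature_perturb_api(x, input_radius)  # query 2
        # Certified if the classifier's feature-space margin exceeds the
        # worst-case feature-space displacement.
        return linear_feature_radius(W, b, feat) > feat_bound
```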

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning

no code implementations • 6 Dec 2022 • Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong

In this work, we perform the first systematic, principled measurement study to understand whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms (the pattern under study is sketched after the entry).

Data Poisoning • Machine Unlearning • +2
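
As a rough sketch of the pattern this study measures (not the paper's experimental setup), the snippet below extracts features once with a frozen pre-trained encoder and feeds them to a deliberately simple downstream learner; the encoder callable and the nearest-centroid stand-in are placeholders for whatever secure or privacy-preserving algorithm is being evaluated.

```python
from typing import Callable
import numpy as np

def extract_features(encoder: Callable[[np.ndarray], np.ndarray],
                     images: np.ndarray) -> np.ndarray:
    """Run a frozen pre-trained encoder once over a (small) labeled set."""
    return np.stack([encoder(img) for img in images])

class NearestCentroid:
    """Toy downstream learner that operates on encoder features instead of
    raw pixels; it stands in for a secure / privacy-preserving algorithm."""
    def fit(self, feats: np.ndarray, labels: np.ndarray) -> "NearestCentroid":
        self.classes_ = np.unique(labels)
        self.centroids_ = np.stack(
            [feats[labels == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, feats: np.ndarray) -> np.ndarray:
        dists = np.linalg.norm(
            feats[:, None, :] - self.centroids_[None, :, :], axis=-1)
        return self.classes_[np.argmin(dists, axis=1)]
```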

MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples

1 code implementation • 3 Oct 2022 • Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong

In this work, we propose MultiGuard, the first provably robust defense against adversarial examples for multi-label classification (a generic illustration follows the entry).

Classification • Multi-class Classification • +1
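
The excerpt does not describe MultiGuard's mechanism or its certified guarantee. As a generic illustration of the randomized-smoothing flavor of defense one would expect in this setting (an assumption on my part, not the paper's algorithm), the sketch below votes over Gaussian-noised copies of the input and returns the labels that appear most often.

```python
from typing import Callable
import numpy as np

def smoothed_multilabel_predict(
    base_scores: Callable[[np.ndarray], np.ndarray],  # per-label scores
    x: np.ndarray,
    k: int = 3,            # number of labels to output
    n_samples: int = 100,  # Monte Carlo noise samples
    sigma: float = 0.25,   # Gaussian noise level
    seed: int = 0,
) -> np.ndarray:
    """Vote over Gaussian-noised copies of x and return the k labels that
    appear most often among each noisy copy's top-k predictions."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(base_scores(x).shape[0], dtype=np.int64)
    for _ in range(n_samples):
        noisy = x + sigma * rng.standard_normal(x.shape)
        votes[np.argsort(base_scores(noisy))[-k:]] += 1
    return np.argsort(votes)[-k:]
```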

EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning

no code implementations • 25 Aug 2021 • Hongbin Liu, Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong

EncoderMI can be used 1) by a data owner to audit whether its (public) data was used to pre-train an image encoder without authorization, or 2) by an attacker to compromise the privacy of the training data when it is private/sensitive (a hedged sketch of one scoring idea follows the entry).

Contrastive Learning
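
The excerpt describes who could use EncoderMI, not how it works. A common intuition for membership inference against contrastively trained encoders is that augmented views of a training member map to unusually similar embeddings; the sketch below scores an input that way and thresholds the score. The augmentation callable, the scoring rule, and the threshold are illustrative assumptions, not necessarily the paper's exact method.

```python
from typing import Callable
import numpy as np

def membership_score(encoder: Callable[[np.ndarray], np.ndarray],
                     augment: Callable[[np.ndarray], np.ndarray],
                     x: np.ndarray, n_views: int = 10) -> float:
    """Average pairwise cosine similarity among embeddings of augmented
    views of x; higher scores hint that x resembles the encoder's
    pre-training data."""
    views = np.stack([encoder(augment(x)) for _ in range(n_views)])
    views /= np.linalg.norm(views, axis=1, keepdims=True) + 1e-12
    sims = views @ views.T
    return float(sims[~np.eye(n_views, dtype=bool)].mean())

def infer_membership(score: float, threshold: float = 0.9) -> bool:
    """In practice the threshold is calibrated on reference (shadow) data;
    the value here is an arbitrary placeholder."""
    return score > threshold
```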
