1 code implementation • 5 Apr 2023 • Wenjie Qu, Youqi Li, Binghui Wang
We are the first, from the attacker's perspective, to leverage the properties of the certified radius and propose a certified-radius-guided attack framework against image segmentation models.
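The certified radius this work builds on comes from randomized smoothing: for a Gaussian-smoothed classifier, Cohen et al.'s bound gives a radius within which the prediction provably cannot change. A minimal sketch of that standard formula (not the paper's attack itself; the function name and toy inputs are illustrative):

```python
from statistics import NormalDist

def certified_radius(p_top: float, p_runner_up: float, sigma: float) -> float:
    """Certified L2 radius of a Gaussian-smoothed classifier
    (Cohen et al., 2019): R = sigma/2 * (Phi^-1(pA) - Phi^-1(pB)),
    where pA and pB bound the top-two class probabilities under
    noise N(0, sigma^2 I)."""
    inv_cdf = NormalDist().inv_cdf
    return sigma / 2.0 * (inv_cdf(p_top) - inv_cdf(p_runner_up))

# A pixel classified with high confidence under noise has a large
# certified radius, i.e. it is provably hard to flip; a borderline
# pixel has a small one.
r_confident = certified_radius(0.99, 0.01, sigma=0.25)
r_borderline = certified_radius(0.55, 0.45, sigma=0.25)
assert r_confident > r_borderline > 0.0
```

Intuitively, an attacker on a segmentation model can read the per-pixel certified radius as a fragility signal; how the paper turns that signal into a concrete perturbation-budget allocation is detailed in the paper itself.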
no code implementations • 7 Jan 2023 • Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
For the first question, we show that the cloud service only needs to provide two APIs, which we carefully design, to enable a client to certify the robustness of its downstream classifier with a minimal number of queries to the APIs.
no code implementations • 6 Dec 2022 • Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
In this work, we perform the first systematic, principled measurement study to understand whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms.
1 code implementation • 3 Oct 2022 • Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong
In this work, we propose MultiGuard, the first provably robust defense against adversarial examples for multi-label classification.
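MultiGuard extends randomized smoothing to the multi-label setting. A minimal Monte Carlo sketch of that general idea, with a toy base classifier (the function names, sampling counts, and the toy predictor are illustrative assumptions, not the paper's exact construction or its certification procedure):

```python
import random
from collections import Counter

def smoothed_multilabel(predict_labels, x, sigma=0.5, n_samples=200, k=2):
    """Randomized-smoothing-style multi-label prediction: add Gaussian
    noise to the input many times, collect the base classifier's
    predicted label set on each noisy copy, and output the k labels
    predicted most often across samples."""
    votes = Counter()
    for _ in range(n_samples):
        noisy = [xi + random.gauss(0.0, sigma) for xi in x]
        votes.update(predict_labels(noisy))
    return [label for label, _ in votes.most_common(k)]

# Toy base classifier: label i is "on" when coordinate i is positive.
random.seed(0)
toy = lambda v: {i for i, vi in enumerate(v) if vi > 0}
labels = smoothed_multilabel(toy, [5.0, -5.0, 5.0], sigma=0.5, n_samples=200, k=2)
assert set(labels) == {0, 2}
```

The paper's contribution is the provable part: deriving how many of the ground-truth labels are guaranteed to appear in the smoothed output under bounded perturbations, which this voting sketch does not attempt.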
no code implementations • 25 Aug 2021 • Hongbin Liu, Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong
EncoderMI can be used 1) by a data owner to audit whether its (public) data was used to pre-train an image encoder without its authorization, or 2) by an attacker to compromise the privacy of the training data when it is private/sensitive.
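The intuition behind membership inference against contrastively pre-trained encoders is that augmented views of a *training* (member) input tend to embed more similarly than those of a non-member. A hedged sketch of a similarity-based membership score built on that intuition (the helper names, number of views, and thresholding strategy are assumptions for illustration, not EncoderMI's exact method):

```python
import math
from itertools import combinations

def membership_score(embed, augment, x, n_views=10):
    """Average pairwise cosine similarity among embeddings of several
    augmented views of x. Contrastive pre-training tends to pull views
    of member inputs together, so a higher score suggests membership;
    a decision threshold would be tuned on shadow data."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    views = [embed(augment(x)) for _ in range(n_views)]
    sims = [cos(u, v) for u, v in combinations(views, 2)]
    return sum(sims) / len(sims)  # higher => more likely a member
```

In this sketch `embed` is the target encoder (queried as a black box) and `augment` is the same augmentation family used during pre-training; EncoderMI's full pipeline, including how the inference classifier is trained, is described in the paper.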