Membership Inference Attacks Against Object Detection Models

12 Jan 2020 · Yeachan Park, Myungjoo Kang

Machine learning models can leak information about the data they were trained on. In this paper, we present the first membership inference attack against black-box object detection models, which determines whether given data records were used in training. To attack an object detection model, we devise a novel approach called the canvas method, in which the predicted bounding boxes are drawn on an empty image to form the attack model's input. In our experiments, we successfully reveal the membership status of privacy-sensitive data used to train both one-stage and two-stage detection models. We then propose defense strategies and also conduct transfer attacks across models and datasets. Our results show that object detection models, like other models, are vulnerable to membership inference attacks.
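
The abstract does not spell out how the canvas is constructed, but the idea lends itself to a short sketch. Below is a minimal, illustrative rendering of the canvas method, assuming the target detector returns pixel-coordinate bounding boxes with per-box confidence scores. The function name `render_canvas`, the 300×300 canvas size, and the max-confidence fill rule are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def render_canvas(boxes, scores, image_size=(300, 300)):
    """Draw predicted bounding boxes on an empty canvas.

    Assumed inputs (not specified in the abstract):
      boxes:  iterable of [x1, y1, x2, y2] pixel coordinates
      scores: per-box confidence scores in [0, 1]

    Each box region is filled with its confidence score; where boxes
    overlap, the maximum confidence is kept. The resulting single-channel
    image would then serve as input to the membership attack classifier.
    """
    h, w = image_size
    canvas = np.zeros((h, w), dtype=np.float32)
    for (x1, y1, x2, y2), s in zip(boxes, scores):
        # Clamp coordinates to the canvas bounds before filling.
        x1, y1 = max(int(x1), 0), max(int(y1), 0)
        x2, y2 = min(int(x2), w), min(int(y2), h)
        canvas[y1:y2, x1:x2] = np.maximum(canvas[y1:y2, x1:x2], s)
    return canvas

# Example usage with made-up detector outputs:
boxes = np.array([[30, 40, 120, 200], [100, 90, 250, 260]])
scores = np.array([0.92, 0.55])
canvas = render_canvas(boxes, scores)  # shape (300, 300), values in [0, 1]
```

One intuition behind such a representation is that detectors tend to produce tighter, more confident boxes on training images than on unseen ones, and rendering the boxes as an image lets a standard convolutional attack classifier pick up on those spatial patterns.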
