Model Attribution of Face-swap Deepfake Videos

AI-created face-swap videos, commonly known as Deepfakes, have attracted wide attention as powerful impersonation attacks. Existing research on Deepfakes mostly focuses on binary detection to distinguish between real and fake videos. However, it is also important to determine the specific generation model behind a fake video, which can help attribute it to its source for forensic investigation. In this paper, we fill this gap by studying the model attribution problem for Deepfake videos. We first introduce a new dataset, DeepFakes from Different Models (DFDM), based on several autoencoder models. Specifically, five generation models with variations in encoder, decoder, intermediate layer, input resolution, and compression ratio were used to generate a total of 6,450 Deepfake videos from the same input. We then formulate Deepfake model attribution as a multiclass classification task and propose a spatial- and temporal-attention-based method to explore the differences among Deepfakes in the new dataset. Experimental evaluation shows that most existing Deepfake detection methods fail at model attribution, while the proposed method achieves over 70% accuracy on the high-quality DFDM dataset.
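To make the task framing concrete, the following is a minimal sketch of model attribution as five-way classification with temporal attention pooling over per-frame features. This is not the paper's architecture; the feature dimension, frame count, weight shapes, and function names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_MODELS = 5   # the five generation models in DFDM
NUM_FRAMES = 8   # frames sampled per video (illustrative)
FEAT_DIM = 16    # per-frame feature size (illustrative)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_and_classify(frame_feats, w_att, w_cls):
    """Temporal attention pooling followed by a linear 5-way classifier.

    frame_feats: (NUM_FRAMES, FEAT_DIM) features, e.g. from a CNN backbone.
    """
    scores = frame_feats @ w_att      # (NUM_FRAMES,) attention logits
    alpha = softmax(scores)           # temporal attention weights
    video_feat = alpha @ frame_feats  # attention-weighted sum over frames
    return softmax(video_feat @ w_cls)  # class probabilities over 5 models

# Toy inputs standing in for learned weights and extracted features.
frame_feats = rng.standard_normal((NUM_FRAMES, FEAT_DIM))
w_att = rng.standard_normal(FEAT_DIM)
w_cls = rng.standard_normal((FEAT_DIM, NUM_MODELS))

probs = attend_and_classify(frame_feats, w_att, w_cls)
predicted_model = int(np.argmax(probs))
```

The attention weights let the classifier emphasize frames whose artifacts are most model-specific, rather than averaging all frames uniformly.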
