1 code implementation • 29 Feb 2024 • Binh M. Le, Simon S. Woo
Recent advancements in domain generalization (DG) for face anti-spoofing (FAS) have garnered considerable attention.
no code implementations • 9 Jan 2024 • Binh M. Le, Jiwon Kim, Shahroz Tariq, Kristen Moore, Alsharif Abuadbba, Simon S. Woo
Our systematized analysis and experimentation lay the groundwork for a deeper understanding of deepfake detectors and their generalizability, paving the way for future research focused on creating detectors adept at countering various attack scenarios.
1 code implementation • ICCV 2023 • Binh M. Le, Simon S. Woo
However, detecting low-quality deepfakes, as well as simultaneously detecting deepfakes of varying qualities, remains a grave challenge.
no code implementations • 21 Mar 2023 • Binh M. Le, Shahroz Tariq, Simon S. Woo
Our work is the first to carefully analyze and characterize these two schools of approaches, both theoretically and empirically, demonstrating how each impacts the robust learning of a classifier.
1 code implementation • 24 Aug 2022 • Shahroz Tariq, Binh M. Le, Simon S. Woo
To the best of our knowledge, we demonstrate, for the first time, the vulnerabilities of anomaly detection systems to adversarial attacks.
1 code implementation • 19 Jan 2022 • Chingis Oinar, Binh M. Le, Simon S. Woo
However, the majority of the proposed methods do not consider the class imbalance issue, which is a major challenge in practice when developing deep face recognition models.
1 code implementation • 15 Dec 2021 • Binh M. Le, Simon S. Woo
The rapid progression of Generative Adversarial Networks (GANs) has raised concerns about their misuse for malicious purposes, especially in creating fake face images.
2 code implementations • 7 Dec 2021 • Binh M. Le, Simon S. Woo
In particular, we propose the Attention-based Deepfake detection Distiller (ADD), which consists of two novel distillations: 1) frequency attention distillation that effectively retrieves the removed high-frequency components in the student network, and 2) multi-view attention distillation that creates multiple attention vectors by slicing the teacher's and student's tensors under different views to transfer the teacher tensor's distribution to the student more efficiently.
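To make the two distillation terms concrete, below is a minimal PyTorch sketch of one plausible reading of them. This is not the paper's exact formulation: the FFT-based spectral attention, the softmax normalization, the L1 matching, the choice of channel/height/width as the "views", and all tensor shapes are illustrative assumptions.

```python
# Hedged sketch of ADD's two distillation losses (assumed formulation,
# not the authors' exact losses).
import torch
import torch.nn.functional as F


def frequency_attention_loss(t_feat: torch.Tensor, s_feat: torch.Tensor) -> torch.Tensor:
    """Match attention over frequency magnitudes of teacher/student features.

    t_feat, s_feat: (B, C, H, W) feature maps from matching layers.
    """
    # 2-D FFT magnitude spectra; the student is pushed toward the teacher's
    # spectrum to recover high-frequency content lost in compressed inputs.
    t_mag = torch.fft.fft2(t_feat, norm="ortho").abs()
    s_mag = torch.fft.fft2(s_feat, norm="ortho").abs()
    # Channel-pooled spectral attention maps, normalized to distributions.
    t_att = F.softmax(t_mag.mean(dim=1).flatten(1), dim=1)
    s_att = F.softmax(s_mag.mean(dim=1).flatten(1), dim=1)
    return F.l1_loss(s_att, t_att)


def multi_view_attention_loss(t_feat: torch.Tensor, s_feat: torch.Tensor) -> torch.Tensor:
    """Slice tensors along channel/height/width views and match the
    resulting attention vectors (one assumed reading of 'multiple views')."""
    loss = t_feat.new_zeros(())
    for dim in (1, 2, 3):  # views: channel, height, width
        # Reduce all other dims to get one attention value per slice.
        reduce_dims = tuple(d for d in (1, 2, 3) if d != dim)
        t_vec = F.softmax(t_feat.abs().mean(dim=reduce_dims), dim=1)
        s_vec = F.softmax(s_feat.abs().mean(dim=reduce_dims), dim=1)
        loss = loss + F.l1_loss(s_vec, t_vec)
    return loss


if __name__ == "__main__":
    teacher = torch.randn(4, 64, 32, 32)  # stand-in teacher features
    student = torch.randn(4, 64, 32, 32)  # stand-in student features
    total = frequency_attention_loss(teacher, student) \
            + multi_view_attention_loss(teacher, student)
    print(f"distillation loss: {total.item():.4f}")
```

In a real training loop these two terms would be weighted and added to the student's ordinary classification loss; the weights and the layers from which features are tapped are further assumptions left out of this sketch.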