Search Results for author: Matthew C. Stamm

Found 12 papers, 2 papers with code

Beyond Deepfake Images: Detecting AI-Generated Videos

no code implementations • 24 Apr 2024 • Danial Samadi Vahdati, Tai D. Nguyen, Aref Azizpour, Matthew C. Stamm

Recent advances in generative AI have led to the development of techniques to generate visually realistic synthetic video.

E3: Ensemble of Expert Embedders for Adapting Synthetic Image Detectors to New Generators Using Limited Data

1 code implementation • 12 Apr 2024 • Aref Azizpour, Tai D. Nguyen, Manil Shrestha, Kaidi Xu, Edward Kim, Matthew C. Stamm

To address these issues, we introduce the Ensemble of Expert Embedders (E3), a novel continual learning framework for updating synthetic image detectors.

Continual Learning • Synthetic Image Detection • +1
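
As a rough illustration of the ensemble idea named in the E3 excerpt above, the sketch below keeps one expert embedder per generator and fuses their embeddings for a final real-vs-synthetic decision. The class names, layer sizes, and concatenation-based fusion are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ExpertEmbedder(nn.Module):
    """One expert: a small CNN embedder. In the paper's framing, each expert
    would be fine-tuned on the limited data from a single new generator."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.net(x)

class EnsembleDetector(nn.Module):
    """Fuses all expert embeddings and classifies real vs. synthetic."""
    def __init__(self, num_experts, dim=128):
        super().__init__()
        self.experts = nn.ModuleList([ExpertEmbedder(dim) for _ in range(num_experts)])
        self.fusion = nn.Linear(num_experts * dim, 2)

    def forward(self, x):
        feats = torch.cat([expert(x) for expert in self.experts], dim=1)
        return self.fusion(feats)

detector = EnsembleDetector(num_experts=3)
logits = detector(torch.randn(4, 3, 64, 64))  # dummy batch of 4 RGB images
print(logits.shape)  # torch.Size([4, 2])
```

Under this framing, adapting to a new generator means adding and fine-tuning one expert rather than retraining the whole detector, which is what makes a continual-learning update with limited data plausible.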

Open Set Synthetic Image Source Attribution

no code implementations • 22 Aug 2023 • Shengbang Fang, Tai D. Nguyen, Matthew C. Stamm

To address this new threat, researchers have developed multiple algorithms to detect synthetic images and identify their source generators.

Attribute • Image Generation • +2

VideoFACT: Detecting Video Forgeries Using Attention, Scene Context, and Forensic Traces

no code implementations • 28 Nov 2022 • Tai D. Nguyen, Shengbang Fang, Matthew C. Stamm

While existing forensic networks have demonstrated strong performance on image forgeries, recent results reported on the Adobe VideoSham dataset show that these networks fail to identify fake content in videos.

Misinformation
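
Going by the VideoFACT title alone, the method combines forensic traces with attention and scene context. The sketch below shows one generic way such an attention-based fusion could look; the module names, feature dimensions, and pooling scheme are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Weighs per-patch forensic features by attention scores computed from
    forensic + scene-context features, then pools for a frame-level decision."""
    def __init__(self, dim=128):
        super().__init__()
        self.attn = nn.Linear(2 * dim, 1)    # score each patch from both feature types
        self.classifier = nn.Linear(dim, 2)  # authentic vs. forged

    def forward(self, forensic_feats, context_feats):
        # both inputs: (batch, num_patches, dim)
        scores = self.attn(torch.cat([forensic_feats, context_feats], dim=-1))
        weights = torch.softmax(scores, dim=1)          # attention over patches
        pooled = (weights * forensic_feats).sum(dim=1)  # attention-weighted pooling
        return self.classifier(pooled)

fusion = AttentionFusion()
f = torch.randn(2, 16, 128)  # hypothetical forensic embeddings for 16 patches
c = torch.randn(2, 16, 128)  # hypothetical scene-context embeddings
print(fusion(f, c).shape)    # torch.Size([2, 2])
```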

Making Generated Images Hard To Spot: A Transferable Attack On Synthetic Image Detectors

no code implementations • 25 Apr 2021 • Xinwei Zhao, Matthew C. Stamm

Visually realistic GAN-generated images have recently emerged as an important misinformation threat.

Misinformation

Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers

no code implementations • 26 Jan 2021 • Xinwei Zhao, Matthew C. Stamm

In this paper, we propose new defenses that can protect against multi-sticker attacks.

The Effect of Class Definitions on the Transferability of Adversarial Attacks Against Forensic CNNs

no code implementations • 26 Jan 2021 • Xinwei Zhao, Matthew C. Stamm

Understanding the transferability of adversarial attacks, i.e. an attack's ability to fool a different CNN than the one it was trained against, has important implications for designing CNNs that are resistant to attacks.

Image Manipulation • Object Recognition

A Transferable Anti-Forensic Attack on Forensic CNNs Using A Generative Adversarial Network

no code implementations • 23 Jan 2021 • Xinwei Zhao, Chen Chen, Matthew C. Stamm

In this paper, we propose a new anti-forensic attack framework designed to remove forensic traces left by a variety of manipulation operations.

Generative Adversarial Network
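
The excerpt above describes a GAN-style framework for removing forensic traces. A minimal sketch of that general idea, assuming a small image-to-image generator trained to fool a frozen forensic CNN while an added fidelity term keeps the output close to the input; the networks, dummy data, and loss weight are stand-ins, not the paper's models.

```python
import torch
import torch.nn as nn

# Stand-in image-to-image generator that post-processes manipulated images.
generator = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
# Stand-in forensic CNN; frozen, since the attack only trains the generator.
forensic_cnn = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
for p in forensic_cnn.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
x = torch.rand(4, 3, 64, 64)                  # dummy batch of manipulated images
unaltered = torch.zeros(4, dtype=torch.long)  # class index for "unaltered"

attacked = generator(x)
fool_loss = nn.functional.cross_entropy(forensic_cnn(attacked), unaltered)
fidelity_loss = nn.functional.mse_loss(attacked, x)  # stay visually unchanged
loss = fool_loss + 10.0 * fidelity_loss              # weight is illustrative
loss.backward()
opt.step()
```

Because the generator, not a per-image optimization, learns to erase the traces, an attack like this can plausibly transfer across images and, as the title claims, across forensic CNNs.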

Exposing Fake Images with Forensic Similarity Graphs

no code implementations • 5 Dec 2019 • Owen Mayer, Matthew C. Stamm

We propose new image forgery detection and localization algorithms by recasting these problems as graph-based community detection problems.

Community Detection • Image Forgery Detection
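
The abstract above states that forgery detection and localization are recast as graph-based community detection. A minimal sketch of that recasting, assuming image patches as nodes, edges weighted by forensic similarity, and off-the-shelf modularity-based community detection from networkx; the embeddings and threshold below are dummy values.

```python
import itertools
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
patch_embeddings = rng.normal(size=(12, 64))  # stand-in forensic features, one per patch

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Nodes are patches; connect patches whose forensic features are similar.
G = nx.Graph()
G.add_nodes_from(range(len(patch_embeddings)))
for i, j in itertools.combinations(range(len(patch_embeddings)), 2):
    sim = cosine(patch_embeddings[i], patch_embeddings[j])
    if sim > 0.1:  # similarity threshold is illustrative
        G.add_edge(i, j, weight=sim)

# Patches from the same source should fall into one community; a second,
# separate community flags a likely spliced region.
communities = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])
```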

Forensic Similarity for Digital Images

1 code implementation • 13 Feb 2019 • Owen Mayer, Matthew C. Stamm

In this paper we introduce a new digital image forensics approach called forensic similarity, which determines whether two image patches contain the same forensic trace or different forensic traces.

Image Forensics
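
The forensic-similarity decision described above, whether two patches carry the same forensic trace, maps naturally onto a two-branch comparison network. The sketch below is a generic siamese-style stand-in, assuming a shared embedder and a small comparison head; it is not the paper's network.

```python
import torch
import torch.nn as nn

class ForensicSimilarity(nn.Module):
    """Siamese-style comparison: embed two patches with a shared CNN, then
    predict the probability that they contain the same forensic trace."""
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim),
        )
        self.compare = nn.Sequential(
            nn.Linear(2 * dim, 32), nn.ReLU(), nn.Linear(32, 1),
        )

    def forward(self, a, b):
        fa, fb = self.embed(a), self.embed(b)  # shared weights for both patches
        return torch.sigmoid(self.compare(torch.cat([fa, fb], dim=1)))

model = ForensicSimilarity()
p1, p2 = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)  # two patches
print(model(p1, p2))  # probability the patches share a forensic trace
```

A pairwise score like this is also what the graph construction in "Exposing Fake Images with Forensic Similarity Graphs" above would use as its edge weights.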
