Search Results for author: Kassem Fawaz

Found 21 papers, 9 papers with code

A Picture is Worth 500 Labels: A Case Study of Demographic Disparities in Local Machine Learning Models for Instagram and TikTok

no code implementations • 27 Mar 2024 • Jack West, Lea Thiemt, Shimaa Ahmed, Maggie Bartig, Kassem Fawaz, Suman Banerjee

Capitalizing on this new processing model of locally analyzing user images, we analyze two popular social media apps, TikTok and Instagram, to reveal (1) what insights vision models in both apps infer about users from their image and video data and (2) whether these models exhibit performance disparities with respect to demographics.

Tasks: Gender Prediction

PRP: Propagating Universal Perturbations to Attack Large Language Model Guard-Rails

no code implementations • 24 Feb 2024 • Neal Mangaokar, Ashish Hooda, Jihye Choi, Shreyas Chandrashekaran, Kassem Fawaz, Somesh Jha, Atul Prakash

More recent LLMs often incorporate an additional layer of defense, a Guard Model: a second LLM designed to check and moderate the output response of the primary LLM.

Tasks: GPT-3.5, Language Modelling, +2
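The guarded pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `query_llm` and `query_guard` are hypothetical placeholder functions standing in for calls to the primary and guard LLMs.

```python
def query_llm(prompt: str) -> str:
    # Placeholder for the primary LLM; returns a canned response here.
    return f"response to: {prompt}"

def query_guard(response: str) -> bool:
    # Placeholder for the Guard Model. A real guard is itself an LLM
    # classifier; this toy version just flags a blocked keyword.
    return "forbidden" not in response.lower()

def guarded_generate(prompt: str) -> str:
    # The guard sits between the primary LLM and the user: responses it
    # rejects are withheld. The PRP attack targets exactly this layer.
    response = query_llm(prompt)
    if not query_guard(response):
        return "[response withheld by guard model]"
    return response
```

The attack surface the paper studies is the guard check itself: a perturbation that makes `query_guard` return `True` on unsafe output defeats the whole pipeline.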

Human-Producible Adversarial Examples

no code implementations • 30 Sep 2023 • David Khachaturov, Yue Gao, Ilia Shumailov, Robert Mullins, Ross Anderson, Kassem Fawaz

Visual adversarial examples have so far been restricted to pixel-level image manipulations in the digital world, or have required sophisticated equipment such as 2D or 3D printers to be produced in the physical real world.

Limitations of Face Image Generation

1 code implementation • 13 Sep 2023 • Harrison Rosenberg, Shimaa Ahmed, Guruprasad V Ramesh, Ramya Korlakai Vinayak, Kassem Fawaz

In particular, their ability to synthesize and modify human faces has spurred research into using generated face images in both training data augmentation and model performance assessments.

Tasks: Data Augmentation, Face Generation

SEA: Shareable and Explainable Attribution for Query-based Black-box Attacks

no code implementations • 23 Aug 2023 • Yue Gao, Ilia Shumailov, Kassem Fawaz

Machine Learning (ML) systems are vulnerable to adversarial examples, particularly those from query-based black-box attacks.

Tasks: Attribute

Theoretically Principled Trade-off for Stateful Defenses against Query-Based Black-Box Attacks

no code implementations • 30 Jul 2023 • Ashish Hooda, Neal Mangaokar, Ryan Feng, Kassem Fawaz, Somesh Jha, Atul Prakash

This work aims to address this gap by offering a theoretical characterization of the trade-off between detection and false positive rates for stateful defenses.

Stateful Defenses for Machine Learning Models Are Not Yet Secure Against Black-box Attacks

1 code implementation • 11 Mar 2023 • Ryan Feng, Ashish Hooda, Neal Mangaokar, Kassem Fawaz, Somesh Jha, Atul Prakash

Such stateful defenses counter black-box attacks by tracking the query history and detecting and rejecting queries that are "similar" to past ones, thereby preventing the attacker from obtaining useful gradients and from making progress toward an adversarial example within a reasonable query budget.
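The query-history mechanism above can be sketched in a few lines. This is a toy illustration under assumed details, not the defenses the paper evaluates: real stateful defenses use a learned perceptual embedding rather than the raw input, and an approximate nearest-neighbor index rather than a linear scan.

```python
import numpy as np

class StatefulDetector:
    """Toy stateful defense: reject queries too similar to past ones."""

    def __init__(self, threshold: float):
        self.threshold = threshold  # similarity radius for rejection
        self.history = []           # embeddings of accepted past queries

    def embed(self, x: np.ndarray) -> np.ndarray:
        # Placeholder encoder; a real defense uses a learned embedding.
        return x.ravel()

    def query(self, x: np.ndarray) -> bool:
        """Return True if the query is accepted, False if rejected."""
        e = self.embed(x)
        for past in self.history:
            if np.linalg.norm(e - past) < self.threshold:
                return False  # too close to a previous query
        self.history.append(e)
        return True
```

The paper's point is that black-box attacks can still slip useful queries past exactly this kind of similarity check.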

SkillFence: A Systems Approach to Practically Mitigating Voice-Based Confusion Attacks

no code implementations • 16 Dec 2022 • Ashish Hooda, Matthew Wallace, Kushal Jhunjhunwalla, Earlence Fernandes, Kassem Fawaz

Our key insight is that we can interpret a user's intentions by analyzing their activity on counterpart systems of the web and smartphones.

On the Limitations of Stochastic Pre-processing Defenses

1 code implementation • 19 Jun 2022 • Yue Gao, Ilia Shumailov, Kassem Fawaz, Nicolas Papernot

An example of such a defense is to apply a random transformation to inputs prior to feeding them to the model.

Tasks: Adversarial Robustness
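A stochastic pre-processing defense of the kind described above can be sketched as follows. This is an illustrative example, not the paper's code: `classify` is a hypothetical stand-in model, and Gaussian noise is just one choice of random transformation.

```python
import numpy as np

rng = np.random.default_rng(0)

def classify(x: np.ndarray) -> int:
    # Stand-in classifier: thresholds the mean pixel value.
    return int(x.mean() > 0.5)

def defended_classify(x: np.ndarray, sigma: float = 0.1) -> int:
    # Each call sees a freshly randomized input, which is intended to
    # disrupt gradient- and query-based attacks; the paper analyzes
    # the limits of this idea.
    noisy = x + rng.normal(0.0, sigma, size=x.shape)
    return classify(np.clip(noisy, 0.0, 1.0))
```

The design trade-off the paper studies is visible even here: larger `sigma` randomizes the attacker's view more, but also degrades the clean input the model receives.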

D4: Detection of Adversarial Diffusion Deepfakes Using Disjoint Ensembles

no code implementations • 11 Feb 2022 • Ashish Hooda, Neal Mangaokar, Ryan Feng, Kassem Fawaz, Somesh Jha, Atul Prakash

D4 uses an ensemble of models over disjoint subsets of the frequency spectrum to significantly improve adversarial robustness.

Tasks: Adversarial Robustness, DeepFake Detection, +1
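The disjoint-spectrum idea can be illustrated on a toy square image. This sketch only shows the partitioning step, with assumed details (2D FFT, radial frequency bins); D4's actual models and training are not reproduced here.

```python
import numpy as np

def disjoint_frequency_views(x: np.ndarray, n_models: int):
    """Split a square image's FFT coefficients into n_models disjoint
    radial bins, one per ensemble member, so no two members see the
    same frequency band."""
    spectrum = np.fft.fft2(x)
    freqs = np.fft.fftfreq(x.shape[0])
    radius = np.sqrt(freqs[:, None] ** 2 + freqs[None, :] ** 2)
    # Assign each frequency to exactly one member by binning its radius.
    edges = np.linspace(0, radius.max(), n_models + 1)[1:-1]
    bins = np.digitize(radius, edges)
    views = []
    for i in range(n_models):
        masked = np.where(bins == i, spectrum, 0)
        views.append(np.fft.ifft2(masked).real)
    return views
```

Because the bins are disjoint and cover the whole spectrum, the views sum back to the original image, and an adversarial perturbation confined to one band reaches only one ensemble member.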

An Exploration of Multicalibration Uniform Convergence Bounds

no code implementations • 9 Feb 2022 • Harrison Rosenberg, Robi Bhattacharjee, Kassem Fawaz, Somesh Jha

Given the prevalence of ERM sample complexity bounds, our proposed framework enables machine learning practitioners to easily understand the convergence behavior of multicalibration error for a myriad of classifier architectures.

Tasks: BIG-bench Machine Learning, Fairness

Fairness Properties of Face Recognition and Obfuscation Systems

1 code implementation • 5 Aug 2021 • Harrison Rosenberg, Brian Tang, Kassem Fawaz, Somesh Jha

We answer this question with an analytical and empirical exploration of recent face obfuscation systems.

Tasks: Face Recognition, Fairness

Rethinking Image-Scaling Attacks: The Interplay Between Vulnerabilities in Machine Learning Systems

1 code implementation • 18 Apr 2021 • Yue Gao, Ilia Shumailov, Kassem Fawaz

As real-world images come in varying sizes, the machine learning model is part of a larger system that includes an upstream image scaling algorithm.

Tasks: BIG-bench Machine Learning
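The role of the upstream scaling algorithm can be seen in a toy example. This sketch (sizes and pixel values are arbitrary) shows why nearest-neighbor downscaling matters for security: only a sparse grid of source pixels survives, so an attacker who controls just those pixels fully controls what the model sees.

```python
import numpy as np

def nn_downscale(img: np.ndarray, out: int) -> np.ndarray:
    # Nearest-neighbor downscaling keeps one source pixel per output
    # pixel and discards everything else.
    idx = np.arange(out) * img.shape[0] // out
    return img[np.ix_(idx, idx)]

src = np.zeros((8, 8))
surviving = np.arange(4) * 8 // 4          # rows/cols that survive: 0, 2, 4, 6
src[np.ix_(surviving, surviving)] = 1.0    # modify only 16 of 64 pixels
small = nn_downscale(src, 4)               # downscaled image is all ones
```

The full image is mostly zeros, yet the downscaled image the model receives is entirely attacker-controlled; image-scaling attacks exploit this gap between what a human reviews and what the model classifies.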

Face-Off: Adversarial Face Obfuscation

1 code implementation • 19 Mar 2020 • Chuhan Gao, Varun Chandrasekaran, Kassem Fawaz, Somesh Jha

We implement and evaluate Face-Off to find that it deceives three commercial face recognition services from Microsoft, Amazon, and Face++.

Tasks: Cryptography and Security

Analyzing Accuracy Loss in Randomized Smoothing Defenses

no code implementations • 3 Mar 2020 • Yue Gao, Harrison Rosenberg, Kassem Fawaz, Somesh Jha, Justin Hsu

In test-time attacks, an adversary crafts adversarial examples: perturbations imperceptible to humans that, when added to an input example, force a machine learning model to misclassify it.

Tasks: Autonomous Driving, BIG-bench Machine Learning, +2
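A minimal worked example of such a test-time attack, on a toy linear model rather than anything from the paper: a fast-gradient-sign-style step against the current decision flips the prediction with a small perturbation. The weights and inputs here are illustrative.

```python
import numpy as np

w = np.array([1.0, -2.0, 0.5])  # toy linear classifier weights

def predict(x: np.ndarray) -> int:
    return int(w @ x > 0)

def fgsm(x: np.ndarray, epsilon: float) -> np.ndarray:
    # For a linear score w @ x, the input gradient is w itself, so the
    # adversary steps epsilon * sign(w) against the current decision.
    direction = -1 if predict(x) == 1 else 1
    return x + direction * epsilon * np.sign(w)
```

Randomized smoothing, the defense this paper analyzes, counters such attacks by averaging predictions over noisy copies of the input, at a cost in clean accuracy that the paper quantifies.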

Rearchitecting Classification Frameworks For Increased Robustness

no code implementations • 26 May 2019 • Varun Chandrasekaran, Brian Tang, Nicolas Papernot, Kassem Fawaz, Somesh Jha, Xi Wu

How can we design a classification paradigm that leverages these invariances to improve the robustness-accuracy trade-off?

Tasks: Autonomous Driving, Classification, +2

The Privacy Policy Landscape After the GDPR

1 code implementation • 22 Sep 2018 • Thomas Linden, Rishabh Khandelwal, Hamza Harkous, Kassem Fawaz

In this analysis, we find evidence for positive changes triggered by the GDPR, with the specificity level improving on average.

Tasks: Specificity

Polisis: Automated Analysis and Presentation of Privacy Policies Using Deep Learning

2 code implementations • 7 Feb 2018 • Hamza Harkous, Kassem Fawaz, Rémi Lebret, Florian Schaub, Kang G. Shin, Karl Aberer

Companies, users, researchers, and regulators still lack usable and scalable tools to cope with the breadth and depth of privacy policies.

Tasks: Language Modelling, Question Answering
