Search Results for author: Ivan Evtimov

Found 18 papers, 5 papers with code

Uncertainty-Based Abstention in LLMs Improves Safety and Reduces Hallucinations

no code implementations16 Apr 2024 Christian Tomani, Kamalika Chaudhuri, Ivan Evtimov, Daniel Cremers, Mark Ibrahim

A major barrier towards the practical deployment of large language models (LLMs) is their lack of reliability.

Question Answering

VPA: Fully Test-Time Visual Prompt Adaptation

no code implementations26 Sep 2023 Jiachen Sun, Mark Ibrahim, Melissa Hall, Ivan Evtimov, Z. Morley Mao, Cristian Canton Ferrer, Caner Hazirbas

Inspired by the success of textual prompting, several studies have investigated the efficacy of visual prompt tuning.

Pseudo Label Test-time Adaptation +3
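
For readers unfamiliar with visual prompt tuning, the sketch below shows the generic idea referenced in the entry above: a small learnable pixel offset is added to every input while the backbone stays frozen, and only that prompt is optimized at test time. This is an illustrative stand-in; the entropy-minimization objective and the `adapt_visual_prompt` helper are assumptions, not the paper's VPA procedure.

```python
# Hedged sketch of visual prompt tuning (generic illustration, not the paper's VPA method).
import torch
import torch.nn.functional as F

def adapt_visual_prompt(model, images, steps=10, lr=0.01):
    """images: (N, 3, H, W) batch from the target domain; the model stays frozen."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)

    prompt = torch.zeros_like(images[:1], requires_grad=True)  # one shared additive pixel prompt
    optimizer = torch.optim.Adam([prompt], lr=lr)

    for _ in range(steps):
        logits = model(images + prompt)                 # apply the pixel-space prompt
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
        optimizer.zero_grad()
        entropy.backward()                              # gradients update only the prompt
        optimizer.step()

    return prompt
```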

Code Llama: Open Foundation Models for Code

2 code implementations24 Aug 2023 Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve

We release Code Llama, a family of large language models for code based on Llama 2, providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction-following ability for programming tasks.

16k Code Generation +1
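
As a quick orientation, here is a minimal sketch of prompting one of the released checkpoints through Hugging Face transformers. The `codellama/CodeLlama-7b-hf` model id and the `<FILL_ME>` infilling placeholder are assumptions based on the public release, not code from the paper itself.

```python
# Hedged sketch: prompting a released Code Llama checkpoint via Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Infilling: the prefix/suffix around <FILL_ME> conditions the model to generate the middle.
prompt = 'def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result\n'
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```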

You Only Need a Good Embeddings Extractor to Fix Spurious Correlations

no code implementations12 Dec 2022 Raghav Mehta, Vítor Albiero, Li Chen, Ivan Evtimov, Tamar Glaser, Zhiheng Li, Tal Hassner

With experiments on a wide range of pre-trained models and pre-training datasets, we show that both the capacity of the pre-trained model and the size of the pre-training dataset matter.
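
A minimal sketch of the kind of setup the title points to, under the assumption that the recipe is "freeze a strong pre-trained backbone and train only a simple head on its embeddings"; the ResNet-50 backbone and two-class head are placeholders, not the paper's exact configuration.

```python
# Hedged sketch: training a simple classifier on frozen pre-trained embeddings.
import torch
import torchvision

backbone = torchvision.models.resnet50(weights="IMAGENET1K_V2")
backbone.fc = torch.nn.Identity()                  # expose the 2048-d embeddings
backbone.eval()
for p in backbone.parameters():
    p.requires_grad_(False)

classifier = torch.nn.Linear(2048, 2)              # e.g., a binary task with a spurious attribute
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def train_step(images, labels):
    with torch.no_grad():
        feats = backbone(images)                   # frozen embeddings extractor
    loss = torch.nn.functional.cross_entropy(classifier(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```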

A Whac-A-Mole Dilemma: Shortcuts Come in Multiples Where Mitigating One Amplifies Others

1 code implementation CVPR 2023 Zhiheng Li, Ivan Evtimov, Albert Gordo, Caner Hazirbas, Tal Hassner, Cristian Canton Ferrer, Chenliang Xu, Mark Ibrahim

Key to advancing the reliability of vision systems is understanding whether existing methods can overcome multiple shortcuts or struggle in a Whac-A-Mole game, i.e., one where mitigating one shortcut amplifies reliance on others.

Domain Generalization Image Classification +1

ImageNet-X: Understanding Model Mistakes with Factor of Variation Annotations

no code implementations3 Nov 2022 Badr Youbi Idrissi, Diane Bouchacourt, Randall Balestriero, Ivan Evtimov, Caner Hazirbas, Nicolas Ballas, Pascal Vincent, Michal Drozdzal, David Lopez-Paz, Mark Ibrahim

Equipped with ImageNet-X, we investigate 2,200 current recognition models and study the types of mistakes as a function of a model's (1) architecture, e.g., transformer vs. convolutional, (2) learning paradigm, e.g., supervised vs. self-supervised, and (3) training procedures, e.g., data augmentation.

Data Augmentation
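
The kind of per-factor analysis described in the entry above can be sketched as a simple group-by over (model, factor, correct) records; the column names below are hypothetical, not the actual ImageNet-X schema or evaluation code.

```python
# Hedged sketch of per-factor mistake analysis (hypothetical column names).
import pandas as pd

# One row per (model, image): which factor-of-variation label applies and whether the
# model classified the image correctly.
df = pd.DataFrame({
    "model":   ["vit_b16", "vit_b16", "resnet50", "resnet50"],
    "factor":  ["pose", "background", "pose", "background"],
    "correct": [1, 0, 0, 1],
})

# Error rate per factor, per model: which factors of variation drive each model's mistakes.
error_rates = (
    df.assign(error=lambda d: 1 - d["correct"])
      .groupby(["model", "factor"])["error"]
      .mean()
      .unstack("factor")
)
print(error_rates)
```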

Adversarial Text Normalization

no code implementations NAACL (ACL) 2022 Joanna Bitton, Maya Pavlova, Ivan Evtimov

Additionally, the process of retraining a model is time- and resource-intensive, creating a need for a lightweight, reusable defense.

Adversarial Text Natural Language Inference
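
As a toy illustration of text normalization as a lightweight defense, the sketch below folds Unicode compatibility characters, strips zero-width characters, and undoes simple leetspeak substitutions before text reaches a classifier; the mapping tables are illustrative assumptions, not the paper's normalization model.

```python
# Hedged sketch of adversarial text normalization (toy mapping tables, not the paper's).
import unicodedata

ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}
LEET = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    # Fold compatibility characters (e.g., fullwidth letters) to their plain forms.
    text = unicodedata.normalize("NFKC", text)
    # Drop zero-width characters used to break tokenization.
    text = "".join(ch for ch in text if ch not in ZERO_WIDTH)
    # Undo simple leetspeak substitutions.
    return text.translate(LEET)

print(normalize("fr\u200bee m0n3y"))  # -> "free money"
```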

Disrupting Model Training with Adversarial Shortcuts

no code implementations ICML Workshop AML 2021 Ivan Evtimov, Ian Covert, Aditya Kusupati, Tadayoshi Kohno

When data is publicly released for human consumption, it is unclear how to prevent its unauthorized usage for machine learning purposes.

BIG-bench Machine Learning Image Classification
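
To make the shortcut idea in the entry above concrete, the toy sketch below stamps a deterministic, label-correlated pattern into each image before release, so a model trained on the modified data can fit the pattern instead of the true content; the patch size and placement are illustrative assumptions, not the paper's scheme.

```python
# Hedged sketch: embedding a label-correlated "shortcut" pattern into images before release.
import numpy as np

def add_shortcut(image: np.ndarray, label: int, patch_size: int = 8) -> np.ndarray:
    """image: (H, W, 3) uint8 array; label indexes a deterministic per-class pattern."""
    rng = np.random.default_rng(seed=label)            # same pattern for every image of a class
    patch = rng.integers(0, 256, size=(patch_size, patch_size, 3), dtype=np.uint8)
    out = image.copy()
    out[:patch_size, :patch_size] = patch              # easy-to-learn corner feature
    return out
```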

FoggySight: A Scheme for Facial Lookup Privacy

1 code implementation15 Dec 2020 Ivan Evtimov, Pascal Sturmfels, Tadayoshi Kohno

Searches in these databases are now being offered as a service to law enforcement and others, and they carry a multitude of privacy risks for social media users.

Face Recognition Privacy Preserving

Adversarial Evaluation of Multimodal Models under Realistic Gray Box Assumption

no code implementations25 Nov 2020 Ivan Evtimov, Russel Howes, Brian Dolhansky, Hamed Firooz, Cristian Canton Ferrer

This work examines the vulnerability of multimodal (image + text) models to adversarial threats similar to those discussed in previous literature on unimodal (image- or text-only) models.

General Classification Text Augmentation

Security and Machine Learning in the Real World

no code implementations13 Jul 2020 Ivan Evtimov, Weidong Cui, Ece Kamar, Emre Kiciman, Tadayoshi Kohno, Jerry Li

Machine learning (ML) models deployed in many safety- and business-critical systems are vulnerable to exploitation through adversarial examples.

BIG-bench Machine Learning

Physical Adversarial Examples for Object Detectors

no code implementations20 Jul 2018 Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, Tadayoshi Kohno, Dawn Song

In this work, we extend physical attacks to more challenging object detection models, a broader class of deep learning algorithms widely used to detect and label multiple objects within a scene.

Object object-detection +1

Robust Physical-World Attacks on Deep Learning Visual Classification

no code implementations CVPR 2018 Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song

Recent studies show that state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input.

Classification General Classification

Note on Attacking Object Detectors with Adversarial Stickers

no code implementations21 Dec 2017 Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Dawn Song, Tadayoshi Kohno, Amir Rahmati, Atul Prakash, Florian Tramer

Given that state-of-the-art object detection algorithms are harder to fool with the same set of adversarial examples, we show here that these detectors can also be attacked by physical adversarial examples.

Object

Robust Physical-World Attacks on Deep Learning Models

1 code implementation27 Jul 2017 Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song

We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions.
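
For orientation, an RP2-style objective takes roughly the following form: a perturbation confined to a mask is optimized in expectation over sampled physical conditions. The notation is a hedged reconstruction and may differ from the paper's exact formulation, which also includes secondary terms such as printability constraints.

```latex
% Hedged reconstruction of an RP2-style objective (notation may differ from the paper):
%   delta - adversarial perturbation        M_x - mask confining it to the sticker region
%   T_i   - sampled physical transforms     J   - loss toward the attacker's target label y*
\arg\min_{\delta}\;
  \lambda\,\lVert M_x \cdot \delta \rVert_{p}
  \;+\;
  \mathbb{E}_{x_i \sim X^{V}}\,
  J\!\left( f_{\theta}\!\left( x_i + T_i(M_x \cdot \delta) \right),\; y^{*} \right)
```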