Search Results for author: Luca Zancato

Found 14 papers, 1 paper with code

CPR: Retrieval Augmented Generation for Copyright Protection

no code implementations27 Mar 2024 Aditya Golatkar, Alessandro Achille, Luca Zancato, Yu-Xiang Wang, Ashwin Swaminathan, Stefano Soatto

To reduce the risk of leaking private information contained in the retrieved set, we introduce Copy-Protected generation with Retrieval (CPR), a new method for RAG with strong copyright protection guarantees in a mixed-private setting for diffusion models. CPR conditions the output of diffusion models on a set of retrieved images while guaranteeing that uniquely identifying information about those examples is not exposed in the generated outputs.

Multi-Modal Hallucination Control by Visual Information Grounding

no code implementations20 Mar 2024 Alessandro Favero, Luca Zancato, Matthew Trager, Siddharth Choudhary, Pramuditha Perera, Alessandro Achille, Ashwin Swaminathan, Stefano Soatto

In particular, we show that as more tokens are generated, the reliance on the visual prompt decreases, and this behavior strongly correlates with the emergence of hallucinations.

Hallucination, Visual Question Answering (VQA)

SemiGPC: Distribution-Aware Label Refinement for Imbalanced Semi-Supervised Learning Using Gaussian Processes

no code implementations3 Nov 2023 Abdelhak Lemkhenter, Manchen Wang, Luca Zancato, Gurumurthy Swaminathan, Paolo Favaro, Davide Modolo

We show that SemiGPC improves performance when paired with different Semi-Supervised methods such as FixMatch, ReMixMatch, SimMatch and FreeMatch and different pre-training strategies including MSN and Dino.

Gaussian Processes

Meaning Representations from Trajectories in Autoregressive Models

1 code implementation23 Oct 2023 Tian Yu Liu, Matthew Trager, Alessandro Achille, Pramuditha Perera, Luca Zancato, Stefano Soatto

We propose to extract meaning representations from autoregressive language models by considering the distribution of all possible trajectories extending an input text.

Semantic Similarity, Semantic Textual Similarity
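The idea can be caricatured with a toy bigram model standing in for the autoregressive language model: two input texts are compared by the overlap of the distributions over their continuations. The tiny corpus, the one-step horizon, and the Bhattacharyya overlap below are illustrative assumptions, not the paper's actual setup.

```python
import math
from collections import Counter

# Toy stand-in for an autoregressive LM: a bigram model over a tiny corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def next_token_dist(token):
    """P(next | token) under the bigram model."""
    total = unigrams[token]
    return {b: c / total for (a, b), c in bigrams.items() if a == token}

def trajectory_similarity(tok_a, tok_b):
    """Bhattacharyya overlap between the continuation distributions of two
    prefixes -- a one-step caricature of comparing trajectory distributions."""
    da, db = next_token_dist(tok_a), next_token_dist(tok_b)
    return sum(math.sqrt(da.get(w, 0.0) * db.get(w, 0.0))
               for w in set(da) | set(db))

# "cat" and "dog" are both always followed by "sat", so their continuation
# distributions coincide; "cat" and "on" share no continuations.
print(trajectory_similarity("cat", "dog"))  # 1.0
print(trajectory_similarity("cat", "on"))   # 0.0
```

In the paper the trajectories extend many tokens, so the comparison is over distributions of whole continuations rather than single next tokens.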

Prompt Algebra for Task Composition

no code implementations1 Jun 2023 Pramuditha Perera, Matthew Trager, Luca Zancato, Alessandro Achille, Stefano Soatto

We investigate whether prompts learned independently for different tasks can be later combined through prompt algebra to obtain a model that supports composition of tasks.

Attribute Classification
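A minimal sketch of what combining independently learned prompts could look like, treating soft prompts as embedding matrices composed by a convex combination; the shapes, values, and the simple averaging rule are hypothetical placeholders, not the paper's operators.

```python
import numpy as np

# Hypothetical soft prompts learned separately for two tasks, shape
# (prompt_tokens, embedding_dim); values are made up for illustration.
prompt_task_a = np.array([[1.0, 0.0], [0.5, 0.5]])
prompt_task_b = np.array([[0.0, 1.0], [0.5, 0.5]])

def compose(p1, p2, alpha=0.5):
    """Compose two task prompts by a convex combination (one simple
    instance of 'prompt algebra'; the paper studies when such combinations
    yield a model supporting both tasks)."""
    return alpha * p1 + (1.0 - alpha) * p2

combined = compose(prompt_task_a, prompt_task_b)
print(combined)
```

The combined prompt would then be prepended to the frozen model's input embeddings in place of either task-specific prompt.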

Train/Test-Time Adaptation with Retrieval

no code implementations CVPR 2023 Luca Zancato, Alessandro Achille, Tian Yu Liu, Matthew Trager, Pramuditha Perera, Stefano Soatto

Second, we apply ${\rm T^3AR}$ for test-time adaptation and show that exploiting a pool of external images at test time leads to more robust representations than existing methods on DomainNet-126 and VISDA-C, especially when little adaptation data is available (up to 8%).

Retrieval, Test-time Adaptation

STRIC: Stacked Residuals of Interpretable Components for Time Series Anomaly Detection

no code implementations29 Sep 2021 Luca Zancato, Alessandro Achille, Giovanni Paolini, Alessandro Chiuso, Stefano Soatto

After modeling the signals, we use an anomaly detection system based on the classic CUMSUM algorithm and a variational approximation of the $f$-divergence to detect both isolated point anomalies and change-points in statistics of the signals.

Anomaly Detection, Time Series +1
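The CUSUM component is classical and easy to sketch. Below is a one-sided, Page-style CUSUM detector on a synthetic mean-shift signal; the threshold, noise level, and plain Gaussian setting are illustrative assumptions, and the paper's variational approximation of the $f$-divergence is not reproduced here.

```python
import numpy as np

def cusum(signal, target_mean, threshold, drift=0.0):
    """One-sided CUSUM change detector (Page's test).
    Accumulates positive deviations from target_mean (minus a slack term
    `drift`) and returns the index of the first alarm, or -1 if none fires."""
    s = 0.0
    for i, x in enumerate(signal):
        s = max(0.0, s + (x - target_mean - drift))
        if s > threshold:
            return i
    return -1

# Synthetic signal: mean 0 for 100 samples, then a mean shift to 2.
rng = np.random.default_rng(0)
sig = np.concatenate([rng.normal(0.0, 0.1, 100),
                      rng.normal(2.0, 0.1, 50)])
print(cusum(sig, target_mean=0.0, threshold=5.0))  # alarm shortly after index 100
```

After the shift each sample adds roughly 2 to the statistic, so the alarm fires within a few samples of the change-point, while the pre-change noise never accumulates past the threshold.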

A linearized framework and a new benchmark for model selection for fine-tuning

no code implementations29 Jan 2021 Aditya Deshpande, Alessandro Achille, Avinash Ravichandran, Hao Li, Luca Zancato, Charless Fowlkes, Rahul Bhotika, Stefano Soatto, Pietro Perona

Since all model selection algorithms in the literature have been tested on different use-cases and never compared directly, we introduce a new comprehensive benchmark for model selection comprising: i) a model zoo of single- and multi-domain models, and ii) many target tasks.

Feature Correlation, Model Selection

Predicting Training Time Without Training

no code implementations NeurIPS 2020 Luca Zancato, Alessandro Achille, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto

We tackle the problem of predicting the number of optimization steps that a pre-trained deep network needs to converge to a given value of the loss function.
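As a hedged illustration of extrapolating a training curve (a generic log-linear fit, not the paper's linearized-dynamics estimator): fit an exponential loss decay to a few early measurements in log-space, then solve for the step at which a target loss is reached.

```python
import numpy as np

def predict_steps_to_loss(steps, losses, target_loss):
    """Fit log(loss) = intercept + slope * t by least squares and solve for
    the step t at which the loss would reach target_loss. Assumes a clean
    exponential decay, which real training curves only roughly follow."""
    t = np.asarray(steps, dtype=float)
    y = np.log(np.asarray(losses, dtype=float))
    slope, intercept = np.polyfit(t, y, 1)
    return (np.log(target_loss) - intercept) / slope

# Example: loss halves every 10 steps, so reaching 2^-5 should take 50 steps.
steps = [0, 10, 20, 30]
losses = [1.0, 0.5, 0.25, 0.125]
print(predict_steps_to_loss(steps, losses, 0.03125))  # 50.0
```

The paper's contribution is predicting this without running the fine-tuning at all, by linearizing the dynamics of the pre-trained network; the sketch above only shows the extrapolation step.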
