Search Results for author: Simone Bombari

Found 6 papers, 1 paper with code

Towards Understanding the Word Sensitivity of Attention Layers: A Study via Random Features

no code implementations • 5 Feb 2024 • Simone Bombari, Marco Mondelli

Unveiling the reasons behind the exceptional success of transformers requires a better understanding of why attention layers are suitable for NLP tasks.

Tasks: Generalization Bounds, Sentence (+1 more)
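No code accompanies this paper, so here is a minimal sketch of the kind of comparison the title suggests: how strongly a softmax attention layer versus a ReLU random-features map reacts when a single token embedding is swapped. The sensitivity measure, dimensions, and function names are illustrative assumptions, not the paper's formal definitions.

```python
# Illustrative proxy for "word sensitivity" (NOT the paper's definition):
# relative output change after replacing one token embedding.
import numpy as np

rng = np.random.default_rng(0)
n, d = 32, 64                                # sequence length, embedding dim
X = rng.standard_normal((n, d)) / np.sqrt(d)

def attention(X, W):
    S = X @ W @ X.T / np.sqrt(X.shape[1])    # attention scores
    A = np.exp(S - S.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)        # row-wise softmax
    return A @ X                             # attention output

def random_features(X, V):
    return np.maximum(X @ V, 0.0)            # token-wise ReLU random features

W = rng.standard_normal((d, d))
V = rng.standard_normal((d, d))

X_pert = X.copy()
X_pert[0] = rng.standard_normal(d) / np.sqrt(d)   # swap a single "word"

for name, f, p in [("attention", attention, W),
                   ("random features", random_features, V)]:
    delta = np.linalg.norm(f(X_pert, p) - f(X, p)) / np.linalg.norm(f(X, p))
    print(f"{name}: relative change after swapping 1 of {n} tokens = {delta:.3f}")
```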

Stability, Generalization and Privacy: Precise Analysis for Random and NTK Features

no code implementations • 20 May 2023 • Simone Bombari, Marco Mondelli

Deep learning models can be vulnerable to recovery attacks, raising privacy concerns for users, and widespread algorithms such as empirical risk minimization (ERM) often do not directly enforce safety guarantees.

Tasks: Learning Theory
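A hedged sketch of the basic quantity such stability analyses build on, assuming ridge regression on ReLU random features (the setup, dimensions, and names are assumptions, not taken from the paper): how much a prediction changes when one training sample is removed, i.e. leave-one-out stability.

```python
# Minimal sketch: leave-one-out stability of ridge regression
# on ReLU random features (illustrative setup, not the paper's).
import numpy as np

rng = np.random.default_rng(1)
n, d, k, lam = 100, 20, 200, 1e-2        # samples, input dim, features, ridge
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
V = rng.standard_normal((d, k)) / np.sqrt(d)

def fit(X, y):
    Phi = np.maximum(X @ V, 0.0)         # random-features map
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(k), Phi.T @ y)

theta_full = fit(X, y)
theta_loo = fit(X[1:], y[1:])            # drop one training point

x_test = rng.standard_normal(d)
phi_test = np.maximum(x_test @ V, 0.0)
print("prediction change after removing one sample:",
      abs(phi_test @ (theta_full - theta_loo)))
```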

Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels

1 code implementation • 3 Feb 2023 • Simone Bombari, Shayan Kiyani, Marco Mondelli

However, this "universal" law provides only a necessary condition for robustness, and it is unable to discriminate between models.

Memorization and Optimization in Deep Neural Networks with Minimum Over-parameterization

no code implementations • 20 May 2022 • Simone Bombari, Mohammad Hossein Amani, Marco Mondelli

The Neural Tangent Kernel (NTK) has emerged as a powerful tool to provide memorization, optimization and generalization guarantees in deep neural networks.

Tasks: Memorization, Open-Ended Question Answering
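A small sketch of the central object in NTK-based memorization guarantees, under standard assumptions rather than the paper's exact setting: the smallest eigenvalue of the empirical NTK Gram matrix of a two-layer ReLU network at initialization. Its positivity certifies that gradient descent can interpolate (memorize) the training labels.

```python
# Empirical (first-layer) NTK Gram matrix of a two-layer ReLU net,
# under standard assumptions; not code released with the paper.
import numpy as np

rng = np.random.default_rng(3)
n, d, m = 40, 20, 1000                    # samples, input dim, hidden width
X = rng.standard_normal((n, d)) / np.sqrt(d)
W = rng.standard_normal((m, d)) / np.sqrt(d)

# K_ij = (x_i . x_j) * (1/m) * sum_r 1{w_r . x_i > 0} 1{w_r . x_j > 0}
act = (X @ W.T > 0).astype(float)         # n x m activation patterns
K = (X @ X.T) * (act @ act.T) / m         # empirical NTK Gram matrix
print("lambda_min(NTK Gram):", np.linalg.eigvalsh(K).min())
```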

Sharp asymptotics on the compression of two-layer neural networks

no code implementations • 17 May 2022 • Mohammad Hossein Amani, Simone Bombari, Marco Mondelli, Rattana Pukdee, Stefano Rini

In this paper, we study the compression of a target two-layer neural network with N nodes into a compressed network with M < N nodes.

Tasks: Vocal Bursts Valence Prediction
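To make the N-to-M compression setup concrete, here is an illustrative baseline only (the paper's contribution is a sharp asymptotic analysis, not this procedure): keep a random subset of M hidden neurons of a two-layer ReLU target network and refit the output weights by least squares.

```python
# Naive compression baseline: subsample M of N hidden neurons,
# then refit the output layer (illustration only).
import numpy as np

rng = np.random.default_rng(4)
d, N, M, n = 10, 200, 20, 1000
W = rng.standard_normal((N, d))            # target first-layer weights
a = rng.standard_normal(N) / np.sqrt(N)    # target output weights
X = rng.standard_normal((n, d))            # probe inputs
y = np.maximum(X @ W.T, 0.0) @ a           # target network outputs

keep = rng.choice(N, size=M, replace=False)
H = np.maximum(X @ W[keep].T, 0.0)         # features of the M kept neurons
b, *_ = np.linalg.lstsq(H, y, rcond=None)  # refit output layer
print("relative L2 error of compressed net:",
      np.linalg.norm(H @ b - y) / np.linalg.norm(y))
```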

Towards Differential Relational Privacy and its use in Question Answering

no code implementations • 30 Mar 2022 • Simone Bombari, Alessandro Achille, Zijian Wang, Yu-Xiang Wang, Yusheng Xie, Kunwar Yashraj Singh, Srikar Appalaraju, Vijay Mahadevan, Stefano Soatto

While bounding general memorization can have detrimental effects on the performance of a trained model, bounding relational memorization (RM) does not prevent effective learning.

Tasks: Memorization, Question Answering
