no code implementations • 22 Jan 2023 • Omar Fawzi, Aadil Oufkir, Daniel Stilck França
In the adaptive setting, we show a lower bound of $\Omega(2^{2.5n}\epsilon^{-2})$ for $\epsilon=\mathcal{O}(2^{-n})$, and a lower bound of $\Omega(2^{2n}\epsilon^{-2})$ for any $\epsilon > 0$.
no code implementations • NeurIPS 2021 • Aadil Oufkir, Omar Fawzi, Nicolas Flammarion, Aurélien Garivier
For a general alphabet size $n$, we give a sequential algorithm that uses no more samples than its batch counterpart, and possibly fewer if the actual distance between $\mathcal{D}_1$ and $\mathcal{D}_2$ is larger than $\epsilon$.
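The idea of a sequential test that matches the batch sample complexity but stops early on far-apart distributions can be illustrated with a toy sketch. Everything below is a hedged illustration, not the paper's algorithm: it uses the plain empirical $\ell_1$ distance and an arbitrary early-stopping threshold of $2\epsilon$, processing samples in doubling batches.

```python
from collections import Counter


def empirical_l1(s1, s2, alphabet):
    """Empirical l1 distance between two samples over a finite alphabet."""
    c1, c2 = Counter(s1), Counter(s2)
    n1, n2 = len(s1), len(s2)
    return sum(abs(c1[a] / n1 - c2[a] / n2) for a in alphabet)


def sequential_closeness_test(sample1, sample2, alphabet, eps, batch=100):
    """Illustrative sequential closeness test.

    Looks at samples in doubling batches and rejects early when the
    empirical distance clearly exceeds eps; otherwise it falls back to a
    batch-style decision on all available samples.  Returns the verdict
    and the number of samples actually used.
    """
    m = batch
    limit = min(len(sample1), len(sample2))
    while m <= limit:
        if empirical_l1(sample1[:m], sample2[:m], alphabet) > 2 * eps:
            return "far", m  # clearly far: stop early, saving samples
        m *= 2
    d = empirical_l1(sample1[:limit], sample2[:limit], alphabet)
    return ("far", limit) if d > eps else ("close", limit)
```

When the two distributions are very different, the test rejects after the first batch; when they coincide, it consumes the full budget, mirroring the "no more samples than the batch counterpart, possibly fewer if the distance is large" behaviour described above.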
1 code implementation • 18 May 2020 • Hyejung H. Jee, Carlo Sparaciari, Omar Fawzi, Mario Berta
We give a converging semidefinite programming hierarchy of outer approximations for the set of quantum correlations of fixed dimension and derive analytical bounds on the convergence speed of the hierarchy.
Quantum Physics
no code implementations • NeurIPS 2019 • Alhussein Fawzi, Mateusz Malinowski, Hamza Fawzi, Omar Fawzi
In this work, we introduce a machine learning-based method to search for a dynamic proof within these proof systems.
1 code implementation • 29 May 2018 • Frédéric Dupuis, Omar Fawzi
The entropy accumulation theorem states that the smooth min-entropy of an $n$-partite system $A = (A_1, \ldots, A_n)$ is lower-bounded by the sum of the von Neumann entropies of suitably chosen conditional states up to corrections that are sublinear in $n$.
Quantum Physics
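Schematically, the statement sketched above can be written as follows, where $E$ denotes side information, the $\omega_i$ are the suitably chosen conditional states, and $c\sqrt{n}$ stands in for the sublinear correction (this notation is assumed here, not taken from the abstract):

```latex
H_{\min}^{\varepsilon}(A_1 \ldots A_n \mid E)
  \;\geq\; \sum_{i=1}^{n} H(A_i \mid E)_{\omega_i} \;-\; c\sqrt{n}
```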
no code implementations • NeurIPS 2018 • Alhussein Fawzi, Hamza Fawzi, Omar Fawzi
Despite achieving impressive performance, state-of-the-art classifiers remain highly vulnerable to small, imperceptible, adversarial perturbations.
no code implementations • 22 Feb 2018 • Jean-Yves Franceschi, Alhussein Fawzi, Omar Fawzi
We study the robustness of classifiers to various kinds of random noise models.
no code implementations • ICLR 2018 • Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal Frossard, Stefano Soatto
Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers.
10 code implementations • CVPR 2017 • Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal Frossard
Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability.
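The iterative flavour of such a construction — accumulate a single perturbation by repeatedly nudging it just enough to fool each not-yet-fooled input, then project back onto a small ball — can be sketched on a toy linear classifier. This is only an illustration in the spirit of the paper: the closed-form hyperplane-crossing step below stands in for the DeepFool-style inner step, and all names and parameters are assumptions.

```python
import math


def classify(w, x):
    """Toy linear classifier: sign of <w, x>."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1


def min_flip_perturbation(w, x, overshoot=1e-3):
    """Closed-form minimal l2 step crossing the hyperplane <w, x> = 0."""
    dot = sum(wi * xi for wi, xi in zip(w, x))
    norm2 = sum(wi * wi for wi in w)
    scale = -(1 + overshoot) * dot / norm2
    return [scale * wi for wi in w]


def project_l2(v, radius):
    """Project v onto the l2 ball of the given radius."""
    n = math.sqrt(sum(vi * vi for vi in v))
    return v if n <= radius else [vi * radius / n for vi in v]


def universal_perturbation(images, w, radius, passes=5):
    """Accumulate one small perturbation v that flips the classifier on
    most inputs, keeping ||v|| <= radius throughout."""
    v = [0.0] * len(w)
    for _ in range(passes):
        for x in images:
            xp = [xi + vi for xi, vi in zip(x, v)]
            if classify(w, xp) == classify(w, x):  # not yet fooled
                dv = min_flip_perturbation(w, xp)
                v = project_l2([vi + di for vi, di in zip(v, dv)], radius)
    return v
```

On a handful of points classified to one side of a hyperplane, a few passes produce a single vector, bounded in norm, that flips all of them, which is the "universal" phenomenon the entry describes.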
no code implementations • 6 Jul 2016 • Frederic Dupuis, Omar Fawzi, Renato Renner
We ask whether entropy accumulates, in the sense that the operationally relevant total uncertainty about an $n$-partite system $A = (A_1, \ldots, A_n)$ corresponds to the sum of the entropies of its parts $A_i$.
Quantum Physics Information Theory
no code implementations • 9 Feb 2015 • Alhussein Fawzi, Omar Fawzi, Pascal Frossard
To the best of our knowledge, our results provide the first theoretical work that addresses the phenomenon of adversarial instability recently observed for deep networks.
no code implementations • 8 Nov 2011 • Mario Berta, Omar Fawzi, Stephanie Wehner
Yet, when considering a physical randomness source, X is itself ultimately the result of a measurement on an underlying quantum system.
Quantum Physics