no code implementations • 29 Sep 2023 • Amir Hossein Saberi, Amir Najafi, Alireza Heidari, Mohammad Hosein Movasaghinia, Abolfazl Motahari, Babak H. Khalaj
From a theoretical standpoint, we apply our framework to the problem of classifying a mixture of two Gaussians in $\mathbb{R}^d$, where, in addition to the $m$ independent labeled samples from the true distribution, a set of $n$ (usually with $n\gg m$) out-of-domain, unlabeled samples is given as well.
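A minimal sketch of this data-generating setting, assuming a balanced mixture of $N(+\mu, I)$ and $N(-\mu, I)$ and modeling the out-of-domain source as the same mixture with shifted means (the `shift` parameter and function names are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_two_gaussian_mixture(num, d, mu, shift=0.0, labeled=True, rng=rng):
    """Draw samples from a balanced mixture of N(+mu, I) and N(-mu, I) in R^d.

    `shift` perturbs the means to mimic an out-of-domain source (an
    illustrative assumption about the domain shift).
    """
    labels = rng.integers(0, 2, size=num)        # latent component per sample
    signs = 2 * labels - 1                       # map {0, 1} -> {-1, +1}
    centers = signs[:, None] * (mu + shift)      # component means
    x = centers + rng.standard_normal((num, d))  # isotropic Gaussian noise
    return (x, labels) if labeled else x

d, m, n = 5, 50, 5000                     # n >> m, as in the setting above
mu = np.ones(d) / np.sqrt(d)              # unit-norm mean direction
X_lab, y_lab = sample_two_gaussian_mixture(m, d, mu)                      # in-domain, labeled
X_unl = sample_two_gaussian_mixture(n, d, mu, shift=0.2, labeled=False)   # out-of-domain, unlabeled
```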
no code implementations • 9 Sep 2022 • Amir Hossein Saberi, Amir Najafi, Seyed Abolfazl Motahari, Babak H. Khalaj
Also, we theoretically show that in order to achieve this bound, it is sufficient to have $n\ge\left(K^2/\varepsilon^2\right)e^{\Omega\left(K/\mathrm{SNR}^2\right)}$ samples, where $\mathrm{SNR}$ stands for the signal-to-noise ratio.
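The bound above can be evaluated numerically; the sketch below takes the hidden $\Omega(\cdot)$ constant to be $1$, which is an assumption made purely for illustration:

```python
import math

def sample_bound(K, eps, snr):
    """Evaluate n = (K^2 / eps^2) * exp(K / SNR^2), i.e. the sufficient
    sample size with the Omega(.) constant set to 1 (an assumption)."""
    return (K ** 2 / eps ** 2) * math.exp(K / snr ** 2)
```

Note how the requirement grows only polynomially in $K/\varepsilon$ but exponentially as the SNR drops.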
no code implementations • 2 Nov 2021 • Hanie Barghi, Amir Najafi, Seyed Abolfazl Motahari
This paper aims to propose and theoretically analyze a new distributed scheme for sparse linear regression and feature selection.
no code implementations • 27 Nov 2020 • Armin Karamzade, Amir Najafi, Seyed Abolfazl Motahari
In this paper, we extend a class of celebrated regularization techniques originally proposed for feed-forward neural networks, namely Input Mixup (Zhang et al., 2017) and Manifold Mixup (Verma et al., 2018), to the realm of Recurrent Neural Networks (RNNs).
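For context, Input Mixup convex-combines random pairs of inputs and their one-hot labels with a Beta-distributed weight; a minimal NumPy sketch (the function name and toy batch are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def input_mixup(x, y, alpha=0.2, rng=rng):
    """Input Mixup (Zhang et al., 2017): convex-combine random pairs of
    examples and their one-hot labels with a Beta(alpha, alpha) weight."""
    lam = rng.beta(alpha, alpha)       # mixing coefficient in (0, 1)
    perm = rng.permutation(len(x))     # random pairing of examples
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix

# toy batch: 4 examples, 3 features, 2 classes (one-hot labels)
x = rng.standard_normal((4, 3))
y = np.eye(2)[[0, 1, 0, 1]]
x_mix, y_mix = input_mixup(x, y)
```

Manifold Mixup applies the same interpolation to hidden representations at a randomly chosen layer rather than to the raw inputs.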
no code implementations • NeurIPS 2019 • Amir Najafi, Shin-ichi Maeda, Masanori Koyama, Takeru Miyato
What is the role of unlabeled data in an inference problem, when the presumed underlying distribution is adversarially perturbed?
1 code implementation • ICLR 2019 • Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Aaron Courville, Ioannis Mitliagkas, Yoshua Bengio
Because the hidden states are learned, this has the important effect of encouraging the hidden states for each class to be concentrated such that interpolations within the same class, or between two different classes, do not intersect the real data points from other classes.
no code implementations • 18 Oct 2018 • Amir Najafi, Saeed Ilchi, Amir H. Saberi, Seyed Abolfazl Motahari, Babak H. Khalaj, Hamid R. Rabiee
We study the sample complexity of learning a high-dimensional simplex from a set of points uniformly sampled from its interior.
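A standard way to generate such data: points drawn with Dirichlet$(1,\dots,1)$ barycentric weights are uniform inside the simplex spanned by the vertices. The sketch below illustrates the sampling model (function name and example are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_simplex_interior(vertices, num, rng=rng):
    """Uniformly sample points inside the simplex spanned by K+1 vertices
    in R^d, using Dirichlet(1, ..., 1) barycentric weights."""
    k_plus_1 = vertices.shape[0]
    w = rng.dirichlet(np.ones(k_plus_1), size=num)  # uniform barycentric weights
    return w @ vertices                             # map weights to ambient space

# example: the standard 2-simplex (a triangle) embedded in R^3
V = np.eye(3)
pts = sample_simplex_interior(V, 1000)
```

The learning problem is then to recover the vertices `V` from the observed points `pts` alone.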
12 code implementations • ICLR 2019 • Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, Aaron Courville, David Lopez-Paz, Yoshua Bengio
Deep neural networks excel at learning the training data, but often provide incorrect and confident predictions when evaluated on slightly different test examples.
no code implementations • 5 Oct 2017 • Amir Najafi, Abolfazl Motahari, Hamid R. Rabiee
A Bernoulli Mixture Model (BMM) is a finite mixture of random binary vectors with independent dimensions.
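The generative process behind this definition can be sketched as follows: pick a mixture component by its weight, then draw each binary coordinate independently from that component's Bernoulli parameters (the function name and toy parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bmm(num, weights, probs, rng=rng):
    """Sample binary vectors from a Bernoulli Mixture Model: choose a
    component by `weights`, then draw each of the d coordinates
    independently with that component's Bernoulli parameters."""
    weights = np.asarray(weights)
    probs = np.asarray(probs)                            # shape (K, d)
    comps = rng.choice(len(weights), size=num, p=weights)
    u = rng.random((num, probs.shape[1]))
    return (u < probs[comps]).astype(int), comps

# toy BMM: two components over d = 4 binary dimensions
x, z = sample_bmm(1000, [0.3, 0.7], [[0.9, 0.1, 0.9, 0.1],
                                     [0.1, 0.9, 0.1, 0.9]])
```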