no code implementations • 1 May 2024 • Gholamali Aminian, Amirhossein Bagheri, Mahyar JafariNodeh, Radmehr Karimian, Mohammad-Hossein Yassaee
This paper investigates a range of empirical risk functions and regularization methods suitable for self-training methods in semi-supervised learning.
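As a point of reference, the generic self-training loop that such risk functions and regularizers plug into can be sketched as follows. This is a minimal illustrative sketch, not the paper's method: the model, confidence threshold, and retraining scheme are all assumptions of this example.

```python
# Minimal self-training sketch: train on labeled data, pseudo-label
# confident unlabeled points, retrain on the augmented set.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, confidence=0.9, rounds=3):
    model = LogisticRegression().fit(X_lab, y_lab)
    for _ in range(rounds):
        probs = model.predict_proba(X_unlab)
        mask = probs.max(axis=1) >= confidence       # keep confident pseudo-labels
        if not mask.any():
            break
        X_aug = np.vstack([X_lab, X_unlab[mask]])
        y_aug = np.concatenate([y_lab, probs[mask].argmax(axis=1)])
        model = LogisticRegression().fit(X_aug, y_aug)  # retrain on labeled + pseudo-labeled
    return model
```

The empirical risk functions and regularization methods studied in the paper would replace the plain retraining step above.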
no code implementations • 10 Feb 2024 • Gholamali Aminian, Yixuan He, Gesine Reinert, Łukasz Szpruch, Samuel N. Cohen
This work provides a theoretical framework for assessing the generalization error of graph classification tasks via graph neural networks in the over-parameterized regime, where the number of parameters exceeds the number of data points.
no code implementations • 20 Jun 2023 • Gholamali Aminian, Samuel N. Cohen, Łukasz Szpruch
We propose a novel framework for exploring weak and $L_2$ generalization errors of algorithms through the lens of differential calculus on the space of probability measures.
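One standard way to formalize these two notions, given a hypothesis $W$ trained on a sample $S$, is sketched below; the paper's precise definitions may differ, so take these as assumed working definitions rather than quotations:

$$\overline{\mathrm{gen}}_{\mathrm{weak}} := \mathbb{E}\big[L_\mu(W) - L_S(W)\big], \qquad \mathrm{gen}_{L_2} := \Big(\mathbb{E}\big[(L_\mu(W) - L_S(W))^2\big]\Big)^{1/2},$$

where $L_\mu$ is the population risk and $L_S$ the empirical risk. The weak error tracks the bias of the empirical risk, while the $L_2$ error also controls its fluctuations.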
no code implementations • 27 Apr 2023 • Yuheng Bu, Harsha Vardhan Tetali, Gholamali Aminian, Miguel Rodrigues, Gregory Wornell
We analyze the generalization ability of joint-training meta learning algorithms via the Gibbs algorithm.
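For context, the Gibbs algorithm is the randomized learner that, given a training set $S$, samples a hypothesis from the Gibbs posterior

$$P_{W|S}(w) \;\propto\; \pi(w)\, e^{-\gamma \hat{L}_S(w)},$$

where $\pi$ is a prior, $\hat{L}_S$ is the empirical risk, and $\gamma > 0$ is the inverse temperature. In the joint-training meta-learning setting analyzed here, $\hat{L}_S$ would aggregate the empirical risks across tasks; the exact aggregation is not reproduced in this sketch.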
no code implementations • 15 Oct 2022 • Haiyun He, Gholamali Aminian, Yuheng Bu, Miguel Rodrigues, Vincent Y. F. Tan
Our findings offer the new insight that the generalization performance of SSL with pseudo-labeling is affected not only by the information between the output hypothesis and the input training data, but also by the information shared between the labeled and pseudo-labeled data samples.
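As a baseline for comparison (a sketch of the classical result, not the paper's bound), information-theoretic analysis bounds the expected generalization error of a learner with $\sigma$-sub-Gaussian loss on $n$ training samples by the mutual information between the hypothesis $W$ and the sample $S$:

$$\big|\overline{\mathrm{gen}}\big| \;\le\; \sqrt{\frac{2\sigma^2\, I(W;S)}{n}}.$$

The finding quoted above says that for pseudo-labeling SSL, a bound of this shape must be augmented with a term capturing the information shared between the labeled and pseudo-labeled samples.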
no code implementations • 2 Oct 2022 • Gholamali Aminian, Saeed Masiha, Laura Toni, Miguel R. D. Rodrigues
Additionally, we demonstrate how our auxiliary distribution method can be used to derive upper bounds on the excess risk of some learning algorithms in the supervised learning context, as well as on the generalization error under the distribution mismatch scenario, where the mismatch between the test and training data distributions is modeled as an $\alpha$-Jensen-Shannon or $\alpha$-Rényi divergence.
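For reference, the two divergences named above have standard definitions (assumed here rather than quoted from the paper). The $\alpha$-Jensen-Shannon divergence between distributions $P$ and $Q$ is

$$D_{JS}^{\alpha}(P\|Q) = \alpha\, D_{KL}\!\big(P \,\big\|\, \alpha P + (1-\alpha)Q\big) + (1-\alpha)\, D_{KL}\!\big(Q \,\big\|\, \alpha P + (1-\alpha)Q\big),$$

which recovers the usual Jensen-Shannon divergence at $\alpha = 1/2$, and the $\alpha$-Rényi divergence is

$$D_{\alpha}(P\|Q) = \frac{1}{\alpha - 1}\,\log \int \Big(\frac{dP}{dQ}\Big)^{\alpha} dQ, \qquad \alpha \in (0,1)\cup(1,\infty).$$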
no code implementations • 15 Sep 2022 • Gholamali Aminian, Armin Behnamnia, Roberto Vega, Laura Toni, Chengchun Shi, Hamid R. Rabiee, Omar Rivasplata, Miguel R. D. Rodrigues
We propose learning methods for problems where feedback is missing for some samples, so that the logged data contain both samples with observed feedback and samples with missing feedback.
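To make the setting concrete, here is a hedged sketch of an inverse-propensity-scored (IPS) value estimate computed only on the samples whose feedback was logged; how the missing-feedback samples are exploited is the paper's contribution and is not reproduced here, so the mask-only treatment below is an assumption of this example.

```python
# IPS value estimate over logged bandit data with a missing-feedback mask.
import numpy as np

def ips_estimate(rewards, logging_probs, target_probs, observed):
    """Average IPS value using only samples with observed feedback.

    observed: boolean mask marking samples whose feedback was logged.
    """
    w = target_probs[observed] / logging_probs[observed]   # importance weights
    return np.mean(w * rewards[observed])
```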
no code implementations • 24 Feb 2022 • Gholamali Aminian, Mahed Abroshan, Mohammad Mahdi Khalili, Laura Toni, Miguel R. D. Rodrigues
A common assumption in semi-supervised learning is that the labeled, unlabeled, and test data are drawn from the same distribution.
no code implementations • 24 Feb 2022 • Gholamali Aminian, Yuheng Bu, Gregory Wornell, Miguel Rodrigues
Due to the convexity of the information measures, the proposed bounds in terms of Wasserstein distance and total variation distance are shown to be tighter than their counterparts based on individual samples in the literature.
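The convexity step behind this claim can be sketched with a generic Jensen argument (assumed here, not copied from the paper): for a distance $D(\cdot,\cdot)$ that is convex in its first argument, as the total variation distance and the order-one Wasserstein distance are,

$$D\Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} P_{W|Z_i}, \; P_W\Big) \;\le\; \frac{1}{n}\sum_{i=1}^{n} D\big(P_{W|Z_i}, \; P_W\big),$$

so a bound stated in terms of the mixture on the left can never exceed the corresponding average of individual-sample bounds.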
no code implementations • NeurIPS 2021 • Gholamali Aminian, Yuheng Bu, Laura Toni, Miguel Rodrigues, Gregory Wornell
Various approaches have been developed to upper bound the generalization error of a supervised learning algorithm.
no code implementations • 2 Nov 2021 • Yuheng Bu, Gholamali Aminian, Laura Toni, Miguel Rodrigues, Gregory Wornell
We provide an information-theoretic analysis of the generalization ability of Gibbs-based transfer learning algorithms by focusing on two popular transfer learning approaches, $\alpha$-weighted-ERM and two-stage-ERM.
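For orientation, the two objectives can be written as follows; this is a standard formulation, and the weighting convention is an assumption of this sketch. $\alpha$-weighted-ERM minimizes a convex combination of the source and target empirical risks,

$$\hat{w} = \arg\min_{w}\; \big[(1-\alpha)\, \hat{L}_{S_{\mathrm{source}}}(w) + \alpha\, \hat{L}_{S_{\mathrm{target}}}(w)\big],$$

while two-stage-ERM first minimizes the source risk and then refits on the target sample alone. The Gibbs-based versions analyzed in the paper replace each minimization with sampling from the corresponding Gibbs posterior.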
no code implementations • 28 Jul 2021 • Gholamali Aminian, Yuheng Bu, Laura Toni, Miguel R. D. Rodrigues, Gregory Wornell
As a result, they may fail to characterize the exact generalization ability of a learning algorithm.
no code implementations • 3 Feb 2021 • Gholamali Aminian, Laura Toni, Miguel R. D. Rodrigues
Generalization error bounds are critical to understanding the performance of machine learning models.
no code implementations • 23 Oct 2020 • Gholamali Aminian, Laura Toni, Miguel R. D. Rodrigues
Generalization error bounds are critical to understanding the performance of machine learning models.