Search Results for author: Gholamali Aminian

Found 14 papers, 0 papers with code

Robust Semi-supervised Learning via $f$-Divergence and $\alpha$-Rényi Divergence

no code implementations • 1 May 2024 • Gholamali Aminian, Amirhossein Bagheri, Mahyar JafariNodeh, Radmehr Karimian, Mohammad-Hossein Yassaee

This paper investigates a range of empirical risk functions and regularization methods suitable for self-training in semi-supervised learning.

Pseudo Label
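Both divergence families in the title have simple closed forms for discrete distributions. Below is a minimal sketch of the $\alpha$-Rényi divergence (the standard definition; the function name and example values are ours, for illustration, not taken from the paper):

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """alpha-Renyi divergence D_alpha(P || Q) for discrete P, Q,
    defined for alpha > 0, alpha != 1; it recovers the KL
    divergence in the limit alpha -> 1."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

# Hypothetical example distributions, for illustration only.
p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]
print(renyi_divergence(p, q, alpha=0.5))
```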

Generalization Error of Graph Neural Networks in the Mean-field Regime

no code implementations • 10 Feb 2024 • Gholamali Aminian, Yixuan He, Gesine Reinert, Łukasz Szpruch, Samuel N. Cohen

This work provides a theoretical framework for assessing the generalization error of graph classification tasks via graph neural networks in the over-parameterized regime, where the number of parameters surpasses the quantity of data points.

Graph Classification
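For context, the generalization error studied throughout these papers is the standard gap between population and empirical risk. In our notation (not the paper's), for a hypothesis $W$ trained on a sample $S = (Z_1, \dots, Z_n) \sim \mu^{\otimes n}$:

```latex
% Expected generalization error: population risk minus empirical risk.
\[
\overline{\mathrm{gen}}(P_{W \mid S}, \mu)
  = \mathbb{E}_{W,S}\big[ L_\mu(W) - L_S(W) \big],
\quad
L_S(w) = \frac{1}{n}\sum_{i=1}^{n} \ell(w, Z_i),
\quad
L_\mu(w) = \mathbb{E}_{Z \sim \mu}\big[\ell(w, Z)\big].
\]
```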

Mean-field Analysis of Generalization Errors

no code implementations • 20 Jun 2023 • Gholamali Aminian, Samuel N. Cohen, Łukasz Szpruch

We propose a novel framework for exploring weak and $L_2$ generalization errors of algorithms through the lens of differential calculus on the space of probability measures.
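A sketch of the two error notions named in the abstract, in the same notation as above (our convention; the paper's precise definitions may differ in details):

```latex
% Weak error: the expected risk gap; L2 error: its root-mean-square.
\[
\mathrm{gen}_{\mathrm{weak}}
  = \mathbb{E}_{W,S}\big[ L_\mu(W) - L_S(W) \big],
\qquad
\mathrm{gen}_{L_2}
  = \Big( \mathbb{E}_{W,S}\big[ \big(L_\mu(W) - L_S(W)\big)^2 \big] \Big)^{1/2}.
\]
```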

On the Generalization Error of Meta Learning for the Gibbs Algorithm

no code implementations • 27 Apr 2023 • Yuheng Bu, Harsha Vardhan Tetali, Gholamali Aminian, Miguel Rodrigues, Gregory Wornell

We analyze the generalization ability of joint-training meta learning algorithms via the Gibbs algorithm.

Meta-Learning
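The Gibbs algorithm appearing here and in several entries below is the standard Gibbs posterior; in our notation, with prior $\pi$, inverse temperature $\gamma > 0$, and empirical risk $L_S$:

```latex
% Gibbs posterior: the prior reweighted by the exponentiated
% negative empirical risk.
\[
P_{W \mid S}(w) \;\propto\; \pi(w)\, e^{-\gamma L_S(w)}.
\]
```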

How Does Pseudo-Labeling Affect the Generalization Error of the Semi-Supervised Gibbs Algorithm?

no code implementations • 15 Oct 2022 • Haiyun He, Gholamali Aminian, Yuheng Bu, Miguel Rodrigues, Vincent Y. F. Tan

Our findings offer new insights that the generalization performance of SSL with pseudo-labeling is affected not only by the information between the output hypothesis and the input training data but also by the information shared between the labeled and pseudo-labeled data samples.

regression
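As a concrete reference point, one round of self-training with pseudo-labels looks like the sketch below (a generic scikit-learn recipe for background, not the paper's Gibbs-algorithm analysis; the classifier and confidence threshold are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_round(X_lab, y_lab, X_unlab, threshold=0.9):
    """One round of self-training: fit on the labeled data,
    pseudo-label the unlabeled pool where the model is confident,
    then refit on the augmented sample. Inputs are numpy arrays."""
    clf = LogisticRegression().fit(X_lab, y_lab)
    proba = clf.predict_proba(X_unlab)
    confident = proba.max(axis=1) >= threshold
    y_pseudo = clf.predict(X_unlab[confident])
    X_aug = np.vstack([X_lab, X_unlab[confident]])
    y_aug = np.concatenate([y_lab, y_pseudo])
    return LogisticRegression().fit(X_aug, y_aug)
```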

Learning Algorithm Generalization Error Bounds via Auxiliary Distributions

no code implementations • 2 Oct 2022 • Gholamali Aminian, Saeed Masiha, Laura Toni, Miguel R. D. Rodrigues

Additionally, we demonstrate how our auxiliary distribution method can be used to derive upper bounds on the excess risk of some learning algorithms in the supervised learning context, and on the generalization error under distribution mismatch in supervised learning, where the mismatch is modeled as an $\alpha$-Jensen-Shannon or $\alpha$-Rényi divergence between the distributions of the test and training data samples.
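For reference, a sketch of the $\alpha$-Jensen-Shannon divergence for discrete distributions, as we understand the standard definition (KL divergences of $P$ and $Q$ to their $\alpha$-weighted mixture; function names and conventions are ours):

```python
import numpy as np

def kl_divergence(p, q):
    """KL divergence for discrete distributions with full support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * np.log(p / q))

def alpha_js_divergence(p, q, alpha):
    """alpha-Jensen-Shannon divergence: alpha-weighted KL divergences
    of P and Q to their mixture; alpha = 0.5 recovers the usual JSD."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = alpha * p + (1.0 - alpha) * q
    return alpha * kl_divergence(p, m) + (1.0 - alpha) * kl_divergence(q, m)
```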

Semi-supervised Batch Learning From Logged Data

no code implementations • 15 Sep 2022 • Gholamali Aminian, Armin Behnamnia, Roberto Vega, Laura Toni, Chengchun Shi, Hamid R. Rabiee, Omar Rivasplata, Miguel R. D. Rodrigues

We propose learning methods for problems where feedback is missing for some samples, so the logged data contain both samples with feedback and samples with missing feedback.

counterfactual
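A standard starting point for learning from logged bandit feedback is the inverse-propensity-scoring (IPS) estimator sketched below (generic off-policy background, not the paper's method, which additionally handles samples whose feedback is missing):

```python
import numpy as np

def ips_value_estimate(rewards, logging_propensities, target_probs):
    """IPS estimate of a target policy's value from logged data:
    reweight each observed reward by the ratio of the target policy's
    action probability to the logging policy's propensity."""
    weights = np.asarray(target_probs, float) / np.asarray(logging_propensities, float)
    return float(np.mean(weights * np.asarray(rewards, float)))
```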

An Information-theoretical Approach to Semi-supervised Learning under Covariate-shift

no code implementations • 24 Feb 2022 • Gholamali Aminian, Mahed Abroshan, Mohammad Mahdi Khalili, Laura Toni, Miguel R. D. Rodrigues

A common assumption in semi-supervised learning is that the labeled, unlabeled, and test data are drawn from the same distribution.
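Covariate shift relaxes that assumption: the marginal distribution over inputs changes between training and test, while the conditional label distribution is shared. In symbols (our notation):

```latex
% Covariate shift: input marginals differ, label conditional is shared.
\[
p_{\mathrm{train}}(x) \neq p_{\mathrm{test}}(x),
\qquad
p_{\mathrm{train}}(y \mid x) = p_{\mathrm{test}}(y \mid x).
\]
```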

Tighter Expected Generalization Error Bounds via Convexity of Information Measures

no code implementations • 24 Feb 2022 • Gholamali Aminian, Yuheng Bu, Gregory Wornell, Miguel Rodrigues

Due to the convexity of the information measures, the proposed bounds in terms of Wasserstein distance and total variation distance are shown to be tighter than their counterparts based on individual samples in the literature.
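Of the two information measures named here, total variation distance is the simplest to state; a minimal sketch for discrete distributions (standard definition, function name ours):

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance between discrete distributions:
    half the L1 distance between their probability vectors."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 0.5 * np.sum(np.abs(p - q))
```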

An Exact Characterization of the Generalization Error for the Gibbs Algorithm

no code implementations • NeurIPS 2021 • Gholamali Aminian, Yuheng Bu, Laura Toni, Miguel Rodrigues, Gregory Wornell

Various approaches have been developed to upper bound the generalization error of a supervised learning algorithm.
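A representative example of the upper bounds alluded to is the mutual-information bound of Xu and Raginsky (2017), given here as background rather than as this paper's result: for a loss that is $\sigma$-sub-Gaussian under the data distribution,

```latex
% Mutual-information generalization bound (Xu & Raginsky, 2017).
\[
\big| \overline{\mathrm{gen}} \big|
  \;\le\; \sqrt{ \frac{2\sigma^2 \, I(W; S)}{n} }.
\]
```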

Characterizing and Understanding the Generalization Error of Transfer Learning with Gibbs Algorithm

no code implementations • 2 Nov 2021 • Yuheng Bu, Gholamali Aminian, Laura Toni, Miguel Rodrigues, Gregory Wornell

We provide an information-theoretic analysis of the generalization ability of Gibbs-based transfer learning algorithms by focusing on two popular transfer learning approaches, $\alpha$-weighted-ERM and two-stage-ERM.

Transfer Learning
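As background, a sketch of the $\alpha$-weighted empirical risk that gives the first approach its name, in our notation (target sample $S$, source sample $S'$; the paper's exact weighting and its two-stage variant may differ in details). The Gibbs-based version draws the hypothesis from the corresponding Gibbs posterior:

```latex
% alpha-weighted empirical risk over target sample S and source S',
% and the associated Gibbs posterior.
\[
L_\alpha(w) = (1-\alpha)\, L_{S}(w) + \alpha\, L_{S'}(w),
\qquad
P_{W \mid S, S'}(w) \;\propto\; \pi(w)\, e^{-\gamma L_\alpha(w)}.
\]
```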
