Search Results for author: Sharu Theresa Jose

Found 17 papers, 1 paper with code

Adversarial Quantum Machine Learning: An Information-Theoretic Generalization Analysis

no code implementations · 31 Jan 2024 · Petros Georgiou, Sharu Theresa Jose, Osvaldo Simeone

Specifically, a quantum adversary maximizes the classifier's loss by transforming an input state $\rho(x)$ into a state $\lambda$ that is $\epsilon$-close to the original state $\rho(x)$ in $p$-Schatten distance.
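
For readers unfamiliar with the notation, the $p$-Schatten distance is the metric induced by the Schatten $p$-norm. As a reminder of the standard definitions (not a restatement of the paper's exact attack model),

$$\|A\|_p = \left(\mathrm{Tr}\,|A|^p\right)^{1/p}, \qquad |A| = \sqrt{A^\dagger A},$$

so the adversary is restricted to the ball $\{\lambda : \|\lambda - \rho(x)\|_p \le \epsilon\}$ around the clean state and, within that ball, selects the perturbed state that maximizes the classifier's loss.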

Quantum Machine Learning

Thompson Sampling for Stochastic Bandits with Noisy Contexts: An Information-Theoretic Regret Analysis

no code implementations · 21 Jan 2024 · Sharu Theresa Jose, Shana Moothedath

We explore a stochastic contextual linear bandit problem where the agent observes a noisy, corrupted version of the true context through a noise channel with an unknown noise parameter.
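
As a rough illustration of the setting, the sketch below runs a generic linear Thompson sampling loop in which the agent only sees the context after it has passed through a Gaussian noise channel. It is not the algorithm analyzed in the paper: in particular, the noise parameters are assumed known here, whereas the paper treats the noise parameter as unknown.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T = 5, 4, 2000                      # context dimension, arms, horizon
theta_true = rng.normal(size=(K, d))      # unknown per-arm reward parameters
sigma_ctx, sigma_rew = 0.5, 0.1           # noise-channel / reward noise (assumed known here)

# Per-arm Bayesian linear regression with a standard Gaussian prior.
A = np.stack([np.eye(d) for _ in range(K)])   # posterior precision matrices
b = np.zeros((K, d))

for t in range(T):
    c_true = rng.normal(size=d)                        # true context (never observed)
    c_obs = c_true + sigma_ctx * rng.normal(size=d)    # noisy context seen by the agent

    # Thompson sampling: draw one parameter per arm from its posterior,
    # then act greedily under the sampled model and the observed context.
    scores = []
    for k in range(K):
        cov = np.linalg.inv(A[k])
        mean = cov @ b[k]
        theta_s = rng.multivariate_normal(mean, sigma_rew**2 * cov)
        scores.append(theta_s @ c_obs)
    a = int(np.argmax(scores))

    # The reward depends on the true context; the posterior update only has the noisy one.
    r = theta_true[a] @ c_true + sigma_rew * rng.normal()
    A[a] += np.outer(c_obs, c_obs)
    b[a] += r * c_obs
```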

Thompson Sampling

Statistical Complexity of Quantum Learning

no code implementations · 20 Sep 2023 · Leonardo Banchi, Jason Luke Pereira, Sharu Theresa Jose, Osvaldo Simeone

Recent years have seen significant activity on the problem of using data for the purpose of learning properties of quantum systems or of processing classical or quantum data via quantum computing.

Learning Theory

Bayesian and Multi-Armed Contextual Meta-Optimization for Efficient Wireless Radio Resource Management

no code implementations · 16 Jan 2023 · Yunchuan Zhang, Osvaldo Simeone, Sharu Theresa Jose, Lorenzo Maggi, Alvaro Valcarce

Optimal resource allocation in modern communication networks calls for the optimization of objective functions that are only accessible via costly separate evaluations for each candidate solution.

Bayesian Optimization Management +1

Error Mitigation-Aided Optimization of Parameterized Quantum Circuits: Convergence Analysis

no code implementations · 23 Sep 2022 · Sharu Theresa Jose, Osvaldo Simeone

It is shown that quantum gate noise induces a non-zero error floor on the convergence error of SGD (evaluated with respect to a reference noiseless PQC), which depends on the number of noisy gates, the strength of the noise, and the eigenspectrum of the observable being measured and minimized.
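
The mechanism is that of SGD with a persistent gradient bias: hardware noise skews the measured gradients, and the skew does not average out over iterations. As a generic illustration only (standard biased-SGD behaviour under smoothness assumptions, not the paper's bound), if the gradient estimates satisfy $g_t = \nabla f(\theta_t) + b_t + \text{zero-mean noise}$ with $\|b_t\| \le B$, then

$$\min_{t \le T} \mathbb{E}\big\|\nabla f(\theta_t)\big\|^2 \;\lesssim\; \frac{1}{\sqrt{T}} + B^2,$$

so the residual term set by the bias persists no matter how many iterations are run.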

Transfer Learning for Quantum Classifiers: An Information-Theoretic Generalization Analysis

no code implementations · 17 Jan 2022 · Sharu Theresa Jose, Osvaldo Simeone

An upper bound on the optimality gap is derived in terms of the proposed task (dis)similarity measure, two Rényi mutual information terms between classical input and quantum embedding under source and target tasks, as well as a measure of complexity of the combined space of quantum embeddings and classifiers under the source task.
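
For reference, the Rényi terms in such bounds are built from the order-$\alpha$ Rényi divergence; as a reminder of the standard definition (several inequivalent notions of Rényi mutual information exist, so this is not necessarily the exact variant used in the paper),

$$D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}\,\log \mathbb{E}_{Q}\!\left[\left(\frac{dP}{dQ}\right)^{\alpha}\right], \qquad \alpha \in (0,1)\cup(1,\infty),$$

with a Rényi mutual information obtained, for instance, by measuring the divergence between the joint distribution of input and embedding and the product of their marginals.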

Binary Classification Quantum Machine Learning +1

Transfer Bayesian Meta-learning via Weighted Free Energy Minimization

1 code implementation · 20 Jun 2021 · Yunchuan Zhang, Sharu Theresa Jose, Osvaldo Simeone

Meta-learning optimizes the hyperparameters of a training procedure, such as its initialization, kernel, or learning rate, based on data sampled from a number of auxiliary tasks.
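
A minimal sketch of the generic problem setting (meta-learning a shared initialization across auxiliary tasks with a Reptile-style update; this illustrates the setting only and is not the weighted free-energy method proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_task():
    """Toy 1-D regression task: y = a*x + b with task-specific (a, b)."""
    a, b = rng.normal(), rng.normal()
    x = rng.uniform(-1, 1, size=20)
    return x, a * x + b

def inner_sgd(w, x, y, lr=0.1, steps=10):
    """Adapt a linear model y ~ w[0]*x + w[1] on one task, starting from w."""
    w = w.copy()
    for _ in range(steps):
        pred = w[0] * x + w[1]
        grad = np.array([np.mean(2 * (pred - y) * x), np.mean(2 * (pred - y))])
        w -= lr * grad
    return w

# Meta-training: nudge the shared initialization toward each task's adapted weights.
meta_w = np.zeros(2)       # the hyperparameter being meta-learned (an initialization)
meta_lr = 0.05
for _ in range(500):
    x, y = sample_task()
    adapted = inner_sgd(meta_w, x, y)
    meta_w += meta_lr * (adapted - meta_w)   # Reptile-style meta-update
```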

Gaussian Processes Meta-Learning +1

A unified PAC-Bayesian framework for machine unlearning via information risk minimization

no code implementations · 1 Jun 2021 · Sharu Theresa Jose, Osvaldo Simeone

Machine unlearning refers to mechanisms that can remove the influence of a subset of training data upon request from a trained model without incurring the cost of re-training from scratch.

Machine Unlearning

Information-Theoretic Analysis of Epistemic Uncertainty in Bayesian Meta-learning

no code implementations · 1 Jun 2021 · Sharu Theresa Jose, Sangwoo Park, Osvaldo Simeone

Under a Bayesian formulation, assuming a well-specified model, the two contributions can be exactly expressed (for the log-loss) or bounded (for more general losses) in terms of information-theoretic quantities (Xu and Raginsky, 2020).

Meta-Learning

An Information-Theoretic Analysis of the Impact of Task Similarity on Meta-Learning

no code implementations · 21 Jan 2021 · Sharu Theresa Jose, Osvaldo Simeone

The goal of the meta-learner is to ensure that the hyperparameters obtain a small loss when applied for training of a new task sampled from the task environment.

Meta-Learning

Free Energy Minimization: A Unified Framework for Modelling, Inference, Learning, and Optimization

no code implementations · 25 Nov 2020 · Sharu Theresa Jose, Osvaldo Simeone

The goal of these lecture notes is to review the problem of free energy minimization as a unified framework underlying the definition of maximum entropy modelling, generalized Bayesian inference, learning with latent variables, statistical learning analysis of generalization, and local optimization.
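
The unifying object is the (generalized) free energy of a distribution $q$ over model parameters. As a quick reminder of its standard form (generic notation, not necessarily the symbols used in the notes),

$$F(q) \;=\; \mathbb{E}_{q(\theta)}\big[L(\theta)\big] \;+\; \frac{1}{\beta}\,\mathrm{KL}\big(q(\theta)\,\|\,p(\theta)\big),$$

whose minimizer is the Gibbs (generalized) posterior $q^*(\theta) \propto p(\theta)\,e^{-\beta L(\theta)}$; different choices of the loss $L$, prior $p$, and temperature $1/\beta$ correspond to the settings listed above.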

Bayesian Inference

Transfer Meta-Learning: Information-Theoretic Bounds and Information Meta-Risk Minimization

no code implementations · 4 Nov 2020 · Sharu Theresa Jose, Osvaldo Simeone, Giuseppe Durisi

In this paper, we introduce the problem of transfer meta-learning, in which tasks are drawn from a target task environment during meta-testing that may differ from the source task environment observed during meta-training.

Inductive Bias Meta-Learning

Conditional Mutual Information-Based Generalization Bound for Meta Learning

no code implementations · 21 Oct 2020 · Arezou Rezazadeh, Sharu Theresa Jose, Giuseppe Durisi, Osvaldo Simeone

Meta-learning optimizes an inductive bias, typically in the form of the hyperparameters of a base-learning algorithm, by observing data from a finite number of related tasks.

Inductive Bias Meta-Learning

Information-Theoretic Generalization Bounds for Meta-Learning and Applications

no code implementations · 9 May 2020 · Sharu Theresa Jose, Osvaldo Simeone

Extending the existing work for conventional learning, an upper bound on the meta-generalization gap is derived for the former class that depends on the mutual information (MI) between the output of the meta-learning algorithm and its input meta-training data.
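
A canonical instance of the conventional-learning bound being extended is the mutual-information bound of Xu and Raginsky (2017): for a loss that is $\sigma$-sub-Gaussian under the data distribution, an algorithm $P_{W|S}$ trained on $n$ i.i.d. samples $S$ satisfies

$$\big|\,\mathbb{E}[\mathrm{gen}(W, S)]\,\big| \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S; W)},$$

and, in the same spirit, the paper's bound involves the mutual information between the meta-learner's output hyperparameters and the meta-training data.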

Generalization Bounds Inductive Bias +1
