no code implementations • 31 Jan 2024 • Petros Georgiou, Sharu Theresa Jose, Osvaldo Simeone
Specifically, a quantum adversary maximizes the classifier's loss by transforming an input state $\rho(x)$ into a state $\lambda$ that is $\epsilon$-close to the original state $\rho(x)$ in $p$-Schatten distance.
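For concreteness, the adversarial problem described above can be written as the following worked form (a sketch consistent with the abstract; $\ell$ denotes the classifier's loss, and the constraint uses the Schatten $p$-norm):

```latex
% Hedged sketch of the adversarial perturbation problem described above.
% \ell is the classifier's loss; the constraint uses the Schatten p-norm
% \|A\|_p = (\mathrm{Tr}\,|A|^p)^{1/p}, so that \lambda is \epsilon-close to \rho(x).
\max_{\lambda \,:\, \|\lambda - \rho(x)\|_p \le \epsilon} \ \ell\big(\lambda, y\big)
```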
no code implementations • 21 Jan 2024 • Sharu Theresa Jose, Shana Moothedath
We explore a stochastic contextual linear bandit problem where the agent observes a noisy, corrupted version of the true context through a noise channel with an unknown noise parameter.
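As a minimal illustration of the setting (not the paper's algorithm), the sketch below simulates a linear bandit whose agent only observes contexts passed through an additive Gaussian noise channel; the LinUCB-style baseline and all names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_arms, T = 5, 4, 2000
theta = rng.normal(size=(n_arms, d))      # unknown per-arm reward parameters
sigma_channel = 0.5                       # noise-channel strength, unknown to the agent

A = [np.eye(d) for _ in range(n_arms)]    # ridge-regression statistics per arm
b = [np.zeros(d) for _ in range(n_arms)]

for t in range(T):
    x_true = rng.normal(size=d)
    x_obs = x_true + sigma_channel * rng.normal(size=d)  # corrupted context

    # LinUCB-style arm choice, computed from the *observed* context only
    ucb = []
    for a in range(n_arms):
        A_inv = np.linalg.inv(A[a])
        mu = A_inv @ b[a]
        width = np.sqrt(x_obs @ A_inv @ x_obs)
        ucb.append(x_obs @ mu + 0.5 * width)
    arm = int(np.argmax(ucb))

    # The reward, however, is generated by the *true* context
    reward = x_true @ theta[arm] + 0.1 * rng.normal()
    A[arm] += np.outer(x_obs, x_obs)
    b[arm] += reward * x_obs
```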
no code implementations • 20 Sep 2023 • Leonardo Banchi, Jason Luke Pereira, Sharu Theresa Jose, Osvaldo Simeone
Recent years have seen significant activity on the problem of using data for the purpose of learning properties of quantum systems or of processing classical or quantum data via quantum computing.
no code implementations • 16 Jan 2023 • Yunchuan Zhang, Osvaldo Simeone, Sharu Theresa Jose, Lorenzo Maggi, Alvaro Valcarce
Optimal resource allocation in modern communication networks calls for the optimization of objective functions that are only accessible via costly separate evaluations for each candidate solution.
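A common way to handle such costly black-box objectives is Bayesian optimization; the sketch below is a generic expected-improvement loop with a Gaussian-process surrogate, offered as an illustration of the problem class rather than the method proposed in the paper (the toy objective is a stand-in for a costly network-utility evaluation).

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_objective(x):
    """Stand-in for a costly network-utility evaluation (illustrative only)."""
    return np.sin(3 * x) + 0.5 * x**2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))        # a few initial costly evaluations
y = expensive_objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(15):
    gp.fit(X, y)
    cand = np.linspace(-2, 2, 400).reshape(-1, 1)
    mu, std = gp.predict(cand, return_std=True)
    best = y.min()
    # Expected improvement for minimization
    z = (best - mu) / np.maximum(std, 1e-9)
    ei = (best - mu) * norm.cdf(z) + std * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_objective(x_next).item())

print("best value found:", y.min())
```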
no code implementations • 3 Oct 2022 • Lisha Chen, Sharu Theresa Jose, Ivana Nikoloska, Sangwoo Park, Tianyi Chen, Osvaldo Simeone
This review monograph provides an introduction to meta-learning by covering principles, algorithms, theory, and engineering applications.
no code implementations • 23 Sep 2022 • Sharu Theresa Jose, Osvaldo Simeone
It is shown that quantum gate noise induces a non-zero error-floor on the convergence error of SGD (evaluated with respect to a reference noiseless PQC), which depends on the number of noisy gates, the strength of the noise, as well as the eigenspectrum of the observable being measured and minimized.
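A classical toy analogue of such an error floor (purely illustrative; the paper's analysis concerns quantum gate noise in PQCs) is SGD with a persistent bias in its gradient estimates: the iterate stops improving once the remaining signal is comparable to the bias.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(theta, bias_strength):
    """Gradient of f(theta) = 0.5 * theta^2, plus a persistent bias term
    standing in (very loosely) for hardware-induced gate noise."""
    return theta + bias_strength * rng.normal() + bias_strength

for bias_strength in [0.0, 0.05, 0.2]:
    theta, lr = 5.0, 0.1
    for _ in range(2000):
        theta -= lr * noisy_grad(theta, bias_strength)
    # With zero bias, theta converges to the optimum 0; with nonzero bias,
    # it settles at a floor whose size grows with the bias strength.
    print(f"bias={bias_strength}: final |theta| ~ {abs(theta):.3f}")
```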
no code implementations • 17 Jan 2022 • Sharu Theresa Jose, Osvaldo Simeone
An upper bound on the optimality gap is derived in terms of the proposed task (dis)similarity measure, two Rényi mutual information terms between classical input and quantum embedding under source and target tasks, as well as a measure of complexity of the combined space of quantum embeddings and classifiers under the source task.
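For reference, the Rényi mutual information terms in such a bound are built from the Rényi divergence of order $\alpha$, whose standard (discrete) form is:

```latex
% Standard Renyi divergence of order \alpha > 0, \alpha \neq 1,
% from which Renyi mutual information terms are constructed.
D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1}
  \log \sum_{x} P(x)^{\alpha}\, Q(x)^{1 - \alpha}
```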
no code implementations • 11 Oct 2021 • Sharu Theresa Jose, Osvaldo Simeone
In vertical federated learning (FL), the features of a data sample are distributed across multiple agents.
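As a minimal picture of the vertical split (an illustrative toy, not the paper's protocol), each agent holds a disjoint block of feature columns for the same samples and contributes a partial linear score that a coordinator aggregates:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 8
X = rng.normal(size=(n_samples, 6))

# Vertical partition: agent 0 holds columns 0-2, agent 1 holds columns 3-5.
agent_cols = [slice(0, 3), slice(3, 6)]
agent_weights = [rng.normal(size=3), rng.normal(size=3)]

# Each agent computes a partial score on its own features only; the
# coordinator sums the scores without ever pooling the raw features.
partial_scores = [X[:, cols] @ w for cols, w in zip(agent_cols, agent_weights)]
prediction = sum(partial_scores)
print(prediction.shape)  # (8,)
```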
1 code implementation • 20 Jun 2021 • Yunchuan Zhang, Sharu Theresa Jose, Osvaldo Simeone
Meta-learning optimizes the hyperparameters of a training procedure, such as its initialization, kernel, or learning rate, based on data sampled from a number of auxiliary tasks.
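The sketch below is a bare-bones example of meta-learning an initialization (in the spirit of MAML, used here only as a familiar stand-in): the outer loop descends the loss obtained after one inner adaptation step on each sampled task.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_loss_grad(theta, c):
    """Task: minimize 0.5 * (theta - c)^2; returns (loss, gradient)."""
    return 0.5 * (theta - c) ** 2, theta - c

theta0 = 0.0                     # meta-learned initialization
inner_lr, outer_lr = 0.1, 0.05

for _ in range(500):
    c = rng.normal(loc=2.0)      # sample a task from the task environment
    # One inner adaptation step from the shared initialization
    _, g = task_loss_grad(theta0, c)
    theta_adapted = theta0 - inner_lr * g
    # Outer gradient of the post-adaptation loss w.r.t. theta0:
    # d/dtheta0 [0.5*(theta_adapted - c)^2] = (1 - inner_lr)*(theta_adapted - c)
    _, g_ad = task_loss_grad(theta_adapted, c)
    theta0 -= outer_lr * (1 - inner_lr) * g_ad

print("meta-learned initialization ~", theta0)  # approaches the task mean 2.0
```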
no code implementations • 1 Jun 2021 • Sharu Theresa Jose, Osvaldo Simeone
Machine unlearning refers to mechanisms that can remove the influence of a subset of training data upon request from a trained model without incurring the cost of re-training from scratch.
no code implementations • 1 Jun 2021 • Sharu Theresa Jose, Sangwoo Park, Osvaldo Simeone
Under a Bayesian formulation, assuming a well-specified model, the two contributions can be exactly expressed (for the log-loss) or bounded (for more general losses) in terms of information-theoretic quantities (Xu and Raginsky, 2020).
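To make the log-loss case concrete (stated roughly, in the spirit of Xu and Raginsky, 2020): under a well-specified Bayesian model, the aleatoric contribution is the conditional entropy of the test label given the input and the model parameter, while the epistemic contribution, the minimum excess risk, equals a conditional mutual information:

```latex
% Log-loss case, stated informally: the aleatoric contribution is the
% conditional entropy H(Y | X, W), and the epistemic (minimum excess risk)
% contribution is the conditional mutual information below, with W the model
% parameter, (X, Y) the test pair, and \mathcal{D} the training data.
\mathrm{MER}_{\log} = I\big(W ; Y \,\big|\, X, \mathcal{D}\big)
```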
no code implementations • 21 Jan 2021 • Sharu Theresa Jose, Osvaldo Simeone
The goal of the meta-learner is to ensure that the hyperparameters obtain a small loss when applied for training of a new task sampled from the task environment.
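In symbols (a generic formulation, with notation chosen here for illustration), writing $u$ for the hyperparameters, $A(\mathcal{D}, u)$ for the base learner's output on task data $\mathcal{D}$, and $P_{\mathcal{T}}$ for the task environment, the meta-learner's goal is:

```latex
% Generic meta-learning objective: hyperparameters u should yield a small
% expected loss after base-learning on data from a new task T ~ P_T.
\min_{u} \ \mathbb{E}_{\mathcal{T} \sim P_{\mathcal{T}}} \,
  \mathbb{E}_{\mathcal{D} \sim \mathcal{T}}
  \big[ L_{\mathcal{T}}\big( A(\mathcal{D}, u) \big) \big]
```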
no code implementations • 25 Nov 2020 • Sharu Theresa Jose, Osvaldo Simeone
The goal of these lecture notes is to review the problem of free energy minimization as a unified framework underlying the definition of maximum entropy modelling, generalized Bayesian inference, learning with latent variables, statistical learning analysis of generalization, and local optimization.
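The central object, in its standard form, is the (variational) free energy of a distribution $q$ relative to a prior $p$ and loss $\ell$; its minimizer is the Gibbs posterior:

```latex
% Variational free energy and its minimizer (standard identities).
F(q) = \mathbb{E}_{z \sim q}\big[\ell(z)\big] + \mathrm{KL}\big(q \,\|\, p\big),
\qquad
q^\star(z) \propto p(z)\, e^{-\ell(z)}
```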
no code implementations • 4 Nov 2020 • Sharu Theresa Jose, Osvaldo Simeone, Giuseppe Durisi
In this paper, we introduce the problem of transfer meta-learning, in which tasks are drawn from a target task environment during meta-testing that may differ from the source task environment observed during meta-training.
no code implementations • 21 Oct 2020 • Arezou Rezazadeh, Sharu Theresa Jose, Giuseppe Durisi, Osvaldo Simeone
Meta-learning optimizes an inductive bias, typically in the form of the hyperparameters of a base-learning algorithm, by observing data from a finite number of related tasks.
no code implementations • 13 Oct 2020 • Sharu Theresa Jose, Osvaldo Simeone
In transfer learning, training and testing data sets are drawn from different data distributions.
no code implementations • 9 May 2020 • Sharu Theresa Jose, Osvaldo Simeone
Extending existing work on conventional learning, an upper bound on the meta-generalization gap is derived for the former class that depends on the mutual information (MI) between the output of the meta-learning algorithm and its input meta-training data.
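For orientation, the conventional-learning bound being extended here is the classical mutual-information bound (Xu and Raginsky, 2017): for a $\sigma$-sub-Gaussian loss and an $n$-sample training set $S$,

```latex
% Classical information-theoretic generalization bound for conventional
% learning, with W the algorithm output and S the n-sample training set.
\big| \mathbb{E}\big[ \mathrm{gen}(W, S) \big] \big|
  \le \sqrt{ \frac{2 \sigma^2}{n} \, I(W ; S) }
```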