Search Results for author: Maruan Al-Shedivat

Found 23 papers, 14 papers with code

Knowledge-Aware Meta-learning for Low-Resource Text Classification

1 code implementation EMNLP 2021 Huaxiu Yao, Yingxin Wu, Maruan Al-Shedivat, Eric P. Xing

Meta-learning has achieved great success in leveraging previously learned knowledge to facilitate the learning of new tasks.

Meta-Learning, Sentence, +2

On Data Efficiency of Meta-learning

no code implementations 30 Jan 2021 Maruan Al-Shedivat, Liam Li, Eric Xing, Ameet Talwalkar

Meta-learning has enabled learning statistical models that can be quickly adapted to new prediction tasks.

Meta-Learning, Personalized Federated Learning

Federated Learning via Posterior Averaging: A New Perspective and Practical Algorithms

1 code implementation ICLR 2021 Maruan Al-Shedivat, Jennifer Gillenwater, Eric Xing, Afshin Rostamizadeh

Federated learning is typically approached as an optimization problem, where the goal is to minimize a global loss function by distributing computation across client devices that possess local data and specify different parts of the global objective.

Federated Learning
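
To make the standard framing in the snippet above concrete, here is a minimal sketch of a global least-squares objective split across clients and optimized with FedAvg-style local updates plus size-weighted averaging. It illustrates the conventional optimization view the paper contrasts with its posterior-averaging approach, not the algorithm the paper proposes; all function names, data, and hyperparameters are illustrative.

```python
# Hypothetical sketch of the standard federated optimization framing
# (FedAvg-style), NOT the posterior-averaging method proposed in the paper.
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """Run a few epochs of gradient descent on one client's local
    least-squares loss 0.5 * ||Xw - y||^2 / n."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    """One communication round: every client refines the global model locally;
    the server averages the results weighted by local dataset size."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = np.stack([local_sgd(w_global.copy(), X, y) for X, y in clients])
    return (sizes[:, None] * updates).sum(axis=0) / sizes.sum()

rng = np.random.default_rng(0)
w_true = rng.normal(size=3)

def make_client(n=50):
    X = rng.normal(size=(n, 3))
    return X, X @ w_true + 0.1 * rng.normal(size=n)

clients = [make_client() for _ in range(4)]
w = np.zeros(3)
for _ in range(20):
    w = fedavg_round(w, clients)
print(np.round(w - w_true, 3))  # small for this well-conditioned toy problem
```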

Regularizing Black-box Models for Improved Interpretability (HILL 2019 Version)

no code implementations 31 May 2019 Gregory Plumb, Maruan Al-Shedivat, Eric Xing, Ameet Talwalkar

Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade off accuracy for interpretability, or post-hoc explanation systems, which lack guarantees about their explanation quality.

BIG-bench Machine Learning, Interpretable Machine Learning

Consistency by Agreement in Zero-shot Neural Machine Translation

2 code implementations NAACL 2019 Maruan Al-Shedivat, Ankur P. Parikh

The generalization and reliability of multilingual translation often depend heavily on the amount of parallel data available for each language pair of interest.

Machine Translation, NMT, +3

Regularizing Black-box Models for Improved Interpretability

1 code implementation NeurIPS 2020 Gregory Plumb, Maruan Al-Shedivat, Angel Alexander Cabrera, Adam Perer, Eric Xing, Ameet Talwalkar

Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade off accuracy for interpretability, or post-hoc explanation systems, whose explanation quality can be unpredictable.

BIG-bench Machine Learning, Interpretable Machine Learning

On the Complexity of Exploration in Goal-Driven Navigation

no code implementations 16 Nov 2018 Maruan Al-Shedivat, Lisa Lee, Ruslan Salakhutdinov, Eric Xing

Next, we propose to measure the complexity of each environment by constructing dependency graphs between the goals and analytically computing hitting times of a random walk in the graph.

Navigate
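
The snippet above mentions analytically computing hitting times of a random walk over a goal dependency graph. As a hedged illustration of that general computation (the toy graph and transition matrix below are made up, not the paper's construction), expected hitting times can be obtained by solving a small linear system:

```python
# Illustrative computation of expected hitting times; the toy dependency graph
# below is for demonstration only.
import numpy as np

def hitting_times(P, target):
    """Expected number of steps for a random walk with row-stochastic transition
    matrix P to first reach `target`, from every start node.

    Solves h[target] = 0 and h[i] = 1 + sum_j P[i, j] * h[j] for i != target.
    """
    n = P.shape[0]
    others = [i for i in range(n) if i != target]
    Q = P[np.ix_(others, others)]  # transitions among non-target nodes
    h = np.zeros(n)
    h[others] = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    return h

# Toy dependency graph over 4 goals; goal 3 is absorbing (the final goal).
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.5, 0.0, 0.0, 0.5],
    [0.0, 0.0, 0.0, 1.0],
])
print(hitting_times(P, target=3))  # expected steps to reach goal 3 from each goal
```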

DiCE: The Infinitely Differentiable Monte Carlo Estimator

1 code implementation ICML 2018 Jakob Foerster, Gregory Farquhar, Maruan Al-Shedivat, Tim Rocktäschel, Eric Xing, Shimon Whiteson

Lastly, to match the first-order gradient under differentiation, the surrogate-loss (SL) approach treats part of the cost as a fixed sample, which we show leads to missing and wrong terms in estimators of higher-order derivatives.

Meta-Learning

DiCE: The Infinitely Differentiable Monte-Carlo Estimator

5 code implementations 14 Feb 2018 Jakob Foerster, Gregory Farquhar, Maruan Al-Shedivat, Tim Rocktäschel, Eric P. Xing, Shimon Whiteson

Lastly, to match the first-order gradient under differentiation, the surrogate-loss (SL) approach treats part of the cost as a fixed sample, which we show leads to missing and wrong terms in estimators of higher-order derivatives.

Meta-Learning
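
The central device described in the two DiCE entries above is a "magic box" operator that equals 1 in the forward pass but differentiates like the exponential of the summed log-probabilities, so repeated differentiation of a surrogate objective yields correct higher-order gradient estimators. The sketch below demonstrates the operator on a toy Bernoulli objective; the objective, sample count, and variable names are assumptions, not code from the papers.

```python
# Sketch of the "magic box" trick: forward value 1, but gradients of any order
# pick up the needed score-function terms. Toy objective and names are illustrative.
import torch

def magic_box(tau):
    # Equals 1 in the forward pass; differentiates like exp(tau).
    return torch.exp(tau - tau.detach())

# Toy objective: x ~ Bernoulli(p), p = sigmoid(theta), cost f(x) = x, so
# E[f] = p, dE/dtheta = p(1-p), d2E/dtheta2 = p(1-p)(1-2p).
theta = torch.tensor(0.3, requires_grad=True)
p = torch.sigmoid(theta)

torch.manual_seed(0)
x = torch.bernoulli(torch.full((200_000,), p.item()))
log_prob = x * torch.log(p) + (1 - x) * torch.log(1 - p)

objective = (magic_box(log_prob) * x).mean()

g1, = torch.autograd.grad(objective, theta, create_graph=True)
g2, = torch.autograd.grad(g1, theta)
print(float(g1), float(p * (1 - p)))                # estimated vs. exact 1st derivative
print(float(g2), float(p * (1 - p) * (1 - 2 * p)))  # estimated vs. exact 2nd derivative
```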

The Intriguing Properties of Model Explanations

1 code implementation 30 Jan 2018 Maruan Al-Shedivat, Avinava Dubey, Eric P. Xing

Linear approximations to the decision boundary of a complex model have become one of the most popular tools for interpreting predictions.
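
The snippet refers to local linear approximations of a complex model's decision boundary as a popular interpretation tool. Below is a generic sketch of that tool (weighted least squares on random perturbations around one input, in the spirit of LIME-style explainers); it illustrates the object being studied, not the paper's analysis, and every name and constant is an illustrative choice.

```python
# Generic local linear explanation of a black-box predictor around one input;
# the kernel width and the toy predictor are assumptions for illustration.
import numpy as np

def local_linear_explanation(predict, x, n_samples=2000, scale=0.3, seed=0):
    """Fit a proximity-weighted linear model to `predict` on perturbations of x;
    the returned slopes act as per-feature attributions near x."""
    rng = np.random.default_rng(seed)
    X = x + scale * rng.normal(size=(n_samples, x.size))          # local perturbations
    y = predict(X)
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale ** 2))  # proximity weights
    A = np.hstack([X, np.ones((n_samples, 1))])                   # add intercept column
    Aw = A * w[:, None]
    coef = np.linalg.solve(A.T @ Aw + 1e-6 * np.eye(A.shape[1]), Aw.T @ y)
    return coef[:-1], coef[-1]

# Black-box example: a smooth nonlinear scoring function of two features.
predict = lambda X: np.tanh(2 * X[:, 0] - X[:, 1] ** 2)
slopes, intercept = local_linear_explanation(predict, np.array([0.5, -1.0]))
print(slopes)  # local feature attributions around the chosen point
```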

Personalized Survival Prediction with Contextual Explanation Networks

1 code implementation 30 Jan 2018 Maruan Al-Shedivat, Avinava Dubey, Eric P. Xing

Accurate and transparent prediction of cancer survival times at the level of individual patients can inform and improve patient care and treatment practices.

Survival Prediction

Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments

1 code implementation ICLR 2018 Maruan Al-Shedivat, Trapit Bansal, Yuri Burda, Ilya Sutskever, Igor Mordatch, Pieter Abbeel

The ability to continuously learn and adapt from limited experience in nonstationary environments is an important milestone on the path towards general intelligence.

Meta-Learning

Learning with Opponent-Learning Awareness

6 code implementations 13 Sep 2017 Jakob N. Foerster, Richard Y. Chen, Maruan Al-Shedivat, Shimon Whiteson, Pieter Abbeel, Igor Mordatch

We also show that the LOLA update rule can be efficiently calculated using an extension of the policy gradient estimator, making the method suitable for model-free RL.

Multi-agent Reinforcement Learning
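
The LOLA update rule mentioned in the snippet shapes an agent's gradient by differentiating its value through one anticipated (naive) learning step of the opponent. Below is a minimal exact-gradient sketch of that idea on a toy differentiable 2x2 game; the payoff values, step size, and function names are assumptions, and the paper's model-free policy-gradient formulation is more involved than this.

```python
# Exact-gradient sketch of opponent-aware learning on a toy differentiable game;
# payoffs, step size, and names are illustrative assumptions.
import torch

def payoffs(theta_a, theta_b):
    """Toy symmetric 2x2 game: each agent cooperates with probability sigmoid(theta)."""
    pa, pb = torch.sigmoid(theta_a), torch.sigmoid(theta_b)
    R, S, T, P = -1.0, -3.0, 0.0, -2.0  # prisoner's-dilemma-style payoffs
    Va = pa * pb * R + pa * (1 - pb) * S + (1 - pa) * pb * T + (1 - pa) * (1 - pb) * P
    Vb = pa * pb * R + pb * (1 - pa) * S + (1 - pb) * pa * T + (1 - pa) * (1 - pb) * P
    return Va, Vb

def opponent_aware_gradient(theta_self, theta_other, eta=1.0):
    """Gradient of the agent's value after one anticipated naive gradient-ascent
    step of the opponent, differentiating through that step."""
    _, V_other = payoffs(theta_self, theta_other)
    dV_other, = torch.autograd.grad(V_other, theta_other, create_graph=True)
    V_self_lookahead, _ = payoffs(theta_self, theta_other + eta * dV_other)
    grad_self, = torch.autograd.grad(V_self_lookahead, theta_self)
    return grad_self

theta1 = torch.tensor(0.5, requires_grad=True)
theta2 = torch.tensor(-0.5, requires_grad=True)

V1, _ = payoffs(theta1, theta2)
naive_grad, = torch.autograd.grad(V1, theta1)
print("naive gradient:         ", naive_grad.item())
print("opponent-aware gradient:", opponent_aware_gradient(theta1, theta2).item())
```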

Contextual Explanation Networks

1 code implementation ICLR 2018 Maruan Al-Shedivat, Avinava Dubey, Eric P. Xing

Our results on image and text classification and survival analysis tasks demonstrate that CENs are not only competitive with state-of-the-art methods but also offer additional insights behind each prediction that can be valuable for decision support.

Image Classification, Interpretability Techniques for Deep Learning, +5
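
As background for the entry above: a common way to describe the contextual-explanation pattern is that an encoder maps a context input to the parameters of a simple linear model, which is then applied to interpretable attributes, so every prediction comes with its own linear explanation. The sketch below illustrates that general pattern under assumed layer sizes and names; it is not the paper's exact architecture.

```python
# Hypothetical sketch of a contextual-explanation-style model: the encoder emits
# per-example linear parameters over interpretable attributes. Sizes and names
# are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class ContextualLinearExplainer(nn.Module):
    def __init__(self, context_dim, attr_dim, hidden=32):
        super().__init__()
        # Encoder produces one weight vector plus a bias per example from its context.
        self.encoder = nn.Sequential(
            nn.Linear(context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, attr_dim + 1),
        )

    def forward(self, context, attributes):
        params = self.encoder(context)
        w, b = params[:, :-1], params[:, -1]
        logits = (w * attributes).sum(dim=1) + b
        return logits, w  # w is a per-example linear explanation

model = ContextualLinearExplainer(context_dim=16, attr_dim=5)
context, attributes = torch.randn(4, 16), torch.randn(4, 5)
logits, explanations = model(context, attributes)
print(explanations.shape)  # (4, 5): one attribution vector per example
```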

Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines

no code implementations 14 Nov 2015 Emre O. Neftci, Bruno U. Pedroni, Siddharth Joshi, Maruan Al-Shedivat, Gert Cauwenberghs

Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex.
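
One standard way to make "synaptic unreliability" concrete computationally is to gate each synaptic weight with an independent Bernoulli variable at every presentation (DropConnect-style multiplicative noise). The sketch below illustrates that generic mechanism under assumed shapes and transmission probability; it is not the specific neuromorphic model from the paper.

```python
# Generic "unreliable synapse" layer: each weight transmits independently with
# some probability at every presentation. Shapes and probability are assumptions.
import numpy as np

def stochastic_synapse_forward(x, W, p_transmit=0.5, rng=None):
    """Forward pass where every synapse transmits independently with probability
    p_transmit (multiplicative Bernoulli noise on the weights)."""
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(W.shape) < p_transmit  # which synapses fire this time
    return x @ (W * mask) / p_transmit       # rescale so the mean matches x @ W

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
x = rng.normal(size=(1, 8))
# Repeated presentations of the same input give stochastic outputs whose mean
# approaches the deterministic product x @ W.
outs = np.stack([stochastic_synapse_forward(x, W, rng=rng) for _ in range(5000)])
print(np.abs(outs.mean(axis=0) - x @ W).max())  # small
```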

Learning Non-deterministic Representations with Energy-based Ensembles

no code implementations 23 Dec 2014 Maruan Al-Shedivat, Emre Neftci, Gert Cauwenberghs

These mappings are encoded in a distribution over a (possibly infinite) collection of models.

One-Shot Learning
