2 code implementations • 10 Apr 2024 • Sara Kangaslahti, David Alvarez-Melis
We empirically show that varying the interpolation weights yields predictable and consistent changes in the model outputs with respect to all of the controlled attributes.
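As a rough illustration of the interpolation idea (a minimal sketch, not the paper's exact procedure), two fine-tuned checkpoints of the same architecture can be blended by linearly interpolating their parameters; the models, attribute semantics, and the weight `alpha` below are hypothetical placeholders, and all parameters are assumed to be floating-point tensors.

```python
# Hypothetical sketch: linear interpolation between two fine-tuned checkpoints
# of the same architecture. Assumes every state-dict entry is a float tensor.
import copy
import torch


def interpolate_models(model_a, model_b, alpha):
    """Return a copy of model_a with parameters (1 - alpha) * A + alpha * B."""
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    merged = {k: (1.0 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a}
    model_out = copy.deepcopy(model_a)
    model_out.load_state_dict(merged)
    return model_out


# Sweeping alpha over [0, 1] traces a path between the two models; intermediate
# values trade off whatever attributes each checkpoint was tuned for.
```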
1 code implementation • 1 Mar 2024 • Tian Qin, Zhiwei Deng, David Alvarez-Melis
What does a neural network learn when training from a task-specific dataset?
1 code implementation • 6 Feb 2024 • Junhong Shen, Neil Tenenholtz, James Brian Hall, David Alvarez-Melis, Nicolo Fusi
Large Language Models (LLMs) have demonstrated remarkable proficiency in understanding and generating natural language.
no code implementations • 12 Jun 2023 • Jiaojiao Fan, David Alvarez-Melis
We compute these geodesics using a recent notion of distance between labeled datasets, and derive alternative interpolation schemes based on it: using either barycentric projections or optimal transport maps, the latter computed using recent neural OT methods.
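For intuition on the barycentric-projection scheme mentioned above, here is a minimal sketch using the POT library on generic point clouds; it illustrates only the basic ingredient, not the paper's labeled-dataset distance or its neural OT map estimators.

```python
# Generic barycentric projection from an optimal transport plan (POT library).
import numpy as np
import ot

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))     # "source" samples
Y = rng.normal(3.0, 0.5, size=(200, 2))     # "target" samples
a = np.full(len(X), 1.0 / len(X))           # uniform weights
b = np.full(len(Y), 1.0 / len(Y))

M = ot.dist(X, Y)                           # squared Euclidean cost matrix
P = ot.emd(a, b, M)                         # exact OT coupling

# Barycentric projection: each source point maps to the plan-weighted average
# of the target points it is coupled with.
T_X = (P @ Y) / P.sum(axis=1, keepdims=True)

# A simple interpolation path: slide the source points toward their projections.
t = 0.5
X_half = (1.0 - t) * X + t * T_X
```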
1 code implementation • 3 Mar 2023 • Kianoush Falahkheirkhah, Alex Lu, David Alvarez-Melis, Grace Huynh
Histopathology is critical for the diagnosis of many diseases, including cancer.
no code implementations • 26 Nov 2022 • Abhi Gupta, Ted Moskovitz, David Alvarez-Melis, Aldo Pacchiano
Transferring knowledge across domains is one of the most fundamental problems in machine learning, but doing so effectively in the context of reinforcement learning remains largely an open problem.
no code implementations • 24 Oct 2022 • David Alvarez-Melis, Nicolò Fusi, Lester Mackey, Tal Wagner
Optimal Transport (OT) is a fundamental tool for comparing probability distributions, but its exact computation remains prohibitive for large datasets.
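To make the computational concern concrete (a generic illustration, not this paper's approximation scheme): exact OT solves a linear program whose cost grows quickly with the number of samples, whereas entropy-regularized Sinkhorn iterations give a cheaper approximate cost.

```python
# Exact OT cost vs. entropic (Sinkhorn) approximation, using the POT library.
import numpy as np
import ot

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 10))
Y = rng.normal(loc=1.0, size=(n, 10))
a = np.full(n, 1.0 / n)
b = np.full(n, 1.0 / n)
M = ot.dist(X, Y)

exact_cost = ot.emd2(a, b, M)                    # exact OT cost (linear program)
sinkhorn_cost = ot.sinkhorn2(a, b, M, reg=1.0)   # entropy-regularized approximation

print(f"exact: {exact_cost:.3f}  sinkhorn: {sinkhorn_cost:.3f}")
```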
1 code implementation • 6 Oct 2022 • Ching-Yao Chuang, Stefanie Jegelka, David Alvarez-Melis
Optimal transport aligns samples across distributions by minimizing the transportation cost between them, e.g., geometric distances.
no code implementations • 30 Sep 2022 • Frederike Lübeck, Charlotte Bunne, Gabriele Gut, Jacobo Sarabia del Castillo, Lucas Pelkmans, David Alvarez-Melis
However, the usual formulation of OT assumes conservation of mass, which is violated in unbalanced scenarios in which the population size changes (e.g., cell proliferation or death) between measurements.
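As a minimal sketch of the unbalanced formulation (illustrating the general idea, not the paper's model), the marginal constraints can be enforced only softly, so total mass is allowed to differ between the two measures; POT provides an unbalanced Sinkhorn solver.

```python
# Unbalanced OT sketch with POT: soft marginal constraints (penalty reg_m)
# allow the total mass to change between measurements, e.g. due to cell
# proliferation or death.
import numpy as np
import ot

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))      # earlier measurement
Y = rng.normal(1.0, 1.0, size=(150, 2))      # later, larger population
a = np.full(len(X), 1.0)                     # unnormalized masses
b = np.full(len(Y), 1.0)                     # totals differ: 100 vs. 150
M = ot.dist(X, Y)

P = ot.unbalanced.sinkhorn_knopp_unbalanced(a, b, M, reg=1.0, reg_m=5.0)
print("transported mass:", P.sum())          # need not equal either marginal
```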
no code implementations • 4 Aug 2022 • Neha Hulkund, Nicolo Fusi, Jennifer Wortman Vaughan, David Alvarez-Melis
We propose a method to identify and characterize distribution shifts in classification datasets based on optimal transport.
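A crude, hedged illustration of the basic ingredient (not the paper's actual characterization method): the OT cost between the feature clouds of a reference dataset and a candidate dataset can serve as a simple score of how far apart they are.

```python
# Rough OT-based shift score between a reference and a candidate feature set.
import numpy as np
import ot

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(300, 5))        # reference features
cand = rng.normal(0.7, 1.2, size=(300, 5))       # possibly shifted features

a = np.full(len(ref), 1.0 / len(ref))
b = np.full(len(cand), 1.0 / len(cand))
M = ot.dist(ref, cand)

shift_score = ot.emd2(a, b, M)     # larger OT cost suggests larger shift
print(f"OT shift score: {shift_score:.3f}")
```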
no code implementations • 19 May 2022 • David Alvarez-Melis, Vikas Garg, Adam Tauman Kalai
We show that, while it may seem that maximizing likelihood is inherently different from minimizing distinguishability, this distinction is largely artificial and only holds for limited models.
no code implementations • 18 Apr 2022 • Anna Yeaton, Rahul G. Krishnan, Rebecca Mieloszyk, David Alvarez-Melis, Grace Huynh
Scarcity of labeled histopathology data limits the applicability of deep learning methods to under-profiled cancer types and labels.
no code implementations • 1 Jun 2021 • David Alvarez-Melis, Yair Schiff, Youssef Mroueh
Gradient flows are a powerful tool for optimizing functionals in general metric spaces, including the space of probabilities endowed with the Wasserstein metric.
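For a toy picture of the Wasserstein case (an illustrative sketch under simplifying assumptions, not the functionals studied in the paper): for the potential-energy functional F(mu) = E_{x~mu}[V(x)], the gradient flow moves each particle along the negative gradient of V, which an explicit Euler scheme discretizes directly.

```python
# Particle discretization of the Wasserstein gradient flow of a potential
# energy functional. Here V is a quadratic well, so the cloud contracts
# toward the origin.
import numpy as np

def grad_V(x):
    # V(x) = 0.5 * ||x||^2  =>  grad V(x) = x
    return x

rng = np.random.default_rng(0)
particles = rng.normal(loc=3.0, scale=1.0, size=(256, 2))
step = 0.1

for _ in range(100):
    particles = particles - step * grad_V(particles)

print("mean after flow:", particles.mean(axis=0))   # close to the origin
```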
1 code implementation • 27 Apr 2021 • David Alvarez-Melis, Harmanpreet Kaur, Hal Daumé III, Hanna Wallach, Jennifer Wortman Vaughan
We take inspiration from the study of human explanation to inform the design and evaluation of interpretability methods in machine learning.
1 code implementation • 24 Oct 2020 • David Alvarez-Melis, Nicolò Fusi
Various machine learning tasks, from generative modeling to domain adaptation, revolve around the concept of dataset transformation and manipulation.
1 code implementation • NeurIPS 2020 • David Alvarez-Melis, Nicolò Fusi
The notion of task similarity is at the core of various machine learning paradigms, such as domain adaptation and meta-learning.
no code implementations • 6 Nov 2019 • David Alvarez-Melis, Youssef Mroueh, Tommi S. Jaakkola
This paper focuses on the problem of unsupervised alignment of hierarchical data such as ontologies or lexical databases.
no code implementations • 31 Oct 2019 • Hailey James, David Alvarez-Melis
In this work we propose a probabilistic view of word embedding bias.
1 code implementation • 29 Oct 2019 • David Alvarez-Melis, Hal Daumé III, Jennifer Wortman Vaughan, Hanna Wallach
Interpretability is an elusive but highly sought-after characteristic of modern machine learning methods.
no code implementations • ICLR 2019 • Guang-He Lee, David Alvarez-Melis, Tommi S. Jaakkola
In this paper, we propose a new learning problem to encourage deep networks to have stable derivatives over larger regions.
no code implementations • 14 May 2019 • Charlotte Bunne, David Alvarez-Melis, Andreas Krause, Stefanie Jegelka
Generative Adversarial Networks have shown remarkable success in learning a distribution that faithfully recovers a reference distribution in its entirety.
no code implementations • 26 Feb 2019 • Guang-He Lee, Wengong Jin, David Alvarez-Melis, Tommi S. Jaakkola
We provide a new approach to training neural models to exhibit transparency in a well-defined, functional manner.
no code implementations • EMNLP 2018 • David Alvarez-Melis, Tommi S. Jaakkola
Cross-lingual or cross-domain correspondences play key roles in tasks ranging from machine translation to transfer learning.
no code implementations • 30 Jun 2018 • Guang-He Lee, David Alvarez-Melis, Tommi S. Jaakkola
In contrast, we focus on temporal modeling and the problem of tailoring the predictor, functionally, towards an interpretable family.
no code implementations • 25 Jun 2018 • David Alvarez-Melis, Stefanie Jegelka, Tommi S. Jaakkola
Many problems in machine learning involve calculating correspondences between sets of objects, such as point clouds or images.
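When the two sets of objects live in unrelated spaces, Gromov-Wasserstein matching compares intra-domain distance structures rather than cross-domain costs; the sketch below uses POT's standard GW solver as a hedged illustration, not the structured variant proposed in the paper.

```python
# Gromov-Wasserstein correspondence between point clouds in different spaces.
import numpy as np
import ot

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))                 # point cloud in 2-D
Y = rng.normal(size=(60, 3))                 # point cloud in 3-D

C1 = ot.dist(X, X)                           # intra-domain distance matrices
C2 = ot.dist(Y, Y)
C1 /= C1.max()
C2 /= C2.max()
p = np.full(len(X), 1.0 / len(X))
q = np.full(len(Y), 1.0 / len(Y))

coupling = ot.gromov.gromov_wasserstein(C1, C2, p, q, "square_loss")
print("coupling shape:", coupling.shape)     # soft correspondence matrix
```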
2 code implementations • 21 Jun 2018 • David Alvarez-Melis, Tommi S. Jaakkola
We argue that robustness of explanations, i.e., that similar inputs should give rise to similar explanations, is a key desideratum for interpretability.
no code implementations • NeurIPS 2018 • David Alvarez-Melis, Tommi S. Jaakkola
Most recent work on interpretability of complex machine learning models has focused on estimating a posteriori explanations for previously trained models around specific predictions.
no code implementations • 17 Dec 2017 • David Alvarez-Melis, Tommi S. Jaakkola, Stefanie Jegelka
Optimal Transport has recently gained interest in machine learning for applications ranging from domain adaptation and sentence similarity to deep learning.
no code implementations • EMNLP 2017 • David Alvarez-Melis, Tommi S. Jaakkola
We interpret the predictions of any black-box structured input-structured output model around a specific input-output pair.
1 code implementation • ICLR 2018 • Chengtao Li, David Alvarez-Melis, Keyulu Xu, Stefanie Jegelka, Suvrit Sra
We propose a framework for adversarial training that relies on a sample rather than a single sample point as the fundamental unit of discrimination.
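One hedged way to picture a sample-level adversary (an illustrative architecture, not the one used in the paper) is a discriminator that pools per-point features over an entire batch before producing a single real/fake decision for that batch.

```python
# Minimal PyTorch sketch of discriminating whole samples rather than points:
# per-point features are pooled into one summary vector per batch.
import torch
import torch.nn as nn


class SampleDiscriminator(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.point_net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.sample_net = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                        nn.Linear(hidden, 1))

    def forward(self, x):                 # x: (num_points, dim) = one sample
        feats = self.point_net(x)         # per-point features
        pooled = feats.mean(dim=0)        # pool over the whole sample
        return self.sample_net(pooled)    # one real/fake logit per sample


disc = SampleDiscriminator(dim=2)
fake_sample = torch.randn(128, 2)         # 128 points from a generator
print(disc(fake_sample).shape)            # torch.Size([1]): one score per sample
```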
no code implementations • TACL 2016 • Tatsunori B. Hashimoto, David Alvarez-Melis, Tommi S. Jaakkola
Continuous word representations have been remarkably useful across NLP tasks but remain poorly understood.
no code implementations • 18 Sep 2015 • Tatsunori B. Hashimoto, David Alvarez-Melis, Tommi S. Jaakkola
Continuous vector representations of words and objects appear to carry surprisingly rich semantic content.