Search Results for author: Ilya Feige

Found 15 papers, 1 paper with code

Learning disentangled representations with the Wasserstein Autoencoder

no code implementations • 1 Jan 2021 • Benoit Gaujac, Ilya Feige, David Barber

We further study the trade-off between disentanglement and reconstruction on more difficult data sets with unknown generative factors, where we expect improved reconstructions due to the flexibility of the WAE paradigm.

Disentanglement

Representation Learning for High-Dimensional Data Collection under Local Differential Privacy

no code implementations • 23 Oct 2020 • Alex Mansbridge, Gregory Barbour, Davide Piras, Michael Murray, Christopher Frye, Ilya Feige, David Barber

In this work, our contributions are two-fold: first, by adapting state-of-the-art techniques from representation learning, we introduce a novel approach to learning LDP mechanisms.

Denoising • Representation Learning +1

Explainability for fair machine learning

no code implementations • 14 Oct 2020 • Tom Begley, Tobias Schwedes, Christopher Frye, Ilya Feige

Moreover, motivated by the linearity of Shapley explainability, we propose a meta algorithm for applying existing training-time fairness interventions, wherein one trains a perturbation to the original model, rather than a new model entirely.
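
A minimal sketch of the perturbation idea under stated assumptions: the original model `f` is frozen, a small additive network `g` is trained, and the deployed model is `f(x) + g(x)`. The demographic-parity penalty and all names here are illustrative, not the paper's code; the linearity the abstract mentions means the Shapley explanation of `f + g` decomposes into the explanations of `f` and `g`:

```python
import torch
import torch.nn as nn

def train_perturbation(f, g, loader, lam=1.0, epochs=10):
    """Train an additive perturbation g so that f(x) + g(x) is fairer.

    `loader` is assumed to yield (x, y, a), with `a` a binary protected
    attribute. The demographic-parity gap below is an illustrative
    fairness penalty, not the paper's specific intervention.
    """
    for p in f.parameters():
        p.requires_grad_(False)              # original model stays fixed
    opt = torch.optim.Adam(g.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y, a in loader:
            logits = (f(x) + g(x)).squeeze(-1)   # perturbed model
            probs = torch.sigmoid(logits)
            # mean prediction gap between protected groups
            gap = probs[a == 1].mean() - probs[a == 0].mean()
            loss = bce(logits, y.float()) + lam * gap.abs()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return g
```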

Attribute • BIG-bench Machine Learning +1

Human-interpretable model explainability on high-dimensional data

no code implementations • 14 Oct 2020 • Damien de Mijolla, Christopher Frye, Markus Kunesch, John Mansir, Ilya Feige

The importance of explainability in machine learning continues to grow, as both neural-network architectures and the data they model become increasingly complex.

Image Classification • Image-to-Image Translation +2

Learning disentangled representations with the Wasserstein Autoencoder

no code implementations • 7 Oct 2020 • Benoit Gaujac, Ilya Feige, David Barber

We further study the trade-off between disentanglement and reconstruction on more difficult data sets with unknown generative factors, where the flexibility of the WAE paradigm in the reconstruction term improves reconstructions.
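
For context, the flexibility referenced here comes from the standard WAE objective (Tolstikhin et al., 2018), in which the reconstruction cost c is an arbitrary function and only the aggregate posterior is matched to the prior; a standard statement, not necessarily the exact variant used in the paper:

```latex
\min_{Q(z \mid x)} \; \mathbb{E}_{p(x)} \, \mathbb{E}_{Q(z \mid x)} \big[ c(x, G(z)) \big]
\;+\; \lambda \, \mathcal{D}_Z(Q_Z, P_Z),
\qquad Q_Z(z) = \mathbb{E}_{p(x)}\big[ Q(z \mid x) \big]
```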

Disentanglement

Learning Deep-Latent Hierarchies by Stacking Wasserstein Autoencoders

no code implementations • 7 Oct 2020 • Benoit Gaujac, Ilya Feige, David Barber

Probabilistic models with hierarchical-latent-variable structures provide state-of-the-art results amongst non-autoregressive, unsupervised density-based models.

Shapley explainability on the data manifold

no code implementations • ICLR 2021 • Christopher Frye, Damien de Mijolla, Tom Begley, Laurence Cowton, Megan Stanley, Ilya Feige

Explainability in AI is crucial for model development, compliance with regulation, and providing operational nuance to predictions.

Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability

1 code implementation • NeurIPS 2020 • Christopher Frye, Colin Rowat, Ilya Feige

We introduce a less restrictive framework, Asymmetric Shapley values (ASVs), which are rigorously founded on a set of axioms, applicable to any AI system, and flexible enough to incorporate any causal structure known to be respected by the data.
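
Schematically, ASVs keep the Shapley form but replace the uniform average over feature orderings with a distribution w supported only on orderings consistent with the known causal structure (e.g. causal ancestors before descendants). With pre^π(i) the features preceding i in ordering π and v the value function:

```latex
\phi^{w}_i(v) \;=\; \sum_{\pi \in \Pi} w(\pi)
\Big[ v\big(\mathrm{pre}^{\pi}(i) \cup \{i\}\big) - v\big(\mathrm{pre}^{\pi}(i)\big) \Big],
\qquad \sum_{\pi \in \Pi} w(\pi) = 1
```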

feature selection • Time Series +1

Binary JUNIPR: an interpretable probabilistic model for discrimination

no code implementations • 24 Jun 2019 • Anders Andreassen, Ilya Feige, Christopher Frye, Matthew D. Schwartz

We refer to this refined approach as Binary JUNIPR.

High Energy Physics - Phenomenology

Parenting: Safe Reinforcement Learning from Human Input

no code implementations • 18 Feb 2019 • Christopher Frye, Ilya Feige

Autonomous agents trained via reinforcement learning present numerous safety concerns: reward hacking, negative side effects, and unsafe exploration, among others.

reinforcement-learning • Reinforcement Learning (RL) +1

Invariant-equivariant representation learning for multi-class data

no code implementations • ICLR 2019 • Ilya Feige

Representations learnt through deep neural networks tend to be highly informative, but opaque in terms of what information they learn to encode.

Representation Learning

Improving latent variable descriptiveness with AutoGen

no code implementations • 12 Jun 2018 • Alex Mansbridge, Roberto Fierimonte, Ilya Feige, David Barber

Powerful generative models, particularly in Natural Language Modelling, are commonly trained by maximizing a variational lower bound on the data log likelihood.
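
The variational lower bound in question is the standard ELBO: for a latent-variable model with decoder p_θ(x|z), prior p(z), and approximate posterior q_φ(z|x),

```latex
\log p_\theta(x) \;\ge\;
\mathbb{E}_{q_\phi(z \mid x)}\big[ \log p_\theta(x \mid z) \big]
\;-\; \mathrm{KL}\big( q_\phi(z \mid x) \,\|\, p(z) \big)
```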

Language Modelling

Gaussian mixture models with Wasserstein distance

no code implementations • 12 Jun 2018 • Benoit Gaujac, Ilya Feige, David Barber

Generative models with both discrete and continuous latent variables are highly motivated by the structure of many real-world data sets.

Descriptive

JUNIPR: a Framework for Unsupervised Machine Learning in Particle Physics

no code implementations • 25 Apr 2018 • Anders Andreassen, Ilya Feige, Christopher Frye, Matthew D. Schwartz

As a third application, JUNIPR models can reweight events from one (e.g. simulated) data set to agree with distributions from another (e.g. experimental) data set.
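
Because JUNIPR assigns an explicit probability to each jet, such reweighting reduces to a per-event likelihood ratio between two trained models. A minimal sketch, assuming hypothetical `log_prob_*` functions (not the released JUNIPR API):

```python
import numpy as np

def reweight(jets, log_prob_sim, log_prob_data):
    """Weight each jet by P_data(jet) / P_sim(jet), normalized to mean 1.

    `log_prob_sim` and `log_prob_data` are assumed to return the
    log-probability of a jet under models trained on simulated and
    experimental data respectively; the names are illustrative.
    """
    log_w = np.array([log_prob_data(j) - log_prob_sim(j) for j in jets])
    w = np.exp(log_w - log_w.max())      # stabilize before exponentiating
    return w * (len(w) / w.sum())        # rescale so weights average to 1
```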

BIG-bench Machine Learning
