Search Results for author: Marton Havasi

Found 10 papers, 6 papers with code

Guarantee Regions for Local Explanations

1 code implementation 20 Feb 2024 Marton Havasi, Sonali Parbhoo, Finale Doshi-Velez

Interpretability methods that utilise local surrogate models (e.g. LIME) are very good at describing the behaviour of the predictive model at a point of interest, but they are not guaranteed to extrapolate to the local region surrounding the point.
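For context on the kind of local surrogate the abstract refers to, here is a minimal LIME-style sketch in Python (a generic illustration, not this paper's guarantee-region method; local_surrogate and predict_fn are hypothetical names): a linear model is fit to the black-box predictions on perturbations around a single point, weighted by proximity, so its coefficients only describe behaviour near that point.

```python
# Minimal sketch of a LIME-style local surrogate (not the paper's method):
# fit a weighted linear model to black-box predictions on perturbations
# drawn around a single point of interest.
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(predict_fn, x0, n_samples=500, scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Perturb the point of interest with Gaussian noise.
    X = x0 + scale * rng.standard_normal((n_samples, x0.shape[0]))
    y = predict_fn(X)                      # black-box model outputs
    # Weight perturbations by proximity to x0 (exponential kernel).
    w = np.exp(-np.linalg.norm(X - x0, axis=1) ** 2 / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(X, y, sample_weight=w)
    return surrogate                       # coefficients act as the local explanation

# Example with a toy black-box model:
black_box = lambda X: np.sin(X[:, 0]) + X[:, 1] ** 2
expl = local_surrogate(black_box, x0=np.array([0.5, -1.0]))
print(expl.coef_)                          # local feature attributions around x0
```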

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations

no code implementations 10 Nov 2022 Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez

In this work, we survey properties defined in interpretable machine learning papers, synthesize them based on what they actually measure, and describe the trade-offs between different formulations of these properties.

Interpretable Machine Learning

Training independent subnetworks for robust prediction

2 code implementations ICLR 2021 Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Zhe Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew M. Dai, Dustin Tran

Recent approaches to efficiently ensemble neural networks have shown that strong robustness and uncertainty performance can be achieved with a negligible gain in parameters over the original network.
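As a rough illustration of the multi-input multi-output idea suggested by the title (a sketch under assumed layer sizes and member count, not the paper's architecture or training recipe), a single backbone can be given M concatenated inputs and M output heads, so that M subnetworks are trained for roughly the cost of one network:

```python
# Minimal sketch of a multi-input multi-output classifier in the spirit of the
# paper's title; sizes and details are illustrative, not the paper's setup.
import torch
import torch.nn as nn

class MIMONet(nn.Module):
    def __init__(self, in_dim, n_classes, n_members=3, hidden=256):
        super().__init__()
        self.n_members = n_members
        self.n_classes = n_classes
        # One shared backbone consumes M concatenated inputs and emits M heads' logits.
        self.body = nn.Sequential(
            nn.Linear(in_dim * n_members, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes * n_members),
        )

    def forward(self, xs):                       # xs: (batch, M, in_dim)
        logits = self.body(xs.flatten(1))        # (batch, M * n_classes)
        return logits.view(-1, self.n_members, self.n_classes)

# Training pairs each subnetwork input with its own label; at test time the same
# input is repeated M times and the M softmax outputs are averaged.
model = MIMONet(in_dim=32, n_classes=10)
x = torch.randn(8, 32)
probs = model(x.unsqueeze(1).repeat(1, 3, 1)).softmax(-1).mean(1)  # (8, 10)
```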

Refining the variational posterior through iterative optimization

no code implementations 25 Sep 2019 Marton Havasi, Jasper Snoek, Dustin Tran, Jonathan Gordon, José Miguel Hernández-Lobato

Variational inference (VI) is a popular approach for approximate Bayesian inference that is particularly promising for highly parameterized models such as deep neural networks.

Bayesian Inference · Variational Inference
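For readers unfamiliar with VI, here is a minimal sketch of the idea on a toy 1-D model (a generic illustration, not the paper's iterative refinement scheme): a mean-field Gaussian q(w) is fit by maximizing a single-sample ELBO estimate with the reparameterization trick.

```python
# Minimal sketch of variational inference (generic illustration, not the paper's
# refinement procedure): fit a Gaussian q(w) to a 1-D posterior by maximizing
# the ELBO with the reparameterization trick.
import torch

torch.manual_seed(0)
data = torch.randn(20) + 2.0                     # toy observations, true mean ~ 2

# Variational parameters of q(w) = Normal(mu, softplus(rho)); prior is Normal(0, 1).
mu = torch.zeros(1, requires_grad=True)
rho = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, rho], lr=0.05)

for step in range(500):
    sigma = torch.nn.functional.softplus(rho)
    w = mu + sigma * torch.randn(1)              # reparameterized sample w ~ q
    log_lik = torch.distributions.Normal(w, 1.0).log_prob(data).sum()
    log_prior = torch.distributions.Normal(0.0, 1.0).log_prob(w).sum()
    log_q = torch.distributions.Normal(mu, sigma).log_prob(w).sum()
    elbo = log_lik + log_prior - log_q           # single-sample ELBO estimate
    opt.zero_grad()
    (-elbo).backward()
    opt.step()

print(mu.item(), torch.nn.functional.softplus(rho).item())  # approx. posterior mean/std
```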

Compression without Quantization

no code implementations 25 Sep 2019 Gergely Flamich, Marton Havasi, José Miguel Hernández-Lobato

Standard compression algorithms work by mapping an image to a discrete code using an encoder, from which the original image can be reconstructed through a decoder.

Image Compression · Quantization
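Here is a minimal sketch of the standard encoder/decoder pipeline the abstract describes, and which this paper contrasts with (QuantizedAutoencoder and its layer sizes are illustrative assumptions): the encoder's continuous latents are rounded into a discrete code, and the decoder reconstructs the image from that code.

```python
# Minimal sketch of the standard quantization-based pipeline the abstract describes
# (and which the paper avoids): an encoder maps the image to a discrete code by
# rounding its latents, and a decoder reconstructs the image from that code.
import torch
import torch.nn as nn

class QuantizedAutoencoder(nn.Module):
    def __init__(self, dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x):
        z = self.encoder(x)
        # Rounding turns the continuous latents into a discrete code; the
        # straight-through trick keeps the operation differentiable in training.
        z_discrete = z + (torch.round(z) - z).detach()
        return self.decoder(z_discrete), torch.round(z)

model = QuantizedAutoencoder()
image = torch.rand(1, 784)                      # a flattened toy "image"
reconstruction, code = model(image)             # code is the integer-valued latent
```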

Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters

2 code implementations ICLR 2019 Marton Havasi, Robert Peharz, José Miguel Hernández-Lobato

While deep neural networks are a highly successful model class, their large memory footprint puts considerable strain on energy consumption, communication bandwidth, and storage requirements.

Neural Network Compression · Quantization
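As a toy illustration of the minimal-random-coding idea the title refers to (a heavily simplified 1-D sketch, not the paper's algorithm or its variational training of the weights): sender and receiver share a random seed, draw the same K samples from the prior, and only the index of an importance-weighted pick under the approximate posterior is transmitted, costing about log2(K) bits.

```python
# Toy sketch of minimal random coding with shared randomness (a generic
# illustration, not the paper's exact scheme).
import numpy as np
from scipy.stats import norm

def encode(q_mu, q_sigma, k=1024, seed=42):
    rng = np.random.default_rng(seed)
    candidates = rng.standard_normal(k)                  # shared samples from prior N(0, 1)
    weights = norm.pdf(candidates, q_mu, q_sigma) / norm.pdf(candidates, 0.0, 1.0)
    index = rng.choice(k, p=weights / weights.sum())     # importance-weighted pick
    return index                                         # ~log2(k) bits to transmit

def decode(index, k=1024, seed=42):
    rng = np.random.default_rng(seed)
    return rng.standard_normal(k)[index]                 # receiver regenerates the sample

idx = encode(q_mu=1.5, q_sigma=0.2)
print(decode(idx))                                       # approximately a draw from q
```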

Deep Gaussian Processes with Decoupled Inducing Inputs

no code implementations 9 Jan 2018 Marton Havasi, José Miguel Hernández-Lobato, Juan José Murillo-Fuentes

Deep Gaussian Processes (DGP) are hierarchical generalizations of Gaussian Processes (GP) that have proven to work effectively on multiple supervised regression tasks.

Gaussian Processes
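To unpack "hierarchical generalizations of Gaussian Processes", here is a minimal sketch of sampling from a two-layer deep GP prior (a generic illustration, not the paper's decoupled-inducing-input approximation): a draw from one GP is fed in as the input locations of a second GP, composing f2(f1(x)).

```python
# Minimal sketch of the hierarchical structure of a deep GP: sample a function
# from one GP, then evaluate a second GP at those sampled values.
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    # Squared-exponential kernel on 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gp(x, lengthscale, rng):
    k = rbf_kernel(x, x, lengthscale) + 1e-6 * np.eye(len(x))
    return rng.multivariate_normal(np.zeros(len(x)), k)

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 100)
f1 = sample_gp(x, lengthscale=1.0, rng=rng)      # layer 1: f1 ~ GP(0, k), evaluated at x
f2 = sample_gp(f1, lengthscale=0.5, rng=rng)     # layer 2: evaluated at f1(x)
# f2 is one draw from a two-layer deep GP prior at the inputs x.
```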
