Search Results for author: Robert Peharz

Found 23 papers, 14 papers with code

Rao-Blackwellising Bayesian Causal Inference

no code implementations • 22 Feb 2024 • Christian Toth, Christian Knoll, Franz Pernkopf, Robert Peharz

Specifically, we decompose the problem of inferring the causal structure into (i) inferring a topological order over variables and (ii) inferring the parent sets for each variable.

Tasks: Causal Inference, Gaussian Processes (+1 more)
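
Conditioned on a fixed topological order, the parent-set inference in (ii) decouples across variables, since each variable can only take parents among its predecessors. A minimal sketch of that decomposition (brute-force enumeration with a hypothetical local_score, e.g. a per-variable marginal likelihood; the paper's Rao-Blackwellised estimator is far more sophisticated):

```python
from itertools import combinations

def map_parent_sets(order, local_score, max_parents=2):
    """Given a topological order, the best parent set of each variable can
    be found independently among its predecessors (toy scale only)."""
    parents = {}
    for i, v in enumerate(order):
        predecessors = order[:i]  # only earlier variables may be parents
        parents[v] = max(
            (set(ps) for k in range(min(max_parents, i) + 1)
                     for ps in combinations(predecessors, k)),
            key=lambda ps: local_score(v, ps),
        )
    return parents
```

Roughly speaking, this decoupling is what enables the Rao-Blackwellisation: orders are sampled, while the per-variable parent-set quantities can be handled exactly given each order.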

Probabilistic Integral Circuits

no code implementations • 25 Oct 2023 • Gennaro Gala, Cassio de Campos, Robert Peharz, Antonio Vergari, Erik Quaeghebeur

In contrast, probabilistic circuits (PCs) are hierarchical discrete mixtures represented as computational graphs composed of input, sum and product units.
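
The computational-graph view of PCs in this excerpt is easy to make concrete: input units evaluate leaf densities, product units factorize over disjoint scopes, and sum units form discrete mixtures. A minimal recursive evaluator (my own toy encoding, not the paper's integral units, which replace discrete sums by integrals):

```python
import math

# Toy PC encoding: ('leaf', var, logdensity), ('prod', children),
# or ('sum', weights, children) with non-negative, normalized weights.
def log_eval(node, x):
    if node[0] == 'leaf':
        _, var, logdensity = node
        return logdensity(x[var])                      # input unit
    if node[0] == 'prod':
        return sum(log_eval(c, x) for c in node[1])    # product unit
    _, weights, children = node                        # sum unit: discrete mixture
    terms = [math.log(w) + log_eval(c, x) for w, c in zip(weights, children)]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))
```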

How to Turn Your Knowledge Graph Embeddings into Generative Models

1 code implementation • NeurIPS 2023 • Lorenzo Loconte, Nicola Di Mauro, Robert Peharz, Antonio Vergari

Some of the most successful knowledge graph embedding (KGE) models for link prediction -- CP, RESCAL, TuckER, ComplEx -- can be interpreted as energy-based models.

Tasks: Knowledge Graph Embedding, Knowledge Graph Embeddings (+1 more)
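
For instance, the CP score of a triple (s, r, o) is a trilinear product of embeddings; exponentiating and normalizing it gives the energy-based (conditional) distribution alluded to here. A toy rendering with made-up sizes (the paper's actual contribution, turning such models into tractable generative circuits, goes further than this):

```python
import numpy as np

rng = np.random.default_rng(0)
E, R, d = 100, 7, 16                 # toy sizes: entities, relations, rank
subj = rng.normal(size=(E, d))       # CP uses separate subject ...
obj = rng.normal(size=(E, d))        # ... and object embedding tables
rel = rng.normal(size=(R, d))

def cp_score(s, r, o):
    # trilinear CP score <a_s, w_r, b_o>
    return np.einsum('d,d,d->', subj[s], rel[r], obj[o])

def p_object(s, r):
    # energy-based reading: softmax of scores over all candidate objects
    scores = np.einsum('d,d,ed->e', subj[s], rel[r], obj)
    z = np.exp(scores - scores.max())
    return z / z.sum()
```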

Bayesian Structure Scores for Probabilistic Circuits

1 code implementation • 23 Feb 2023 • Yang Yang, Gennaro Gala, Robert Peharz

Probabilistic circuits (PCs) are a prominent representation of probability distributions with tractable inference.

Continuous Mixtures of Tractable Probabilistic Models

1 code implementation • 21 Sep 2022 • Alvaro H. C. Correia, Gennaro Gala, Erik Quaeghebeur, Cassio de Campos, Robert Peharz

Meanwhile, tractable probabilistic models such as probabilistic circuits (PCs) can be understood as hierarchical discrete mixture models, and thus are capable of performing exact inference efficiently but often show subpar performance in comparison to continuous latent-space models.

Tasks: Density Estimation, Numerical Integration
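
The "Numerical Integration" tag hints at the bridge between the two model families: a continuous latent mixture p(x) = ∫ p(x|z) p(z) dz, once approximated with a finite set of integration points, is just an ordinary finite mixture, i.e. exactly the kind of model a PC evaluates exactly. A one-dimensional toy sketch (Gaussian components here; the paper uses PCs as components):

```python
import numpy as np
from scipy.stats import norm

def mixture_logpdf(x, logpdf_given_z, z_grid, z_logmass):
    """log p(x) ~ logsumexp over integration points of log p(z) + log p(x|z)."""
    terms = np.array([lm + logpdf_given_z(x, z)
                      for z, lm in zip(z_grid, z_logmass)])
    m = terms.max()
    return m + np.log(np.exp(terms - m).sum())

# p(z) = N(0, 1), p(x|z) = N(z, 1): midpoint rule on a grid of z values
zs = np.linspace(-4.0, 4.0, 65)
log_mass = norm.logpdf(zs) + np.log(zs[1] - zs[0])   # prior mass per cell
print(mixture_logpdf(0.5, lambda x, z: norm.logpdf(x, loc=z), zs, log_mass))
print(norm.logpdf(0.5, scale=np.sqrt(2.0)))          # exact marginal N(0, sqrt(2))
```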

Active Bayesian Causal Inference

1 code implementation • 4 Jun 2022 • Christian Toth, Lars Lorch, Christian Knoll, Andreas Krause, Franz Pernkopf, Robert Peharz, Julius von Kügelgen

In this work, we propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning, which jointly infers a posterior over causal models and queries of interest.

Tasks: Active Learning, Causal Discovery (+2 more)
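
The overall shape of such a framework is a Bayesian active-learning loop. A purely schematic sketch (all names here are hypothetical placeholders, not ABCI's actual interfaces or estimators):

```python
def active_causal_inference(posterior, designs, run_experiment, rounds=10):
    """Schematic loop: greedily pick the intervention with the largest
    estimated expected information gain about the query, run it, and
    update the posterior over causal models."""
    for _ in range(rounds):
        design = max(designs, key=posterior.expected_information_gain)
        outcome = run_experiment(design)            # perform the intervention
        posterior = posterior.update(design, outcome)
    return posterior
```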

Towards Robust Classification with Deep Generative Forests

1 code implementation • 11 Jul 2020 • Alvaro H. C. Correia, Robert Peharz, Cassio de Campos

Decision Trees and Random Forests are among the most widely used machine learning models, and often achieve state-of-the-art performance on tabular, domain-agnostic datasets.

Tasks: BIG-bench Machine Learning, Classification (+2 more)

Joints in Random Forests

1 code implementation • NeurIPS 2020 • Alvaro H. C. Correia, Robert Peharz, Cassio de Campos

Decision Trees (DTs) and Random Forests (RFs) are powerful discriminative learners and tools of central importance to the everyday machine learning practitioner and data scientist.

Tasks: Imputation

Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits

1 code implementation • ICML 2020 • Robert Peharz, Steven Lang, Antonio Vergari, Karl Stelzner, Alejandro Molina, Martin Trapp, Guy Van Den Broeck, Kristian Kersting, Zoubin Ghahramani

Probabilistic circuits (PCs) are a promising avenue for probabilistic modeling, as they permit a wide range of exact and efficient inference routines.
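
The "Einsum" in the title refers to executing many sum and product units in parallel as monolithic einsum operations over vectorized mixture components. A minimal numpy sketch of one such layer (the shapes and weight layout are my assumptions, not the paper's exact parameterization):

```python
import numpy as np

def einsum_layer(logp_left, logp_right, weights):
    """One vectorized sum-product layer: all K x K products of the two
    child partitions, then K_out weighted sums, in a single einsum.
    logp_left, logp_right: [N, K] log-densities; weights: [K_out, K, K],
    each output row non-negative and summing to one."""
    prod = logp_left[:, :, None] + logp_right[:, None, :]  # products (log space)
    m = prod.max(axis=(1, 2), keepdims=True)               # stabilize logsumexp
    mixed = np.einsum('nij,oij->no', np.exp(prod - m), weights)
    return np.log(mixed) + m[:, :, 0]                      # [N, K_out]
```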

Resource-Efficient Neural Networks for Embedded Systems

no code implementations • 7 Jan 2020 • Wolfgang Roth, Günther Schindler, Bernhard Klein, Robert Peharz, Sebastian Tschiatschek, Holger Fröning, Franz Pernkopf, Zoubin Ghahramani

While machine learning is traditionally a resource-intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches.

Tasks: Autonomous Navigation, BIG-bench Machine Learning (+2 more)

Sum-Product Network Decompilation

no code implementations • 20 Dec 2019 • Cory J. Butz, Jhonatan S. Oliveira, Robert Peharz

Due to this dichotomy, tools to convert between BNs and SPNs are desirable.

Deep Structured Mixtures of Gaussian Processes

1 code implementation • 10 Oct 2019 • Martin Trapp, Robert Peharz, Franz Pernkopf, Carl E. Rasmussen

Gaussian Processes (GPs) are powerful non-parametric Bayesian regression models that allow exact posterior inference, but exhibit high computational and memory costs.

Tasks: Gaussian Processes
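
The costs mentioned in this excerpt are the classic ones of exact GP regression: a Cholesky factorization of the N x N Gram matrix, O(N^3) time and O(N^2) memory. For reference, the textbook computation that structured mixtures of GPs aim to tame (standard algorithm, not this paper's method):

```python
import numpy as np

def gp_posterior(X, y, Xstar, kernel, noise=1e-2):
    """Textbook exact GP regression; the Cholesky of the N x N Gram
    matrix is the O(N^3) bottleneck."""
    K = kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)                        # O(N^3) time, O(N^2) memory
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = kernel(X, Xstar)
    mean = Ks.T @ alpha                              # posterior mean at Xstar
    v = np.linalg.solve(L, Ks)
    cov = kernel(Xstar, Xstar) - v.T @ v             # posterior covariance
    return mean, cov
```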

Bayesian Learning of Sum-Product Networks

1 code implementation • NeurIPS 2019 • Martin Trapp, Robert Peharz, Hong Ge, Franz Pernkopf, Zoubin Ghahramani

While parameter learning in SPNs is well developed, structure learning leaves something to be desired: Even though there is a plethora of SPN structure learners, most of them are somewhat ad-hoc and based on intuition rather than a clear learning principle.

Conditional Sum-Product Networks: Imposing Structure on Deep Probabilistic Architectures

no code implementations • 21 May 2019 • Xiaoting Shao, Alejandro Molina, Antonio Vergari, Karl Stelzner, Robert Peharz, Thomas Liebig, Kristian Kersting

In contrast, deep probabilistic models such as sum-product networks (SPNs) capture joint distributions in a tractable fashion, but still lack the expressive power of intractable models based on deep neural networks.

Tasks: Image Classification

Optimisation of Overparametrized Sum-Product Networks

1 code implementation • 20 May 2019 • Martin Trapp, Robert Peharz, Franz Pernkopf

It seems to be a pearl of conventional wisdom that parameter learning in deep sum-product networks is surprisingly fast compared to shallow mixture models.

SPFlow: An Easy and Extensible Library for Deep Probabilistic Learning using Sum-Product Networks

1 code implementation • 11 Jan 2019 • Alejandro Molina, Antonio Vergari, Karl Stelzner, Robert Peharz, Pranav Subramani, Nicola Di Mauro, Pascal Poupart, Kristian Kersting

We introduce SPFlow, an open-source Python library providing a simple interface to inference, learning and manipulation routines for deep and tractable probabilistic models called Sum-Product Networks (SPNs).
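
A usage sketch in the spirit of the project's README (module paths and the operator-overloaded construction follow the library's documentation at the time and may have changed since):

```python
import numpy as np
from spn.structure.leaves.parametric.Parametric import Categorical
from spn.algorithms.Inference import log_likelihood

# A small SPN over two categorical variables, built with the overloaded
# operators: '*' creates product nodes, weighted '+' creates sum nodes.
spn = (0.4 * (Categorical(p=[0.2, 0.8], scope=0) *
              Categorical(p=[0.3, 0.7], scope=1))
       + 0.6 * (Categorical(p=[0.5, 0.5], scope=0) *
                Categorical(p=[0.6, 0.4], scope=1)))

print(log_likelihood(spn, np.array([[0, 1]])))
```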

Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters

2 code implementations • ICLR 2019 • Marton Havasi, Robert Peharz, José Miguel Hernández-Lobato

While deep neural networks are a highly successful model class, their large memory footprint puts considerable strain on energy consumption, communication bandwidth, and storage requirements.

Tasks: Neural Network Compression, Quantization
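
The "bits back" in the title refers to minimal random coding: sender and receiver share a stream of random draws from a prior, and only the index of a draw that resembles a sample from the variational weight posterior is transmitted, at a cost of roughly the KL divergence in bits. A toy sketch of the selection step (schematic; the paper's scheme works block-wise over the network's weights):

```python
import numpy as np

def minimal_random_code(logq, logp, sample_prior, kl_bits, rng):
    """Toy minimal random coding: draw K ~ 2^KL shared prior samples and
    pick one with probability proportional to the importance weight q/p;
    only the index (about log2 K bits) is transmitted."""
    K = int(2 ** (kl_bits + 2))                      # a few bits of overhead
    samples = [sample_prior(rng) for _ in range(K)]  # shared via a common seed
    logw = np.array([logq(s) - logp(s) for s in samples])
    w = np.exp(logw - logw.max())
    idx = rng.choice(K, p=w / w.sum())
    return idx, samples[idx]     # the receiver regenerates samples[idx] alone
```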

Learning Deep Mixtures of Gaussian Process Experts Using Sum-Product Networks

1 code implementation • 12 Sep 2018 • Martin Trapp, Robert Peharz, Carl E. Rasmussen, Franz Pernkopf

In this paper, we introduce a natural and expressive way to tackle these problems, by incorporating GPs in sum-product networks (SPNs), a recently proposed tractable probabilistic model allowing exact and efficient inference.

Tasks: Gaussian Processes, regression (+1 more)

Automatic Bayesian Density Analysis

no code implementations • 24 Jul 2018 • Antonio Vergari, Alejandro Molina, Robert Peharz, Zoubin Ghahramani, Kristian Kersting, Isabel Valera

Classical approaches for exploratory data analysis are usually not flexible enough to deal with the uncertainty inherent to real-world data: they are often restricted to fixed latent interaction models and homogeneous likelihoods; they are sensitive to missing, corrupt and anomalous data; moreover, their expressiveness generally comes at the price of intractable inference.

Tasks: Anomaly Detection, Bayesian Inference (+1 more)

Probabilistic Deep Learning using Random Sum-Product Networks

no code implementations • 5 Jun 2018 • Robert Peharz, Antonio Vergari, Karl Stelzner, Alejandro Molina, Martin Trapp, Kristian Kersting, Zoubin Ghahramani

The need for consistent treatment of uncertainty has recently triggered increased interest in probabilistic deep learning methods.

Tasks: Probabilistic Deep Learning

Safe Semi-Supervised Learning of Sum-Product Networks

1 code implementation • 10 Oct 2017 • Martin Trapp, Tamas Madl, Robert Peharz, Franz Pernkopf, Robert Trappl

In several domains obtaining class annotations is expensive while at the same time unlabelled data are abundant.

On the Latent Variable Interpretation in Sum-Product Networks

no code implementations • 22 Jan 2016 • Robert Peharz, Robert Gens, Franz Pernkopf, Pedro Domingos

We discuss conditional independencies in augmented SPNs, formally establish the probabilistic interpretation of the sum-weights and give an interpretation of augmented SPNs as Bayesian networks.
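
In the standard notation of the SPN literature, the sum-weight interpretation established here reads (my rendering, consistent with that literature):

```latex
S(\mathbf{x}) \;=\; \sum_{c} w_c \, S_c(\mathbf{x})
\quad\Longleftrightarrow\quad
p(\mathbf{x}) \;=\; \sum_{z} p(Z{=}z)\, p(\mathbf{x} \mid Z{=}z),
\qquad w_c \;=\; p(Z{=}c),
```

i.e. each sum node is augmented with a discrete latent variable Z that selects one of its children, and the normalized sum-weights are exactly the prior probabilities of that choice.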
