Search Results for author: Mateja Jamnik

Found 45 papers, 19 papers with code

Sphere Neural-Networks for Rational Reasoning

no code implementations • 22 Mar 2024 Tiansi Dong, Mateja Jamnik, Pietro Liò

SphNN is the first neural model that can determine the validity of long-chained syllogistic reasoning in a single epoch by constructing sphere configurations as Euler diagrams, with a worst-case computational complexity of O(N^2).

Hallucination · Logical Reasoning +2
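As a toy illustration of the Euler-diagram idea (not the authors' model), "all A are B" can be encoded as one sphere containing another, and a chained syllogism checked geometrically; the 1-D centres and radii below are hypothetical stand-ins.

```python
# Toy 1-D sketch of sphere-based syllogistic reasoning: "all A are B"
# becomes "sphere A lies inside sphere B"; chaining containments checks
# "all A are B, all B are C, therefore all A are C".

def contains(outer, inner):
    """True if the (centre, radius) sphere `inner` lies inside `outer`."""
    (co, ro), (ci, ri) = outer, inner
    return abs(co - ci) + ri <= ro

# Hypothetical sphere configuration for terms A, B, C.
A, B, C = (0.0, 1.0), (0.5, 2.0), (0.0, 4.0)

premises_hold = contains(B, A) and contains(C, B)
conclusion_holds = contains(C, A)
```

A configuration in which the premises hold but the conclusion fails would witness the syllogism's invalidity; SphNN searches for such configurations with a neural model.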

Do Concept Bottleneck Models Obey Locality?

no code implementations • 2 Jan 2024 Naveen Raman, Mateo Espinosa Zarlenga, Juyeon Heo, Mateja Jamnik

Deep learning models trained under this paradigm heavily rely on the assumption that neural networks can learn to predict the presence or absence of a given concept independently of other concepts.

HEALNet -- Hybrid Multi-Modal Fusion for Heterogeneous Biomedical Data

no code implementations • 15 Nov 2023 Konstantin Hemker, Nikola Simidjievski, Mateja Jamnik

Technological advances in medical data collection such as high-resolution histopathology and high-throughput genomic sequencing have contributed to the rising requirement for multi-modal biomedical modelling, specifically for image, tabular, and graph data.

Survival Analysis · whole slide images

Multilingual Mathematical Autoformalization

1 code implementation • 7 Nov 2023 Albert Q. Jiang, Wenda Li, Mateja Jamnik

In this work, we create $\texttt{MMA}$, a large, flexible, multilingual, and multi-domain dataset of informal-formal pairs, by using a language model to translate in the reverse direction, that is, from formal mathematical statements into corresponding informal ones.

Few-Shot Learning · Language Acquisition +1
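The reverse-translation pipeline described above can be sketched as follows; `informalize` is a hypothetical stand-in for the language-model call, not the authors' code.

```python
# Hedged sketch of reverse-direction data generation: pair each formal
# statement with a machine-generated informal version, yielding an
# (informal, formal) parallel corpus for training autoformalization.

def informalize(formal_statement: str) -> str:
    """Stand-in for an LLM translating a formal statement into English."""
    return f"Informal restatement of: {formal_statement}"

def build_parallel_corpus(formal_statements):
    """Produce (informal, formal) training pairs."""
    return [(informalize(f), f) for f in formal_statements]

pairs = build_parallel_corpus(
    ["theorem add_comm (a b : Nat) : a + b = b + a"]
)
```

Translating formal-to-informal is the easier direction for current models, which is what makes this a practical way to bootstrap informal-formal pairs at scale.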

Learning to Receive Help: Intervention-Aware Concept Embedding Models

1 code implementation NeurIPS 2023 Mateo Espinosa Zarlenga, Katherine M. Collins, Krishnamurthy Dvijotham, Adrian Weller, Zohreh Shams, Mateja Jamnik

To address this, we propose Intervention-aware Concept Embedding models (IntCEMs), a novel CBM-based architecture and training paradigm that improves a model's receptiveness to test-time interventions.

Enhancing Representation Learning on High-Dimensional, Small-Size Tabular Data: A Divide and Conquer Method with Ensembled VAEs

no code implementations • 27 Jun 2023 Navindu Leelarathna, Andrei Margeloiu, Mateja Jamnik, Nikola Simidjievski

Variational Autoencoders and their many variants have demonstrated an impressive ability to perform dimensionality reduction, often achieving state-of-the-art performance.

Data Augmentation · Dimensionality Reduction +1

ProtoGate: Prototype-based Neural Networks with Local Feature Selection for Tabular Biomedical Data

no code implementations • 21 Jun 2023 Xiangjian Jiang, Andrei Margeloiu, Nikola Simidjievski, Mateja Jamnik

In this paper, we propose ProtoGate, a prototype-based neural model that introduces an inductive bias by attending to both homogeneity and heterogeneity across samples.

feature selection · Inductive Bias

CGXplain: Rule-Based Deep Neural Network Explanations Using Dual Linear Programs

1 code implementation • 11 Apr 2023 Konstantin Hemker, Zohreh Shams, Mateja Jamnik

Rule-based surrogate models are an effective and interpretable way to approximate a Deep Neural Network's (DNN) decision boundaries, allowing humans to easily understand deep learning models.

Human Uncertainty in Concept-Based AI Systems

no code implementations • 22 Mar 2023 Katherine M. Collins, Matthew Barker, Mateo Espinosa Zarlenga, Naveen Raman, Umang Bhatt, Mateja Jamnik, Ilia Sucholutsky, Adrian Weller, Krishnamurthy Dvijotham

We study how existing concept-based models deal with uncertain interventions from humans using two novel datasets: UMNIST, a visual dataset with controlled simulated uncertainty based on the MNIST dataset, and CUB-S, a relabeling of the popular CUB concept dataset with rich, densely-annotated soft labels from humans.

Decision Making

GCI: A (G)raph (C)oncept (I)nterpretation Framework

1 code implementation • 9 Feb 2023 Dmitry Kazhdan, Botty Dimanov, Lucie Charlotte Magister, Pietro Barbiero, Mateja Jamnik, Pietro Lio

Explainable AI (XAI) underwent a recent surge in research on concept extraction, focusing on extracting human-interpretable concepts from Deep Neural Networks.

Explainable Artificial Intelligence (XAI) · Molecular Property Prediction +1

Towards Robust Metrics for Concept Representation Evaluation

1 code implementation • 25 Jan 2023 Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Adrian Weller, Mateja Jamnik

In this paper, we show that such metrics are not appropriate for concept learning and propose novel metrics for evaluating the purity of concept representations in both approaches.

Benchmarking · Disentanglement

Discrete Lagrangian Neural Networks with Automatic Symmetry Discovery

1 code implementation • 20 Nov 2022 Yana Lishkova, Paul Scherer, Steffen Ridderbusch, Mateja Jamnik, Pietro Liò, Sina Ober-Blöbaum, Christian Offen

By one of the most fundamental principles in physics, a dynamical system will exhibit those motions which extremise an action functional.

GCondNet: A Novel Method for Improving Neural Networks on Small High-Dimensional Tabular Data

no code implementations • 11 Nov 2022 Andrei Margeloiu, Nikola Simidjievski, Pietro Lio, Mateja Jamnik

We create a graph between samples for each data dimension, and utilise Graph Neural Networks (GNNs) to extract this implicit structure and to condition the parameters of the first layer of an underlying predictor network.

Vocal Bursts Intensity Prediction
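A minimal, dependency-free sketch of the per-dimension graph idea (an assumption-laden toy, not the authors' implementation): link samples whose values in one feature are close, then run a single averaging step as a stand-in for GNN message passing.

```python
def dimension_graph(values, k=1):
    """Connect each sample to its k predecessors when sorted by this
    feature's value, a simple proxy for similarity in one dimension."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    edges = set()
    for pos, i in enumerate(order):
        for j in order[max(0, pos - k):pos]:
            edges.add((min(i, j), max(i, j)))
    return edges

def neighbour_mean(values, edges):
    """One message-passing step: average each node's neighbours."""
    sums = [0.0] * len(values)
    counts = [0] * len(values)
    for i, j in edges:
        sums[i] += values[j]; counts[i] += 1
        sums[j] += values[i]; counts[j] += 1
    return [s / c if c else v for v, s, c in zip(values, sums, counts)]
```

In the method itself, the GNN outputs would parameterise the first layer of the predictor, injecting the implicit sample-similarity structure into an otherwise standard network.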

Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs

3 code implementations • 21 Oct 2022 Albert Q. Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée Lacroix, Yuhuai Wu, Guillaume Lample

In this work, we introduce Draft, Sketch, and Prove (DSP), a method that maps informal proofs to formal proof sketches, and uses the sketches to guide an automated prover by directing its search to easier sub-problems.

Ranked #3 on Automated Theorem Proving on miniF2F-valid (Pass@100 metric)

Automated Theorem Proving · Language Modelling
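The three-stage control flow named in the title can be sketched as below; all three functions are hypothetical stubs (the paper uses large language models for drafting and sketching, and an automated prover for the sub-problems).

```python
# Hedged sketch of the Draft, Sketch, Prove (DSP) pipeline.

def draft(statement):
    """Stage 1: produce an informal proof of the statement."""
    return f"informal proof of {statement}"

def sketch(informal_proof):
    """Stage 2: map the informal proof to a formal sketch of sub-goals."""
    return [f"sub-goal {i} of ({informal_proof})" for i in (1, 2)]

def prove(sub_goal):
    """Stage 3: close one sub-goal with an automated prover (stubbed)."""
    return True

def dsp(statement):
    """The theorem is proved when every sketched sub-goal is closed."""
    return all(prove(g) for g in sketch(draft(statement)))
```

The point of the decomposition is that the prover only ever faces the easier sub-problems carved out by the sketch, not the whole theorem at once.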

Distributed representations of graphs for drug pair scoring

1 code implementation • 19 Sep 2022 Paul Scherer, Pietro Liò, Mateja Jamnik

In this paper we study the practicality and usefulness of incorporating distributed representations of graphs into models within the context of drug pair scoring.

Transductive Learning

Representational Systems Theory: A Unified Approach to Encoding, Analysing and Transforming Representations

no code implementations • 7 Jun 2022 Daniel Raggi, Gem Stapleton, Mateja Jamnik, Aaron Stockdill, Grecia Garcia Garcia, Peter C-H. Cheng

Since Representational Systems Theory provides a universal approach to encoding representational systems, a further key barrier is eliminated: the need to devise system-specific structural transformation algorithms, that are necessary when different systems adopt different formalisation approaches.

Autoformalization with Large Language Models

no code implementations • 25 May 2022 Yuhuai Wu, Albert Q. Jiang, Wenda Li, Markus N. Rabe, Charles Staats, Mateja Jamnik, Christian Szegedy

Autoformalization is the process of automatically translating from natural language mathematics to formal specifications and proofs.

Ranked #1 on Automated Theorem Proving on miniF2F-test (using extra training data)

Automated Theorem Proving · Program Synthesis

Thor: Wielding Hammers to Integrate Language Models and Automated Theorem Provers

no code implementations • 22 May 2022 Albert Q. Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz Odrzygóźdź, Piotr Miłoś, Yuhuai Wu, Mateja Jamnik

Thor increases a language model's success rate on the PISA dataset from $39\%$ to $57\%$, while solving $8.2\%$ of problems that neither language models nor automated theorem provers can solve on their own.

Automated Theorem Proving

Efficient Decompositional Rule Extraction for Deep Neural Networks

1 code implementation • 24 Nov 2021 Mateo Espinosa Zarlenga, Zohreh Shams, Mateja Jamnik

In recent years, there has been significant work on increasing both interpretability and debuggability of a Deep Neural Network (DNN) by extracting a rule-based model that approximates its decision boundary.

On The Quality Assurance Of Concept-Based Representations

no code implementations • 29 Sep 2021 Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Mateja Jamnik

Recent work on Explainable AI has focused on concept-based explanations, where deep learning models are explained in terms of high-level units of information, referred to as concepts.

Disentanglement

Do Concept Bottleneck Models Learn as Intended?

no code implementations • 10 May 2021 Andrei Margeloiu, Matthew Ashman, Umang Bhatt, Yanzhi Chen, Mateja Jamnik, Adrian Weller

Concept bottleneck models map from raw inputs to concepts, and then from concepts to targets.

Failing Conceptually: Concept-Based Explanations of Dataset Shift

1 code implementation • 18 Apr 2021 Maleakhi A. Wijaya, Dmitry Kazhdan, Botty Dimanov, Mateja Jamnik

Using two case studies (dSprites and 3dshapes), we demonstrate how CBSD can accurately detect underlying concepts that are affected by shifts and achieve higher detection accuracy compared to state-of-the-art shift detection methods.

Is Disentanglement all you need? Comparing Concept-based & Disentanglement Approaches

1 code implementation • 14 Apr 2021 Dmitry Kazhdan, Botty Dimanov, Helena Andres Terre, Mateja Jamnik, Pietro Liò, Adrian Weller

Concept-based explanations have emerged as a popular way of extracting human-interpretable representations from deep discriminative models.

Disentanglement

Improving Interpretability in Medical Imaging Diagnosis using Adversarial Training

1 code implementation • 2 Dec 2020 Andrei Margeloiu, Nikola Simidjievski, Mateja Jamnik, Adrian Weller

We investigate the influence of adversarial training on the interpretability of convolutional neural networks (CNNs), specifically applied to diagnosing skin cancer.

Pairwise Relations Discriminator for Unsupervised Raven's Progressive Matrices

1 code implementation • 2 Nov 2020 Nicholas Quek Wei Kiat, Duo Wang, Mateja Jamnik

PRD reframes the RPM problem into a relation comparison task, which we can solve without requiring the labelling of the RPM problem.

Now You See Me (CME): Concept-based Model Extraction

1 code implementation • 25 Oct 2020 Dmitry Kazhdan, Botty Dimanov, Mateja Jamnik, Pietro Liò, Adrian Weller

Deep Neural Networks (DNNs) have achieved remarkable performance on a range of tasks.

Model extraction

Incorporating network based protein complex discovery into automated model construction

no code implementations • 29 Sep 2020 Paul Scherer, Maja Trȩbacz, Nikola Simidjievski, Zohreh Shams, Helena Andres Terre, Pietro Liò, Mateja Jamnik

We propose a method for gene expression based analysis of cancer phenotypes incorporating network biology knowledge through unsupervised construction of computational graphs.

Clustering

Learned Low Precision Graph Neural Networks

no code implementations • 19 Sep 2020 Yiren Zhao, Duo Wang, Daniel Bates, Robert Mullins, Mateja Jamnik, Pietro Lio

LPGNAS learns the optimal architecture coupled with the best quantisation strategy for different components in the GNN automatically using back-propagation in a single search round.

Abstract Diagrammatic Reasoning with Multiplex Graph Networks

no code implementations ICLR 2020 Duo Wang, Mateja Jamnik, Pietro Lio

We have tested MXGNet on two types of diagrammatic reasoning tasks, namely Diagram Syllogisms and Raven Progressive Matrices (RPM).

Visual Reasoning

Extrapolatable Relational Reasoning With Comparators in Low-Dimensional Manifolds

no code implementations • 15 Jun 2020 Duo Wang, Mateja Jamnik, Pietro Lio

We show that neural nets with this inductive bias achieve considerably better out-of-distribution (o.o.d.) generalisation performance for a range of relational reasoning tasks.

Inductive Bias · Relational Reasoning

Probabilistic Dual Network Architecture Search on Graphs

no code implementations • 21 Mar 2020 Yiren Zhao, Duo Wang, Xitong Gao, Robert Mullins, Pietro Lio, Mateja Jamnik

We present the first differentiable Network Architecture Search (NAS) for Graph Neural Networks (GNNs).

Structural Inductive Biases in Emergent Communication

no code implementations • 4 Feb 2020 Agnieszka Słowik, Abhinav Gupta, William L. Hamilton, Mateja Jamnik, Sean B. Holden, Christopher Pal

In order to communicate, humans flatten a complex representation of ideas and their attributes into a single word or a sentence.

Representation Learning · Sentence

Towards Graph Representation Learning in Emergent Communication

no code implementations • 24 Jan 2020 Agnieszka Słowik, Abhinav Gupta, William L. Hamilton, Mateja Jamnik, Sean B. Holden

Recent findings in neuroscience suggest that the human brain represents information in a geometric structure (for instance, through conceptual spaces).

Graph Representation Learning · Sentence

Decoupling feature propagation from the design of graph auto-encoders

no code implementations • 18 Oct 2019 Paul Scherer, Helena Andres-Terre, Pietro Lio, Mateja Jamnik

We present L-GAE and L-VGAE, two instances of the variational graph auto-encoding (VGAE) family, which separate the feature propagation operations typically found in the graph convolution layers of graph learning methods into a single linear matrix computation performed before input to standard auto-encoder architectures.

Graph Learning · Graph Representation Learning +1
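The decoupling can be sketched in a few lines (a dependency-free toy under stated assumptions, not the released code): feature propagation becomes one linear pre-computation, after which an ordinary auto-encoder trains on the propagated features.

```python
# Sketch: compute S = A_hat^k X once, before any auto-encoder training,
# where A_hat is a normalised adjacency matrix and X the node features.
# Plain-list matrices keep the illustration self-contained.

def matmul(A, B):
    """Plain-list matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def propagate(adj_norm, X, k=2):
    """Apply the normalised adjacency k times as a pre-processing step."""
    S = X
    for _ in range(k):
        S = matmul(adj_norm, S)
    return S
```

Because propagation is done once up front, the auto-encoder itself needs no graph-specific layers, which is the separation the paper's title refers to.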

Unsupervised and interpretable scene discovery with Discrete-Attend-Infer-Repeat

no code implementations • 14 Mar 2019 Duo Wang, Mateja Jamnik, Pietro Lio

In this work we present Discrete Attend Infer Repeat (Discrete-AIR), a Recurrent Auto-Encoder with structured latent distributions containing discrete categorical distributions, continuous attribute distributions, and factorised spatial attention.

Attribute

Step-wise Sensitivity Analysis: Identifying Partially Distributed Representations for Interpretable Deep Learning

no code implementations • 27 Sep 2018 Botty Dimanov, Mateja Jamnik

In this paper, we introduce a novel method, called step-wise sensitivity analysis, which makes three contributions towards increasing the interpretability of Deep Neural Networks (DNNs).
