no code implementations • 22 Mar 2024 • Tiansi Dong, Mateja Jamnik, Pietro Liò
SphNN is the first neural model that can determine the validity of long-chained syllogistic reasoning in one epoch by constructing sphere configurations as Euler diagrams, with a worst-case computational complexity of O(N^2).
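The Euler-diagram idea can be illustrated with a minimal 1-D sketch (ours, not SphNN's neural construction): realise a chain of "All X are Y" premises as nested intervals, i.e. degenerate spheres, and read a conclusion's validity directly off the constructed configuration.

```python
# Toy sketch (ours): model each term as a 1-D interval ("sphere"),
# encode "All X are Y" as interval containment, and check conclusions
# against the constructed configuration rather than by symbolic rules.

def build_intervals(chain):
    # chain = [A, B, C, ...] meaning: All A are B, All B are C, ...
    # Nest intervals so each term's interval contains its predecessor's.
    return {term: (-i - 1.0, i + 1.0) for i, term in enumerate(chain)}

def entails_all(intervals, x, y):
    # "All x are y" holds iff x's interval sits inside y's interval
    (xl, xr), (yl, yr) = intervals[x], intervals[y]
    return yl <= xl and xr <= yr

spheres = build_intervals(["Greeks", "humans", "mortals"])
valid = entails_all(spheres, "Greeks", "mortals")     # True: valid chain
invalid = entails_all(spheres, "mortals", "Greeks")   # False: converse fails
```

A valid long chain needs only one consistent configuration to be certified, which is what makes a constructive, diagram-building approach attractive.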
no code implementations • 2 Jan 2024 • Naveen Raman, Mateo Espinosa Zarlenga, Juyeon Heo, Mateja Jamnik
Deep learning models trained under this paradigm heavily rely on the assumption that neural networks can learn to predict the presence or absence of a given concept independently of other concepts.
no code implementations • 15 Nov 2023 • Konstantin Hemker, Nikola Simidjievski, Mateja Jamnik
Technological advances in medical data collection, such as high-resolution histopathology and high-throughput genomic sequencing, have contributed to the growing need for multi-modal biomedical modelling, specifically for image, tabular, and graph data.
1 code implementation • 7 Nov 2023 • Albert Q. Jiang, Wenda Li, Mateja Jamnik
In this work, we create $\texttt{MMA}$, a large, flexible, multilingual, and multi-domain dataset of informal-formal pairs, by using a language model to translate in the reverse direction, that is, from formal mathematical statements into corresponding informal ones.
1 code implementation • NeurIPS 2023 • Mateo Espinosa Zarlenga, Katherine M. Collins, Krishnamurthy Dvijotham, Adrian Weller, Zohreh Shams, Mateja Jamnik
To address this, we propose Intervention-aware Concept Embedding models (IntCEMs), a novel CBM-based architecture and training paradigm that improves a model's receptiveness to test-time interventions.
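The intervention mechanism that IntCEMs are trained to be receptive to can be pictured with a toy sketch (ours, not the IntCEM architecture): because predictions flow through an explicit concept vector, an expert can overwrite mispredicted concepts at test time and rerun only the downstream head.

```python
# Toy sketch (ours) of test-time concept interventions: the label is a
# function of the concept vector alone, so correcting a concept entry
# changes the prediction without retraining anything.

def concept_head(concepts, weights):
    # downstream prediction depends only on the concepts
    return sum(w * c for w, c in zip(weights, concepts))

def intervene(concepts, corrections):
    # replace selected predicted concepts with expert-provided values
    updated = list(concepts)
    for idx, value in corrections.items():
        updated[idx] = value
    return updated

predicted = [0.9, 0.1, 0.8]   # model's concept predictions (toy values)
weights = [1.0, 2.0, -1.0]    # concept -> label weights (toy values)

before = concept_head(predicted, weights)                   # 0.3
after = concept_head(intervene(predicted, {1: 1.0}), weights)  # 2.1
```

A model that is "receptive" to interventions is one where such corrections reliably move the final prediction toward the right answer.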
no code implementations • 27 Jun 2023 • Navindu Leelarathna, Andrei Margeloiu, Mateja Jamnik, Nikola Simidjievski
Variational Autoencoders and their many variants have displayed impressive ability to perform dimensionality reduction, often achieving state-of-the-art performance.
no code implementations • 21 Jun 2023 • Xiangjian Jiang, Andrei Margeloiu, Nikola Simidjievski, Mateja Jamnik
In this paper, we propose ProtoGate, a prototype-based neural model that introduces an inductive bias by attending to both homogeneity and heterogeneity across samples.
1 code implementation • 2 Jun 2023 • Katherine M. Collins, Albert Q. Jiang, Simon Frieder, Lionel Wong, Miri Zilka, Umang Bhatt, Thomas Lukasiewicz, Yuhuai Wu, Joshua B. Tenenbaum, William Hart, Timothy Gowers, Wenda Li, Adrian Weller, Mateja Jamnik
There is much excitement about the opportunity to harness the power of large language models (LLMs) when building problem-solving assistants.
1 code implementation • 27 Apr 2023 • Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Mateo Espinosa Zarlenga, Lucie Charlotte Magister, Alberto Tonda, Pietro Liò, Frederic Precioso, Mateja Jamnik, Giuseppe Marra
Deep learning methods are highly accurate, yet their opaque decision process prevents them from earning full human trust.
1 code implementation • 11 Apr 2023 • Konstantin Hemker, Zohreh Shams, Mateja Jamnik
Rule-based surrogate models are an effective and interpretable way to approximate a Deep Neural Network's (DNN) decision boundaries, allowing humans to easily understand deep learning models.
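The surrogate idea can be sketched generically (our toy code, not this paper's algorithm): fit a simple, readable rule to the network's own predictions rather than the ground-truth labels, so that the rule's agreement score measures how faithfully it mimics the decision boundary.

```python
# Generic sketch (ours) of rule-based surrogacy: a depth-1 threshold
# rule ("stump") is fit to a black-box model's *predictions*, giving an
# interpretable approximation of its decision boundary.

def black_box(x):
    # stand-in for a trained DNN's decision function (toy linear rule)
    return 1 if 0.7 * x[0] + 0.3 * x[1] > 0.5 else 0

def fit_stump(points, labels):
    # exhaustively pick the (feature, threshold) pair that best matches
    # the black-box labels; returns (agreement, feature index, threshold)
    best = None
    for f in range(len(points[0])):
        for p in points:
            t = p[f]
            acc = sum((1 if q[f] > t else 0) == y
                      for q, y in zip(points, labels)) / len(points)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    return best

points = [[0.1, 0.1], [0.9, 0.9], [0.8, 0.1], [0.2, 0.9]]
labels = [black_box(p) for p in points]     # surrogate targets
agreement, feat, thr = fit_stump(points, labels)
# readable rule: "predict 1 iff x[feat] > thr", with fidelity `agreement`
```

Real rule-extraction methods produce much richer rule sets, but the fidelity-to-the-model (not accuracy-on-the-data) objective is the same.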
no code implementations • 22 Mar 2023 • Katherine M. Collins, Matthew Barker, Mateo Espinosa Zarlenga, Naveen Raman, Umang Bhatt, Mateja Jamnik, Ilia Sucholutsky, Adrian Weller, Krishnamurthy Dvijotham
We study how existing concept-based models deal with uncertain interventions from humans using two novel datasets: UMNIST, a visual dataset with controlled simulated uncertainty based on the MNIST dataset, and CUB-S, a relabeling of the popular CUB concept dataset with rich, densely-annotated soft labels from humans.
1 code implementation • 9 Feb 2023 • Dmitry Kazhdan, Botty Dimanov, Lucie Charlotte Magister, Pietro Barbiero, Mateja Jamnik, Pietro Lio
Explainable AI (XAI) has recently seen a surge of research on concept extraction, which focuses on extracting human-interpretable concepts from Deep Neural Networks.
1 code implementation • 25 Jan 2023 • Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Adrian Weller, Mateja Jamnik
In this paper, we show that such metrics are not appropriate for concept learning and propose novel metrics for evaluating the purity of concept representations in both approaches.
1 code implementation • 28 Nov 2022 • Andrei Margeloiu, Nikola Simidjievski, Pietro Lio, Mateja Jamnik
Tabular biomedical data is often high-dimensional but has very few samples.
1 code implementation • 20 Nov 2022 • Yana Lishkova, Paul Scherer, Steffen Ridderbusch, Mateja Jamnik, Pietro Liò, Sina Ober-Blöbaum, Christian Offen
By one of the most fundamental principles in physics, a dynamical system will exhibit those motions which extremise an action functional.
no code implementations • 14 Nov 2022 • Shea Cardozo, Gabriel Islas Montero, Dmitry Kazhdan, Botty Dimanov, Maleakhi Wijaya, Mateja Jamnik, Pietro Lio
Recent work has suggested post-hoc explainers might be ineffective for detecting spurious correlations in Deep Neural Networks (DNNs).
no code implementations • 11 Nov 2022 • Andrei Margeloiu, Nikola Simidjievski, Pietro Lio, Mateja Jamnik
We create a graph between samples for each data dimension, and utilise Graph Neural Networks (GNNs) both to extract this implicit structure and to condition the parameters of the first layer of an underlying predictor network.
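The "one graph per feature dimension" construction can be sketched in a few lines (toy code, ours): for each dimension, connect every sample to its nearest samples as measured by that single feature alone.

```python
# Toy sketch (ours) of per-dimension sample graphs: each data dimension
# yields its own k-nearest-neighbour edge list over the samples, which a
# GNN could then use to condition a downstream predictor.

def per_dimension_knn_edges(samples, k=1):
    n, d = len(samples), len(samples[0])
    graphs = []
    for dim in range(d):
        edges = []
        for i in range(n):
            # distances to all other samples along this one dimension
            dists = sorted((abs(samples[i][dim] - samples[j][dim]), j)
                           for j in range(n) if j != i)
            edges.extend((i, j) for _, j in dists[:k])
        graphs.append(edges)
    return graphs

samples = [[0.1, 5.0], [0.2, 9.0], [0.9, 5.1]]
graphs = per_dimension_knn_edges(samples)
# dimension 0 links samples 0 and 1; dimension 1 links samples 0 and 2
```

The point of the construction is that different dimensions induce different neighbourhood structures, which is exactly the implicit relational signal a per-dimension graph exposes.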
3 code implementations • 21 Oct 2022 • Albert Q. Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée Lacroix, Yuhuai Wu, Guillaume Lample
In this work, we introduce Draft, Sketch, and Prove (DSP), a method that maps informal proofs to formal proof sketches, and uses the sketches to guide an automated prover by directing its search to easier sub-problems.
Ranked #3 on Automated Theorem Proving on miniF2F-valid (Pass@100 metric)
1 code implementation • 19 Sep 2022 • Mateo Espinosa Zarlenga, Pietro Barbiero, Gabriele Ciravegna, Giuseppe Marra, Francesco Giannini, Michelangelo Diligenti, Zohreh Shams, Frederic Precioso, Stefano Melacci, Adrian Weller, Pietro Lio, Mateja Jamnik
Deploying AI-powered systems requires trustworthy models supporting effective human interactions, going beyond raw prediction accuracy.
1 code implementation • 19 Sep 2022 • Paul Scherer, Pietro Liò, Mateja Jamnik
In this paper we study the practicality and usefulness of incorporating distributed representations of graphs into models within the context of drug pair scoring.
no code implementations • 27 Jul 2022 • Lucie Charlotte Magister, Pietro Barbiero, Dmitry Kazhdan, Federico Siciliano, Gabriele Ciravegna, Fabrizio Silvestri, Mateja Jamnik, Pietro Lio
The opaque reasoning of Graph Neural Networks induces a lack of human trust.
no code implementations • 7 Jun 2022 • Daniel Raggi, Gem Stapleton, Mateja Jamnik, Aaron Stockdill, Grecia Garcia Garcia, Peter C-H. Cheng
Since Representational Systems Theory provides a universal approach to encoding representational systems, a further key barrier is eliminated: the need to devise system-specific structural transformation algorithms, which are necessary when different systems adopt different formalisation approaches.
no code implementations • 25 May 2022 • Yuhuai Wu, Albert Q. Jiang, Wenda Li, Markus N. Rabe, Charles Staats, Mateja Jamnik, Christian Szegedy
Autoformalization is the process of automatically translating from natural language mathematics to formal specifications and proofs.
Ranked #1 on Automated Theorem Proving on miniF2F-test (using extra training data)
no code implementations • 22 May 2022 • Albert Q. Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz Odrzygóźdź, Piotr Miłoś, Yuhuai Wu, Mateja Jamnik
Thor increases a language model's success rate on the PISA dataset from $39\%$ to $57\%$, while solving $8.2\%$ of problems neither language models nor automated theorem provers are able to solve on their own.
Ranked #3 on Automated Theorem Proving on miniF2F-test
1 code implementation • 24 Nov 2021 • Mateo Espinosa Zarlenga, Zohreh Shams, Mateja Jamnik
In recent years, there has been significant work on increasing both interpretability and debuggability of a Deep Neural Network (DNN) by extracting a rule-based model that approximates its decision boundary.
no code implementations • 29 Sep 2021 • Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Mateja Jamnik
Recent work on Explainable AI has focused on concept-based explanations, where deep learning models are explained in terms of high-level units of information, referred to as concepts.
no code implementations • 10 May 2021 • Andrei Margeloiu, Matthew Ashman, Umang Bhatt, Yanzhi Chen, Mateja Jamnik, Adrian Weller
Concept bottleneck models map from raw inputs to concepts, and then from concepts to targets.
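This two-stage structure can be written out as a minimal sketch (toy linear maps, ours): inputs map to concepts, and the target is predicted from the concepts alone, with no direct input-to-target path.

```python
# Minimal sketch (ours) of a concept bottleneck: stage 1 maps inputs to
# interpretable concept scores, stage 2 predicts the target from the
# concepts only -- that restriction is the "bottleneck".

def concepts_from_input(x, concept_weights):
    # stage 1: each concept is a thresholded linear score of the input
    return [1.0 if sum(w * xi for w, xi in zip(row, x)) > 0 else 0.0
            for row in concept_weights]

def target_from_concepts(c, target_weights):
    # stage 2: the label sees only the concepts
    return sum(w * ci for w, ci in zip(target_weights, c))

concept_weights = [[1.0, -1.0], [0.5, 0.5]]   # input -> concepts (toy)
target_weights = [2.0, -1.0]                   # concepts -> target (toy)

x = [0.2, 0.9]
c = concepts_from_input(x, concept_weights)    # [0.0, 1.0]
y = target_from_concepts(c, target_weights)    # -1.0
```

Because the label depends only on `c`, the concept vector is both an explanation of the prediction and a handle for inspecting where it went wrong.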
1 code implementation • 18 Apr 2021 • Maleakhi A. Wijaya, Dmitry Kazhdan, Botty Dimanov, Mateja Jamnik
Using two case studies (dSprites and 3dshapes), we demonstrate how CBSD can accurately detect underlying concepts that are affected by shifts and achieve higher detection accuracy compared to state-of-the-art shift detection methods.
1 code implementation • 14 Apr 2021 • Dmitry Kazhdan, Botty Dimanov, Helena Andres Terre, Mateja Jamnik, Pietro Liò, Adrian Weller
Concept-based explanations have emerged as a popular way of extracting human-interpretable representations from deep discriminative models.
1 code implementation • 13 Dec 2020 • Dmitry Kazhdan, Botty Dimanov, Mateja Jamnik, Pietro Liò
Recurrent Neural Networks (RNNs) have achieved remarkable performance on a range of tasks.
1 code implementation • 2 Dec 2020 • Andrei Margeloiu, Nikola Simidjievski, Mateja Jamnik, Adrian Weller
We investigate the influence of adversarial training on the interpretability of convolutional neural networks (CNNs), specifically applied to diagnosing skin cancer.
no code implementations • 22 Nov 2020 • Maja Trębacz, Zohreh Shams, Mateja Jamnik, Paul Scherer, Nikola Simidjievski, Helena Andres Terre, Pietro Liò
Stratifying cancer patients based on their gene expression levels enables improved diagnosis, survival analysis and treatment planning.
1 code implementation • 2 Nov 2020 • Nicholas Quek Wei Kiat, Duo Wang, Mateja Jamnik
PRD reframes the RPM problem as a relation-comparison task, which we can solve without requiring labels for the RPM problem.
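The relation-comparison reframing can be illustrated on a toy numeric matrix (our sketch, not PRD's neural model): instead of classifying the answer directly, score each candidate by how closely the relation it completes matches the relation in the context rows.

```python
# Toy sketch (ours) of relation comparison for RPM-style puzzles: a
# row's "relation" is summarised by its successive differences, and
# candidates are ranked by distance to the context rows' relation.

def row_relation(row):
    # represent a row's relation as its successive differences
    return [b - a for a, b in zip(row, row[1:])]

def relation_distance(r1, r2):
    return sum(abs(a - b) for a, b in zip(r1, r2))

def score_candidates(context_rows, partial_row, candidates):
    target = row_relation(context_rows[0])   # relation to match
    return [relation_distance(row_relation(partial_row + [c]), target)
            for c in candidates]

context = [[1, 2, 3], [4, 5, 6]]   # both rows follow a "+1" relation
partial = [7, 8]                   # bottom row, missing its last entry
candidates = [6, 9, 12]
scores = score_candidates(context, partial, candidates)
best = candidates[scores.index(min(scores))]   # 9 completes the +1 relation
```

No answer labels are needed here: the supervision signal is the comparison between relations, which is the core of the reframing.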
1 code implementation • 25 Oct 2020 • Dmitry Kazhdan, Botty Dimanov, Mateja Jamnik, Pietro Liò, Adrian Weller
Deep Neural Networks (DNNs) have achieved remarkable performance on a range of tasks.
no code implementations • 29 Sep 2020 • Paul Scherer, Maja Trȩbacz, Nikola Simidjievski, Zohreh Shams, Helena Andres Terre, Pietro Liò, Mateja Jamnik
We propose a method for gene expression based analysis of cancer phenotypes incorporating network biology knowledge through unsupervised construction of computational graphs.
no code implementations • 19 Sep 2020 • Yiren Zhao, Duo Wang, Daniel Bates, Robert Mullins, Mateja Jamnik, Pietro Lio
LPGNAS learns the optimal architecture coupled with the best quantisation strategy for different components in the GNN automatically using back-propagation in a single search round.
no code implementations • ICLR 2020 • Duo Wang, Mateja Jamnik, Pietro Lio
We have tested MXGNet on two types of diagrammatic reasoning tasks, namely Diagram Syllogisms and Raven Progressive Matrices (RPM).
no code implementations • 15 Jun 2020 • Duo Wang, Mateja Jamnik, Pietro Lio
We show that neural nets with this inductive bias achieve considerably better out-of-distribution (o.o.d.) generalisation performance for a range of relational reasoning tasks.
no code implementations • 21 Mar 2020 • Yiren Zhao, Duo Wang, Xitong Gao, Robert Mullins, Pietro Lio, Mateja Jamnik
We present the first differentiable Network Architecture Search (NAS) for Graph Neural Networks (GNNs).
no code implementations • 4 Feb 2020 • Agnieszka Słowik, Abhinav Gupta, William L. Hamilton, Mateja Jamnik, Sean B. Holden, Christopher Pal
In order to communicate, humans flatten a complex representation of ideas and their attributes into a single word or a sentence.
no code implementations • 24 Jan 2020 • Agnieszka Słowik, Abhinav Gupta, William L. Hamilton, Mateja Jamnik, Sean B. Holden
Recent findings in neuroscience suggest that the human brain represents information in a geometric structure (for instance, through conceptual spaces).
no code implementations • 18 Oct 2019 • Paul Scherer, Helena Andres-Terre, Pietro Lio, Mateja Jamnik
We present L-GAE and L-VGAE, two instances of the variational graph auto-encoding (VGAE) family, which separate the feature-propagation operations typically embedded in graph convolution layers into a single linear matrix computation performed before input to a standard auto-encoder architecture.
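The separation can be sketched concretely (toy code, ours): compute the symmetrically normalised propagation once as plain linear algebra, and hand the smoothed features to any ordinary auto-encoder afterwards.

```python
import math

# Toy sketch (ours) of linearised graph propagation: build the
# symmetrically normalised adjacency S = D^(-1/2) (A + I) D^(-1/2) and
# apply it k times to the node features as a pre-computation, before any
# standard auto-encoder sees the data.

def normalised_propagation(adj, X, k=2):
    n = len(adj)
    # add self-loops
    A = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
         for i in range(n)]
    deg = [sum(row) for row in A]
    # symmetric normalisation: D^(-1/2) A D^(-1/2)
    S = [[A[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
         for i in range(n)]
    # apply S to the features k times: X <- S X
    for _ in range(k):
        X = [[sum(S[i][t] * X[t][j] for t in range(n))
              for j in range(len(X[0]))] for i in range(n)]
    return X

adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]           # a 3-node path graph
X = [[1.0], [0.0], [1.0]]   # one feature per node
X_prop = normalised_propagation(adj, X)   # smoothed node features
```

Because the propagation is a fixed linear map, it needs to be computed only once, after which encoding and decoding reduce to standard (variational) auto-encoder training on `X_prop`.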
no code implementations • 18 Sep 2019 • Agnieszka Słowik, Chaitanya Mangla, Mateja Jamnik, Sean B. Holden, Lawrence C. Paulson
Heuristics in theorem provers are often parameterised.
no code implementations • 14 Mar 2019 • Duo Wang, Mateja Jamnik, Pietro Lio
In this work we present Discrete Attend Infer Repeat (Discrete-AIR), a Recurrent Auto-Encoder with structured latent distributions containing discrete categorical distributions, continuous attribute distributions, and factorised spatial attention.
no code implementations • 27 Sep 2018 • Botty Dimanov, Mateja Jamnik
In this paper, we introduce a novel method, called step-wise sensitivity analysis, which makes three contributions towards increasing the interpretability of Deep Neural Networks (DNNs).