1 code implementation • 24 Aug 2023 • Matej Zečević, Moritz Willig, Devendra Singh Dhami, Kristian Kersting
We conjecture that in the cases where LLMs succeed at causal inference, an underlying meta SCM exposed correlations between causal facts in the natural-language data on which the LLM was ultimately trained.
no code implementations • 23 Dec 2022 • Matej Zečević, Moritz Willig, Jonas Seng, Florian Peter Busch
This short paper discusses continually updated causal abstractions as a potential direction of future research.
no code implementations • 23 Dec 2022 • Matej Zečević, Moritz Willig, Devendra Singh Dhami, Kristian Kersting
Many researchers have voiced their support for Pearl's counterfactual theory of causation as a stepping stone toward AI/ML research's ultimate goal of intelligent systems.
no code implementations • 23 Dec 2022 • Kieran Didi, Matej Zečević
Research around AI for Science has seen significant success since the rise of deep learning models over the past decade, even on longstanding challenges such as protein structure prediction.
1 code implementation • 14 Jun 2022 • Moritz Willig, Matej Zečević, Devendra Singh Dhami, Kristian Kersting
Foundation models are subject to an ongoing heated debate, leaving open the question of progress towards AGI and dividing the community into two camps: those who see the arguably impressive results as evidence for the scaling hypothesis, and those who worry about the lack of interpretability and reasoning capabilities.
no code implementations • 14 Jun 2022 • Salahedine Youssef, Matej Zečević, Devendra Singh Dhami, Kristian Kersting
Even though AI has advanced rapidly in recent years, displaying success in solving highly complex problems, the class of Bongard Problems (BPs) remains largely unsolved by modern ML techniques.
no code implementations • 14 Jun 2022 • Jonas Seng, Matej Zečević, Devendra Singh Dhami, Kristian Kersting
Simulations are ubiquitous in machine learning.
no code implementations • 14 Jun 2022 • David Steinmann, Matej Zečević, Devendra Singh Dhami, Kristian Kersting
In this work, we extend the attribution methods for explaining neural networks to linear programs.
no code implementations • 14 Jun 2022 • Florian Peter Busch, Matej Zečević, Kristian Kersting, Devendra Singh Dhami
We introduce an approach where we consider neural encodings for LPs that justify the application of attribution methods from explainable artificial intelligence (XAI) designed for neural learning systems.
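The snippet above names attribution methods from XAI applied to LPs without detailing the encoding; as a rough illustration only, here is an occlusion-style attribution over the objective coefficients of a toy LP (our own construction, not the paper's neural encoding — the LP, its data, and the scoring rule are all assumptions):

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP: minimize c @ x subject to A_ub @ x <= b_ub, x >= 0.
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0]])   # x1 + x2 <= 4
b_ub = np.array([4.0])
bounds = [(0, None), (0, None)]

def opt_value(c_vec):
    # Optimal objective value of the LP for a given cost vector.
    return linprog(c_vec, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun

base = opt_value(c)
attributions = []
for i in range(len(c)):
    c_occ = c.copy()
    c_occ[i] = 0.0              # "occlude" coefficient i
    # Attribution = how much the optimum shifts when coefficient i is removed.
    attributions.append(opt_value(c_occ) - base)
```

In this toy instance the optimum sits entirely on the second variable, so occluding its coefficient shifts the optimal value the most, mirroring how occlusion attributes importance in neural-network XAI.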
Explainable Artificial Intelligence (XAI)
1 code implementation • 29 Mar 2022 • Matej Zečević, Florian Peter Busch, Devendra Singh Dhami, Kristian Kersting
Linear Programs (LPs) are widely celebrated, particularly in machine learning, where they allow for effectively solving probabilistic inference tasks or imposing structure on end-to-end learning systems.
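For readers unfamiliar with the LP setting the snippet refers to, a minimal worked example (problem data chosen by us for illustration, solved with SciPy's `linprog`) looks like:

```python
from scipy.optimize import linprog

# Maximize x1 + 2*x2 by minimizing its negation, subject to:
#   x1 + x2 <= 4,  x2 <= 3,  x1, x2 >= 0.
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0],
        [0.0, 1.0]]
b_ub = [4.0, 3.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
# Optimum at x1 = 1, x2 = 3, objective value -7 (i.e. max of 7).
```

Structured prediction and inference layers in ML typically embed an LP of exactly this shape, with learned costs in `c` and constraints encoding the output structure.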
no code implementations • 22 Oct 2021 • Matej Zečević, Devendra Singh Dhami, Kristian Kersting
More specifically, there are models capable of answering causal queries that are not SCMs, which we refer to as partially causal models (PCMs).
no code implementations • 22 Oct 2021 • Moritz Willig, Matej Zečević, Devendra Singh Dhami, Kristian Kersting
Most algorithms in classical and contemporary machine learning focus on correlation-based dependence between features to drive performance.
no code implementations • 5 Oct 2021 • Matej Zečević, Devendra Singh Dhami, Constantin A. Rothkopf, Kristian Kersting
We believe the question part on the user's end to be solved, since the user's mental model can provide the causal model.
no code implementations • 9 Sep 2021 • Matej Zečević, Devendra Singh Dhami, Petar Veličković, Kristian Kersting
Causality can be described in terms of a structural causal model (SCM) that carries information on the variables of interest and their mechanistic relations.
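The description above can be made concrete with a minimal SCM sketch (variable names and mechanisms are our own illustrative choices): each endogenous variable is a function of its causal parents plus independent exogenous noise, and an intervention replaces a mechanism outright.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scm(n, do_x=None):
    u_x = rng.normal(size=n)        # exogenous noise for X
    u_y = rng.normal(size=n)        # exogenous noise for Y
    # do(X = x0) replaces X's mechanism with the constant x0.
    x = u_x if do_x is None else np.full(n, do_x)
    y = 2.0 * x + u_y               # structural equation: Y := 2X + U_Y
    return x, y

# Observational vs. interventional samples of Y:
x_obs, y_obs = sample_scm(10_000)
_, y_do = sample_scm(10_000, do_x=1.0)
```

Under do(X = 1) the mean of Y concentrates near 2, whereas observationally it stays near 0 — the kind of mechanistic information an SCM carries beyond a joint distribution.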
1 code implementation • 26 May 2021 • Matej Zečević, Devendra Singh Dhami, Kristian Kersting
Recent years have been marked by extended research on adversarial attacks, especially against deep neural networks.
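As a generic illustration of the attack setting mentioned above (an FGSM-style perturbation on a hand-built linear classifier, not this paper's method), consider:

```python
import numpy as np

# Linear classifier: predict positive iff w @ x > 0.
w = np.array([1.0, -1.0])
x = np.array([0.3, -0.2])           # correctly classified: score = 0.5

# FGSM-style step: move x against the sign of the score's input
# gradient (which is just w for a linear model) to flip the decision.
eps = 0.6
x_adv = x - eps * np.sign(w)        # x_adv = [-0.3, 0.4], score = -0.7
```

A tiny, norm-bounded perturbation flips the label; deep networks exhibit the same vulnerability, which is what the adversarial-attack literature studies at scale.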
1 code implementation • NeurIPS 2021 • Matej Zečević, Devendra Singh Dhami, Athresh Karanam, Sriraam Natarajan, Kristian Kersting
While probabilistic models are an important tool for studying causality, their use suffers from the intractability of inference.