no code implementations • 2 Feb 2024 • Daniel Cunnington, Mark Law, Jorge Lobo, Alessandra Russo
In this paper, we leverage the implicit knowledge within foundation models to enhance the performance in NeSy tasks, whilst reducing the amount of data labelling and manual engineering.
no code implementations • 18 Oct 2023 • Zlatina Mileva, Antonis Bikakis, Fabio Aurelio D'Asaro, Mark Law, Alessandra Russo
In this paper, we present a novel framework that uses an Inductive Logic Programming approach to learn the acceptability semantics of several abstract and structured argumentation frameworks in an interpretable way.
1 code implementation • 31 May 2022 • Daniel Furelos-Blanco, Mark Law, Anders Jonsson, Krysia Broda, Alessandra Russo
Reward machines (RMs) are a recent formalism for representing the reward function of a reinforcement learning task through a finite-state machine whose edges encode subgoals of the task using high-level events.
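The finite-state view of a reward machine can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the states, events, and rewards below are hypothetical.

```python
# Minimal reward-machine sketch: transitions map (state, event) to
# (next_state, reward), so edges encode subgoals as high-level events.
class RewardMachine:
    def __init__(self, transitions, initial, terminal):
        self.transitions = transitions  # {(state, event): (next_state, reward)}
        self.state = initial
        self.terminal = terminal

    def step(self, event):
        """Advance on a high-level event; return the reward of the edge taken."""
        if (self.state, event) in self.transitions:
            self.state, reward = self.transitions[(self.state, event)]
            return reward
        return 0.0  # events with no matching edge leave the machine unchanged

    def done(self):
        return self.state in self.terminal

# Hypothetical task: reach the key, then the door (two subgoals as edges).
rm = RewardMachine(
    transitions={("u0", "key"): ("u1", 0.0), ("u1", "door"): ("u2", 1.0)},
    initial="u0",
    terminal={"u2"},
)
```

Stepping `rm` on the event sequence `key`, `door` yields rewards 0.0 then 1.0 and reaches the terminal state, mirroring how an RM decomposes a sparse task reward into subgoal edges.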
1 code implementation • 25 May 2022 • Daniel Cunnington, Mark Law, Jorge Lobo, Alessandra Russo
A promising direction for achieving this goal is Neuro-Symbolic AI, which aims to combine the interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
no code implementations • 14 May 2022 • Alice Tarzariol, Martin Gebser, Mark Law, Konstantin Schekotihin
Many industrial applications require finding solutions to challenging combinatorial problems.
1 code implementation • 24 Jun 2021 • Daniel Cunnington, Mark Law, Alessandra Russo, Jorge Lobo
To address this limitation, we propose a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FFNSL), that integrates a logic-based machine learning system capable of learning from noisy examples with neural networks, in order to learn interpretable knowledge from labelled unstructured data.
no code implementations • 31 Dec 2020 • Mark Law
The fundamental idea of the approach, called Conflict-driven ILP (CDILP), is to iteratively interleave the search for a hypothesis with the generation of constraints which explain why the current hypothesis does not cover a particular example.
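The interleaving described above can be sketched as a loop over a finite candidate space. This is a toy illustration of the conflict-driven idea only, not the actual CDILP algorithm; the `covers` relation and the threshold domain are invented for the example.

```python
# Toy sketch of a conflict-driven loop: search for a hypothesis satisfying
# the constraints accumulated so far, then check it against the examples;
# each uncovered example generates a new constraint that prunes the search.
def cdilp(candidates, examples, covers):
    constraints = []  # each constraint rules out hypotheses failing one example
    while True:
        # Hypothesis search: any candidate consistent with all constraints.
        viable = [h for h in candidates if all(c(h) for c in constraints)]
        if not viable:
            return None  # no hypothesis can satisfy the constraints
        hypothesis = viable[0]
        # Conflict analysis: find an example the hypothesis does not cover.
        uncovered = [e for e in examples if not covers(hypothesis, e)]
        if not uncovered:
            return hypothesis  # covers every example
        e = uncovered[0]
        constraints.append(lambda h, e=e: covers(h, e))

# Invented domain: hypotheses are thresholds, an example is covered if e <= h.
covers = lambda h, e: e <= h
print(cdilp(candidates=[1, 3, 5], examples=[2, 4], covers=covers))
```

Each iteration either succeeds or learns a constraint explaining the failure, so the same hypothesis is never revisited.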
no code implementations • 9 Dec 2020 • Daniel Cunnington, Alessandra Russo, Mark Law, Jorge Lobo, Lance Kaplan
Using the scoring function of FastLAS, NSL searches for short, interpretable rules that generalise over such noisy examples.
no code implementations • 8 Sep 2020 • Daniel Furelos-Blanco, Mark Law, Anders Jonsson, Krysia Broda, Alessandra Russo
In this paper we present ISA, an approach for learning and exploiting subgoals in episodic reinforcement learning (RL) tasks.
no code implementations • 2 May 2020 • Mark Law, Alessandra Russo, Krysia Broda
The goal of Inductive Logic Programming (ILP) is to learn a program that explains a set of examples in the context of some pre-existing background knowledge.
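The ILP setting can be illustrated with a toy rule-selection example. This is only a hand-rolled sketch of the learning problem (background knowledge, examples, hypothesis search); the predicates and candidate rules are hypothetical, and a real ILP system searches a rule space rather than enumerating two strings.

```python
# Toy ILP setting: given background facts and positive/negative examples,
# select the candidate rule whose derived facts explain the examples.
background = {("parent", "ann", "bob"), ("parent", "bob", "carl")}

def apply_rule(rule, facts):
    """Derive grandparent facts according to the candidate rule body."""
    if rule == "grandparent(X,Z) :- parent(X,Y), parent(Y,Z)":
        return {("grandparent", x, z)
                for (_, x, y1) in facts
                for (_, y2, z) in facts if y1 == y2}
    if rule == "grandparent(X,Y) :- parent(X,Y)":
        return {("grandparent", x, y) for (_, x, y) in facts}

positives = {("grandparent", "ann", "carl")}
negatives = {("grandparent", "ann", "bob")}

candidates = ["grandparent(X,Z) :- parent(X,Y), parent(Y,Z)",
              "grandparent(X,Y) :- parent(X,Y)"]
learned = [r for r in candidates
           if positives <= apply_rule(r, background)
           and not (negatives & apply_rule(r, background))]
print(learned[0])
```

Only the transitive rule entails the positive example without entailing the negative one, so it is the hypothesis that "explains the examples in the context of the background knowledge".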
no code implementations • 29 Nov 2019 • Daniel Furelos-Blanco, Mark Law, Alessandra Russo, Krysia Broda, Anders Jonsson
In this work we present ISA, a novel approach for learning and exploiting subgoals in reinforcement learning (RL).
2 code implementations • 23 Jun 2019 • Andrew Cropper, Richard Evans, Mark Law
This problem is central to inductive general game playing (IGGP).
no code implementations • 25 Aug 2018 • Mark Law, Alessandra Russo, Krysia Broda
In recent years, non-monotonic Inductive Logic Programming has received growing interest.
no code implementations • 5 Aug 2016 • Mark Law, Alessandra Russo, Krysia Broda
In ILP, examples must all be explained by a hypothesis together with a given background knowledge.
no code implementations • 23 Jul 2015 • Mark Law, Alessandra Russo, Krysia Broda
This paper contributes to the area of inductive logic programming by presenting a new learning framework that allows the learning of weak constraints in Answer Set Programming (ASP).
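What a learned weak constraint expresses can be sketched without a full ASP solver. This is a toy model of the semantics only (each weak constraint adds a penalty when its body holds, and the optimal answer set minimises total penalty); the scheduling atoms and weights are invented for illustration.

```python
# Toy sketch of ASP weak-constraint semantics: a weak constraint penalises
# models in which its condition holds, and the solver prefers models with
# the lowest total penalty (it does not eliminate them outright).
def penalty(model, weak_constraints):
    return sum(weight for condition, weight in weak_constraints
               if condition(model))

# Hypothetical candidate models and learned preferences.
models = [{"late": True, "cheap": True}, {"late": False, "cheap": False}]
weak_constraints = [
    (lambda m: m["late"], 3),       # roughly ":~ late. [3]"
    (lambda m: not m["cheap"], 1),  # roughly ":~ not cheap. [1]"
]
best = min(models, key=lambda m: penalty(m, weak_constraints))
print(best)
```

Unlike a hard constraint, neither model is ruled out; the model that avoids `late` wins with penalty 1 versus 3, which is the kind of preference ordering the learned weak constraints encode.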