Search Results for author: Marco Valentino

Found 36 papers, 15 papers with code

To be or not to be an Integer? Encoding Variables for Mathematical Text

no code implementations Findings (ACL) 2022 Deborah Ferreira, Mokanarangan Thayaparan, Marco Valentino, Julia Rozanova, Andre Freitas

The application of Natural Language Inference (NLI) methods over large textual corpora can facilitate scientific discovery, reducing the gap between current research and the available large-scale scientific knowledge.

Natural Language Inference Sentence

TextGraphs 2021 Shared Task on Multi-Hop Inference for Explanation Regeneration

1 code implementation NAACL (TextGraphs) 2021 Peter Jansen, Mokanarangan Thayaparan, Marco Valentino, Dmitry Ustalov

While previous editions of this shared task aimed to evaluate explanatory completeness – finding a set of facts that form a complete inference chain, without gaps, to arrive from question to correct answer – this 2021 instantiation concentrates on the subtask of determining relevance in large multi-hop explanations.

TextGraphs 2022 Shared Task on Natural Language Premise Selection

1 code implementation COLING (TextGraphs) 2022 Marco Valentino, Deborah Ferreira, Mokanarangan Thayaparan, André Freitas, Dmitry Ustalov

In this summary paper, we present the results of the 1st edition of the NLPS task, providing a description of the evaluation data and the participating systems.

Estimating the Causal Effects of Natural Logic Features in Transformer-Based NLI Models

no code implementations 3 Apr 2024 Julia Rozanova, Marco Valentino, André Freitas

Rigorous evaluation of the causal effects of semantic features on language model predictions can be hard to achieve for natural language reasoning problems.

Language Modelling Negation

A Differentiable Integer Linear Programming Solver for Explanation-Based Natural Language Inference

no code implementations 3 Apr 2024 Mokanarangan Thayaparan, Marco Valentino, André Freitas

Integer Linear Programming (ILP) has been proposed as a formalism for encoding precise structural and semantic constraints for Natural Language Inference (NLI).

Natural Language Inference
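
The abstracts above only name ILP as the formalism, without giving the formulation. As a rough illustration of what ILP-based explanation selection can look like, here is a minimal, hypothetical sketch using PuLP; the candidate facts, relevance scores, and cardinality limit are invented for illustration and are not the papers' actual model.

```python
# Minimal, hypothetical sketch of ILP-based explanation selection for NLI.
# Facts, scores, and constraints are illustrative only, not the papers' formulation.
import pulp

facts = ["friction converts kinetic energy into heat",
         "rubbing hands together produces friction",
         "heat is a form of energy",
         "the sun is a star"]
relevance = [0.9, 0.8, 0.6, 0.1]   # e.g. scores from a neural retriever
max_facts = 2                      # cardinality constraint on the explanation

prob = pulp.LpProblem("explanation_selection", pulp.LpMaximize)
x = [pulp.LpVariable(f"select_{i}", cat="Binary") for i in range(len(facts))]

# Objective: total relevance of the selected facts
prob += pulp.lpSum(relevance[i] * x[i] for i in range(len(facts)))
# Constraint: explanations must stay short
prob += pulp.lpSum(x) <= max_facts

prob.solve(pulp.PULP_CBC_CMD(msg=False))
explanation = [facts[i] for i in range(len(facts)) if x[i].value() == 1]
print(explanation)
```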

Inference to the Best Explanation in Large Language Models

no code implementations 16 Feb 2024 Dhairya Dalal, Marco Valentino, André Freitas, Paul Buitelaar

While Large Language Models (LLMs) have found success in real-world applications, their underlying explanatory process is still poorly understood.

GPT-3.5 Llama +1

Enhancing Ethical Explanations of Large Language Models through Iterative Symbolic Refinement

1 code implementation 1 Feb 2024 Xin Quan, Marco Valentino, Louise A. Dennis, André Freitas

An increasing amount of research in Natural Language Inference (NLI) focuses on the application and evaluation of Large Language Models (LLMs) and their reasoning capabilities.

In-Context Learning Natural Language Inference

Improving Semantic Control in Discrete Latent Spaces with Transformer Quantized Variational Autoencoders

1 code implementation 1 Feb 2024 Yingji Zhang, Danilo S. Carvalho, Marco Valentino, Ian Pratt-Hartmann, Andre Freitas

Achieving precise semantic control over the latent spaces of Variational AutoEncoders (VAEs) holds significant value for downstream tasks in NLP as the underlying generative mechanisms could be better localised, explained and improved upon.

Graph-Induced Syntactic-Semantic Spaces in Transformer-Based Variational AutoEncoders

1 code implementation 14 Nov 2023 Yingji Zhang, Marco Valentino, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas

The injection of syntactic information in Variational AutoEncoders (VAEs) has been shown to result in an overall improvement of performances and generalisation.

Language Modelling Multi-Task Learning

Multi-Operational Mathematical Derivations in Latent Space

1 code implementation 2 Nov 2023 Marco Valentino, Jordan Meadows, Lan Zhang, André Freitas

To this end, we introduce different multi-operational representation paradigms, modelling mathematical operations as explicit geometric transformations.
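
The snippet does not give the concrete parameterisation, but one way to read "operations as explicit geometric transformations" is to assign each operation its own learned map acting on expression embeddings. The following is a minimal, hypothetical PyTorch sketch; the dimensions, operation set, and linear parameterisation are assumptions for illustration, not the paper's architecture.

```python
# Hypothetical sketch: each mathematical operation is a learned geometric
# transformation (here a simple linear map) acting on expression embeddings.
import torch
import torch.nn as nn

class OperationalLatentSpace(nn.Module):
    def __init__(self, dim=128, operations=("add", "mul", "diff")):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())  # stand-in encoder
        self.ops = nn.ModuleDict({op: nn.Linear(2 * dim, dim, bias=False)
                                  for op in operations})

    def forward(self, premise_emb, operand_emb, op):
        # Apply the operation-specific transformation to the encoded operand pair
        z = torch.cat([self.encoder(premise_emb), self.encoder(operand_emb)], dim=-1)
        return self.ops[op](z)

model = OperationalLatentSpace()
premise, operand = torch.randn(1, 128), torch.randn(1, 128)
result_emb = model(premise, operand, "mul")   # embedding of the derived expression
print(result_emb.shape)                       # torch.Size([1, 128])
```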

Generating Mathematical Derivations with Large Language Models

1 code implementation 19 Jul 2023 Jordan Meadows, Marco Valentino, Andre Freitas

In addition, we analyse 1.7K equations and over 200 derivations to highlight common reasoning errors such as the inclusion of incorrect, irrelevant, and redundant equations.

In-Context Learning Math

A Symbolic Framework for Evaluating Mathematical Reasoning and Generalisation with Transformers

no code implementations 21 May 2023 Jordan Meadows, Marco Valentino, Damien Teney, Andre Freitas

This paper proposes a methodology for generating and perturbing detailed derivations of equations at scale, aided by a symbolic engine, to evaluate the generalisability of Transformers to out-of-distribution mathematical reasoning problems.

GPT-3.5 GPT-4 +1
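
As a rough illustration of what "generating and perturbing derivations with a symbolic engine" can look like, here is a small, hypothetical SymPy sketch: a valid derivation step is produced symbolically and then perturbed into an incorrect variant for out-of-distribution evaluation. The specific expressions and the perturbation rule are made up, not taken from the paper.

```python
# Hypothetical sketch: generate a derivation step with a symbolic engine,
# then perturb it to create an invalid variant for evaluation.
import sympy as sp

x = sp.Symbol("x")
expr = x**2 * sp.sin(x)

# A valid derivation step: differentiate the premise expression
step = sp.diff(expr, x)                      # 2*x*sin(x) + x**2*cos(x)
valid = sp.Eq(sp.Derivative(expr, x), step)

# A perturbed (incorrect) variant, e.g. dropping one term of the product rule
perturbed = sp.Eq(sp.Derivative(expr, x), 2 * x * sp.sin(x))

print(sp.latex(valid))
print(sp.latex(perturbed))
```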

Estimating the Causal Effects of Natural Logic Features in Neural NLI Models

no code implementations 15 May 2023 Julia Rozanova, Marco Valentino, Andre Freitas

Rigorous evaluation of the causal effects of semantic features on language model predictions can be hard to achieve for natural language reasoning problems.

Language Modelling

Multi-Relational Hyperbolic Word Embeddings from Natural Language Definitions

1 code implementation 12 May 2023 Marco Valentino, Danilo S. Carvalho, André Freitas

Natural language definitions possess a recursive, self-explanatory semantic structure that can support representation learning methods able to preserve explicit conceptual relations and constraints in the latent space.

Learning Word Embeddings

SemEval-2023 Task 7: Multi-Evidence Natural Language Inference for Clinical Trial Data

no code implementations 4 May 2023 Maël Jullien, Marco Valentino, Hannah Frost, Paul O'Regan, Donal Landers, André Freitas

This paper describes the results of SemEval 2023 Task 7 -- Multi-Evidence Natural Language Inference for Clinical Trial Data (NLI4CT) -- consisting of two tasks: a Natural Language Inference (NLI) task and an evidence selection task on clinical trial data.

Evidence Selection Natural Language Inference +2

Interventional Probing in High Dimensions: An NLI Case Study

no code implementations 20 Apr 2023 Julia Rozanova, Marco Valentino, Lucas Cordeiro, Andre Freitas

Probing strategies have been shown to detect the presence of various linguistic features in large language models; in particular, semantic features intermediate to the "natural logic" fragment of the Natural Language Inference (NLI) task.

Natural Language Inference Vocal Bursts Intensity Prediction

Going Beyond Approximation: Encoding Constraints for Explainable Multi-hop Inference via Differentiable Combinatorial Solvers

no code implementations 5 Aug 2022 Mokanarangan Thayaparan, Marco Valentino, André Freitas

Integer Linear Programming (ILP) provides a viable mechanism to encode explicit and controllable assumptions about explainable multi-hop inference with natural language.

Scientific Explanation and Natural Language: A Unified Epistemological-Linguistic Perspective for Explainable AI

no code implementations 3 May 2022 Marco Valentino, André Freitas

A fundamental research goal for Explainable AI (XAI) is to build models that are capable of reasoning through the generation of natural language explanations.

Explainable Artificial Intelligence (XAI) Philosophy

Do Transformers Encode a Foundational Ontology? Probing Abstract Classes in Natural Language

no code implementations 25 Jan 2022 Mael Jullien, Marco Valentino, Andre Freitas

With the methodological support of probing (or diagnostic classification), recent studies have demonstrated that Transformers encode syntactic and semantic information to some extent.

Decomposing Natural Logic Inferences in Neural NLI

1 code implementation 15 Dec 2021 Julia Rozanova, Deborah Ferreira, Marco Valentino, Mokanarangan Thayaparan, Andre Freitas

In the interest of interpreting neural NLI models and their reasoning strategies, we carry out a systematic probing study which investigates whether these models capture the crucial semantic features central to natural logic: monotonicity and concept inclusion.

Decision Making Negation +1

Hybrid Autoregressive Inference for Scalable Multi-hop Explanation Regeneration

1 code implementation 25 Jul 2021 Marco Valentino, Mokanarangan Thayaparan, Deborah Ferreira, André Freitas

Regenerating natural language explanations in the scientific domain has been proposed as a benchmark to evaluate complex multi-hop and explainable inference.

Multi-hop Question Answering Natural Language Inference +1

Supporting Context Monotonicity Abstractions in Neural NLI Models

no code implementations ACL (NALOMA, IWCS) 2021 Julia Rozanova, Deborah Ferreira, Mokanarangan Thayaparan, Marco Valentino, André Freitas

Natural language contexts display logical regularities with respect to substitutions of related concepts: these are captured in a functional order-theoretic property called monotonicity.
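
A concrete toy illustration of monotonicity (the example sentences and the lexical relation are mine, not the paper's): in an upward-monotone context, replacing a concept with a more general one preserves entailment, while a downward-monotone context reverses the direction.

```python
# Toy illustration of monotonicity (not the paper's code or data).
# In the concept order, "dog" <= "animal" (hypernymy).
upward = "Some {} are barking."    # upward-monotone context
downward = "No {} are barking."    # downward-monotone context

# Upward: replacing "dogs" with the more general "animals" preserves entailment.
print(upward.format("dogs"), "=>", upward.format("animals"))
# Downward: the entailment direction is reversed.
print(downward.format("animals"), "=>", downward.format("dogs"))
```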

Encoding Explanatory Knowledge for Zero-shot Science Question Answering

no code implementations IWCS (ACL) 2021 Zili Zhou, Marco Valentino, Donal Landers, Andre Freitas

This paper describes N-XKT (Neural encoding based on eXplanatory Knowledge Transfer), a novel method for the automatic transfer of explanatory knowledge through neural encoding mechanisms.

Science Question Answering Transfer Learning +1

Diff-Explainer: Differentiable Convex Optimization for Explainable Multi-hop Inference

no code implementations 7 May 2021 Mokanarangan Thayaparan, Marco Valentino, Deborah Ferreira, Julia Rozanova, André Freitas

This paper presents Diff-Explainer, the first hybrid framework for explainable multi-hop inference that integrates explicit constraints with neural architectures through differentiable convex optimization.

Multi-hop Question Answering Natural Language Inference +3
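
The snippet names the key ingredient (differentiable convex optimization) without the formulation. A minimal, hypothetical sketch of that ingredient using cvxpylayers is shown below: a relaxed fact-selection problem is embedded as a layer so that gradients flow back into a neural scoring model. The problem size, the relaxation, and the regulariser are all assumptions for illustration, not Diff-Explainer's actual formulation.

```python
# Hypothetical sketch: a relaxed fact-selection problem as a differentiable layer.
# Illustrates "constraints via differentiable convex optimization" in general,
# not the specific formulation used by Diff-Explainer.
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n, max_facts = 6, 2
scores = cp.Parameter(n)                       # relevance scores from a neural model
x = cp.Variable(n)                             # relaxed selection variables in [0, 1]
constraints = [x >= 0, x <= 1, cp.sum(x) <= max_facts]
objective = cp.Maximize(scores @ x - 0.1 * cp.sum_squares(x))  # small regulariser
layer = CvxpyLayer(cp.Problem(objective, constraints),
                   parameters=[scores], variables=[x])

neural_scores = torch.randn(n, requires_grad=True)
(selection,) = layer(neural_scores)            # differentiable "soft" selection
selection.sum().backward()                     # gradients reach the scoring model
print(selection.detach())
```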

Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards

no code implementations IWCS (ACL) 2021 Marco Valentino, Ian Pratt-Hartmann, André Freitas

An emerging line of research in Explainable NLP is the creation of datasets enriched with human-annotated explanations and rationales, used to build and evaluate models with step-wise inference and explanation generation capabilities.

Explanation Generation valid

Does My Representation Capture X? Probe-Ably

1 code implementation ACL 2021 Deborah Ferreira, Julia Rozanova, Mokanarangan Thayaparan, Marco Valentino, André Freitas

Probing (or diagnostic classification) has become a popular strategy for investigating whether a given set of intermediate features is present in the representations of neural models.
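
For readers unfamiliar with the setup, a probe is typically just a small classifier trained on frozen model representations. A minimal, hypothetical sketch with scikit-learn follows; random vectors stand in for real model activations, and the feature labels are invented for illustration.

```python
# Minimal, hypothetical probing sketch: train a simple classifier on frozen
# representations to test whether a feature is linearly decodable.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
representations = rng.normal(size=(1000, 768))   # stand-in for e.g. [CLS] embeddings
labels = rng.integers(0, 2, size=1000)           # stand-in for a semantic feature

X_train, X_test, y_train, y_test = train_test_split(
    representations, labels, test_size=0.2, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
# Accuracy well above a control/majority baseline suggests the feature is encoded.
```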

ExplanationLP: Abductive Reasoning for Explainable Science Question Answering

no code implementations 25 Oct 2020 Mokanarangan Thayaparan, Marco Valentino, André Freitas

We propose a novel approach for answering and explaining multiple-choice science questions by reasoning on grounding and abstract inference chains.

Answer Selection Multiple-choice +1

A Survey on Explainability in Machine Reading Comprehension

no code implementations 1 Oct 2020 Mokanarangan Thayaparan, Marco Valentino, André Freitas

This paper presents a systematic review of benchmarks and approaches for explainability in Machine Reading Comprehension (MRC).

Machine Reading Comprehension

Case-Based Abductive Natural Language Inference

no code implementations COLING 2022 Marco Valentino, Mokanarangan Thayaparan, André Freitas

Most of the contemporary approaches for multi-hop Natural Language Inference (NLI) construct explanations considering each test case in isolation.

Natural Language Inference Question Answering

Identifying Supporting Facts for Multi-hop Question Answering with Document Graph Networks

no code implementations WS 2019 Mokanarangan Thayaparan, Marco Valentino, Viktor Schlegel, Andre Freitas

Recent advances in reading comprehension have resulted in models that surpass human performance when the answer is contained in a single, continuous passage of text.

Multi-hop Question Answering Question Answering +1
