Search Results for author: Zhangdie Yuan

Found 6 papers, 4 papers with code

PRobELM: Plausibility Ranking Evaluation for Language Models

no code implementations · 4 Apr 2024 · Zhangdie Yuan, Chenxi Whitehouse, Eric Chamoun, Rami Aly, Andreas Vlachos

This paper introduces PRobELM (Plausibility Ranking Evaluation for Language Models), a benchmark designed to assess language models' ability to discern more plausible from less plausible scenarios through their parametric knowledge.

Question Answering · World Knowledge

Language Models as Hierarchy Encoders

1 code implementation · 21 Jan 2024 · Yuan He, Zhangdie Yuan, Jiaoyan Chen, Ian Horrocks

A key limitation of current language models (LMs) is interpreting the hierarchical structures latent in language.

DIALIGHT: Lightweight Multilingual Development and Evaluation of Task-Oriented Dialogue Systems with Large Language Models

2 code implementations · 4 Jan 2024 · Songbo Hu, Xiaobin Wang, Zhangdie Yuan, Anna Korhonen, Ivan Vulić

We present DIALIGHT, a toolkit for developing and evaluating multilingual Task-Oriented Dialogue (ToD) systems, which facilitates systematic evaluation and comparison between ToD systems built by fine-tuning Pretrained Language Models (PLMs) and those utilising the zero-shot and in-context learning capabilities of Large Language Models (LLMs).

In-Context Learning · Task-Oriented Dialogue Systems

Zero-Shot Fact-Checking with Semantic Triples and Knowledge Graphs

no code implementations · 19 Dec 2023 · Zhangdie Yuan, Andreas Vlachos

Despite progress in automated fact-checking, most systems require a significant amount of labeled training data, which is expensive.

Fact Checking · Knowledge Graphs · +1

Varifocal Question Generation for Fact-checking

1 code implementation · 22 Oct 2022 · Nedjma Ousidhoum, Zhangdie Yuan, Andreas Vlachos

Our method outperforms previous work on a fact-checking question generation dataset across a wide range of automatic evaluation metrics.

Fact Checking · Question Answering · +2

Can Pretrained Language Models (Yet) Reason Deductively?

1 code implementation · 12 Oct 2022 · Zhangdie Yuan, Songbo Hu, Ivan Vulić, Anna Korhonen, Zaiqiao Meng

Acquiring factual knowledge with Pretrained Language Models (PLMs) has attracted increasing attention, showing promising performance in many knowledge-intensive tasks.
