Search Results for author: Alistair Willis

Found 10 papers, 1 paper with code

Identifying Annotator Bias: A new IRT-based method for bias identification

no code implementations COLING 2020 Jacopo Amidei, Paul Piwek, Alistair Willis

Our interpretation of IRT offers an original bias identification method that can be used to compare annotators' bias and characterise annotation disagreement.
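The snippet only names the IRT-based approach; as a rough illustration of the idea, a minimal Rasch-style model estimates a per-annotator leniency parameter alongside per-item difficulty, so annotators' biases can be compared directly. This toy sketch (plain Python, gradient ascent, invented toy data) is an assumption for illustration, not the paper's actual method:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit_rasch(labels, n_annotators, n_items, epochs=500, lr=0.1):
    """Fit annotator leniency theta and item difficulty b so that
    P(positive label) = sigmoid(theta[a] - b[i]), by gradient ascent."""
    theta = [0.0] * n_annotators
    b = [0.0] * n_items
    for _ in range(epochs):
        for a, i, y in labels:
            p = sigmoid(theta[a] - b[i])
            grad = y - p  # gradient of the log-likelihood
            theta[a] += lr * grad
            b[i] -= lr * grad
    return theta, b

# Toy data (annotator, item, binary label): annotator 0 labels
# positive far more often than annotator 1 on the same items.
labels = [(0, i, 1) for i in range(4)] + [(1, i, 0) for i in range(4)]
theta, b = fit_rasch(labels, n_annotators=2, n_items=4)
print(theta[0] > theta[1])  # annotator 0 estimated as more lenient
```

A gap between two annotators' fitted theta values is then one concrete way to quantify their disagreement after controlling for item difficulty.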

Agreement is overrated: A plea for correlation to assess human evaluation reliability

no code implementations WS 2019 Jacopo Amidei, Paul Piwek, Alistair Willis

Following Sampson and Babarczy (2008), Lommel et al. (2014), Joshi et al. (2016) and Amidei et al. (2018b), such phenomena can be explained in terms of irreducible human language variability.

NLG Evaluation

Rethinking the Agreement in Human Evaluation Tasks

no code implementations COLING 2018 Jacopo Amidei, Paul Piwek, Alistair Willis

For this reason, we believe that annotation schemes for natural language generation tasks that are aimed at evaluating language quality need to be treated with great care.

Dialogue Generation Question Generation +1

Search Personalization with Embeddings

1 code implementation, 12 Dec 2016 Thanh Vu, Dat Quoc Nguyen, Mark Johnson, Dawei Song, Alistair Willis

Recent research has shown that the performance of search personalization depends on the richness of user profiles which normally represent the user's topical interests.
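The snippet frames personalization as matching results against an embedded user profile of topical interests. A minimal sketch of that idea, assuming a user profile and document embeddings in a shared vector space and re-ranking by cosine similarity (the vectors and helper names here are illustrative, not the paper's implementation):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def personalize(user_profile, results):
    """Re-rank (doc_id, embedding) pairs by similarity to the profile."""
    return sorted(results, key=lambda r: cosine(user_profile, r[1]),
                  reverse=True)

# Toy example: the profile leans toward the first topic dimension,
# so the document aligned with that dimension is ranked first.
profile = [1.0, 0.1]
results = [("doc_a", [0.0, 1.0]), ("doc_b", [1.0, 0.0])]
print([doc for doc, _ in personalize(profile, results)])  # ['doc_b', 'doc_a']
```

In this framing, the richness of the profile vector directly determines how sharply the re-ranking can separate topically relevant documents.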
