2 code implementations • ALTA 2021 • Thomas Scelsi, Alfonso Martinez Arranz, Lea Frermann
With the increasing impact of Natural Language Processing tools like topic models in social science research, the experimental rigor and comparability of models and datasets have come under scrutiny.
no code implementations • ALTA 2021 • Karun Varghese Mathew, Venkata S Aditya Tarigoppula, Lea Frermann
These technologies aim to help users with various reaching and grasping tasks in their daily lives, such as picking up an object and transporting it to a desired location; their utility critically depends on the ease and effectiveness of communication between the user and the robot.
no code implementations • 14 Sep 2023 • Gisela Vallejo, Timothy Baldwin, Lea Frermann
The manifestation and effect of bias in news reporting have been central topics in the social sciences for decades, and have received increasing attention in the NLP community recently.
1 code implementation • 3 Jun 2023 • Lea Frermann, Jiatong Li, Shima Khanehzar, Gosia Mikolajczak
Despite increasing interest in the automatic detection of media frames in NLP, the problem is typically simplified to single-label classification and adopts a topic-like view of frames, sidestepping the broader document-level narrative.
no code implementations • 9 Feb 2023 • Uri Berger, Lea Frermann, Gabriel Stanovsky, Omri Abend
We study the relation between visual input and linguistic choices by training classifiers to predict the probability of expressing a property from raw images, and find evidence supporting the claim that linguistic properties are constrained by visual context across languages.
no code implementations • 17 Nov 2022 • Jinrui Yang, Sheilla Njoto, Marc Cheong, Leah Ruppanner, Lea Frermann
Gender discrimination in hiring is a pertinent and persistent bias in society, and a common motivating example for exploring bias in NLP.
1 code implementation • 17 Oct 2022 • Xudong Han, Aili Shen, Trevor Cohn, Timothy Baldwin, Lea Frermann
Mitigating bias in training on biased datasets is an important open problem.
1 code implementation • NAACL 2022 • Uri Berger, Gabriel Stanovsky, Omri Abend, Lea Frermann
Recent advances in self-supervised modeling of text and images open new opportunities for computational models of child language acquisition, which is believed to rely heavily on cross-modal signals.
1 code implementation • NAACL 2022 • Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, Lea Frermann
Real-world datasets often encode stereotypes and societal biases.
2 code implementations • 4 May 2022 • Xudong Han, Aili Shen, Yitong Li, Lea Frermann, Timothy Baldwin, Trevor Cohn
This paper presents fairlib, an open-source framework for assessing and improving classification fairness.
1 code implementation • NAACL 2022 • Kemal Kurniawan, Lea Frermann, Philip Schulz, Trevor Cohn
Providing technologies to communities or domains where training data is scarce or protected, e.g. for privacy reasons, is becoming increasingly important.
no code implementations • 22 Sep 2021 • Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, Lea Frermann
Trained classification models can unintentionally lead to biased representations and predictions, which can reinforce societal preconceptions and stereotypes.
no code implementations • EMNLP 2021 • Shivashankar Subramanian, Afshin Rahimi, Timothy Baldwin, Trevor Cohn, Lea Frermann
Class imbalance is a common challenge in many NLP tasks, and has clear connections to bias, in that bias in training data often leads to higher accuracy for majority groups at the expense of minority groups.
no code implementations • EMNLP 2021 • Shivashankar Subramanian, Xudong Han, Timothy Baldwin, Trevor Cohn, Lea Frermann
Bias is pervasive in NLP models, motivating the development of automatic debiasing techniques.
1 code implementation • CoNLL (EMNLP) 2021 • Chunhua Liu, Trevor Cohn, Lea Frermann
Humans use countless basic, shared facts about the world to efficiently navigate in their environment.
no code implementations • SEMEVAL 2021 • Kemal Kurniawan, Lea Frermann, Philip Schulz, Trevor Cohn
This paper describes PTST, a source-free unsupervised domain adaptation technique for sequence tagging, and its application to the SemEval-2021 Task 10 on time expression recognition.
1 code implementation • NAACL 2021 • Shima Khanehzar, Trevor Cohn, Gosia Mikolajczak, Andrew Turpin, Lea Frermann
Understanding how news media frame political issues is important because framing shapes public attitudes, yet it is hard to automate.
1 code implementation • EACL 2021 • Kemal Kurniawan, Lea Frermann, Philip Schulz, Trevor Cohn
Cross-lingual transfer is a leading technique for parsing low-resource languages in the absence of explicit supervision.
2 code implementations • ACL 2020 • Pinelopi Papalampidi, Frank Keller, Lea Frermann, Mirella Lapata
Most general-purpose extractive summarization models are trained on news articles, which are short and present all important information upfront.
no code implementations • IJCNLP 2019 • Nikos Papasarantopoulos, Lea Frermann, Mirella Lapata, Shay B. Cohen
Multi-view learning algorithms are powerful representation learning tools, often exploited in the context of multimodal problems.
no code implementations • WS 2019 • Lea Frermann
On QA from full narratives, our model outperforms previous models on the METEOR metric.
no code implementations • WS 2019 • Stefanos Angelidis, Lea Frermann, Diego Marcheggiani, Roi Blanco, Lluís Màrquez
We present a system for answering questions based on the full text of books (BookQA), which first selects book passages given a question at hand, and then uses a memory network to reason and predict an answer.
no code implementations • 16 Oct 2019 • Lahari Poddar, Gyorgy Szarvas, Lea Frermann
The meaning of a word often varies depending on its usage in different domains.
1 code implementation • ACL 2019 • Lea Frermann, Alexandre Klementiev
In addition to improvements in summarization over topic-agnostic baselines, we demonstrate the benefit of the learnt document structure: we show that our models (a) learn to accurately segment documents by aspect; (b) can leverage the structure to produce both abstractive and extractive aspect-based summaries; and (c) that this structure is particularly advantageous for summarizing long documents.
no code implementations • 23 Feb 2019 • Lea Frermann, Mirella Lapata
Categories such as animal or furniture are acquired at an early age and play an important role in processing, organizing, and communicating world knowledge.
no code implementations • NAACL 2018 • Maria Barrett, Ana Valeria González-Garduño, Lea Frermann, Anders Søgaard
Even small dictionaries can improve the performance of unsupervised induction algorithms.
1 code implementation • TACL 2018 • Lea Frermann, Shay B. Cohen, Mirella Lapata
In this paper we argue that crime drama, exemplified in television programs such as CSI: Crime Scene Investigation, is an ideal testbed for approximating real-world natural language understanding and the complex inferences associated with it.
1 code implementation • 27 Sep 2017 • Lea Frermann, Michael C. Frank
The impressive ability of children to acquire language is a widely studied phenomenon, and the factors influencing the pace and patterns of word learning remain a subject of active research.
no code implementations • EMNLP 2017 • Lea Frermann, György Szarvas
Automatically understanding the plot of novels is important both for informing literary scholarship and applications such as summarization or recommendation.
no code implementations • TACL 2016 • Lea Frermann, Mirella Lapata
Word meanings change over time, and an automated procedure for extracting this information from text would be useful for historical exploratory studies, information retrieval, or question answering.