Search Results for author: Ana Marasović

Found 22 papers, 11 papers with code

Chain-of-Thought Unfaithfulness as Disguised Accuracy

no code implementations · 22 Feb 2024 · Oliver Bentham, Nathan Stringham, Ana Marasović

Understanding the extent to which Chain-of-Thought (CoT) generations align with a large language model's (LLM) internal computations is critical for deciding whether to trust an LLM's output.

Whispers of Doubt Amidst Echoes of Triumph in NLP Robustness

1 code implementation · 16 Nov 2023 · Ashim Gupta, Rishanth Rajendhran, Nathan Stringham, Vivek Srikumar, Ana Marasović

Do larger and more performant models resolve NLP's longstanding robustness issues?

How Much Consistency Is Your Accuracy Worth?

no code implementations · 20 Oct 2023 · Jacob K. Johnson, Ana Marasović

Contrast set consistency is a robustness measure: the rate at which a model correctly answers every instance in a bundle of minimally different examples that rely on the same knowledge.
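The measure described above amounts to counting bundles answered entirely correctly. A minimal sketch, with hypothetical instance ids and toy data (not from the paper):

```python
from typing import Hashable

def contrast_set_consistency(
    predictions: dict[Hashable, bool],
    bundles: list[list[Hashable]],
) -> float:
    """Fraction of bundles in which the model answers *every* instance correctly.

    `predictions` maps an instance id to whether the model got it right;
    each bundle groups ids of minimally different (contrastive) examples.
    """
    if not bundles:
        return 0.0
    consistent = sum(all(predictions[i] for i in bundle) for bundle in bundles)
    return consistent / len(bundles)

# Toy example: the model gets 3 of 4 instances right (75% accuracy),
# but only 1 of 2 bundles fully right, so consistency is 0.5.
preds = {"q1": True, "q1-edit": True, "q2": True, "q2-edit": False}
print(contrast_set_consistency(preds, [["q1", "q1-edit"], ["q2", "q2-edit"]]))  # 0.5
```

The toy case illustrates why consistency is stricter than accuracy: one wrong answer fails its whole bundle.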

CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation

1 code implementation · 1 Nov 2022 · Abhilasha Ravichander, Matt Gardner, Ana Marasović

We also have workers make three kinds of edits to the passage -- paraphrasing the negated statement, changing the scope of the negation, and reversing the negation -- resulting in clusters of question-answer pairs that are difficult for models to answer with spurious shortcuts.

Natural Language Understanding · Negation · +1

Does Self-Rationalization Improve Robustness to Spurious Correlations?

no code implementations · 24 Oct 2022 · Alexis Ross, Matthew E. Peters, Ana Marasović

Specifically, we evaluate how training self-rationalization models with free-text rationales affects robustness to spurious correlations in fine-tuned encoder-decoder and decoder-only models of six different sizes.

On Advances in Text Generation from Images Beyond Captioning: A Case Study in Self-Rationalization

no code implementations · 24 May 2022 · Shruti Palaskar, Akshita Bhagia, Yonatan Bisk, Florian Metze, Alan W Black, Ana Marasović

Combining the visual modality with pretrained language models has been surprisingly effective for simple descriptive tasks such as image captioning.

Descriptive · Image Captioning · +5

Few-Shot Self-Rationalization with Natural Language Prompts

1 code implementation · Findings (NAACL) 2022 · Ana Marasović, Iz Beltagy, Doug Downey, Matthew E. Peters

We identify the right prompting approach by extensively exploring natural language prompts on FEB. Then, by using this prompt and scaling the model size, we demonstrate that making progress on few-shot self-rationalization is possible.

Effective Attention Sheds Light On Interpretability

1 code implementation · Findings (ACL) 2021 · Kaiser Sun, Ana Marasović

An attention matrix of a transformer self-attention sublayer can provably be decomposed into two components and only one of them (effective attention) contributes to the model output.
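The decomposition stated above splits attention into a component in the left null space of the value matrix (which is annihilated by it) and the remainder, "effective attention", which alone determines the sublayer output. A minimal NumPy sketch of the idea; the function name, shapes, and random data are illustrative, not the paper's code:

```python
import numpy as np

def effective_attention(A: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Remove from attention matrix A the component lying in the left null
    space of the value matrix V. Since that component is annihilated by V,
    A @ V == effective_attention(A, V) @ V: only this part reaches the output.
    """
    # Orthonormal basis of V's left null space {x : x^T V = 0}, via SVD.
    U, S, _ = np.linalg.svd(V, full_matrices=True)
    tol = S.max() * max(V.shape) * np.finfo(S.dtype).eps
    rank = int((S > tol).sum())
    N = U[:, rank:]                 # columns span the left null space of V
    # Project each row of A off the null space; what remains is "effective".
    return A - A @ N @ N.T

rng = np.random.default_rng(0)
A = rng.random((5, 5))
A /= A.sum(axis=1, keepdims=True)   # row-stochastic, like softmax attention
V = rng.random((5, 3))              # seq_len > head_dim, so a null space exists
A_eff = effective_attention(A, V)
print(np.allclose(A @ V, A_eff @ V))  # True: the discarded part had no effect
```

Note the decomposition is only non-trivial when the sequence length exceeds the value dimension, so V has a non-empty left null space.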

Language Modelling

Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing

no code implementations · 24 Feb 2021 · Sarah Wiegreffe, Ana Marasović

Explainable NLP (ExNLP) has increasingly focused on collecting human-annotated textual explanations.

Data Augmentation

Promoting Graph Awareness in Linearized Graph-to-Text Generation

no code implementations · Findings (ACL) 2021 · Alexander Hoyle, Ana Marasović, Noah Smith

Generating text from structured inputs, such as meaning representations or RDF triples, has often involved the use of specialized graph-encoding neural networks.

Denoising · Text Generation

Explaining NLP Models via Minimal Contrastive Editing (MiCE)

1 code implementation · Findings (ACL) 2021 · Alexis Ross, Ana Marasović, Matthew E. Peters

Humans have been shown to give contrastive explanations, which explain why an observed event happened rather than some other counterfactual event (the contrast case).

counterfactual · Multiple-choice · +4

Measuring Association Between Labels and Free-Text Rationales

1 code implementation · EMNLP 2021 · Sarah Wiegreffe, Ana Marasović, Noah A. Smith

In interpretable NLP, we require faithful rationales that reflect the model's decision-making process for an explained instance.

Decision Making · Feature Importance · +2

Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs

1 code implementation · Findings of the Association for Computational Linguistics 2020 · Ana Marasović, Chandra Bhagavatula, Jae Sung Park, Ronan Le Bras, Noah A. Smith, Yejin Choi

Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on gradients or attention weights.

Language Modelling · Natural Language Inference · +4

SRL4ORL: Improving Opinion Role Labeling using Multi-task Learning with Semantic Role Labeling

1 code implementation · NAACL 2018 · Ana Marasović, Anette Frank

For over a decade, machine learning has been used to extract opinion-holder-target structures from text to answer the question "Who expressed what kind of sentiment towards what?".

Ranked #2 on Fine-Grained Opinion Analysis on MPQA (using extra training data)

Fine-Grained Opinion Analysis · Multi-Task Learning

A Mention-Ranking Model for Abstract Anaphora Resolution

1 code implementation · EMNLP 2017 · Ana Marasović, Leo Born, Juri Opitz, Anette Frank

We found model variants that outperform the baselines for nominal anaphors, without training on individual anaphor data, but still lag behind for pronominal anaphors.

Abstract Anaphora Resolution · Representation Learning · +1

Multilingual Modal Sense Classification using a Convolutional Neural Network

no code implementations · WS 2016 · Ana Marasović, Anette Frank

Modal sense classification (MSC) is a special WSD task that depends on the meaning of the proposition in the modal's scope.

Classification · General Classification
