Search Results for author: Lang Yu

Found 7 papers, 6 papers with code

“No, They Did Not”: Dialogue Response Dynamics in Pre-trained Language Models

no code implementations • COLING 2022 • Sanghee J. Kim, Lang Yu, Allyson Ettinger

A critical component of competence in language is being able to identify relevant components of an utterance and reply appropriately.

MELO: Enhancing Model Editing with Neuron-Indexed Dynamic LoRA

1 code implementation • 19 Dec 2023 • Lang Yu, Qin Chen, Jie Zhou, Liang He

Large language models (LLMs) have shown great success in various Natural Language Processing (NLP) tasks, yet they still need updates after deployment to fix errors or keep pace with changing knowledge in the world.

Document Classification • Hallucination • +2

Counterfactual reasoning: Testing language models' understanding of hypothetical scenarios

1 code implementation • 26 May 2023 • Jiaxuan Li, Lang Yu, Allyson Ettinger

Current pre-trained language models have enabled remarkable improvements in downstream tasks, but it remains difficult to distinguish the effects of statistical correlation from more systematic logical reasoning grounded in understanding of the real world.

counterfactual • Counterfactual Reasoning • +2

Counterfactual reasoning: Do language models need world knowledge for causal understanding?

1 code implementation • 6 Dec 2022 • Jiaxuan Li, Lang Yu, Allyson Ettinger

Current pre-trained language models have enabled remarkable improvements in downstream tasks, but it remains difficult to distinguish the effects of statistical correlation from more systematic logical reasoning grounded in understanding of the real world.

counterfactual • Counterfactual Reasoning • +2

"No, they did not": Dialogue response dynamics in pre-trained language models

1 code implementation • 5 Oct 2022 • Sanghee J. Kim, Lang Yu, Allyson Ettinger

A critical component of competence in language is being able to identify relevant components of an utterance and reply appropriately.

On the Interplay Between Fine-tuning and Composition in Transformers

2 code implementations • Findings (ACL) 2021 • Lang Yu, Allyson Ettinger

Here we investigate the impact of fine-tuning on the capacity of contextualized embeddings to capture phrase meaning information beyond lexical content.

Sentiment Analysis • Sentiment Classification
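This entry and the phrasal-representation paper listed next both probe how much phrase meaning contextualized embeddings capture, before and after fine-tuning. As a rough illustration of that general setup (not the papers' actual probes or datasets), the sketch below mean-pools a transformer's final-layer hidden states into a phrase vector and compares a pre-trained encoder against a fine-tuned one; the fine-tuned checkpoint name and the mean-pooling choice are illustrative assumptions.

```python
# Minimal sketch: compare a phrase representation from a pre-trained encoder
# with one from a fine-tuned encoder. Not the papers' exact protocol; the
# checkpoint names and pooling strategy are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer


def phrase_embedding(model, tokenizer, phrase: str) -> torch.Tensor:
    """Mean-pool the final-layer hidden states over the phrase's tokens."""
    inputs = tokenizer(phrase, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)            # (dim,)


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
pretrained = AutoModel.from_pretrained("bert-base-uncased")
# Illustrative fine-tuned checkpoint; substitute any task-tuned variant.
finetuned = AutoModel.from_pretrained("textattack/bert-base-uncased-SST-2")

a = phrase_embedding(pretrained, tokenizer, "heavy traffic")
b = phrase_embedding(finetuned, tokenizer, "heavy traffic")
print(torch.nn.functional.cosine_similarity(a, b, dim=0).item())
```

A cosine similarity well below 1.0 would indicate that fine-tuning has shifted the phrase representation away from the pre-trained one.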

Assessing Phrasal Representation and Composition in Transformers

1 code implementation • EMNLP 2020 • Lang Yu, Allyson Ettinger

Deep transformer models have pushed performance on NLP tasks to new limits, suggesting sophisticated treatment of complex linguistic inputs, such as phrases.
