Search Results for author: Tatsuya Aoyama

Found 6 papers, 2 papers with code

Probe-Less Probing of BERT’s Layer-Wise Linguistic Knowledge with Masked Word Prediction

no code implementations • NAACL (ACL) 2022 • Tatsuya Aoyama, Nathan Schneider

The current study quantitatively (and, for illustrative purposes, qualitatively) analyzes BERT’s layer-wise masked word prediction on an English corpus, and finds that (1) the layer-wise localization of linguistic knowledge primarily shown in probing studies is replicated in a behavior-based design, and (2) syntactic and semantic information is encoded at different layers for words of different syntactic categories.

Distributionally Robust Safe Screening

no code implementations • 25 Apr 2024 • Hiroyuki Hanada, Satoshi Akahane, Tatsuya Aoyama, Tomonari Tanaka, Yoshito Okura, Yu Inatsu, Noriaki Hashimoto, Taro Murayama, Lee Hanju, Shinya Kojima, Ichiro Takeuchi

In this study, we propose a method, Distributionally Robust Safe Screening (DRSS), for identifying unnecessary samples and features within a DR covariate shift setting.

eRST: A Signaled Graph Theory of Discourse Relations and Organization

no code implementations • 20 Mar 2024 • Amir Zeldes, Tatsuya Aoyama, Yang Janet Liu, Siyao Peng, Debopam Das, Luke Gessler

In this article we present Enhanced Rhetorical Structure Theory (eRST), a new theoretical framework for computational discourse analysis, based on an expansion of Rhetorical Structure Theory (RST).

What's Hard in English RST Parsing? Predictive Models for Error Analysis

1 code implementation • 10 Sep 2023 • Yang Janet Liu, Tatsuya Aoyama, Amir Zeldes

Despite recent advances in Natural Language Processing (NLP), hierarchical discourse parsing in the framework of Rhetorical Structure Theory remains challenging, and our understanding of the reasons for this is as yet limited.

Discourse Parsing

GENTLE: A Genre-Diverse Multilayer Challenge Set for English NLP and Linguistic Evaluation

1 code implementation • 3 Jun 2023 • Tatsuya Aoyama, Shabnam Behzad, Luke Gessler, Lauren Levine, Jessica Lin, Yang Janet Liu, Siyao Peng, Yilun Zhu, Amir Zeldes

We evaluate state-of-the-art NLP systems on GENTLE and find severe performance degradation on all tasks for at least some genres, which indicates GENTLE's utility as an evaluation dataset for NLP systems.

Coreference Resolution • Dependency Parsing • +2
