Search Results for author: Prathyusha Jwalapuram

Found 13 papers, 4 papers with code

Dynamic Scheduled Sampling with Imitation Loss for Neural Text Generation

no code implementations • 31 Jan 2023 • Xiang Lin, Prathyusha Jwalapuram, Shafiq Joty

Scheduled sampling is a curriculum learning strategy that gradually exposes the model to its own predictions during training to mitigate exposure bias.

Machine Translation, Text Generation

Rethinking Self-Supervision Objectives for Generalizable Coherence Modeling

no code implementations • ACL 2022 • Prathyusha Jwalapuram, Shafiq Joty, Xiang Lin

Given the claims of improved text generation quality across various pre-trained neural models, we consider the coherence evaluation of machine generated text to be one of the principal applications of coherence models that needs to be investigated.

Coherence Evaluation, Contrastive Learning +1

DiP Benchmark Tests: Evaluation Benchmarks for Discourse Phenomena in MT

no code implementations • 1 Jan 2021 • Prathyusha Jwalapuram, Barbara Rychalska, Shafiq Joty, Dominika Basaj

Despite increasing instances of machine translation (MT) systems including extrasentential context information, the evidence for translation quality improvement is sparse, especially for discourse phenomena.

Machine Translation, Translation

Pronoun-Targeted Fine-tuning for NMT with Hybrid Losses

1 code implementation • EMNLP 2020 • Prathyusha Jwalapuram, Shafiq Joty, Youlin Shen

Our sentence-level model shows a 0.5 BLEU improvement on both the WMT14 and the IWSLT13 De-En test sets, while our contextual model achieves the best results, improving from 31.81 to 32 BLEU on the WMT14 De-En test set, and from 32.10 to 33.13 on the IWSLT13 De-En test set, with corresponding improvements in pronoun translation.

Machine Translation, NMT +2

Rethinking Coherence Modeling: Synthetic vs. Downstream Tasks

no code implementations • EACL 2021 • Tasnim Mohiuddin, Prathyusha Jwalapuram, Xiang Lin, Shafiq Joty

Although coherence modeling has come a long way in developing novel models, evaluation on the downstream applications for which these models are purportedly developed has largely been neglected.

Benchmarking, Coherence Evaluation +7

Can Your Context-Aware MT System Pass the DiP Benchmark Tests? Evaluation Benchmarks for Discourse Phenomena in Machine Translation

no code implementations • 30 Apr 2020 • Prathyusha Jwalapuram, Barbara Rychalska, Shafiq Joty, Dominika Basaj

Despite increasing instances of machine translation (MT) systems including contextual information, the evidence for translation quality improvement is sparse, especially for discourse phenomena.

Machine Translation, Translation

Zero-Resource Cross-Lingual Named Entity Recognition

1 code implementation • 22 Nov 2019 • M Saiful Bari, Shafiq Joty, Prathyusha Jwalapuram

Recently, neural methods have achieved state-of-the-art (SOTA) results in Named Entity Recognition (NER) tasks for many languages without the need for manually crafted features.

Cross-Lingual NER, Low Resource Named Entity Recognition +3

Evaluating Pronominal Anaphora in Machine Translation: An Evaluation Measure and a Test Suite

2 code implementations • IJCNLP 2019 • Prathyusha Jwalapuram, Shafiq Joty, Irina Temnikova, Preslav Nakov

The ongoing neural revolution in machine translation has made it easier to model larger contexts beyond the sentence-level, which can potentially help resolve some discourse-level ambiguities such as pronominal anaphora, thus enabling better translations.

Machine Translation, Sentence +1

A Unified Linear-Time Framework for Sentence-Level Discourse Parsing

2 code implementations • ACL 2019 • Xiang Lin, Shafiq Joty, Prathyusha Jwalapuram, M Saiful Bari

We propose an efficient neural framework for sentence-level discourse analysis in accordance with Rhetorical Structure Theory (RST).

Discourse Parsing, Sentence

Evaluating Dialogs based on Grice's Maxims

no code implementations • RANLP 2017 • Prathyusha Jwalapuram

There is no agreed-upon standard for the evaluation of conversational dialog systems, which are well known to be hard to evaluate due to the difficulty of pinning down metrics that correspond to human judgements and the subjective nature of human judgment itself.
