Search Results for author: Abhisek Chakrabarty

Found 8 papers, 0 papers with code

NICT-5’s Submission To WAT 2021: MBART Pre-training And In-Domain Fine Tuning For Indic Languages

no code implementations ACL (WAT) 2021 Raj Dabre, Abhisek Chakrabarty

The objective of the task was to explore the utility of multilingual approaches using a variety of in-domain and out-of-domain parallel and monolingual corpora.

NMT Translation

NICT's Submission To WAT 2020: How Effective Are Simple Many-To-Many Neural Machine Translation Models?

no code implementations AACL (WAT) 2020 Raj Dabre, Abhisek Chakrabarty

In this paper we describe our team's (NICT-5) Neural Machine Translation (NMT) models, whose translations were submitted to the shared tasks of the 7th Workshop on Asian Translation.

Machine Translation NMT +1

FeatureBART: Feature Based Sequence-to-Sequence Pre-Training for Low-Resource NMT

no code implementations COLING 2022 Abhisek Chakrabarty, Raj Dabre, Chenchen Ding, Hideki Tanaka, Masao Utiyama, Eiichiro Sumita

In this paper we present FeatureBART, a linguistically motivated sequence-to-sequence monolingual pre-training strategy in which syntactic features such as lemma, part-of-speech and dependency labels are incorporated into the span prediction based pre-training framework (BART).

LEMMA NMT
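The feature-incorporation idea in the snippet above can be sketched minimally: each token's surface embedding is combined with embeddings of its lemma, POS tag, and dependency label before entering the encoder. This is a toy, purely illustrative sketch — the vocabularies, random vectors, and sum-combination are assumptions for illustration, not FeatureBART's actual implementation.

```python
import random

random.seed(0)
DIM = 8  # toy embedding size; real models use hundreds of dimensions

def make_table(vocab, dim=DIM):
    # One random vector per symbol; a trained model would learn these.
    return {v: [random.uniform(-1.0, 1.0) for _ in range(dim)] for v in vocab}

# Hypothetical vocabularies: the paper's feature set is lemma, POS, and
# dependency labels, but these entries are made up for illustration.
tok_emb = make_table(["runs", "dog"])
lemma_emb = make_table(["run", "dog"])
pos_emb = make_table(["VERB", "NOUN"])
dep_emb = make_table(["root", "nsubj"])

def featurized_embedding(token, lemma, pos, dep):
    # Sum the surface-form vector with its feature vectors, so the
    # encoder input carries lemma/POS/dependency information.
    vecs = [tok_emb[token], lemma_emb[lemma], pos_emb[pos], dep_emb[dep]]
    return [sum(column) for column in zip(*vecs)]

vec = featurized_embedding("runs", "run", "VERB", "root")
print(len(vec))  # a single DIM-dimensional encoder input vector
```

Summation is just one way to merge features with the token embedding; concatenation or learned gating are equally plausible choices, and the paper's exact mechanism may differ.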

Simultaneous Multi-Pivot Neural Machine Translation

no code implementations 15 Apr 2021 Raj Dabre, Aizhan Imankulova, Masahiro Kaneko, Abhisek Chakrabarty

Parallel corpora are indispensable for training neural machine translation (NMT) models, yet for most language pairs they are scarce or do not exist.

Machine Translation NMT +1

Improving Low-Resource NMT through Relevance Based Linguistic Features Incorporation

no code implementations COLING 2020 Abhisek Chakrabarty, Raj Dabre, Chenchen Ding, Masao Utiyama, Eiichiro Sumita

In this study, linguistic knowledge at different levels is incorporated into the neural machine translation (NMT) framework to improve translation quality for language pairs with extremely limited data.

Machine Translation NMT +1

A Neural Lemmatizer for Bengali

no code implementations LREC 2016 Abhisek Chakrabarty, Akshay Chaturvedi, Utpal Garain

Given a word along with its contextual neighbours as input, the model is designed to produce the lemma of the concerned word as output.

LEMMA Lemmatization
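The input/output framing in the snippet above — a word plus its contextual neighbours in, a lemma out — can be sketched as below. The window size, padding symbol, and suffix-edit output are illustrative assumptions, not the paper's exact design; the paper targets Bengali and learns the mapping with a neural model rather than fixed rules.

```python
def context_window(tokens, i, size=2):
    # Target word plus `size` neighbours on each side, padded with a
    # sentinel at sentence boundaries -- the "word along with its
    # contextual neighbours" input described above.
    pad = "<pad>"
    left = [tokens[j] if j >= 0 else pad for j in range(i - size, i)]
    right = [tokens[j] if j < len(tokens) else pad
             for j in range(i + 1, i + 1 + size)]
    return left + [tokens[i]] + right

def apply_suffix_edit(word, strip, add):
    # One common way to frame the output: the lemma as a suffix edit
    # (drop `strip` trailing characters, append `add`), so the model
    # predicts a small edit rather than the whole lemma.
    return (word[:-strip] if strip else word) + add

sent = ["the", "dog", "runs", "fast"]
print(context_window(sent, 2))           # window centred on "runs"
print(apply_suffix_edit("runs", 1, ""))  # "runs" -> "run"
```

Framing lemmatization as edit prediction keeps the output space small for morphologically rich languages, though a model may equally generate the lemma character by character.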
