Search Results for author: Anisia Katinskaia

Found 15 papers, 1 paper with code

Tools for supporting language learning for Sakha

no code implementations WS (NoDaLiDa) 2019 Sardana Ivanova, Anisia Katinskaia, Roman Yangarber

Revita is a freely available online language learning platform for learners beyond the beginner level.

Assessing Grammatical Correctness in Language Learning

no code implementations EACL (BEA) 2021 Anisia Katinskaia, Roman Yangarber

We approach the problem with methods for grammatical error detection (GED), since we hypothesize that models for detecting grammatical mistakes can assess the correctness of potential alternative answers in a learning setting.

Grammatical Error Detection LEMMA

Applying Gamification Incentives in the Revita Language-learning System

no code implementations games (LREC) 2022 Jue Hou, Ilmari Kylliäinen, Anisia Katinskaia, Giacomo Furlan, Roman Yangarber

Our goal is to keep the learner engaged in long practice sessions over many months, rather than only in the short term.

Semi-automatically Annotated Learner Corpus for Russian

no code implementations LREC 2022 Anisia Katinskaia, Maria Lebedeva, Jue Hou, Roman Yangarber

We present ReLCo (the Revita Learner Corpus), a new semi-automatically annotated learner corpus for Russian.

Grammatical Error Detection

What do Transformers Know about Government?

1 code implementation, 22 Apr 2024 Jue Hou, Anisia Katinskaia, Lari Kotilainen, Sathianpong Trangcasanchai, Anh-Duc Vu, Roman Yangarber

This paper investigates what insights about linguistic features and what knowledge about the structure of natural language can be obtained from the encodings in transformer language models. In particular, we explore how BERT encodes the government relation between constituents in a sentence.
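Probing studies of this kind are typically implemented as a lightweight diagnostic classifier trained on frozen model encodings: if a simple probe can predict the relation from the vectors, the encodings plausibly capture it. A minimal sketch of that setup, with random vectors standing in for BERT states and synthetic labels (everything here is illustrative; the paper's actual data, layers, and probe design are not shown in this listing):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for frozen contextual embeddings of (head, dependent)
# pairs, with toy binary labels for "head governs this form of the dependent".
# A real probe would use actual BERT hidden states and annotated sentences.
dim, n = 32, 400
X = rng.normal(size=(n, dim))
w_true = rng.normal(size=dim)
y = (X @ w_true > 0).astype(float)  # linearly separable toy labels

# Logistic-regression probe trained with plain gradient descent.
w = np.zeros(dim)
lr = 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= lr * (X.T @ (p - y)) / n

acc = ((X @ w > 0) == (y == 1)).mean()
print(f"probe accuracy: {acc:.2f}")
```

High probe accuracy on real encodings would suggest the relation is linearly recoverable from them; the toy labels here are separable by construction, so the probe fits them easily.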

Sentence

Effects of sub-word segmentation on performance of transformer language models

no code implementations, 9 May 2023 Jue Hou, Anisia Katinskaia, Anh-Duc Vu, Roman Yangarber

Lastly, we show that LMs of smaller size using morphological segmentation can perform comparably to models of larger size trained with BPE, both in terms of perplexity and scores on downstream tasks.
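The intuition behind comparing segmentations can be illustrated with a toy unigram model: segmenting the same words into reusable morphemes yields more predictable units, and hence lower per-token perplexity, than an arbitrary sub-word split. A deliberately simplified sketch (the paper compares trained transformer LMs, not unigram counts; both segmentations below are hypothetical):

```python
import math
from collections import Counter

def unigram_perplexity(tokens):
    """Perplexity of a unigram MLE model evaluated on its own training data."""
    counts = Counter(tokens)
    total = len(tokens)
    log_prob = sum(c * math.log(c / total) for c in counts.values())
    return math.exp(-log_prob / total)

# Two hypothetical segmentations of "unbreakable undoable redoable":
bpe_like   = ["unbre", "akable", "undo", "able", "red", "oable"]
morph_like = ["un", "break", "able", "un", "do", "able", "re", "do", "able"]

print(f"BPE-like units:  {unigram_perplexity(bpe_like):.2f}")
print(f"morpheme units:  {unigram_perplexity(morph_like):.2f}")
```

The morpheme segmentation repeats units like "un", "do", and "able", so each token is more predictable; with real LMs the comparison is far subtler, which is what the paper measures.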

Language Modelling Segmentation

Toward a Paradigm Shift in Collection of Learner Corpora

no code implementations LREC 2020 Anisia Katinskaia, Sardana Ivanova, Roman Yangarber

We present the first version of the longitudinal Revita Learner Corpus (ReLCo), for Russian.

Using Crowdsourced Exercises for Vocabulary Training to Expand ConceptNet

no code implementations LREC 2020 Christos Rodosthenous, Verena Lyding, Federico Sangati, Alexander König, Umair ul Hassan, Lionel Nicolas, Jolita Horbacauskiene, Anisia Katinskaia, Lavinia Aparaschivei

In this work, we report on a crowdsourcing experiment conducted using the V-TREL vocabulary trainer which is accessed via a Telegram chatbot interface to gather knowledge on word relations suitable for expanding ConceptNet.

Chatbot

v-trel: Vocabulary Trainer for Tracing Word Relations - An Implicit Crowdsourcing Approach

no code implementations RANLP 2019 Verena Lyding, Christos Rodosthenous, Federico Sangati, Umair ul Hassan, Lionel Nicolas, Alexander König, Jolita Horbacauskiene, Anisia Katinskaia

In this paper, we present our work on developing a vocabulary trainer that uses exercises generated from language resources such as ConceptNet and crowdsources the responses of the learners to enrich the language resource.
