Search Results for author: Ekaterina Kochmar

Found 37 papers, 8 papers with code

PetKaz at SemEval-2024 Task 8: Can Linguistics Capture the Specifics of LLM-generated Text?

no code implementations 8 Apr 2024 Kseniia Petukhova, Roman Kazakov, Ekaterina Kochmar

In this paper, we present our submission to the SemEval-2024 Task 8 "Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection", focusing on the detection of machine-generated texts (MGTs) in English.

Text Detection

PetKaz at SemEval-2024 Task 3: Advancing Emotion Classification with an LLM for Emotion-Cause Pair Extraction in Conversations

no code implementations 8 Apr 2024 Roman Kazakov, Kseniia Petukhova, Ekaterina Kochmar

In this paper, we present our submission to the SemEval-2024 Task 3 "The Competition of Multimodal Emotion Cause Analysis in Conversations", focusing on extracting emotion-cause pairs from dialogues.

Emotion-Cause Pair Extraction · Emotion Classification

REFeREE: A REference-FREE Model-Based Metric for Text Simplification

1 code implementation 26 Mar 2024 Yichen Huang, Ekaterina Kochmar

Text simplification lacks a universal standard of quality, and annotated reference simplifications are scarce and costly.

Text Simplification

What Makes Math Word Problems Challenging for LLMs?

1 code implementation 17 Mar 2024 KV Aditya Srivatsa, Ekaterina Kochmar

This paper investigates the question of what makes math word problems (MWPs) in English challenging for large language models (LLMs).

Math

Are LLMs Good Cryptic Crossword Solvers?

no code implementations 15 Mar 2024 Abdelrahman "Boda" Sadallah, Daria Kotova, Ekaterina Kochmar

Cryptic crosswords are puzzles that rely not only on general knowledge but also on the solver's ability to manipulate language on different levels and deal with various types of wordplay.

General Knowledge

How Teachers Can Use Large Language Models and Bloom's Taxonomy to Create Educational Quizzes

no code implementations 11 Jan 2024 Sabina Elkins, Ekaterina Kochmar, Jackie C. K. Cheung, Iulian Serban

Question generation (QG) is a natural language processing task with an abundance of potential benefits and use cases in the educational domain.

Language Modelling · Large Language Model +2

BasahaCorpus: An Expanded Linguistic Resource for Readability Assessment in Central Philippine Languages

1 code implementation 17 Oct 2023 Joseph Marvin Imperial, Ekaterina Kochmar

Current research on automatic readability assessment (ARA) has focused on improving the performance of models in high-resource languages such as English.

The BEA 2023 Shared Task on Generating AI Teacher Responses in Educational Dialogues

no code implementations 12 Jun 2023 Anaïs Tack, Ekaterina Kochmar, Zheng Yuan, Serge Bibauw, Chris Piech

This paper describes the results of the first shared task on the generation of teacher responses in educational dialogues.

Automatic Readability Assessment for Closely Related Languages

1 code implementation 22 May 2023 Joseph Marvin Imperial, Ekaterina Kochmar

Consequently, when both linguistic representations are combined, we achieve state-of-the-art results for Tagalog and Cebuano, and baseline scores for ARA in Bikol.

How Useful are Educational Questions Generated by Large Language Models?

no code implementations 13 Apr 2023 Sabina Elkins, Ekaterina Kochmar, Jackie C. K. Cheung, Iulian Serban

Controllable text generation (CTG) by large language models has huge potential to transform education for teachers and students alike.

Question Generation · Question-Generation +1

Raising Student Completion Rates with Adaptive Curriculum and Contextual Bandits

no code implementations 28 Jul 2022 Robert Belfer, Ekaterina Kochmar, Iulian Vlad Serban

We present an adaptive learning Intelligent Tutoring System, which uses model-based reinforcement learning in the form of contextual bandits to assign learning activities to students.

Model-based Reinforcement Learning · Multi-Armed Bandits +2
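
As a rough illustration of the contextual-bandit assignment described in the entry above (not the authors' implementation), the sketch below picks a learning activity for a student with a disjoint LinUCB policy. The activity names, context features, and simulated reward are invented for the example.

```python
# Illustrative sketch only: a disjoint LinUCB contextual bandit choosing a
# learning activity (arm) from a student-context vector. Nothing here is taken
# from the paper or the Korbit system; names and features are hypothetical.
import numpy as np

class LinUCB:
    def __init__(self, n_arms, n_features, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(n_features) for _ in range(n_arms)]    # per-arm design matrices
        self.b = [np.zeros(n_features) for _ in range(n_arms)]  # per-arm reward vectors

    def select(self, x):
        """Return the arm with the highest upper confidence bound for context x."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Fold the observed reward into the chosen arm's statistics."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Hypothetical usage: three candidate activities, four student features
activities = ["hint", "worked_example", "practice_problem"]
bandit = LinUCB(n_arms=len(activities), n_features=4)
rng = np.random.default_rng(0)
for _ in range(100):
    context = rng.random(4)             # e.g. mastery, recency, difficulty, engagement
    arm = bandit.select(context)
    reward = float(rng.random() < 0.5)  # stand-in for "student completed the activity"
    bandit.update(arm, context, reward)
```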

Few-shot Question Generation for Personalized Feedback in Intelligent Tutoring Systems

no code implementations 8 Jun 2022 Devang Kulshreshtha, Muhammad Shayan, Robert Belfer, Siva Reddy, Iulian Vlad Serban, Ekaterina Kochmar

Our personalized feedback can pinpoint correct, incorrect, or missing phrases in student answers, as well as guide students towards the correct answer by asking a question in natural language.

Generative Question Answering · Question Generation +3
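
The paper generates this kind of feedback with a few-shot neural model; purely to illustrate the behaviour described above, the toy sketch below flags reference phrases missing from a student answer and turns one of them into a guiding question. All strings and the overlap heuristic are invented.

```python
# Toy illustration only: crude keyword overlap standing in for the paper's
# few-shot neural question generation. Example answers are invented.
def missing_terms(reference: str, student: str) -> list[str]:
    """Return reference keywords that the student answer does not mention."""
    stop = {"the", "a", "an", "of", "to", "and", "is", "are", "in", "it"}
    student_terms = set(student.lower().split())
    return [w for w in reference.lower().split()
            if w not in stop and w not in student_terms]

def feedback_question(reference: str, student: str) -> str:
    """Either confirm the answer or ask about the first missing keyword."""
    missing = missing_terms(reference, student)
    if not missing:
        return "Good job! Your answer covers the key points."
    return f"You're on the right track. What role does '{missing[0]}' play here?"

print(feedback_question(
    reference="Gradient descent updates parameters in the direction of the negative gradient",
    student="It updates the parameters step by step",
))
```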

Question Personalization in an Intelligent Tutoring System

no code implementations 25 May 2022 Sabina Elkins, Robert Belfer, Ekaterina Kochmar, Iulian Serban, Jackie C. K. Cheung

This paper investigates personalization in the field of intelligent tutoring systems (ITS).

Word Complexity is in the Eye of the Beholder

no code implementations NAACL 2021 Sian Gooding, Ekaterina Kochmar, Seid Muhie Yimam, Chris Biemann

Lexical complexity is a highly subjective notion, yet this factor is often neglected in lexical simplification and readability systems which use a "one-size-fits-all" approach.

Lexical Simplification

Deep Discourse Analysis for Generating Personalized Feedback in Intelligent Tutor Systems

no code implementations 13 Mar 2021 Matt Grenander, Robert Belfer, Ekaterina Kochmar, Iulian V. Serban, François St-Hilaire, Jackie C. K. Cheung

We test our method in a dialogue-based ITS and demonstrate that our approach results in high-quality feedback and significantly improved student learning gains.

Discourse Segmentation · Misconceptions

Detecting Multiword Expression Type Helps Lexical Complexity Assessment

1 code implementation LREC 2020 Ekaterina Kochmar, Sian Gooding, Matthew Shardlow

In this work, we re-annotate the Complex Word Identification Shared Task 2018 dataset of Yimam et al. (2017), which provides complexity scores for a range of lexemes, with the types of MWEs.

Complex Word Identification · Text Simplification +1

Automated Personalized Feedback Improves Learning Gains in an Intelligent Tutoring System

no code implementations 5 May 2020 Ekaterina Kochmar, Dung Do Vu, Robert Belfer, Varun Gupta, Iulian Vlad Serban, Joelle Pineau

Our model is used in Korbit, a large-scale dialogue-based ITS launched in 2019 and used by thousands of students, and we demonstrate that the personalized feedback leads to considerable improvement in student learning outcomes and in the subjective evaluation of the feedback.

BIG-bench Machine Learning

Recursive Context-Aware Lexical Simplification

no code implementations IJCNLP 2019 Sian Gooding, Ekaterina Kochmar

This paper presents a novel architecture for recursive context-aware lexical simplification, REC-LS, that is capable of (1) making use of the wider context when detecting the words in need of simplification and suggesting alternatives, and (2) taking previous simplification steps into account.

Lexical Simplification
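
A minimal sketch of the recursive loop the entry above describes: detect complex words, substitute, and repeat so that later passes see the effect of earlier substitutions. The toy frequency list and synonym table are invented; REC-LS itself uses learned complexity and substitution models.

```python
# Sketch only: recursion via repeated passes over the sentence, stopping once
# no token is judged complex or no substitution is available.
TOY_FREQ = {"use": 900, "utilize": 5, "buy": 800, "purchase": 40, "big": 950, "substantial": 15}
TOY_SYNONYMS = {"utilize": "use", "purchase": "buy", "substantial": "big"}

def is_complex(word: str, threshold: int = 100) -> bool:
    """Treat rare words (low toy frequency) as complex."""
    return TOY_FREQ.get(word.lower(), 0) < threshold

def simplify(sentence: str, max_steps: int = 10) -> str:
    tokens = sentence.split()
    for _ in range(max_steps):
        changed = False
        for i, tok in enumerate(tokens):
            if is_complex(tok) and tok.lower() in TOY_SYNONYMS:
                tokens[i] = TOY_SYNONYMS[tok.lower()]  # substitute, then re-check next pass
                changed = True
        if not changed:  # nothing left to simplify
            break
    return " ".join(tokens)

print(simplify("They utilize substantial resources to purchase equipment"))
# -> "They use big resources to buy equipment"
```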

Complex Word Identification as a Sequence Labelling Task

1 code implementation ACL 2019 Sian Gooding, Ekaterina Kochmar

Complex Word Identification (CWI) is concerned with the detection of words in need of simplification and is a crucial first step in a simplification pipeline.

Complex Word Identification · Feature Engineering +1
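
To make the sequence-labelling framing concrete (the data layout only, not the paper's model), the sketch below labels every token in the running text as complex ("C") or not ("N") and builds simple per-token features of the kind a sequence labeller might consume. The example sentence, labels, and features are invented.

```python
# Illustration only: CWI as token-level sequence labelling with hand-made labels.
sentence = ["The", "committee", "ratified", "the", "amendment", "unanimously"]
labels   = ["N",   "N",         "C",        "N",   "C",         "C"]

def token_features(tokens: list[str], i: int) -> dict:
    """Simple contextual features for the i-th token."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "length": len(tok),
        "is_title": tok.istitle(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

training_example = [(token_features(sentence, i), lab) for i, lab in enumerate(labels)]
for feats, lab in training_example:
    print(lab, feats["lower"], feats["length"])
```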

Automatic learner summary assessment for reading comprehension

no code implementations NAACL 2019 Menglin Xia, Ekaterina Kochmar, Ted Briscoe

Automating the assessment of learner summaries provides a useful tool for evaluating learner reading comprehension.

Reading Comprehension

Modelling semantic acquisition in second language learning

no code implementations WS 2017 Ekaterina Kochmar, Ekaterina Shutova

Using methods of statistical analysis, we investigate how semantic knowledge is acquired in English as a second language and evaluate the pace of development across a number of predicate types and content word combinations, as well as across the levels of language proficiency and native languages.
