Automated Essay Scoring

26 papers with code • 1 benchmark • 1 dataset

Automated Essay Scoring (AES) is the task of assigning a score to an essay, usually in the context of assessing the language ability of a language learner. The quality of an essay is typically judged along four primary dimensions: topic relevance, organization and coherence, word usage and sentence complexity, and grammar and mechanics.

Source: A Joint Model for Multimodal Document Quality Assessment
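The four dimensions above can be illustrated with crude heuristic proxies. The sketch below is a toy example only, not the method of any paper listed here: real AES systems (e.g. the BERT-based regressors mentioned below) learn such signals from graded training data. All function and feature names are hypothetical.

```python
import re

def score_essay(text: str) -> dict:
    """Toy heuristic proxies for essay-quality dimensions (illustration only)."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = len(words) or 1
    # Word usage proxy: type-token ratio (lexical diversity)
    ttr = len(set(words)) / n_words
    # Sentence complexity proxy: mean sentence length in words
    mean_sent_len = n_words / max(len(sentences), 1)
    # Organization/coherence proxy: coverage of common discourse connectives
    connectives = {"however", "therefore", "moreover", "first", "finally"}
    organization = sum(w in connectives for w in set(words)) / len(connectives)
    return {
        "lexical_diversity": round(ttr, 2),
        "mean_sentence_length": round(mean_sent_len, 2),
        "organization": round(organization, 2),
    }

print(score_essay("First we argue. However, we conclude."))
```

A trained system would replace these hand-written proxies with learned representations, but the decomposition into per-dimension signals mirrors how multi-trait scoring (see the autoregressive multi-trait paper below) is framed.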

Most implemented papers

VerAs: Verify then Assess STEM Lab Reports

psunlpgroup/veras 7 Feb 2024

With an increasing focus in STEM education on critical thinking skills, science writing plays an ever more important role in curricula that stress inquiry skills.

Can Large Language Models Automatically Score Proficiency of Written Essays?

watheq9/aes-with-llms 10 Mar 2024

Although many methods have been proposed to address the problem of automated essay scoring (AES) over the last 50 years, they still leave much to be desired in terms of effectiveness.

Autoregressive Score Generation for Multi-trait Essay Scoring

doheejin/arts 13 Mar 2024

Recently, encoder-only pre-trained models such as BERT have been successfully applied in automated essay scoring (AES) to predict a single overall score.

Exploring LLM Prompting Strategies for Joint Essay Scoring and Feedback Generation

webis-de/bea-24 24 Apr 2024

We evaluate both the AES performance that LLMs can achieve with prompting only and the helpfulness of the generated essay feedback.