Automated Essay Scoring
26 papers with code • 1 benchmark • 1 dataset
Automated Essay Scoring (AES) is the task of assigning a score to an essay, usually to assess the language ability of a language learner. Essay quality is commonly judged along four primary dimensions: topic relevance, organization and coherence, word usage and sentence complexity, and grammar and mechanics.
Source: A Joint Model for Multimodal Document Quality Assessment
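Before the neural approaches listed below, AES systems often scored essays from hand-crafted surface features that loosely track the dimensions above. As a minimal sketch (the features and weights here are illustrative, not from any cited paper):

```python
import re

def extract_features(essay: str) -> dict:
    """Compute simple surface features sometimes used as weak proxies
    for word usage, sentence complexity, and vocabulary range."""
    words = re.findall(r"[A-Za-z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    n_words = len(words)
    return {
        "n_words": n_words,
        "avg_sentence_len": n_words / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(n_words, 1),
    }

def score_essay(essay: str) -> float:
    """Map features to a 0-10 score with hand-set illustrative weights;
    a real system would fit these weights to human-scored essays."""
    f = extract_features(essay)
    raw = (0.02 * f["n_words"]
           + 0.1 * f["avg_sentence_len"]
           + 2.0 * f["type_token_ratio"])
    return round(min(raw, 10.0), 2)
```

Such feature-based baselines remain useful as sanity checks when evaluating the learned models below.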
Most implemented papers
VerAs: Verify then Assess STEM Lab Reports
With an increasing focus in STEM education on critical thinking skills, science writing plays an ever more important role in curricula that stress inquiry skills.
Can Large Language Models Automatically Score Proficiency of Written Essays?
Although many methods have been proposed to address the problem of automated essay scoring (AES) over the last 50 years, their effectiveness still leaves much to be desired.
Autoregressive Score Generation for Multi-trait Essay Scoring
Recently, encoder-only pre-trained models such as BERT have been successfully applied in automated essay scoring (AES) to predict a single overall score.
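The encoder-only setup described here typically pools the encoder's output (e.g. BERT's [CLS] embedding) and passes it through a small regression head to predict the overall score. A minimal NumPy sketch of that head, with random stand-ins for the encoder output (dimensions follow BERT-base; the weights are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def regression_head(pooled: np.ndarray, W: np.ndarray, b: float) -> np.ndarray:
    """Linear layer mapping each pooled encoder vector to one scalar score."""
    return pooled @ W + b

hidden = 768                             # BERT-base hidden size
W = rng.normal(scale=0.02, size=hidden)  # illustrative, untrained weights
pooled = rng.normal(size=(4, hidden))    # stand-in for encoder [CLS] outputs
scores = regression_head(pooled, W, 0.0)
```

In practice the head is trained jointly with the encoder on human-scored essays, usually with a mean-squared-error loss.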
Exploring LLM Prompting Strategies for Joint Essay Scoring and Feedback Generation
We evaluate both the AES performance that LLMs can achieve with prompting only and the helpfulness of the generated essay feedback.
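Prompting-only AES of this kind usually amounts to assembling a rubric-conditioned instruction and parsing the model's reply. A sketch of the prompt-construction step (the rubric wording and output format here are assumptions, not the paper's actual prompts):

```python
def build_scoring_prompt(essay: str, rubric: dict) -> str:
    """Assemble a zero-shot prompt asking an LLM for a score plus feedback.

    `rubric` maps criterion names to short descriptions; both the rubric
    and the response format below are illustrative placeholders.
    """
    criteria = "\n".join(f"- {name}: {desc}" for name, desc in rubric.items())
    return (
        "Score the essay below from 1 to 5 and give brief feedback.\n"
        f"Rubric:\n{criteria}\n\n"
        f"Essay:\n{essay}\n\n"
        "Answer exactly as:\nScore: <1-5>\nFeedback: <one paragraph>"
    )
```

The returned string would be sent to an LLM; the fixed response format makes the score easy to extract with a simple parser.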