Automated Writing Evaluation
1 paper with code • 0 benchmarks • 0 datasets
Automated writing evaluation refers to the task of analyzing and scoring written text based on features such as syntax, text complexity, and vocabulary range.
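As a minimal sketch of the feature-based approach described above, the snippet below computes two simple surface proxies: average sentence length (for text complexity) and type-token ratio (for vocabulary range). The feature choices and the `text_features` helper are illustrative assumptions, not a standard pipeline from the literature.

```python
def text_features(text):
    """Compute simple surface features used in automated writing evaluation.

    These are illustrative proxies only: real systems use richer
    syntactic and discourse features.
    """
    words = text.split()
    # Crude sentence splitting on terminal punctuation.
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    n_words = len(words)
    # Normalize tokens for a rough vocabulary count.
    vocab = {w.lower().strip(".,!?;:") for w in words}
    return {
        "n_words": n_words,
        # Proxy for text complexity.
        "avg_sentence_len": n_words / max(len(sentences), 1),
        # Proxy for vocabulary range.
        "type_token_ratio": len(vocab) / max(n_words, 1),
    }

feats = text_features("The cat sat. The cat ran quickly!")
```

In practice these features would feed a trained scoring model rather than being reported directly.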
Benchmarks
These leaderboards are used to track progress in Automated Writing Evaluation
No evaluation results yet. Help compare methods by submitting evaluation metrics.
Most implemented papers
HaRiM$^+$: Evaluating Summary Quality with Hallucination Risk
One of the challenges of developing a summarization model arises from the difficulty in measuring the factual inconsistency of the generated text.