GEM (Generation, Evaluation, and Metrics)

Introduced by Gehrmann et al. in The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics

Generation, Evaluation, and Metrics (GEM) is a benchmark environment for Natural Language Generation with a focus on its Evaluation, both through human annotations and automated Metrics.

GEM aims to:

  • measure NLG progress across 13 datasets spanning many NLG tasks and languages.
  • provide an in-depth analysis of data and models presented via data statements and challenge sets.
  • develop standards for evaluation of generated text using both automated and human metrics.

It is our goal to regularly update GEM and to encourage a shift toward more inclusive practices in dataset development, either by extending existing datasets or by developing new datasets for additional languages.
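The individual GEM tasks are distributed through the Hugging Face datasets hub. As a minimal sketch (the "gem" hub identifier and the "common_gen" configuration name are assumptions based on the hub listing, not part of this description), a single task can be loaded like this:

    # Minimal sketch: loading one GEM task via the Hugging Face `datasets` library.
    # The "gem" identifier and "common_gen" config name are assumptions; check the
    # hub listing for the full set of available configurations.
    from datasets import load_dataset

    # Download the CommonGen subset (one of the benchmark's tasks).
    common_gen = load_dataset("gem", "common_gen")

    # Inspect the first validation example; fields include the reference target text.
    print(common_gen["validation"][0])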

Source: https://gem-benchmark.com/


License


  • Unknown
