BEIR (Benchmarking IR) is a heterogeneous benchmark spanning diverse information retrieval (IR) tasks. BEIR makes it possible to systematically study the zero-shot generalization capabilities of multiple neural retrieval approaches.
188 PAPERS • 19 BENCHMARKS
The corpus contains review sentences, mostly about products in the electronics domain, annotated and segregated into 4 comparison categories. Each comparison sentence is annotated with the names of the products (PROD1 and PROD2), the aspect (ASP), and the predicate (PRED). The dataset contains sentences auto-labeled on the SNAP dataset as well as manually labeled sentences from the following corpora:
1 PAPER • 1 BENCHMARK