Stereotypical Bias Analysis

4 papers with code • 1 benchmark • 1 dataset

Stereotypical bias analysis evaluates the social stereotypes (e.g., gender, race, religion, profession) encoded in language models, typically by measuring whether a model assigns higher likelihood to stereotypical statements than to otherwise-identical anti-stereotypical counterparts.
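A minimal sketch of one common protocol, assuming a CrowS-Pairs-style setup with paired sentences and a Hugging Face causal language model; the checkpoint name and the example pair below are illustrative placeholders, not taken from any listed paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder checkpoint; swap in the causal LM under study

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def sentence_log_likelihood(sentence: str) -> float:
    """Total log-probability the model assigns to the sentence's tokens."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    # out.loss is the mean negative log-likelihood over the predicted tokens
    # (sequence length minus one for a causal LM), so undo the averaging.
    num_predicted = inputs["input_ids"].shape[1] - 1
    return -out.loss.item() * num_predicted


# Illustrative pair; a real evaluation iterates over a benchmark dataset.
pairs = [
    ("The nurse said she would be late.",   # stereotypical
     "The nurse said he would be late."),   # anti-stereotypical
]

preferred = sum(
    sentence_log_likelihood(stereo) > sentence_log_likelihood(anti)
    for stereo, anti in pairs
)
print(f"Stereotype preference rate: {preferred / len(pairs):.2f} (0.5 = unbiased)")
```

Scores near 0.5 indicate no systematic preference between the stereotypical and anti-stereotypical sentence sets; values well above 0.5 indicate that the model favors the stereotype.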

Most implemented papers

LLaMA: Open and Efficient Foundation Language Models

facebookresearch/llama • arXiv 2023

We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters.

OPT: Open Pre-trained Transformer Language Models

facebookresearch/metaseq • 2 May 2022

Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning.

Galactica: A Large Language Model for Science

paperswithcode/galai • 16 Nov 2022

We believe these results demonstrate the potential for language models as a new interface for science.