Tweets and items from psychological scales for sexism detection with counterfactual examples.

This dataset consists of three types of 'short-text' content:

  1. social media posts (tweets),
  2. psychological survey items, and
  3. synthetic adversarial modifications of the first two categories.

The tweet data can be further divided into three separate datasets based on its source:

  1.1 the hostile sexism dataset,
  1.2 the benevolent sexism dataset, and
  1.3 the callme sexism dataset.

1.1 and 1.2 are pre-existing datasets obtained from Waseem, Z., & Hovy, D. (2016) and Jha, A., & Mamidi, R. (2017) that we re-annotated (see our paper and data statement for further information). The rationale for including these datasets specifically is that they feature a variety of sexist expressions in real conversational (social media) settings. In particular, these expressions range from overtly antagonizing the minority gender through negative stereotypes (1.1) to leveraging positive stereotypes to subtly dismiss it as less capable and fragile (1.2).

The callme sexism dataset (1.3) was collected by us based on the presence of the phrase 'call me sexist but' in tweets. The rationale behind this choice of query was that many Twitter users voice potentially sexist opinions and signal this by including the phrase, which arguably serves as a disclaimer for sexist opinions.
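To make the selection criterion concrete, the following is a minimal sketch of such a phrase filter; the regular expression, matching rules (case folding, optional comma), and example tweets are illustrative assumptions, not the actual collection pipeline described in our paper.

```python
import re

# Illustrative sketch only: the exact matching rules are assumptions, not the
# collection code used to build the callme sexism dataset.
CALLME_PATTERN = re.compile(r"\bcall me sexist,?\s+but\b", re.IGNORECASE)

def has_callme_disclaimer(tweet_text: str) -> bool:
    """Return True if the tweet contains the 'call me sexist but' phrase."""
    return bool(CALLME_PATTERN.search(tweet_text))

# Hypothetical usage over a small batch of tweets.
tweets = [
    "Call me sexist, but I think ...",
    "An unrelated tweet about the weather.",
]
candidates = [t for t in tweets if has_callme_disclaimer(t)]
```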

The survey items (2) come from attitudinal surveys designed to measure sexist attitudes and gender bias in participants. We provide a detailed account of our selection procedure in our paper.

Finally, the adversarial examples were generated by crowdworkers on Amazon Mechanical Turk, who made minimal changes to tweets and scale items in order to turn sexist examples into non-sexist ones. We hope that these examples will help control for typical confounds in non-sexist data (e.g., topic, civility), lead to datasets with fewer biases, and consequently allow us to train more robust machine learning models. For ethical reasons, we only asked workers to turn sexist examples into non-sexist ones, and not vice versa.

The dataset is annotated to capture cases where text is sexist because of its content (what the speaker believes) or its phrasing (the speaker's choice of words). We explain the rationale for this codebook in our paper.
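As a rough illustration of how these annotations might be consumed downstream, here is a minimal loading sketch; the file name and all column names (including the separate content and phrasing flags) are hypothetical placeholders, so please consult the data statement for the actual schema.

```python
import pandas as pd

# Hypothetical schema: the file name and every column name below are
# placeholders; consult the data statement for the released format.
df = pd.read_csv("sexism_data.csv")

# Split by source: tweets (hostile, benevolent, callme), scale items,
# and adversarial modifications.
by_source = {name: group for name, group in df.groupby("dataset")}

# The codebook distinguishes texts that are sexist because of their content
# (what the speaker believes) from texts that are sexist because of their
# phrasing (the speaker's choice of words); both are assumed here to be
# boolean columns.
sexist_content = df[df["sexist_content"]]
sexist_phrasing = df[df["sexist_phrasing"]]
```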
