Most existing dialogue systems fail to respond properly to potentially unsafe user utterances, either ignoring them or passively agreeing with them.

To address this issue, we introduce ProsocialDialog, the first large-scale multi-turn dialogue dataset to teach conversational agents to respond to problematic content following social norms. Covering diverse unethical, problematic, biased, and toxic situations, ProsocialDialog contains responses that encourage prosocial behavior, grounded in commonsense social rules (i.e., rules-of-thumb, RoTs).

ProsocialDialog consists of 58K dialogues between a speaker showing potentially unsafe behavior and a speaker giving constructive feedback for more socially acceptable behavior. Specifically, it contains a rich suite of:

  • 331K utterances
  • 160K rules-of-thumb (RoTs)
  • 497K dialogue safety labels accompanied by free-form rationales
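To make this structure concrete, the sketch below shows what a single annotated turn might look like and how a safety label could gate a model's response. The field names and label strings are illustrative assumptions for this sketch, not necessarily the dataset's exact schema.

```python
# Hypothetical example of one annotated ProsocialDialog turn.
# Field names and label values are assumptions, not the official schema.
turn = {
    "context": "I'm going to copy my friend's homework and submit it.",
    "response": "Copying someone else's work is dishonest. Maybe ask your "
                "friend to explain the material instead?",
    "rots": ["It's wrong to plagiarize others' work."],   # rules-of-thumb
    "safety_label": "needs_caution",                      # dialogue safety label
    "rationale": "The speaker intends to commit academic dishonesty.",
}

def needs_prosocial_response(record):
    """Return True if the turn is flagged as anything other than casual/safe,
    i.e., the agent should give constructive feedback rather than agree."""
    return record["safety_label"] != "casual"

print(needs_prosocial_response(turn))  # True for this example
```

In a training pipeline, a check like this could route flagged turns to a response model conditioned on the associated RoTs, so the feedback is grounded in an explicit social norm.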
