System Demo: Tool and Infrastructure for Offensive Language Error Analysis (OLEA) in English

28 Oct 2022  ·  Marie Grace, Xajavion "Jay" Seabrum, Dananjay Srinivas, Alexis Palmer

The automatic detection of offensive language is a pressing societal need. Many systems perform well on explicit offensive language but struggle to detect more complex, nuanced, or implicit cases of offensive and hateful language. OLEA is an open-source Python library that provides easy-to-use tools for error analysis in the context of detecting offensive language in English. OLEA also provides an infrastructure for the redistribution of new datasets and analysis methods that requires very little coding.
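To make the idea of error analysis for offensive-language detection concrete, below is a minimal, hypothetical sketch using pandas and scikit-learn. It does not reflect OLEA's actual API; the column names, labels, and example data are assumptions chosen purely for illustration.

```python
# Hypothetical sketch of error analysis for offensive-language detection.
# NOTE: this does NOT use OLEA's API; all names and data are illustrative.
import pandas as pd
from sklearn.metrics import classification_report

# Illustrative data: gold labels, model predictions, and a coarse
# explicit/implicit annotation per example (assumed columns).
df = pd.DataFrame({
    "text": [
        "explicit slur example",
        "subtle dehumanizing comparison",
        "neutral sentence",
        "sarcastic stereotype",
    ],
    "gold": ["offensive", "offensive", "not_offensive", "offensive"],
    "pred": ["offensive", "not_offensive", "not_offensive", "not_offensive"],
    "subtype": ["explicit", "implicit", "none", "implicit"],
})

# Overall metrics over the whole evaluation set.
print(classification_report(df["gold"], df["pred"], zero_division=0))

# Break misclassified examples down by subtype to see where the model
# struggles, e.g. implicit vs. explicit offensive language.
errors = df[df["gold"] != df["pred"]]
print(errors.groupby("subtype").size())
```

Grouping errors by linguistic or contextual categories in this way is the kind of analysis the abstract describes: it surfaces whether a model's failures concentrate in implicit or nuanced cases rather than explicit ones.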
