Malicious or Benign? Towards Effective Content Moderation for Children's Videos

24 May 2023 · Syed Hammad Ahmed, Muhammad Junaid Khan, H. M. Umer Qaisar, Gita Sukthankar

Online video platforms receive hundreds of hours of uploads every minute, making manual content moderation impossible. Unfortunately, the most vulnerable consumers of malicious video content are children aged 1-5, whose attention is easily captured by bursts of color and sound. Scammers attempting to monetize their content may craft malicious children's videos that are superficially similar to educational videos but include scary and disgusting characters, violent motions, loud music, and disturbing noises. Prominent video hosting platforms like YouTube have taken measures to mitigate malicious content on their platforms, but these videos often go undetected by current content moderation tools, which focus on removing pornographic or copyrighted content. This paper introduces our toolkit, Malicious or Benign (MoB), for promoting research on automated content moderation of children's videos. We present 1) a customizable annotation tool for videos, 2) a new dataset with difficult-to-detect test cases of malicious content, and 3) a benchmark suite of state-of-the-art video classification models.


Datasets

Introduced in the Paper: MoB

Results from the Paper

Task                  Dataset  Model     Accuracy  Global Rank
Video Classification  MoB      VTN       77.85     #1
Video Classification  MoB      I3D       72.11     #2
Video Classification  MoB      ConvLSTM  69.71     #3
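
The table reports top-1 accuracy (%) on the MoB benchmark for each video classifier. As a rough illustration of how such a number is computed, the sketch below evaluates a generic PyTorch video classifier over a test loader; the model, data loader, label convention (benign vs. malicious), and device handling are assumptions for illustration, not the authors' released benchmark code.

```python
# Minimal sketch: top-1 accuracy of a video classifier on a test split.
# Assumes clips arrive as (B, C, T, H, W) tensors and the model returns
# per-class logits; these are placeholder conventions, not the paper's pipeline.
import torch
from torch.utils.data import DataLoader

@torch.no_grad()
def evaluate_accuracy(model: torch.nn.Module, loader: DataLoader, device: str = "cuda") -> float:
    """Return the percentage of clips whose predicted class matches the label."""
    model.eval().to(device)
    correct, total = 0, 0
    for clips, labels in loader:
        clips, labels = clips.to(device), labels.to(device)
        logits = model(clips)            # shape (B, num_classes)
        preds = logits.argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return 100.0 * correct / total       # reported as a percentage, e.g. 77.85
```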
