Search Results for author: Wen-Haw Chong

Found 5 papers, 2 with code

Pro-Cap: Leveraging a Frozen Vision-Language Model for Hateful Meme Detection

3 code implementations · 16 Aug 2023 · Rui Cao, Ming Shan Hee, Adriel Kuek, Wen-Haw Chong, Roy Ka-Wei Lee, Jing Jiang

Specifically, we prompt a frozen PVLM by asking hateful content-related questions and use the answers as image captions (which we call Pro-Cap), so that the captions contain information critical for hateful content detection.
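The probing idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `query_vlm` is a hypothetical stand-in for a frozen pre-trained vision-language model (the paper uses a real PVLM), and the probing questions shown are illustrative examples, not the paper's exact prompts.

```python
# Sketch of the Pro-Cap probing idea: ask a frozen vision-language model
# hateful-content-related questions about an image, then join the answers
# into a caption for a downstream hateful-meme classifier.

PROBING_QUESTIONS = [
    "What is shown in the image?",          # generic content
    "What race or ethnicity is depicted?",  # protected-attribute probes
    "What religion is referenced?",         # (illustrative, not exhaustive)
]

def query_vlm(image, question):
    """Hypothetical frozen-VLM call; a real system would invoke a PVLM here."""
    canned = {  # stubbed answers so the sketch is self-contained
        "What is shown in the image?": "a crowd at a rally",
        "What race or ethnicity is depicted?": "no specific group",
        "What religion is referenced?": "none",
    }
    return canned[question]

def pro_cap(image):
    """Concatenate the VLM's answers into a single probe-based caption."""
    answers = [query_vlm(image, q) for q in PROBING_QUESTIONS]
    return "; ".join(answers)

caption = pro_cap(image=None)  # stub ignores the image argument
print(caption)
```

The resulting caption is then fed, alongside the meme's text, to the detection model, so the classifier sees the probed content without the VLM ever being fine-tuned.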

Tasks: Image Captioning, Language Modelling (+2)

Decoding the Underlying Meaning of Multimodal Hateful Memes

1 code implementation · 28 May 2023 · Ming Shan Hee, Wen-Haw Chong, Roy Ka-Wei Lee

Recent studies have proposed models that yielded promising performance for the hateful meme classification task.

Tasks: Benchmarking, Hateful Meme Classification

Prompting for Multimodal Hateful Meme Classification

No code implementations · 8 Feb 2023 · Rui Cao, Roy Ka-Wei Lee, Wen-Haw Chong, Jing Jiang

Specifically, we construct simple prompts and provide a few in-context examples to exploit the implicit knowledge in the pre-trained RoBERTa language model for hateful meme classification.
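The prompting scheme described above can be sketched as a cloze-style template with a few in-context demonstrations, scored by a masked language model. Everything below is a hedged sketch: `mask_word_score` is a hypothetical stand-in for RoBERTa's mask-filling head, and the template, demonstrations, and label words are illustrative, not the paper's exact prompts.

```python
# Sketch of prompt-based hateful meme classification: wrap the meme's
# text in a cloze template with in-context examples, then pick the label
# whose verbalizer word the masked LM scores highest at <mask>.

DEMONSTRATIONS = (
    "Text: a lovely picture of puppies. It was good. "
    "Text: an attack on a minority group. It was hateful. "
)

LABEL_WORDS = {"good": 0, "hateful": 1}  # verbalizer word -> class id

def mask_word_score(prompt, word):
    """Hypothetical MLM score for `word` at the <mask> position.

    Stub for illustration; a real implementation would run RoBERTa and
    read the logit of `word` at the masked token.
    """
    return 0.9 if "slur" in prompt and word == "hateful" else 0.5

def classify(meme_text):
    prompt = f"{DEMONSTRATIONS}Text: {meme_text}. It was <mask>."
    best = max(LABEL_WORDS, key=lambda w: mask_word_score(prompt, w))
    return LABEL_WORDS[best]
```

No parameters are updated: the classification decision comes entirely from the pre-trained model's preference between the label words given the prompt.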

Tasks: Classification, Hateful Meme Classification (+1)

On Explaining Multimodal Hateful Meme Detection Models

No code implementations · 4 Apr 2022 · Ming Shan Hee, Roy Ka-Wei Lee, Wen-Haw Chong

For instance, it is unclear whether these models can capture the derogatory references or slurs across the modalities (i.e., image and text) of hateful memes.

Tasks: Classification, Hateful Meme Classification

Disentangling Hate in Online Memes

No code implementations · 9 Aug 2021 · Rui Cao, Ziqing Fan, Roy Ka-Wei Lee, Wen-Haw Chong, Jing Jiang

Our experiment results show that DisMultiHate is able to outperform state-of-the-art unimodal and multimodal baselines in the hateful meme classification task.

Tasks: Classification, Hateful Meme Classification
