Deciphering implicit hate: evaluating automated detection algorithms for multimodal hate

Accurate detection and classification of online hate is a difficult task. Implicit hate is particularly challenging, as such content tends to have unusual syntax, polysemic words, and fewer markers of prejudice (e.g., slurs). The problem is heightened with multimodal content, such as memes (combinations of text and images), as they are often harder to decipher than unimodal content (e.g., text alone). This paper evaluates the role of semantic and multimodal context for detecting implicit and explicit hate. We show that both text and visual enrichment improve model performance, with the multimodal model (F1 = 0.771) outperforming the other models (F1 = 0.544, 0.737, and 0.754). While the unimodal-text context-aware (transformer) model was the most accurate on the subtask of implicit hate detection, the multimodal model outperformed it overall because of a lower propensity towards false positives. We find that all models perform better on content with full annotator agreement, and that multimodal models are best at classifying content on which annotators disagree. To conduct these investigations, we undertook high-quality annotation of a sample of 5,000 multimodal entries. Tweets were annotated for primary category, modality, and strategy. We make this corpus, along with the codebook, code, and final model, freely available.
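
As context for the metric the abstract compares models by: F1 is the harmonic mean of precision and recall, so a model with fewer false positives (higher precision) can achieve a higher overall F1 even if a competitor is more accurate on an individual subtask. The sketch below is a minimal illustration of that trade-off; the confusion counts are hypothetical, invented for demonstration, and are not results from the paper.

    def f1_score(tp: int, fp: int, fn: int) -> float:
        """F1 is the harmonic mean of precision and recall."""
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    # Hypothetical counts (not from the paper): trading a little recall
    # for far fewer false positives raises F1 overall.
    print(round(f1_score(tp=80, fp=40, fn=20), 3))  # 0.727: high recall, many false positives
    print(round(f1_score(tp=75, fp=15, fn=25), 3))  # 0.789: fewer false positives wins on F1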


Bibliographic Details
Main Authors: Botelho, A, Hale, S, Vidgen, B
Format: Journal article
Language: English
Published: Association for Computational Linguistics, 2021
Institution: University of Oxford
Record ID: oxford-uuid:e1e651d5-8828-4932-96cd-5aa072146e52