How the use of emoji on Islamophobic Facebook pages amplifies racism
- Written by Ariadna Matamoros-Fernández, Lecturer in Digital Media at the School of Communication, Queensland University of Technology
In the aftermath of the fatal stabbing in Melbourne’s Bourke Street on Friday, Facebook and other social media platforms were flooded with hateful messages towards Muslims and Islam.
Over the weekend, I browsed several Islamophobic Facebook pages, such as “No sharia law – Never ever give up Australia” and “Reclaim Australia”. These groups were using the incident to dehumanise Muslims by sharing mocking memes, GIFs, decontextualised information, and blatantly racist comments.
I noted the “angry” reaction button being widely used in response to posts, while comments on posts were often accompanied by other emoji that emphasised states of rage towards Muslims: angry face, pouting face, swearing face.
Screengrabbed by author, November 2018
This kind of emoji use accompanies overt racist language and practices, but emoji can also be used to cloak everyday microaggressions in humour and play. For example, previous research has found that online harassers often use cues such as smiley emoticons to soften the appearance of their abuse.
Racism on social media is structural too. A larger body of evidence shows that it can be built into, and normalised by, social media platforms, which also have the power to curb it by instituting responsible policies and processes.
Using emoji to amplify Islamophobia
Facebook introduced “reactions” in early 2016. Beyond just a simple “like”, this function allowed users to interact with posts and comments by clicking on emoji-like buttons to signify emotions: love, laughter, surprise, sadness and anger.
Since then, hate groups and other users have appropriated this technical feature to spread anger towards specific targets. One way of doing this involves overlaying a question on an image or video and encouraging users to respond by choosing between two reactions, with the “angry” reaction typically being one of the options.
In the example below, taken from the Facebook page of a Belgian far-right political party, users are asked whether the school year should be adapted to accommodate Islamic traditions.
Screengrabbed by author, September 2017
In this way, “reactions” facilitate the performance of rage and antagonism towards other groups by allowing users to click on an angry-faced emoji.
The way Facebook uses this information can have problematic consequences. According to a 2016 blog post, Facebook’s algorithms interpret a click on the “angry” reaction as a signal that the user wants to see more content of that kind. Islamophobic content that attracts high numbers of “angry” reactions therefore has the potential to become even more visible and shareable.
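To see why this matters, consider a deliberately simplified sketch of reaction-weighted ranking. The weights and function below are illustrative assumptions, not Facebook’s actual (non-public) algorithm; the point is only that when every reaction counts as positive engagement, mass “angry” clicks push a post up the feed.

```python
# Illustrative sketch only: the weights and scoring rule are assumptions
# for exposition, not Facebook's actual (non-public) ranking algorithm.

REACTION_WEIGHTS = {
    "like": 1.0,
    "love": 1.0,
    "haha": 1.0,
    "wow": 1.0,
    "sad": 1.0,
    "angry": 1.0,  # anger counts as engagement just like any other reaction
}

def engagement_score(reaction_counts: dict) -> float:
    """Sum weighted reaction counts into a single engagement signal.

    Because every reaction contributes positively, a post flooded with
    "angry" clicks can outrank calmer posts and be shown to more users.
    """
    return sum(REACTION_WEIGHTS.get(name, 0.0) * count
               for name, count in reaction_counts.items())

# A hateful post that provokes mass "angry" reactions...
print(engagement_score({"angry": 5200, "haha": 300}))  # 5500.0
# ...scores higher than a benign post with fewer reactions overall.
print(engagement_score({"like": 1800, "love": 400}))   # 2200.0
```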
Facebook also uses these emotional responses to build user profiles that it sells to advertisers. The automated creation of ad categories from user behaviour has embroiled Facebook in several public scandals: the company has reportedly allowed advertisers to target “jew haters” and people interested in “white genocide conspiracy theory” – a useful tool for anyone wishing to spread hate.
How emoji can reproduce cultural stereotypes
In general, emoji are benign and funny digital images. But their design and use can reproduce long-running racist stereotypes.
The US body responsible for the emoji set, the Unicode Consortium, decides what ends up being represented as emoji. At times, these decisions have caused controversies around cultural diversity and race. For example, Apple’s family emoji originally excluded depictions of same-sex couples.
Screengrabbed by author, September 2017
Racist stereotypes can be further entrenched by the way emoji are used. In my study of Facebook reactions on the page of the Belgian far-right political party Vlaams Belang, I examined how emoji were used to spread anger, and found that users often responded to posts with more emoji and Meep stickers.
The vomit sticker surfaced as a popular and recurrent choice to express disgust towards Muslims.
Screengrabbed by author, May 2017
This aligns with other uses of vomiting as a cultural trope to convey xenophobia.
For example, the British television series Little Britain used hyperbolic humour to ridicule the racism of Maggie Blackamoor, one of its characters, by having her vomit every time she ate food made by a non-white person or met people of other ethnicities.
People also used pig emoji and various stickers to show opposition to Muslims. Islamic law forbids eating pork, and Western Islamophobia has historically used pork to attack Muslims.
The practice of posting pig emoji in the comments of Islamophobic posts draws on this long tradition, contributing to the weaponising of pork to antagonise Muslims.
The challenge of moderating social media content
The fact that racist discourse proliferates through emoji and stickers on Facebook suggests a need for new ways to moderate content.
It is not currently possible to switch off the use of emoji and stickers in comments. That means people can flood public Facebook pages with problematic emoji, and the platform has no easy way to stop them. While Facebook offers automated filters that moderate certain words and textual expressions, there is not yet a filter for emoji, even though they are standardised characters.
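Such a filter would be technically straightforward, precisely because emoji are standardised Unicode characters. As a minimal sketch, assuming a hypothetical per-page blocklist (not an existing Facebook feature), a few lines of Python can match the pig and vomiting-face emoji discussed above:

```python
import re

# Minimal sketch, assuming a hypothetical per-page blocklist; no such
# Facebook feature exists. Because emoji are standardised Unicode code
# points, matching them is straightforward: pig face (U+1F437),
# pig (U+1F416) and vomiting face (U+1F92E).
BLOCKED_EMOJI = re.compile("[\U0001F437\U0001F416\U0001F92E]")

def contains_blocked_emoji(comment: str) -> bool:
    """Return True if the comment contains any blocked emoji."""
    return bool(BLOCKED_EMOJI.search(comment))

print(contains_blocked_emoji("no comment 🐷🐷🐷"))  # True
print(contains_blocked_emoji("no comment"))        # False
```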
Reporting emoji and stickers as hate speech can be difficult, if not impossible. Whether a cute pig emoji signifies Islamophobia depends on the context in which it was posted, and Facebook’s flagging mechanism doesn’t allow users to explain why certain content might be hateful.
As a result, the practice of weaponising emoji to spread racist discourse is likely to continue.
Failing to provide options to report or minimise certain uses of emoji reflects an assumption that emoji and stickers can’t be used for hateful purposes. But it’s clear that user practices on social media, and the way platforms mediate that use, can contribute to structural racism and other forms of oppression, and make them appear normal, mundane and acceptable.
We all have an interest in ensuring that social media companies take proper responsibility to prevent the content that appears on their platforms from being used to spread hate.