Last weekend, actress and influencer Julia Fox apologized after she misread a TikToker’s reference to “mascara,” not realizing it was “algospeak” for sexual assault, the latest misunderstanding caused by the code words social media users devise to evade algorithmic censors.
The Key Facts
Social media users employ algospeak to avoid AI content moderation software that might flag posts as violating an app’s rules or as sensitive.
Algospeak is especially common on TikTok, whose content moderation policy is stricter than that of other social media apps: the platform can temporarily bar users from posting and penalizes those who violate its community guidelines.
Words are often altered, and emojis are repurposed to carry different meanings. Nearly a third of Americans who use Facebook report using emojis or altered phrases to communicate prohibited terms, according to Telus International, a Canadian company offering AI content moderation services.
Social media companies are free to decide how they handle AI content moderation; there are no clear guidelines.
While automated content moderators can broadly flag videos containing racist, hateful or explicit content, they aren’t always able to identify specific words.
Content creators who earn money on these platforms must choose their words carefully, since videos can be removed and accounts blocked or deleted. TikTok gives creators a way to appeal removed videos.
Common ‘Algospeak’ Words
Panini/Panorama/Panoramic = Pandemic
Mascara = Boyfriend/romantic partner, or male genitalia
Unalive = Suicide/Kill
Seggs/Shmex = Sex
Corn or 🌽 = Porn/Adult Industry
Cornucopia = Homophobia
Leg Booty = Member of the LGBTQ community
Le dollar bean = Lesbian
Accounting = Sex worker
S.A. = Sexual Assault
Camping = Abortion
Ninja or 🥷 = Derogatory terms and hate speech towards the Black community
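The glossary above amounts to a simple substitution table. As a hypothetical sketch (not any platform’s or researcher’s actual tool), it can be represented as a dictionary that decodes algospeak back into plain language:

```python
# Hypothetical decoder built from the glossary above (illustrative only).
ALGOSPEAK = {
    "panini": "pandemic",
    "unalive": "suicide",
    "seggs": "sex",
    "le dollar bean": "lesbian",
    "accounting": "sex work",
    "camping": "abortion",
}

def decode(text: str) -> str:
    """Replace known algospeak terms with their plain-language meanings."""
    out = text.lower()
    for code, meaning in ALGOSPEAK.items():
        out = out.replace(code, meaning)
    return out

print(decode("We went camping during the panini"))
# → we went abortion during the pandemic
```

A naive find-and-replace like this also shows why such codes drift: as soon as a term lands on a list, users coin a new one.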
In 2019, Sen. Ron Wyden, D-Ore., introduced the Algorithmic Accountability Act, a bill meant to ensure AI algorithms remain fair and non-discriminatory. “Transparency and accountability are essential to give consumers choice and provide policymakers with the information needed to set the rules of the road for critical decision systems,” Wyden said. Under the bill, the Federal Trade Commission (FTC) would create regulations and provide guidelines that social media companies could use to evaluate and report on how automating critical decision-making affects consumers.
The Crucial Quote
“The reality is that tech companies have been using automated tools to moderate content for a really long time and while it’s touted as this sophisticated machine learning, it’s often just a list of words they think are problematic,” Ángel Díaz, a lecturer at the UCLA School of Law who studies technology and racial discrimination, told The Washington Post.
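Díaz’s point, that moderation is often just a list of flagged words, can be illustrated with a minimal sketch (an assumption for illustration, not any real platform’s system). Exact-match blocklists are trivially evaded by algospeak spellings:

```python
# Illustrative word-list moderator (not any real platform's system).
# An exact-match blocklist flags known words; algospeak spellings slip past it.
BLOCKLIST = {"sex", "lesbian", "porn"}  # hypothetical flagged terms

def is_flagged(post: str) -> bool:
    """Flag a post if any of its words exactly matches the blocklist."""
    words = (w.strip(".,!?") for w in post.lower().split())
    return any(w in BLOCKLIST for w in words)

print(is_flagged("talking about sex education"))    # flagged: exact match
print(is_flagged("talking about seggs education"))  # not flagged: evaded
```

The second post carries the same meaning to a human reader but sails past the filter, which is exactly the dynamic the article describes.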
From Camping To Cheese Pizza, ‘Algospeak’ Is Taking Over Social Media (SME)