From Camping To Cheese Pizza, ‘Algospeak’ Is Taking Over Social Media


Americans are increasingly using code words known as “algospeak” to evade detection by content moderation technology, especially when posting about things that are controversial or may break platform rules.


If you’ve seen people posting about “camping” on social media, there’s a chance they’re not talking about how to pitch a tent or which National Parks to visit. The term recently became “algospeak” for something entirely different: discussing abortion-related issues in the wake of the Supreme Court’s overturning of Roe v. Wade.

Social media users are increasingly using code words, emojis and deliberate typos—so-called “algospeak”—to avoid detection by apps’ moderation AI when posting content that is sensitive or might break platform rules. Siobhan Hanna, who oversees AI data solutions for Telus International, a Canadian company that has provided human and AI content moderation services to nearly every major social media platform, including TikTok, said “camping” is just one term that has been adapted in this way. “There was concern that algorithms might pick up mentions” of abortion, Hanna said.

More than half of Americans say they’ve seen an uptick in algospeak as polarizing political, cultural or global events unfold, according to new Telus International data from a survey of 1,000 people in the U.S. last month. And almost a third of Americans on social media and gaming sites say they’ve “used emojis or alternative phrases to circumvent banned terms,” like those that are racist, sexual or related to self-harm, according to the data. Algospeak is most commonly used to sidestep rules against hate speech, including harassment and bullying, Hanna said, followed closely by policies around violence and exploitation.

We’ve come a long way since “pr0n” and the eggplant emoji. These evolving workarounds present an ever-shifting challenge for tech companies, as well as the third-party contractors that help them enforce content policies. While machine learning can catch overtly violative material, like hate speech, AI often struggles to read between the lines, missing phrases and euphemisms that seem innocent in one context but carry a darker meaning in another.


Almost a third of Americans on social media say they’ve “used emojis or alternative phrases to circumvent banned terms.”


The term “cheese pizza,” for example, has been widely used by accounts offering to trade explicit imagery of children. And while there is a benign viral trend in which people sing about their fondness for corn on TikTok, the corn emoji is frequently used to discuss or direct people toward porn. Past SME reporting has revealed the double meaning of mundane sentences, like “touch the ceiling,” used to coax young girls into flashing their followers and showing off their bodies.

“One of the areas that we’re all most concerned about is child exploitation and human exploitation,” Hanna told SME. It’s “one of the fastest-evolving areas of algospeak.”

But Hanna said it’s not up to Telus International whether certain algospeak terms should be taken down or demoted. It’s the platforms that “set the guidelines and make decisions on where there may be an issue,” she said.

“We are not typically making radical decisions on content,” she told SME. “They’re really driven by our clients that are the owners of these platforms. We’re really acting on their behalf.”

For instance, Telus International does not clamp down on algospeak around high-stakes political or social moments, Hanna said, citing “camping” as one example. However, the company declined to say whether any of its clients have banned certain algospeak terms.

The “camping” references emerged within 24 hours of the Supreme Court ruling and surged over the next couple of weeks, according to Hanna. But “camping” as an algospeak phenomenon petered out “because it became so ubiquitous that it wasn’t really a codeword anymore,” she explained. That’s typically how algospeak works: “It will spike, it will garner a lot of attention, it’ll start moving into a kind of memeification, and [it] will sort of die out.”

New forms of algospeak also emerged on social media around the Ukraine-Russia war, Hanna said, with posters using the term “unalive,” for example—rather than mentioning “killed” and “soldiers” in the same sentence—to evade AI detection. And on gaming platforms, she added, algospeak is frequently embedded in usernames or “gamertags” as political statements. One example: numerical references to “6/4,” the anniversary of the 1989 Tiananmen Square massacre in Beijing. “Communication around that historical event is pretty controlled in China,” Hanna said, so while that may seem “a little obscure, in those communities that are very, very tight knit, that can actually be a pretty politically heated statement to make in your username.”

Telus International also expects to see an increase in online algospeak around the midterm elections.


“One of the areas that we’re all most concerned about is child exploitation and human exploitation. [It’s] one of the fastest-evolving areas of algospeak.”

Siobhan Hanna, Telus International

Other ways to avoid being moderated by AI involve purposely misspelling words or replacing letters with symbols and numbers, like “$” for “S” and the number zero for the letter “O.” Many people who talk about sex on TikTok, for example, refer to it instead as “seggs” or “seggsual.”
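To see why those swaps work, consider the following minimal sketch of a keyword filter that normalizes common character substitutions before matching. It is purely illustrative: the substitution map and banned-term list here are hypothetical, and real moderation systems rely on far more sophisticated machine learning models rather than lookup tables.

    # Minimal sketch of how a naive keyword filter might undo common
    # symbol/number swaps before matching. Illustrative only; the
    # substitution map and banned-term list are hypothetical.
    SUBSTITUTIONS = str.maketrans({
        "$": "s",  # "$" often stands in for "S"
        "0": "o",  # the number zero for the letter "O"
        "3": "e",
        "@": "a",
    })

    BANNED_TERMS = {"sex"}  # hypothetical placeholder list

    def normalize(text: str) -> str:
        """Lowercase the text and reverse common character swaps."""
        return text.lower().translate(SUBSTITUTIONS)

    def flags_banned_term(text: str) -> bool:
        """Return True if any banned term appears after normalization."""
        normalized = normalize(text)
        return any(term in normalized for term in BANNED_TERMS)

    print(flags_banned_term("let's talk about $3x"))    # True: normalizes to "sex"
    print(flags_banned_term("let's talk about seggs"))  # False: slips through

As the second example shows, a deliberate respelling like “seggs” defeats simple normalization entirely, since no amount of symbol-swapping maps it back to the original word, which is part of why algospeak keeps working.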

In algospeak, emojis “are very commonly used to represent something that the emoji was not originally envisioned as,” Hanna said. Sometimes that repurposing is benign: use of the crab emoji spiked in the U.K. as a metaphoric response to Queen Elizabeth’s death. In other cases it’s more malicious: the ninja emoji in some contexts has been substituted for derogatory terms and hate speech about the Black community, according to Hanna.

Few laws regulating social media exist, and content moderation is one of the most contentious tech policy issues on the government’s plate. Legislation like the Algorithmic Accountability Act, which aims to ensure that AI systems such as content moderation tools are managed ethically and transparently, has stalled amid partisan disputes. In the absence of regulation, social media companies and the outside moderation firms they hire have largely set their own rules, and experts have raised accountability concerns and called for greater scrutiny of these relationships.

Telus International provides both human and AI-assisted content moderation, and more than half of survey respondents said it is “very important” to have humans in the mix.

“The AI may not pick up the things that humans can,” one respondent wrote.

And another: “People are good at avoiding filters.”
