Hate Speech And Fake News Still Haunt Social Media Platforms
We saw the first signals at the end of last year, when Facebook made headlines in the global press after internal documents were published exposing company practices that could cause mental health problems and even put democracies at risk.
But the discussion about how social media platforms can amplify harm, hate speech, and fake news, and silence minorities, only grew in the last quarter, when the controversial owner of Tesla, the billionaire Elon Musk, bought Twitter. All this at almost the same time that LinkedIn blocked inclusive job posts.
There are around 5 billion people connected to the internet. With a number like that, we might think the digital world is a place where everyone is welcome (if you have a connection and a device… you’re in!), or at least a place where anyone can find a community. For brands and creators, it means a sea of potential clients to interact with.
But not everything is rose-colored.
As in real life, we face misinformation and discrimination in the digital world. The difference is that this type of content can spread millions of times faster, and what might look like a harmless action (after all, “it’s just a post”) can lead to massive damage.
So, who’s to blame? The social platforms themselves? Governments and the lack of digital laws? Users? What is the responsibility of each of these related fronts, and more importantly, how can we, as marketers and members of society, contribute to building a better social media environment? Let’s discuss it.
Do you remember that Facebook had to change its organization’s name to Meta?
It was not long ago: the end of last year, right after the company’s internal documents were leaked in the so-called Facebook Papers scandal. The press highlighted, for example, that Facebook knew young people were developing mental health problems on Instagram due to comparison with unrealistic beauty standards, and the company did nothing.
The same documents showed that Facebook prioritized content that irritated people in the feed simply because it generated more engagement and kept users on the platform longer. The papers even revealed that Facebook knew the service was fueling political divisions that led to conflicts in developing countries, and, again, it did nothing.
“The company’s leadership knows ways to make Facebook and Instagram safer and won’t make the necessary changes because they have put their immense profits before people,” Frances Haugen, the former Facebook employee who leaked the documents to the press, told Congress at the time.
Free speech is an important pillar and a basis of democracy. But the problem with Mr. Musk is that he holds a controversial version of it. He argues that everybody should be able to post whatever they want, regardless of the harm it brings to society, such as violence against minorities or even deaths among those who believe anti-vaccine posts, for example.
The billionaire himself, for example, made anti-vaccine posts and “jokes” about Hitler, and was never penalized under Twitter’s policies before buying the platform, which suggests it doesn’t have the best moderation tools (or the intention to moderate every harmful post).
Even so, he compared Twitter’s CEO, Parag Agrawal, to Joseph Stalin for enforcing some moderation policies.
What’s going to happen to Twitter now that Musk promises “free speech” (in his distorted sense of the term) on one of the most famous social media platforms?
A similarly controversial view of “free speech” appeared in LinkedIn’s attitude.
The case occurred in Brazil, where the platform was blocking affirmative job posts aimed at Black and Indigenous people. When questioned, the company said that “people with the same talents should have access to the same opportunities,” implying that promoting the inclusion of people who lack opportunities is some sort of “discrimination” against those who don’t face prejudice.
This sparked a huge debate in Brazil, with global companies taking public positions and the government opening investigations into LinkedIn. In the end, the company backed down and changed its policies for Latin America.
Is social media a safe place for minorities?
Social media platforms are businesses, and that’s not a problem in itself. The issue is when services like Facebook, Twitter, and others chase profit by keeping users engaged as long as possible, with no regard for the mental health issues that content can cause, and by giving reach to content that can generate violence in the real world.
It’s hard to think of an inclusive place within social networks. But, fortunately, things are starting to change.
Pinterest is an example of a social network that seems to go against the grain, with a less invasive algorithm and much greater concern for inclusion, diversity, and its users’ well-being.
After the scandal, even Meta for Business (previously Facebook Business) changed its policies and began to reject advertisements targeting “sensitive areas.” Under this rule, advertisers can no longer use targeting options such as “Lung cancer day,” “LGBT culture” or “Jewish holidays.”
Zuckerberg’s company also said it took down 9.2 million posts deemed harassing content on Facebook and 7.8 million posts of the same type on Instagram.
Did I hear a sigh of relief there on the other side? Indeed, these actions can give us a little more optimism about a healthier environment on social media — at least, a first step in the right direction.
Also, lawmakers in the US and Europe are analyzing regulations that could make big digital platforms more accountable for the information shared on their domains.
All of this matters because social networks can be a very good place to find your community. For Giordano Bruno, Business Partner at Pipefy and volunteer at It Gets Better, access to social media makes it easier to get information about LGBTQIAP+ causes, for example.
“When I was fifteen, I found very little content on social media. Today it’s huge. If you type ‘I’m gay and I need help,’ you’re going to find lots of articles, people, and organizations where you can find information and feel okay with who you are,” he explained.
And what about marketers and brands?
We’ve talked a lot about the behavior of each social platform and its impact on the well-being of what should be the focal point of every social network: its users.
But we can’t forget that, along with users, there’s another important part of the social media ecosystem: us, the companies, brands, content creators, marketers, and advertisers.
Being active on social media is vital for every brand. That said, brands also share in the mission to guarantee a better balance between mental health, privacy, information, and accountability.
We can use these digital tools for good causes. Reinforcing support and collaboration on important issues is a positive way to impact our community.
For me, movements like #FreeBritney or #BlackLivesMatter start conversations about difficult topics that need to be addressed to build a better world. Of course, it’s not just a matter of raising a flag in a social media post.
If we want to end prejudice or bias, we must understand where it comes from. For Perly, it sometimes comes from people who can’t understand how amazing it is to be different.
Perly says the first step for companies is to understand whether they are ready to make that change. He also explained that they must be familiar with the concepts of diversity, equity, and inclusion in order to create a sense of belonging.
We are Marketing and Sales professionals, but we are also people. As consumers of digital channels, we must empower ourselves and start looking for better content that resonates positively with us.
Being more conscious about what we consume while we scroll can also improve our mental health, and even give us better references for our work.
In addition, it is necessary to hold Meta, Twitter, and other platforms responsible for building better filters and controls over the information spread on their domains. Improving algorithms and moderation to surface better content can be key to curbing harmful material and keeping our brands in safer environments.
For me, the key to achieving diversity is understanding that our color, gender, and nationality lead us to have different experiences, but they don’t seal our destiny.
Digital advertising should encourage minorities to overcome those obstacles and negative stereotypes and be the protagonists of their own stories.
Education is a powerful tool for changing the world. However, this education is not only about how many schools we have but everything that helps to build our society. If we, as brands, creators and users, start educating through our content (and by example), we can achieve a more tolerant environment.
Both in real life and in the digital world.
This article also appears in the new issue of Rock Content Magazine, released this August. In this issue, we bring incredible content about diversity, inclusion, and accessibility, an extremely important topic for brands and society today. You can download the magazine here; it’s completely free. Happy reading!