The Facebook Papers: How Facebook ‘enables’ violence

Panorama

27 October, 2021, 11:40 am
If bigotry is the vehicle to violence, Facebook is the jet fuel

When the 2020 election season approached in the United States, Facebook introduced dozens of measures to suppress hateful and deceptive content. And they seemed to work, at least on the surface.

As the US election came to a close, there was, quite understandably, a sense of relief at Facebook headquarters. The curse of 'Cambridge Analytica' was finally over.

Little did they realise that their premature celebration would come back to bite them just a few weeks later, as aggrieved Trump and QAnon supporters stormed the Capitol building demanding justice for a supposedly 'stolen' election.

Many of these protestors had been instigated by widespread misinformation on Facebook and on other platforms the company owns, such as WhatsApp and Instagram. As they vandalised the Capitol, Facebook employees were scrambling, with very limited resources, to contain the chaos.

These stories remained under wraps until 22 October this year, when a consortium of 17 US news organisations began publishing a series of stories - called the 'Facebook Papers' - based on hundreds of leaked internal documents from Facebook. These documents were first disclosed to the Securities and Exchange Commission and provided to Congress by the legal counsel of Frances Haugen, a whistleblower from Facebook.

And now, this is being dubbed the greatest crisis in the company's already scandal-plagued history.

What do the reports reveal?

A lot, actually. 

Firstly, the papers suggest that Facebook was fundamentally underprepared to handle widespread misinformation campaigns, and that the company's management was negligent in adopting stricter measures to suppress misinformation - primarily for fear of losing users and, in turn, investors.

Facebook is also being used to spread fake news and incite violence in Tigray, Ethiopia, which should not come as a surprise given that the military in Myanmar implemented a similar strategy against the Rohingya population a few years back. In both cases, Facebook failed to contain the spread of misinformation or halt the violence.

The Papers also revealed that human traffickers are using Facebook as a platform. Despite employees flagging accounts involved in such crimes back in 2018, it remains a serious and largely unaddressed problem.

Most importantly, Facebook maintains a double standard in regulating content in non-English-speaking regions, as evidenced by its colossal failure to handle widespread misinformation in Myanmar, India, Bangladesh and Ethiopia.

Facebook's role in inciting violence

Whether it is rumours of now-US President Joe Biden stealing the November election, the Rohingya raping Buddhist women in Myanmar, members of the Hindu religious minority desecrating the Holy Quran in Bangladesh, Muslim minorities plotting terrorist attacks to destroy India, or the people of Tigray planning to destabilise Ethiopia - they all primarily originate and spread through one common platform: Facebook.

And more often than not, such misinformation has a wide (and violent) range of implications, as evidenced by the US Capitol riot (January 2021), the recurring assaults on Hindu minorities in Bangladesh (most recently in October 2021) and the military crackdown on the Rohingya (the mass exodus of August 2017) that forced them to flee their homeland.

Recently, the Indian National Congress demanded a Joint Parliamentary Committee (JPC) probe into Facebook, alleging that it allowed BJP goons to spread hate speech against minorities in the country.

This raises the question: how does it happen, and why has it been allowed to continue?

Experts identify two fundamental ways in which Facebook can incite violence: a polarising algorithm that fosters bigotry, and negligence in dealing with misinformation, disproportionately so in non-English-speaking countries.

Facebook's negligence in handling misinformation

Facebook currently has over 2.74 billion users and reaches 59% of the world's social networking population. About 4.75 billion posts - photos, videos, statuses and so on - are shared on Facebook each day. Given the overwhelming volume of content, it is understandably quite difficult to monitor each and every post.

Facebook claims to have done a good job of it. 

Facebook's founder, CEO and chairman Mark Zuckerberg even claimed that the company takes care of 94% of all misinformation on the platform, although studies later found that the actual rate was somewhere between 4% and 5%.

On top of that, the Facebook Papers revealed that the company's management was often reluctant to take down harmful posts and hate speech for fear of facing a backlash, losing large numbers of users and forfeiting the confidence of investors.

Unfortunately, Facebook's greatest failure lies elsewhere: handling misinformation in foreign languages.

According to the Wall Street Journal, over 90% of Facebook's users currently reside outside the US and Canada, and an overwhelming majority of them use languages other than English as their primary mode of communication. Yet Facebook devoted only 13% of its total misinformation-monitoring time to content from outside the US.

This neglect continues to have severe consequences for non-English-speaking communities both in English-speaking countries like the US and Canada and in regions where other languages dominate.

One prime example is a photo of US Vice President Kamala Harris getting vaccinated, which circulated on Facebook with the caption 'The Fake Vaccination of Kamala Harris.' Facebook flagged it as fake news. But when the same photo circulated with a Spanish caption, Facebook missed it.

Similar misinformation kept spreading throughout Facebook and other platforms like WhatsApp. 

The vaccination rate among Latino communities is considerably lower than in other communities, and most of the unvaccinated say they have heard 'bad things' about vaccines on Facebook or WhatsApp. A mere coincidence, perhaps?

Facebook's neglect of non-English content has let misinformation spread like wildfire through the online spheres of Ethiopia, Bangladesh, India, Myanmar and elsewhere, provoking seemingly innocent, passionate (and in many cases young) individuals to resort to violence they may not have committed otherwise.

For instance, when a photo of a Quran placed on the lap of a Hindu deity spread across Bangladesh's social media on a fateful 13 October, accompanied by derogatory remarks about the country's Hindu minority, the posts carried no warning labels because the content was in Bangla.

We all know what happened next. Similar incidents preceded the attacks on Buddhist monasteries in Ramu (2012) and the assault on Hindu minorities in Nasirnagar (2016).

And Facebook absolutely cannot avoid accountability or responsibility for those atrocities either. 

How Facebook fosters polarisation

Like any other social media platform, Facebook makes money primarily by running ads on its platforms and by selling our data. To sustain this business model, the platform needs to keep our attention glued to the screen for as long as possible.

The easiest way to do so is to write an algorithm that shows you only content (videos, posts, photos, advertisements and so on) that is relevant to you.

Here's the problem: such an algorithm skews the content each user gets to see. If this seems abstract, an example helps.

If you have subscribed to a slightly left-leaning channel or page, Facebook's algorithm records this as part of your preferences and directs more content aligned with left-leaning ideologies your way. Similarly, if you are a devout Bangladeshi Muslim and press the like button on an Islamic page, Facebook will keep recommending content that matches your previous choices.
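To make the mechanism concrete, here is a minimal Python sketch of preference-reinforcing ranking. It is a toy model: the topic labels, the scoring rule and the function names are assumptions made for illustration and do not describe Facebook's actual system.

```python
# A toy sketch of preference-reinforcing ranking. The scoring rule, weights and
# field names are illustrative assumptions, not Facebook's actual system.
from collections import Counter

def update_preferences(prefs: Counter, interacted_topics: list[str]) -> None:
    """Record every like/click as a vote for that topic."""
    prefs.update(interacted_topics)

def rank_feed(posts: list[dict], prefs: Counter) -> list[dict]:
    """Rank candidate posts by how closely they match past interactions."""
    def score(post: dict) -> int:
        # Posts on topics the user already engages with score higher,
        # so the feed keeps narrowing toward those topics.
        return sum(prefs[topic] for topic in post["topics"])
    return sorted(posts, key=score, reverse=True)

# Example: one like for a partisan page is enough to tilt the whole feed.
prefs = Counter()
update_preferences(prefs, ["left_politics"])
feed = rank_feed(
    [
        {"id": 1, "topics": ["left_politics"]},
        {"id": 2, "topics": ["right_politics"]},
        {"id": 3, "topics": ["sports"]},
    ],
    prefs,
)
print([post["id"] for post in feed])  # the politically aligned post is ranked first
```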

Ordinarily, the radical views of individuals are moderated when they are exposed to debate and opposing schools of thought. But the same individuals may become more radicalised if they are shielded from such opposition and fed content that only reinforces their existing beliefs.

Unfortunately, Facebook does exactly that. 

A staunch right-wing QAnon supporter only gets Jordan Peterson, Alex Jones or Ben Shapiro content in their recommendations. A predominantly left-wing user only gets content supporting Bernie Sanders or Alexandria Ocasio-Cortez. Similarly, a religious person may only get content that reinforces their ugliest instincts, content that may have little to do with their actual faith.

That is, prolonged exposure to such an algorithm eventually pushes a seemingly moderate individual over the edge; out of the pit of social media influence, they emerge a radical.
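A similarly illustrative toy simulation shows how quickly such a loop can run away once the feed always favours the stronger preference. The starting counts and the update rule are assumptions for illustration, not measurements of any real platform.

```python
# A toy simulation of the feedback loop described above: one extra "like" tips the
# feed, and each round of exposure reinforces that tilt. Starting counts and the
# update rule are illustrative assumptions, not Facebook's actual system.
from collections import Counter

topics = ["moderate_news", "partisan_content"]
prefs = Counter({"moderate_news": 5, "partisan_content": 6})  # one extra like on partisan content

for day in range(30):
    shown = max(topics, key=lambda t: prefs[t])  # the feed favours the stronger preference
    prefs[shown] += 1                            # and each view strengthens it further

share = prefs["partisan_content"] / sum(prefs.values())
print(f"Share of partisan content in the preference profile after 30 days: {share:.0%}")  # roughly 88%
```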

Therefore, while it is imperative that Facebook builds a diverse roster to fight misinformation - addressing hate speech and fake news in every language and region - it is also important to keep in mind that it is the algorithm itself that plants the seed of polarisation and radicalisation.
