How Facebook fuels religious violence
The social media platform’s haphazard content moderation strategy is failing Bangladesh. Here’s how it can fix its policy
Last October, Bangladesh was rocked by one of its worst bouts of violence in years when Muslims across the country attacked hundreds of Hindu homes and temples, killing at least six people and injuring dozens of others.
The attacks were sparked by two inflammatory posts that went viral on a popular platform: Facebook. In the first, a live video broadcast the alleged desecration of a Quran at a Hindu temple; in the second, a Hindu man allegedly criticised Islam.
Since its founding in 2004, Facebook has been aware of the potential dangers of its platform. Its terms of service have always reserved the right to remove threatening speech, and as harmful content increasingly plagued the platform, the company regularly updated its policies to sharpen its guidelines and enforcement approach. For years, Facebook's corporate owner, now known as Meta, has funnelled billions of dollars into improving "safety and security" measures across all of its products, including WhatsApp and Instagram.
But these efforts have failed to curb hate speech-fuelled violence. Just weeks before the Bangladeshi attacks, former Facebook employee Frances Haugen testified before the US Congress about the company's role in amplifying hate speech and deepening social divisions. Facebook is "literally fanning ethnic violence," she said. The company's response, a revision of its bullying and harassment policies, has done little to resolve the crisis.
Facebook's ad hoc and piecemeal approach to content moderation is failing in Bangladesh and around the world, with stark consequences.
In Bangladesh, allegations of blasphemy, often made by ordinary citizens, are particularly potent at mobilising the country's Muslim majority against minority communities. Long before Facebook existed, these hate campaigns spread through street movements and word of mouth, often with violent results. In 1999, militants brutally attacked Shamsur Rahman, a poet and secular activist, over his controversial views.
Facebook has amplified the spread of such speech, especially when it is promoted by political and religious elites eager to demonstrate their populist bona fides. Over the last decade, these Facebook posts have triggered severe anti-minority violence across Bangladesh. In the country's first major instance of Facebook-inspired violence, thousands of Muslims attacked a Buddhist enclave in 2012 after a photo of a burned Quran—which tagged the account of a Buddhist man—circulated on the platform. In the wake of the violence, at least 1,000 Buddhist families fled their homes.
The attack set off what has since become a dire annual trend. In 2013 and 2014, Muslim rioters attacked Hindu villages after a Hindu man was falsely accused of demeaning the Prophet Mohammed on Facebook; two years after the latter incident, rioters vandalised at least 15 Hindu temples. In 2017, a 20,000-person mob set fire to a Hindu village; an assault on a Hindu community in 2019 killed four people and injured 50 others. Earlier in 2021, a Hindu man's car and home were vandalised after he allegedly criticised the Prophet Mohammed in a Facebook conversation.
Over the last decade, Islamists have killed dozens of allegedly blasphemous bloggers, whether Hindus, atheists, or Muslims, after first learning of their online posts, a troubling trend that has pushed many others to flee the country. They have also targeted minority Muslim sects, such as the Ahmadiyya, for their supposedly blasphemous heterodox beliefs. Fear of blasphemy has gone digital—and Facebook has accelerated its ability to stoke conflict.
To combat hate speech, Facebook relies on a multipronged strategy that combines algorithms, user reporting, and internal content moderators. Its algorithms are designed to identify and remove harmful content automatically, while an automated system reviews posts reported by users. When that system cannot reach a decision, the post is sent to human content moderators, many of whom face substantial language barriers and lack the bandwidth to review a high volume of content.
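To make that pipeline concrete, the sketch below illustrates the kind of tiered triage described above: an automated classifier removes or clears the clear-cut cases, and anything ambiguous or heavily reported is escalated to human moderators. Every name, keyword, and threshold here is a hypothetical stand-in for illustration, not a description of Facebook's actual systems.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    REMOVE = "remove"       # taken down automatically
    KEEP = "keep"           # cleared automatically
    ESCALATE = "escalate"   # queued for human moderators


@dataclass
class Post:
    post_id: str
    text: str
    user_reports: int = 0   # how many users have flagged the post


# Hypothetical keyword list and thresholds, for illustration only.
INFLAMMATORY_TERMS = {"desecration", "blasphemy"}
REMOVE_THRESHOLD = 0.9
KEEP_THRESHOLD = 0.2
REPORT_ESCALATION = 3


def hate_speech_score(post: Post) -> float:
    """Toy stand-in for a trained classifier: fraction of words on the watchlist."""
    words = post.text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in INFLAMMATORY_TERMS)
    return hits / len(words)


def triage(post: Post) -> Verdict:
    """Three-way triage: auto-remove, auto-keep, or escalate to human review."""
    score = hate_speech_score(post)
    if score >= REMOVE_THRESHOLD:
        return Verdict.REMOVE
    if score <= KEEP_THRESHOLD and post.user_reports < REPORT_ESCALATION:
        return Verdict.KEEP
    return Verdict.ESCALATE


if __name__ == "__main__":
    reported = Post("p1", "Alleged desecration at the temple", user_reports=5)
    print(triage(reported))  # Verdict.ESCALATE: low score, but heavily reported
```

The failure mode the article points to lives in that last branch: in languages and contexts the classifier handles poorly, far more content lands in the escalation queue than under-resourced human teams can review in time.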
As malicious content propagated on its platform, Facebook made strides in building its expertise on conflict dynamics and tailoring mitigations accordingly. In Bangladesh, the platform has removed accounts associated with hacking, spam, and election-related misinformation; more recently, it appointed the country's first public policy manager and expanded its Bengali-speaking staff.
This moderation strategy has proved inadequate. According to last year's Facebook Papers, the company's algorithms catch less than 5 percent of online hate speech. The video that sparked Bangladesh's recent violence stayed online for six days and was shared more than 56,000 times before Facebook took it down. Despite its investments, the company's approach has remained piecemeal and underfunded, with moderators working under difficult conditions for poor pay.
One key issue is that Facebook has largely focused its moderation efforts on English-speaking Western countries while neglecting regions where hate speech can be even more dangerous. To better curb violence in these countries, the platform must devote sufficient resources to them, including by bolstering staff, adding localised resources, expanding its hate speech training to more languages, and improving product designers' knowledge of country-specific context, culture, and trends. Governments can push Facebook to enact these changes through regulatory tools, such as the European Union's Digital Services Act, that increase transparency and accountability.
Expanding partnerships with external experts, such as trusted civil society groups and academics, could help Facebook flag questionable content more effectively and identify early warning signs of conflict. Drawing on these partners' understanding of local context, customs, and historical grievances, Facebook could better assess which types of content are likely to stoke violence.
With these efforts in place, Facebook should consider developing regional or country-specific guidelines on high-risk hate speech. In Bangladesh, such policies would have helped moderators understand that provocative content about Islam is more likely to incite violence than other posts.
To combat harmful content, Facebook must also commit to publicising detailed data on hate speech, which spreads differently across regions. There is currently minimal public data on the prevalence of hate speech, and Facebook has largely concealed its own research or responded defensively to critics. Last year, company executives disbanded the internal team that built and managed CrowdTangle, a public Facebook-owned data analytics tool, after researchers and journalists used its data to cast the platform in a negative light. By releasing more proprietary data and allowing researchers to perform their own analyses, Facebook would help policymakers devise better-targeted solutions.
Since 2018, Facebook's "at-risk countries" team has monitored and removed hateful content in countries it considers to be most vulnerable to violence, particularly during election seasons. Insufficient funding, however, has constrained the number of places that the team can focus on. Expanded resourcing would allow Facebook to broaden its attention to countries like Bangladesh, which experience high religious tensions and are especially susceptible to communal violence.
In regions experiencing conflict, and therefore more prone to the dangers of hate speech, Facebook could implement a temporary zero-tolerance policy that aggressively moderates potentially provocative content during periods of high tension. Algorithms designed to recognise country-specific words known to inflame tensions would flag potentially hateful content. To prevent flagged posts from going viral while human moderators review them, Facebook should temporarily disable or limit views and all forms of engagement, including likes, comments, and shares, until an internal decision has been made.
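Below is a minimal sketch, under the same assumptions, of how such a temporary circuit-breaker might work: during a declared high-tension period, any post matching a country-specific watchlist has its engagement frozen until a human moderator rules on it. The watchlist, field names, and function are hypothetical illustrations, not an existing Facebook mechanism.

```python
from dataclasses import dataclass

# Hypothetical country-specific watchlist; a real deployment would rely on far
# richer, locally curated term lists in Bengali and other languages.
HIGH_TENSION_TERMS = {"quran desecration", "blasphemy"}


@dataclass
class PostState:
    post_id: str
    text: str
    engagement_enabled: bool = True   # likes, comments, and shares allowed
    pending_review: bool = False      # awaiting a human moderator's decision


def apply_zero_tolerance(post: PostState, high_tension_period: bool) -> PostState:
    """During a declared high-tension period, freeze engagement on any post that
    matches a watchlist term until a human moderator rules on it."""
    if not high_tension_period:
        return post
    text = post.text.lower()
    if any(term in text for term in HIGH_TENSION_TERMS):
        post.engagement_enabled = False   # keep the post from going viral
        post.pending_review = True        # queue for expedited human review
    return post


if __name__ == "__main__":
    post = PostState("p2", "Video shows alleged Quran desecration at a temple")
    apply_zero_tolerance(post, high_tension_period=True)
    print(post.engagement_enabled, post.pending_review)  # False True
```

The trade-off is deliberate: a short delay in reach for flagged posts in exchange for a much lower chance that inflammatory content goes viral before moderators can act.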
Unless Facebook gets a better grip on hate speech, disinformation, and malicious content, religious violence will continue to afflict Bangladesh. And with more than 47 million Bangladeshis now on the platform, the stakes of inaction are too high to ignore.
Facebook, as well as other social media platforms looking to expand globally, must make curbing hate speech abroad a higher priority. It's time for the company to adopt a meaningful and well-resourced approach to moderating hate speech—one that focuses on enfranchisement and safety.
Mubashar Hasan is an adjunct fellow at the University of Western Sydney's Humanitarian and Development Research Initiative. He is the author of Islam and Politics in Bangladesh: The Followers of Ummah.
Geoffrey Macdonald is the Bangladesh programme director at the International Republican Institute.
Hui Hui Ooi is a senior programme manager for technology and democracy at the International Republican Institute. She is a Southeast Asian specialist and co-author of Combating Information Manipulation: A Playbook for Elections and Beyond.
Disclaimer: This article first appeared in Foreign Policy, and is published by special syndication arrangement.