As Facebook grapples with the spread of hate speech on its platform, it is introducing changes that limit the spread of messages in two countries where it has come under fire in recent years: Sri Lanka and Myanmar.
In a blog post on Thursday evening, Facebook said that it was “adding friction” to message forwarding for Messenger users in Sri Lanka, so that people can forward a particular message only a limited number of times. The limit is currently set to five recipients.
This is similar to a limit that Facebook introduced to WhatsApp last year. In India, a user can forward a message to only five other people on WhatsApp. In other markets, the limit kicks in at 20. Facebook said some users had also requested this feature because they are sick of receiving chain messages.
In early March, Sri Lanka grappled with mob violence directed at its Muslim minority. In the midst of it, hate speech and rumours spread like wildfire on social media services, including those operated by Facebook. The country’s government then briefly shut down citizens’ access to social media services.
In Myanmar, social media platforms have faced a similar, long-lasting challenge. Facebook, in particular, has been blamed for allowing hate speech to spread that stoked violence against the Rohingya ethnic group. Critics have claimed that the company’s efforts in the country, where it does not have a local office or employees, are simply not enough.
In its blog post, Facebook said it has started to reduce the distribution of content from people in Myanmar who have consistently violated its community standards with previous posts. Facebook said it will use what it learns to explore expanding this approach to other markets in the future.
“By limiting visibility in this way, we hope to mitigate against the risk of offline harm and violence,” Facebook’s Samidh Chakrabarti, director of product management and civic integrity, and Rosa Birch, director of strategic response, wrote in the blog post.
In cases where it identifies individuals or organizations that “more directly promote or engage violence”, the company said it would ban those accounts. Facebook is also extending the use of AI to recognize posts that may contain graphic violence and comments that are “potentially violent or dehumanizing.”
The social network has, in the past, banned armed groups and accounts run by the military in Myanmar, but it has been criticized for reacting slowly and for promoting a false narrative that suggested its AI systems handle the work.
Last month, Facebook said it was able to detect 65% of the hate speech content that it proactively removed (relying on users’ reporting for the rest), up from 24% just over a year ago. In the quarter that ended in March this year, Facebook said it had taken down 4 million hate speech posts.
Facebook continues to face similar challenges in other markets, including India, the Philippines, and Indonesia. Following a riot last month, Indonesia restricted the usage of Facebook, Instagram, and WhatsApp in an attempt to contain the flow of false information.