Facebook, a driver of hate against the Rohingya


A damning report: Amnesty International accuses Meta, Facebook’s parent company, of fueling hatred of the Rohingya through hateful content promoted by its algorithms, without ever questioning its business model. The world’s most popular social network stands accused of sharing responsibility for the persecution of the Muslim minority in Burma.

Anti-Rohingya sentiment has risen in Burma since 2012. Inter-communal violence erupts and messages of hate pour onto social media. Some posts go viral, like this text by a very famous Burmese meteorologist: he calls on his compatriots to resist the “common enemy,” saying that Muslims should not be allowed to occupy Burma. Over the following years, the post would be shared more than 10,000 times, drawing 47,000 reactions and 830 comments, including open calls for the killing and uprooting of the Rohingya.

In 2018, the US Senate summoned Facebook’s chief executive, Mark Zuckerberg. Democratic Senator Patrick Leahy asks him about Burma and why such content cannot be removed within 24 hours. “Hate speech is closely tied to language, and we are in the process of hiring dozens of Burmese-speaking moderators. Without staff who speak the local languages, it is very difficult to root this content out. We need to do more in this area,” acknowledges the head of what is now the Meta group.

At the time of this announcement, hundreds of thousands of Rohingya have already been forced to flee Burma. Facebook has only five employees who understand Burmese in a country with 18 million users, a ridiculously low number given the stakes of the inter-ethnic crisis. And it is too late. Amnesty cites accounts from civil society activists and UN staff who, when they tried to alert the company, were simply redirected to the social network’s standard reporting page for flagging death threats or incitement to racial hatred. Two hypotheses can be put forward. First, negligence: Facebook, alerted to the facts, failed to act in the face of the mass of hate heaped upon the Rohingya. Second, complicity: the Facebook algorithms that promote the most-read content make no distinction for violent material.

Facebook’s algorithms favor “inflammatory speech” and “the most dangerous content”

According to Pat de Brún of Amnesty International, the lead author of the report, this is no accident:

Documents disclosed by whistleblower Frances Haugen showed the public how these algorithms work. We now know that the core algorithms, those that shape the news feed, recommendations and the ranking of posts, are designed to keep us on Facebook for as long as possible. All the research converges: this process prioritizes inflammatory speech and the most dangerous content. It also turns out that this model has proven incredibly profitable for the Meta group. So much so that even when Meta was aware of the risks posed by the system, the company did nothing to change its algorithms or its economic model.

Today, Meta faces a number of claims before international courts. Two complaints have been filed on either side of the Atlantic by members of the Rohingya community in the United States and the United Kingdom, who say they are victims of a campaign of violence spread via Facebook. They are demanding $150 billion in damages from Mark Zuckerberg’s company. As for the displaced people surviving in refugee camps in Bangladesh, they have formed around twenty associations and are asking Meta to fund schooling in the camps so they can exercise their basic right to education. Facebook must own up to its failings, Amnesty International insists, and reform its practices to prevent further abuses.

Also read: Rohingya refugees sue Facebook
