In India, Facebook Fights Misinformation and Hate Speech


On February 4, 2019, a Facebook researcher created a new user account to see what it’s like to experience the social media site as a resident of Kerala, India.

For the next three weeks, the account operated by a simple rule: Follow all recommendations generated by Facebook’s algorithms to join groups, watch videos, and discover new pages on the site.

The result was a flood of hate speech, misinformation and celebrations of violence, documented in an internal Facebook report released later that month.

“Following this tester’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve ever seen in my entire life,” the Facebook researcher wrote.

The report was one of dozens of studies and memos written by Facebook employees grappling with the platform’s effects on India. They provide stark evidence of one of the most serious criticisms leveled at the company worldwide by human rights activists and politicians: that it moves into a country without fully understanding its potential impact on local culture and politics, and fails to deploy the resources to act on problems once they arise.

India is the company’s largest market, with 340 million people using Facebook’s various social media platforms. And the country presents an amplified version of the problems Facebook has faced around the world, made worse by a lack of resources and a lack of expertise in India’s 22 officially recognized languages.

Internal documents obtained by a consortium of news organizations that includes The New York Times are part of a larger cache of material called The Facebook Papers. They were collected by Frances Haugen, a former Facebook product manager who became a whistle-blower and recently testified before a Senate subcommittee about the company and its social media platforms. References to India were scattered among the documents Ms. Haugen filed with the Securities and Exchange Commission in a complaint earlier this month.

The documents include reports on how bots and fake accounts tied to the country’s ruling party and opposition figures were wreaking havoc on national elections. They also detail how a plan championed by Facebook’s chief executive, Mark Zuckerberg, to focus on “meaningful social interactions,” or exchanges between friends and family, has led to more misinformation in India, particularly during the pandemic.

Facebook did not have enough resources in India and was unable to grapple with the problems it had introduced there, including anti-Muslim posts, according to its documents. Eighty-seven percent of the company’s global budget for time spent classifying misinformation is earmarked for the United States, while only 13 percent is set aside for the rest of the world, even though North American users make up only 10 percent of the social network’s daily active users, according to a document describing Facebook’s allocation of resources.

A Facebook spokesperson, Andy Stone, said the figures were incomplete and do not include the company’s third-party fact-checking partners, most of whom are based outside the United States.

That lopsided focus on the United States has had consequences in a number of countries besides India. Company documents showed that Facebook put measures in place to reduce misinformation during the November election in Myanmar, including disinformation shared by the Myanmar military junta.

The company rolled back those measures after the election, despite research showing that they reduced views of provocative posts by 25.1 percent and of posts containing false information by 48.5 percent. Three months later, the military carried out a violent coup in the country. Facebook said that after the coup it implemented a special policy to remove praise and support of violence in the country, and later banned the Myanmar military from Facebook and Instagram.

In Sri Lanka, people were able to automatically add hundreds of thousands of users to Facebook groups, exposing them to content that incited violence and hatred. In Ethiopia, a nationalist youth militia group successfully coordinated calls for violence on Facebook and posted other inflammatory content.

Mr. Stone said Facebook had invested significantly in technology to find hate speech in a variety of languages, including Hindi and Bengali, two of the most widely used. He added that Facebook had halved the amount of hate speech people see globally this year.

“Hate speech against marginalized groups, including Muslims, is on the rise in India and globally,” Mr. Stone said. “That’s why we’re improving enforcement and are committed to updating our policies as online hate speech evolves.”

Katie Harbath, who spent 10 years at Facebook as a director of public policy and worked directly on securing India’s national elections, said there was “definitely a question about funding” for Facebook in India, but that the answer was not “to throw more money at the problem.” She said Facebook needed to find a solution that could be applied to countries around the world.

Facebook employees have run various tests and conducted field studies in India for several years. That work increased ahead of India’s 2019 national elections; in late January of that year, a handful of Facebook employees traveled to the country to meet with colleagues and speak with dozens of local Facebook users.

According to a note written after the trip, one of the top requests from users in India was for Facebook to “take action on the kinds of misinformation linked to real-world harm, particularly politics and religious group tension.”

Ten days after the researcher opened the fake account to study misinformation, a suicide bombing in the disputed border region of Kashmir set off a wave of violence and a spike in accusations, misinformation and conspiracy theories between Indian and Pakistani nationals.

After the attack, anti-Pakistan content began to circulate in the Facebook-recommended groups that the researcher had joined. She noted that many of the groups had tens of thousands of users. A different report by Facebook, published in December 2019, found that Indian Facebook users tend to join large groups, with the country’s median group size at 140,000 members.

Graphic posts circulated in the groups she participated in, including a meme showing the beheading of a Pakistani national and dead bodies wrapped in white sheets on the ground.

After the researcher shared her case study with co-workers, her colleagues commented on the posted report that they were concerned about misinformation around the upcoming elections in India.

Two months later, after India’s national elections had begun, Facebook put in place a series of steps to stem the flow of misinformation and hate speech in the country, according to an internal document called the India Election Case Study.

The case study painted an optimistic picture of Facebook’s efforts, including adding more fact-checking partners, the network of third-party outlets with which Facebook works to outsource fact-checking, and increasing the amount of misinformation it removed. It also noted that Facebook had created a “political whitelist to limit PR risk,” essentially a list of politicians who received a special exemption from fact-checking.

The study did not note the immense problem the company faced with bots in India, or issues such as voter suppression. During the election, Facebook saw a spike in bots, or fake accounts, linked to various political groups, as well as efforts to spread misinformation that could have affected people’s understanding of the voting process.

In a separate report produced after the elections, Facebook found that more than 40 percent of top views, or impressions, in the Indian state of West Bengal were “fake/inauthentic.” One inauthentic account had amassed more than 30 million impressions.

A report published in March 2021 showed that many of the problems cited during the 2019 elections persisted.

In an internal document called Adversarial Harmful Networks: India Case Study, Facebook researchers wrote that there were groups and pages on Facebook “filled with provocative and misleading anti-Muslim content.”

The report said there were a number of dehumanizing posts comparing Muslims to “pigs” and “dogs,” and false information claiming that the Qur’an, the holy book of Islam, urges men to rape their female family members.

Much of the material circulated in Facebook groups promoting the Rashtriya Swayamsevak Sangh, an Indian right-wing and nationalist paramilitary group. The groups took issue with an expanding Muslim minority population in West Bengal and near the Pakistani border, and published posts on Facebook calling for the ouster of Muslim populations from India and promoting a Muslim population control law.

According to the report, Facebook knew that such harmful posts proliferated on its platform and needed to improve its “classifiers,” the automated systems that can detect and remove posts containing violent and provocative language. Facebook also hesitated to designate the RSS as a dangerous organization because of “political sensitivities” that could affect the social network’s operation in the country.

Of India’s 22 officially recognized languages, Facebook said it had trained its AI systems in five. (For some others, it said, there were human reviewers.) But in Hindi and Bengali, it still did not have enough data to adequately moderate the content, and much of the content targeting Muslims was “never flagged or actioned,” Facebook said in its report.

Five months ago, Facebook was still struggling to efficiently remove hate speech against Muslims. Another company report detailed efforts by Bajrang Dal, an extremist group affiliated with the Hindu nationalist Bharatiya Janata Party, to publish posts containing anti-Muslim narratives on the platform.

According to the document, Facebook was considering designating the group as a dangerous organization because it “encourages religious violence” on the platform. But it has not yet done so.

“Join the group and help to run the group; increase the number of members of the group, friends,” read one post seeking recruits on Facebook to spread Bajrang Dal’s messages. “Fight for truth and justice until the oppressors are destroyed.”

Ryan Mac, Cecilia Kang and Mike Isaac contributed reporting.
