NEW DELHI, India (AP) – Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts, particularly anti-Muslim content, according to leaked documents obtained by The Associated Press, even as the company’s own employees cast doubt on its motivations and interests.
Internal company documents on India, spanning research from as recently as March of this year to company memos dating back to 2019, highlight Facebook’s constant struggle to quash abusive content on its platforms in the world’s largest democracy and the company’s largest growth market. Communal and religious tensions in India have a history of boiling over on social media and fueling violence.
The files show that Facebook has been aware of the problems for years, raising questions over whether it has done enough to address them. Many critics and digital experts say it has failed to do so, especially in cases involving members of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party, or BJP.
Worldwide, Facebook has become increasingly important in politics, and India is no different.
Modi has been credited with leveraging the platform to his party’s advantage during elections, and reporting from The Wall Street Journal last year cast doubt over whether Facebook was selectively enforcing its hate speech policies to avoid backlash from the BJP. Both Modi and Facebook chairman and CEO Mark Zuckerberg have exuded bonhomie, memorialized by a 2015 image of the two hugging at Facebook headquarters.
The leaked documents include a trove of internal reports on hate speech and misinformation in India, much of which was in some cases intensified by the platform’s own “recommended” feature and algorithms. They also include company employees’ concerns over the mishandling of these issues and their expressed discontent with the viral “malcontent” on the platform.
According to the documents, Facebook saw India as one of the most “at-risk countries” in the world and identified both Hindi and Bengali as priorities for “automation on violating hostile speech.” Yet Facebook did not have enough local-language moderators or content flagging in place to stop misinformation that at times led to real-world violence.
In a statement to the AP, Facebook said it had “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali,” which it said has “reduced the amount of hate speech that people see by half” in 2021.
“Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” the statement added.
This AP story, along with others being published, is based on disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by the legal counsel of former Facebook employee turned whistleblower Frances Haugen. The redacted versions were obtained by a consortium of news organizations, including the AP.
Back in February 2019, ahead of a general election when concerns about misinformation were running high, a Facebook employee wanted to understand what a new user in the country would see in their news feed if all they did was follow pages and groups recommended by the platform itself.
The employee created a test user account and kept it live for three weeks, a period during which an extraordinary event shook India: a militant attack in disputed Kashmir killed more than 40 Indian soldiers, bringing the country to the brink of war with rival Pakistan.
In the memo, titled “An Indian Tester’s Descent into a Sea of Polarizing, Nationalist Messages,” the employee, whose name is redacted, said they were “shocked” by the content flooding the news feed, which had “turned into an almost constant barrage of polarizing nationalist content, misinformation, violence and gore.”
Seemingly benign and innocuous groups recommended by Facebook quickly morphed into something else altogether, where hate speech, unverified rumors and viral content ran rampant.
The recommended groups were inundated with fake news, anti-Pakistan rhetoric and Islamophobic content. Much of it was extremely graphic.
One post included a man holding the bloodied head of another man covered in a Pakistani flag, with an Indian flag in place of his own head. The “Popular Across Facebook” feature showed a slew of unverified content about retaliatory Indian strikes on Pakistan after the bombings, including material debunked by Facebook’s fact-checking partners.
“Following this tester’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve ever seen in my entire life,” the researcher wrote.
At the time, local media outlets were reporting that Kashmiris were being attacked in the fallout, raising deep concerns about what such divisive content could lead to in the real world.
“Should we as a company have an extra responsibility for preventing integrity harms that result from recommended content?” the researcher asked in their conclusion.
The memo, circulated among other employees, did not answer that question. But it did expose how the platform’s own algorithms or default settings played a part in promoting this type of harmful content. The employee noted that there were clear “blind spots,” particularly in “local language content.” They said they hoped the findings would start conversations on how to avoid such “integrity harms,” especially for those who “differ significantly” from the typical U.S. user.
They acknowledged that while the research was conducted over three weeks that were not an average representation, it showed how such “unmoderated” and problematic content “could totally take over” during “a major crisis event.”
The Facebook spokesperson said the test “inspired deeper, more rigorous analysis” of its recommendation systems and “contributed to product changes to improve them.”
“Separately, our work to curb hate speech continues and we have further strengthened our hate classifiers to include four Indian languages,” the spokesperson said.