YouTube’s stricter policies against misinformation have been followed by sharp declines in the prevalence of false and misleading videos on Facebook and Twitter, according to new research released Thursday that underscores the video service’s power in social media.
Researchers at New York University's Center for Social Media and Politics detected a significant rise in election fraud claims in YouTube videos shared on Twitter immediately after the November 3 election. In November, those videos made up about a third of all election-related video posts on Twitter. The most-shared YouTube channels promoting election fraud claims on Twitter that month came from sources that had spread false election information in the past, including Project Veritas, Right Side Broadcasting Network, and One America News Network.
However, the share of election fraud claims shared on Twitter fell sharply after December 8, when YouTube said it would remove videos that advanced the unsubstantiated theory that widespread errors and fraud changed the outcome of the presidential election. By December 21, the share of election fraud content shared from YouTube to Twitter had fallen below 20 percent for the first time since the election.
That percentage dropped further after January 7, when YouTube announced that channels violating its election misinformation policy would receive "strikes," and that channels receiving three strikes within a 90-day period would be permanently removed. By Inauguration Day, the share was around 5 percent.
The trend was repeated on Facebook. Sharing of videos promoting fraud theories peaked at about 18 percent of all YouTube videos on Facebook just before December 8. After YouTube introduced its stricter policies, the share fell sharply for most of the month, rising slightly in the run-up to the January 6 riot at the Capitol. It fell back to about 4 percent by Inauguration Day, after the new policies were introduced on January 7.
To arrive at their findings, the researchers collected a random sample of 10 percent of all tweets each day, then isolated the tweets that linked to YouTube videos. They did the same for YouTube links on Facebook, using CrowdTangle, a Facebook-owned social media analytics tool.
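The researchers' code is not published with this article, so purely as an illustration, the link-isolation step might look like the following Python sketch. The regular expression, helper names, and the assumed shape of `daily_tweets` (an iterable of dicts with a "text" field) are all assumptions, not the study's actual implementation:

```python
import re

# Matches youtube.com/watch and youtu.be share links and captures the 11-character
# video ID. Illustrative only; the NYU study's actual matching rules are not published here.
YOUTUBE_LINK = re.compile(
    r"(?:https?://)?(?:www\.)?"
    r"(?:youtube\.com/watch\?v=|youtu\.be/)"
    r"([A-Za-z0-9_-]{11})"
)

def youtube_video_ids(tweet_text):
    """Return the YouTube video IDs linked in a single tweet's text."""
    return YOUTUBE_LINK.findall(tweet_text)

def isolate_youtube_tweets(daily_tweets):
    """Keep only the tweets in a day's sample that link to at least one YouTube video."""
    return [t for t in daily_tweets if youtube_video_ids(t["text"])]
```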
From this large dataset, the researchers used a range of keywords such as "Stop the Steal" and "Sharpiegate" to filter for YouTube videos about election fraud, as well as for election-related YouTube videos in general. This allowed them to track the volume of election fraud videos from YouTube over time and how that volume changed in late 2020 and early 2021.
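Again as a sketch rather than the researchers' method, a simple keyword filter that computes the daily share of election fraud videos could be written as follows. Only "Stop the Steal" and "Sharpiegate" come from the article; the other keywords and the assumed "title" field are hypothetical:

```python
# Illustrative keyword filter. The two quoted keywords are from the article;
# the rest of the pipeline (field names, extra keywords) is assumed.
FRAUD_KEYWORDS = ("stop the steal", "sharpiegate")
ELECTION_KEYWORDS = FRAUD_KEYWORDS + ("election", "ballot", "vote count")

def matches_any(text, keywords):
    """Case-insensitive check for whether any keyword appears in the text."""
    text = text.lower()
    return any(kw in text for kw in keywords)

def daily_fraud_share(videos_by_day):
    """Per-day share of election-related videos that mention fraud keywords.

    `videos_by_day` maps a date to a list of video metadata dicts with a
    'title' field (an assumed schema, for illustration).
    """
    shares = {}
    for day, videos in videos_by_day.items():
        election = [v for v in videos if matches_any(v["title"], ELECTION_KEYWORDS)]
        fraud = [v for v in election if matches_any(v["title"], FRAUD_KEYWORDS)]
        shares[day] = len(fraud) / len(election) if election else 0.0
    return shares
```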
Misinformation has proliferated on the major social networks in recent years. YouTube in particular has lagged behind other platforms in cracking down on different types of misinformation, often announcing stricter policies several weeks or months after Facebook and Twitter. In recent weeks, though, YouTube has toughened its rules, banning all anti-vaccine misinformation and suspending the accounts of prominent anti-vaccine activists, among them Joseph Mercola and Robert F. Kennedy Jr.
Megan Brown, a research scientist at the NYU Center for Social Media and Politics, said it was possible that after YouTube banned the content, people could no longer share the videos that promoted election fraud. It is also possible that interest in election fraud theories dropped considerably after states certified their election results.
But more broadly, Ms. Brown said, "we know that these platforms are deeply interconnected." She noted that YouTube has been identified as one of the most-shared domains on other platforms, including in both Facebook's recently released content reports and NYU's own research.
"It's a big part of the information ecosystem," Ms. Brown said. "Once YouTube's platform gets healthier, others do the same."