These days, mass shooters like the one arrested in Buffalo, New York, don’t stop at plotting brutal attacks on places like supermarkets. They draw up what amount to marketing plans, arranging to broadcast their massacres live on social platforms in hopes of fueling more violence.
Sites like Twitter, Facebook, and now the game streaming platform Twitch have learned painful lessons from dealing with the violent videos that often accompany such shootings. But experts are calling for a broader discussion, including about whether livestreams should exist at all, since these videos are nearly impossible to scrub completely once they’re online.
The self-identified white supremacist gunman, who police said killed 10 people, all of them Black, at a Buffalo supermarket on Saturday, had mounted a GoPro camera on his helmet to stream his attack live on Twitch, the video game streaming platform also used by a shooter in a 2019 attack that killed two people at a synagogue in Halle, Germany.
He had previously outlined his plan in a detailed but rambling set of online diary entries that were posted publicly ahead of the attack, though it’s unclear how many people might have seen them. His stated purpose: to inspire copycats and spread his racist beliefs. After all, he was a copycat himself.
He decided against streaming on Facebook, as another mass shooter did when he killed 51 people at two mosques in Christchurch, New Zealand, three years ago. Unlike Twitch, Facebook requires users to sign up for an account before they can watch livestreams.
Still, not everything went according to plan. By most accounts, the platforms responded faster to stop the Buffalo video from spreading than they did after the 2019 Christchurch shooting, said Megan Squire, a senior fellow and technology expert at the Southern Poverty Law Center.
Another Twitch user watching the live video likely flagged it to Twitch’s content moderators, she said, which would have helped Twitch pull the stream less than two minutes after the first gunshots, according to a company spokesperson. Twitch has not said how the video was flagged. In a statement about the shooting on Tuesday, the company expressed thanks “for user reports that helped us catch and remove harmful content in real time.”
“They’ve done pretty well in this case,” Squire said. “The fact that the video is so hard to find right now is proof of that.”
That was small consolation for the victims’ families. Wayne Jones learned that his mother, Celestine Chaney, had been killed when someone posted a screenshot from the livestream. Before long, he had seen the video itself.
“I didn’t find out, nobody knocked on my door like the usual process,” he said. “I found out from a Facebook picture that my mom was gunned down. Then I watched the video on social media.”
Danielle Simpson, the girlfriend of one of Chaney’s grandchildren, said she reported dozens of sites after the video kept appearing in her Facebook feed, worried that Chaney’s family would see it.
“I think I reported about 100 pages on Sunday because every time I got on Facebook, either the pictures or the video were right there,” she said. “You couldn’t escape it. There was nowhere you could go.”
In 2019, footage of the Christchurch shooting streamed live on Facebook for 17 minutes and quickly spread to other platforms. This time, the platforms generally appeared to coordinate better, particularly by sharing digital “signatures” of the video that are used to detect and remove copies.
But platform algorithms can have a harder time identifying a copycat video if someone has edited it. That has created problems, such as users of some internet forums recutting the Buffalo video with twisted attempts at humor. Squire said tech companies need to use “more fancy algorithms” to detect those partial matches.
“It looks darker and more cynical,” she said of the recent attempts to spread footage of the shooting.
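The “signatures” described above are typically perceptual hashes: compact fingerprints designed to stay similar when a clip is re-encoded or slightly altered, unlike exact file checksums, which change completely at the first modification. As a rough illustration of the idea, and emphatically not the platforms’ actual systems, here is a minimal Python sketch of an average-hash over individual video frames; the function names and the match threshold are illustrative assumptions.

```python
# Minimal sketch of perceptual "fingerprinting" for duplicate detection.
# Real industry systems (such as shared hash databases) are far more
# robust; every name and threshold here is an illustrative assumption.

from PIL import Image  # pip install pillow


def average_hash(image: Image.Image, hash_size: int = 8) -> int:
    """Return a 64-bit perceptual hash: shrink, grayscale, threshold at the mean."""
    small = image.convert("L").resize((hash_size, hash_size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two hashes."""
    return bin(a ^ b).count("1")


def is_probable_match(candidate: int, known_hashes: set[int], max_distance: int = 10) -> bool:
    """Flag a frame whose hash lands near any known banned-video hash.

    A nonzero distance threshold lets the check survive re-encoding and
    small alterations, which an exact checksum comparison would not.
    """
    return any(hamming_distance(candidate, k) <= max_distance for k in known_hashes)
```

The tolerance built into that distance threshold is what lets such a check survive compression and re-uploads; deliberate recuts of the kind described above can still defeat it, which is the gap Squire says fancier matching would need to close.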
Twitch has more than 2.5 million viewers at any given moment; nearly 8 million creators stream video on the platform each month, according to the company. The site uses a combination of user reports, algorithms and moderators to detect and remove violence that occurs on the platform. The company said it quickly removed the gunman’s stream, but it has not shared many details about what happened on Saturday, including whether the stream was reported or how many people watched the attack live.
A Twitch spokesperson said the company shared the livestream with the Global Internet Forum to Counter Terrorism, a nonprofit group founded by tech companies to help other platforms monitor for rebroadcasts. Even so, clips of the video continued to reach other platforms, including the site Streamable, where it was available to millions of viewers. A spokesperson for Hopin, the company that owns Streamable, said Monday that it was working to remove the videos and terminate the accounts of those who uploaded them.
Looking ahead, the platforms could face new moderation complications from a Texas law, reinstated by an appeals court last week, that prohibits major social media companies from “censoring” users’ viewpoints. Jeff Kosseff, an associate professor of cybersecurity law at the U.S. Naval Academy, said the shooter “had a very specific viewpoint” and the law is vague enough to create risk for platforms that moderate people like him. “It really puts a finger on the scale of keeping up harmful content,” he said.
Following the gunman’s livestream, some lawmakers have called on social media companies to police their platforms more strictly. President Joe Biden made no such calls in his speech in Buffalo on Tuesday.
Alexa Koenig, executive director of the Human Rights Center at the University of California, Berkeley, said there has been a shift in how tech companies respond to such events. Coordination between companies to build repositories of digital fingerprints of extremist videos, so the footage can’t be re-uploaded to other platforms, is an “incredibly important development,” Koenig said.
A Twitch spokesperson said the company would review how it responded to the gunman’s livestream.
Experts suggest that sites like Twitch could exert greater control over who can livestream and when, for example by building in broadcast delays or allowing only vetted users to stream while banning rule breakers. More broadly, Koenig said, “there’s a general public conversation that needs to happen about the usefulness of livestreaming: when it’s valuable, when it’s not, how we set safe norms around how it’s used, and what happens if you misuse it.”
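One of those ideas, a broadcast delay, is straightforward to make concrete. The sketch below is hypothetical, not how Twitch or any other platform is known to work: it holds each incoming frame in a buffer for a fixed window before publishing, giving a moderator time to kill a stream before viewers ever see it.

```python
# Hypothetical sketch of a moderation-friendly broadcast delay.
# The DelayedRelay class and the 30-second window are illustrative
# assumptions, not any real platform's architecture.
import time
from collections import deque

DELAY_SECONDS = 30.0  # illustrative; the window would be a policy choice


class DelayedRelay:
    def __init__(self, delay: float = DELAY_SECONDS):
        self.delay = delay
        self.buffer: deque[tuple[float, bytes]] = deque()
        self.killed = False  # set True when a moderator terminates the stream

    def ingest(self, frame: bytes) -> None:
        """Accept a frame from the broadcaster, stamped with its arrival time."""
        if not self.killed:
            self.buffer.append((time.monotonic(), frame))

    def kill(self) -> None:
        """Moderator action: discard everything not yet published."""
        self.killed = True
        self.buffer.clear()

    def publish_ready_frames(self) -> list[bytes]:
        """Release to viewers only the frames older than the delay window."""
        now = time.monotonic()
        ready = []
        while self.buffer and now - self.buffer[0][0] >= self.delay:
            ready.append(self.buffer.popleft()[1])
        return ready
```

The trade-off is immediacy: a delay long enough to be useful for moderation also removes the real-time interaction that makes livestreaming attractive, which is partly why platforms have been reluctant to adopt it.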
Of course, another option would be to do away with livestreaming entirely. But that’s almost impossible to imagine, given how much tech companies rely on livestreams to attract users, keep them engaged, and make money.
Koenig said free speech is often the reason tech platforms give for allowing the technology, beyond the unspoken profit motive. But that has to be balanced against “privacy rights and some of the other issues that arise in this case,” she said.