It’s no surprise that these videos are making news. People make them because they work. Publicly calling out a major platform has long been one of the most effective strategies for getting it to fix something. TikTok, Twitter, and Facebook have all made it easy for users to report abuse and rule violations by other users. But when the companies themselves appear to be breaking their own policies, people often find that the best way forward is to post about it on the platform itself, in the hope of going viral and attracting the kind of attention that leads to some sort of resolution. Tyler’s two videos about the Marketplace bios, for example, each drew more than 1 million views.
“I’m probably tagged in something like this once a week,” says Casey Fiesler, an assistant professor at the University of Colorado, Boulder, who researches technology ethics and online communities. She is active on TikTok, where she has more than 50,000 followers, and while not everything she sees strikes her as a legitimate concern, she says the app’s steady parade of issues is real. TikTok has made several such mistakes over the past few months, all of which disproportionately affect marginalized groups on the platform.
MIT Technology Review asked TikTok about each of these recent examples, and the responses are similar: after reviewing them, TikTok finds that the issue was created in error, stresses that the blocked content in question does not violate its policies, and points to the support the company gives to the affected groups.
The question is whether this cycle, in which a technical or policy error draws a viral response and ends with an apology, can ever be broken.
Solving problems before they arise
“There are probably two kinds of harm from this kind of algorithmic content moderation that people have observed,” Fiesler says. “One is false negatives. People ask, ‘Why is there so much hate speech on this platform, and why isn’t it being removed?’”
The other is false positives. “Their content is flagged because they are members of a marginalized group talking about their experiences with racism,” she says. “To an algorithm, talking about hate speech can look a lot like hate speech.”
Both of these categories, she notes, hurt the same people: those who are disproportionately abused end up being algorithmically censored for speaking up about it.
TikTok’s opaque recommendation algorithms are part of its success, but its vague and ever-shifting boundaries are already having a chilling effect on some users. Fiesler notes that many TikTok creators self-censor certain words on the platform to avoid triggering a review. And while she isn’t entirely sure how effective that tactic is, Fiesler has started doing it herself, just in case. Account bans, algorithmic mysteries, and strange moderation decisions are a constant part of the conversation on the app.