But much of Oasis’ plan remains idealistic at best. One example is the proposal to use machine learning to detect harassment and hate speech. As my colleague Karen Hao reported last year, AI models either let too much hate speech slip through or overreach and flag legitimate speech. Still, Wang defends Oasis’ promotion of AI as a moderation tool. “AI is only as good as the data gets,” she says. “Platforms share different moderation practices, but all work toward better accuracy, faster reaction, and prevention through safety by design.”
The document itself is seven pages long and outlines future goals for the consortium. Much of this looks like a mission statement, and Wang says the first few months of work focused on creating advisory groups to help establish goals.
Other elements of the plan, such as its content moderation strategy, remain vague. Wang says she wants companies to hire a diverse set of content moderators so they can understand and combat harassment against people of color and those who identify as non-male. But the plan offers no further steps toward achieving that goal.
The consortium will also expect member companies to share data on which users have been abusive, which is important for identifying repeat offenders. Wang says participating tech companies will partner with nonprofits, government agencies, and law enforcement to help shape safety policies. She also plans for Oasis to have a law enforcement response team whose job is to notify police of harassment and abuse. But it remains unclear how the task force’s work with law enforcement will differ from the status quo.
Balancing privacy and safety
Despite the lack of concrete details, the experts I spoke to think the consortium’s standards document is at least a good first step. “It’s good that Oasis is looking at self-regulation, starting with people who know the systems and their limits,” says Brittan Heller, a lawyer specializing in technology and human rights.
It’s not the first time tech companies have worked together in this way. In 2017, some agreed to exchange information freely with the Global Internet Forum to Counter Terrorism. Today, GIFCT remains independent, and the companies that sign on to it are self-regulated.
Lucy Sparrow, a researcher at the University of Melbourne’s School of Computing and Information Systems, says Oasis offers companies something to work with, rather than waiting for them to come up with the language themselves or for a third party to do that work.
Sparrow adds that baking ethics into design from the start, as Oasis is pushing for, is admirable, and that her research on multiplayer game systems shows it makes a difference. “Ethics tends to be pushed aside, but here, [Oasis] encourages thinking about ethics from the start,” she says.
But Heller says ethical design may not be enough. She recommends that tech companies rework their terms of service, which have been heavily criticized for taking advantage of consumers who lack legal expertise.
Sparrow agrees, saying she’s hesitant to believe that a group of tech companies will act in consumers’ best interests. “It really raises two questions,” she says. “One, how much do we trust capital-driven corporations to control safety? And two, how much control do we want tech companies to have over our virtual lives?”
It’s a sticky situation, especially since users have a right to both safety and privacy, yet those needs can be in tension.