In 2018, Liz O’Sullivan and her colleagues at a leading artificial intelligence startup began working on a system that could automatically remove nudity and other explicit images from the internet.
They sent millions of online photos to workers in India, who spent weeks attaching labels to explicit material. The data paired with the photos would be used to teach artificial intelligence software how to recognize indecent images. But once the photos were labeled, Ms. O’Sullivan and her team noticed a problem: the Indian workers had classified all images of same-sex couples as indecent.
For Ms. O’Sullivan, the moment showed how easily, and how often, bias can creep into AI. “It was a brutal game of Whac-a-Mole,” she said.
This month, Ms. O’Sullivan, a 36-year-old New Yorker, was named chief executive of a new company, Parity. The start-up is one of many organizations, including more than a dozen start-ups and some of the biggest names in technology, offering tools and services designed to identify and remove bias from AI systems.
Soon, businesses may need that help. In April, the Federal Trade Commission warned against the sale of AI systems that are racially biased or that could prevent individuals from obtaining employment, housing, insurance or other benefits. A week later, the European Union unveiled draft regulations that could punish companies for offering such technology.
It is unclear how regulators might police bias. Last week, the National Institute of Standards and Technology, a government research lab whose work often informs policy, released a proposal detailing how businesses can fight bias in AI, including changes in the way technology is conceived and built.
Many in the tech industry believe businesses should start preparing for a crackdown. “Some sort of legislation or regulation is inevitable,” said Christian Troncoso, senior director of legal policy at the Software Alliance, a trade group that represents some of the largest and oldest software companies. “Every time there is one of these scary stories about artificial intelligence, it chips away at the public’s trust and faith.”
Research over the past few years has shown that facial recognition services, health care systems and even talking digital assistants can be biased against women, people of color and other marginalized groups. Amid a growing chorus of complaints on the matter, some local regulators have already taken action.
In late 2019, state regulators in New York opened an investigation into UnitedHealth Group after a study found that an algorithm used by a hospital prioritized care for white patients over Black patients, even when the white patients were healthier. Last year, the state also investigated the Apple Card credit service after claims that it discriminated against women. Regulators decided that Goldman Sachs, which operates the card, did not discriminate, while the status of the UnitedHealth investigation is unclear.
UnitedHealth spokesman Tyler Mason said the company’s algorithm had been misused by one of its partners and was not racially biased. Apple declined to comment.
According to PitchBook, a research firm that tracks financial activity, investors have put more than $100 million into companies exploring ethical issues involving artificial intelligence over the past six months, after $186 million last year.
But efforts to address the problem reached a tipping point this month when the Software Alliance presented a detailed framework for fighting bias in AI, including the acknowledgment that some automated technologies require regular oversight from humans. The trade group believes the document can help companies change their behavior and can show regulators and lawmakers how to control the problem.
Amazon, IBM, Google and Microsoft, even as they face criticism over bias in their own systems, also offer tools for combating it.
Ms. O’Sullivan said there is no simple solution to bias in AI. A thornier issue is that some in the industry question whether the problem is as widespread or as harmful as she believes.
“Changing mindsets doesn’t happen overnight – and that’s even more true when you’re talking about big companies,” she said. “You’re trying to change the minds of many people, not just one person.”
When Ms. O’Sullivan began advising businesses on AI bias more than two years ago, she was often met with skepticism. Many executives and engineers embraced what she calls “fairness through unawareness,” arguing that the best way to build equitable technology was to ignore issues like race and gender.
Increasingly, companies were building systems that learned tasks by analyzing vast amounts of data, including photos, audio, text and statistics. The belief was that if a system learned from as much data as possible, fairness would follow.
But as Ms. O’Sullivan saw after the tagging done in India, bias can creep into a system when designers choose the wrong data or sort it in the wrong way. Studies show that facial recognition services can be biased against women and people of color when they are trained on photo collections dominated by white men.
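A minimal sketch of the kind of pre-training data audit this describes, using made-up records and field names (not Clarifai’s or Parity’s actual pipeline), shows how a few lines of code can surface that skew before a model ever learns from it:

```python
from collections import Counter

# Hypothetical labeled training records; each carries a demographic tag
# attached during annotation (the field names here are illustrative only).
records = [
    {"image_id": "img_001", "group": "white_male", "label": "explicit"},
    {"image_id": "img_002", "group": "white_male", "label": "safe"},
    {"image_id": "img_003", "group": "same_sex_couple", "label": "explicit"},
    # ... in practice, millions of rows
]

def group_shares(rows):
    """Each demographic group's share of the dataset (reveals under-representation)."""
    counts = Counter(r["group"] for r in rows)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def label_rate(rows, label="explicit"):
    """How often each group receives a given label; a large disparity is a
    quick signal that annotators may be treating groups differently."""
    by_group = Counter(r["group"] for r in rows)
    flagged = Counter(r["group"] for r in rows if r["label"] == label)
    return {g: flagged[g] / by_group[g] for g in by_group}

print(group_shares(records))  # e.g. one group dominating the collection
print(label_rate(records))    # e.g. one group flagged far more often
```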
Designers can be blind to these problems. Workers in India, where same-sex relationships were still illegal at the time and attitudes towards gays and lesbians were very different from those in the United States, were categorizing the photos as they saw fit.
Ms. O’Sullivan saw the flaws and pitfalls of artificial intelligence while working at Clarifai, the company that ran the labeling project. She said she had left the company after realizing it was building systems for the military that she believed could eventually be used to kill. Clarifai did not respond to a request for comment.
Now, after years of complaints about bias in AI, not to mention the threat of regulation, she believes attitudes have changed. In its new framework for curbing harmful bias, the Software Alliance warned against “fairness through unawareness,” saying the argument does not hold up.
“They agree that you have to turn the rocks over and see what’s underneath,” Ms. O’Sullivan said.
Yet there is resistance. She said the recent conflict at Google, where two ethics researchers were pushed out, was indicative of the situation at many companies. Efforts to fight bias often clash with corporate culture and the unceasing push to build new technology, get it out the door and start making money.
It is also still hard to know how serious the problem is. “We have very little of the data needed to model the broader societal safety issues with these systems, including bias,” said Jack Clark, one of the authors of the AI Index, an effort to track artificial intelligence technology and policy worldwide. “Many of the things the average person cares about, like fairness, have yet to be disciplined or measured at scale.”
Ms. O’Sullivan, a philosophy major in college and a member of the American Civil Liberties Union, is building Parity around a tool designed and licensed by Rumman Chowdhury, a well-known AI ethics researcher who spent years at the business consultancy Accenture before becoming an executive at Twitter. Dr. Chowdhury previously ran an earlier version of Parity, built around the same tool.
While other start-ups, like Fiddler AI and Weights and Biases, offer tools for monitoring AI services and identifying potentially biased behavior, Parity’s technology aims to analyze the data, technologies and methods a business uses to build its services, then pinpoint areas of risk and suggest changes.
The tool itself uses artificial intelligence technology that can be biased in its own right, illustrating the double-edged nature of AI and the difficulty of Ms. O’Sullivan’s task.
Tools that can identify bias in AI are imperfect, just as AI is imperfect. But she said the power of such a tool is to pinpoint potential problems – to get people to look closely at the issue.
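As an illustration of what pinpointing a potential problem can look like in practice, here is a minimal, hypothetical sketch of one common check such monitoring tools run – the gap in positive-outcome rates between groups, sometimes called a demographic parity gap. It is a generic example, not the actual method used by Parity, Fiddler AI or Weights and Biases.

```python
def demographic_parity_gap(predictions, groups, positive=1):
    """Largest gap in the rate of positive predictions between any two groups."""
    tallies = {}
    for pred, grp in zip(predictions, groups):
        hits, total = tallies.get(grp, (0, 0))
        tallies[grp] = (hits + (pred == positive), total + 1)
    shares = {g: hits / total for g, (hits, total) in tallies.items()}
    return max(shares.values()) - min(shares.values()), shares

# Made-up outputs from a hypothetical loan-approval model.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap, shares = demographic_parity_gap(preds, groups)
print(shares)  # approval rate per group: {'a': 0.75, 'b': 0.25}
print(gap)     # 0.5 -- a large gap flags a problem worth human review
```

A metric like this does not settle whether a system is fair; it only surfaces a disparity that people then have to examine, which is the role Ms. O’Sullivan describes.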
In the end, she explained, the goal is to create a broader dialogue among people with a wide range of perspectives. The trouble comes when the problem is ignored, or when those discussing it all share the same point of view.
“You need different perspectives. But can you really get different perspectives inside one company?” Ms. O’Sullivan asked. “It’s a very important question, and I’m not sure I can answer it.”