For years, activists and academics have raised concerns that facial analysis software claiming to identify a person’s age, gender, and emotional state can be biased, unreliable, or invasive, and should not be sold.
Acknowledging some of those criticisms, Microsoft said on Tuesday that it planned to remove these features from its artificial intelligence service for detecting, analyzing, and recognizing faces. They will stop being available to new users this week and will be phased out for existing users later in the year.
The changes are part of Microsoft’s push for tighter controls of its AI products. After a two-year review, a team at Microsoft developed the “Standard for Responsible Artificial Intelligence,” a 27-page document that sets out requirements to ensure AI systems do not have a detrimental impact on society.
The requirements include that systems provide “valid solutions to the problems they are designed to solve” and provide “a similar quality of service for defined demographic groups, including marginalized groups”.
Technologies that would be used to make important decisions about a person’s access to employment, education, health care, financial services, or a life opportunity will be reviewed by a team led by Natasha Crampton, Microsoft’s chief responsible AI officer, before they are released.
There has been particular scrutiny of Microsoft’s emotion recognition tool, which labels someone’s expression as anger, contempt, disgust, fear, happiness, neutral, sadness, or surprise.
“There is a tremendous amount of cultural, geographic and individual variation in the way we express ourselves,” Ms. Crampton said. That led to reliability concerns, along with the bigger question of “whether facial expression is a reliable indicator of your inner emotional state.”
The age and gender analysis tools being eliminated, along with other tools for detecting facial attributes such as hair and smiles, could be useful for interpreting visual images for people who are blind or have low vision, for example. But the company decided it was problematic to make such profiling tools generally available to the public, Ms. Crampton said.
In particular, she added, the system’s so-called gender classifier is binary, and “this is not consistent with our values.”
Microsoft will also put new controls on its face recognition feature, which can be used to perform identity checks or search for a particular person. Uber, for example, uses the software to verify that a driver’s face matches the ID on file for that driver’s account. Software developers who want to use Microsoft’s facial recognition tool will need to apply for access and explain how they plan to deploy it.
Users will also have to apply for access and explain how they will use other potentially abusive AI systems, such as Custom Neural Voice. The service can generate a human voice print based on a sample of someone’s speech, so that authors, for example, can create synthetic versions of their voices to read their audiobooks in languages they do not speak.
Because the tool could be abused to create the impression that people have said things they haven’t, speakers must go through a series of steps to confirm that the use of their voice is authorized, and the recordings include watermarks detectable by Microsoft.
“We are taking concrete steps to bring our AI principles to life,” said Ms. Crampton, who has worked as a lawyer at Microsoft for 11 years and joined the ethical AI group in 2018. “This is going to be a huge journey.”
Microsoft, like other technology companies, has had stumbles with its artificially intelligent products. In 2016, it released a chatbot on Twitter, called Tay, that was designed to learn “conversational understanding” from the users it interacted with. The bot quickly began spouting racist and offensive tweets, and Microsoft had to take it down.
In 2020, researchers discovered that speech-to-text tools developed by Microsoft, Apple, Google, IBM, and Amazon worked less well for Black people. Microsoft’s system was the best of the bunch, but it misidentified 15 percent of words for white people, compared with 27 percent for Black people.
The company had collected diverse speech data to train its AI system but had not understood just how diverse language could be. So it hired a sociolinguistics expert to explain how people speak in formal and informal settings, beyond demographic and regional variety.
“It’s actually a bit misleading to think of race as a determining factor in how someone speaks,” said Ms. Crampton. “What we learned in consultation with the expert is that a wide variety of factors actually influence language diversity.”
Ms. Crampton said the effort to fix that speech-to-text disparity helped inform the guidance set out in the company’s new standard.
“This is a critical norm-setting period for AI,” Ms. Crampton said, pointing to Europe’s proposed regulations setting rules and limits on the use of artificial intelligence. “We hope to be able to use our standard to contribute to the bright, necessary discussion that needs to be had about the standards that technology companies should be held to.”
A lively debate about the potential harms of artificial intelligence has been underway in the technology community for years, fueled by mistakes and errors that have real consequences for people’s lives, such as algorithms that determine whether people receive welfare benefits. Dutch tax authorities mistakenly took child care benefits away from needy families when a flawed algorithm penalized people with dual nationality.
Automated software for recognizing and analyzing faces has been particularly controversial. Last year, Facebook shut down its decade-old system for identifying people in photographs. The company’s vice president of artificial intelligence cited the “many concerns about the place of facial recognition technology in society.”
Several Black men have been wrongfully arrested after flawed facial recognition matches. And in 2020, at the same time as the Black Lives Matter protests following the police killing of George Floyd in Minneapolis, Amazon and Microsoft issued moratoriums on the use of their facial recognition products by police in the United States, saying clearer laws on their use were needed.
Since then, Washington and Massachusetts have passed regulations requiring, among other things, judicial oversight of police use of facial recognition tools.
Ms. Crampton said Microsoft had considered whether to start making its software available to police in states with such laws on the books but had decided, for now, not to do so. That could change as the legal landscape changes, she said.
Arvind Narayanan, a Princeton computer science professor and prominent artificial intelligence expert, said companies might be stepping back from technologies that analyze the face because they are “more visceral, as opposed to various other kinds of AI that might be dubious but that we don’t necessarily feel in our bones.”
Companies may also realize that, at least for now, some of these systems are not that commercially valuable, he said. Microsoft could not say how many users it has for the facial analysis features it is getting rid of. Mr. Narayanan predicted that companies would be less likely to abandon other invasive technologies, such as targeted advertising, which profiles people to choose the best ads to show them, because they are a “cash cow.”