The federal government warned on Thursday that artificial intelligence technology used to screen new job candidates or monitor worker productivity can unfairly discriminate against people with disabilities, cautioning employers that widely used recruiting tools may violate civil rights laws.
The U.S. Department of Justice and the Equal Employment Opportunity Commission jointly issued guidance urging employers to take care before using popular algorithmic tools meant to streamline the work of evaluating employees and job prospects — tools that could also potentially violate the Americans with Disabilities Act.
“We are sounding an alarm about the dangers associated with blind reliance on artificial intelligence and other technologies that we see increasingly being used by employers,” Kristen Clarke, Assistant Attorney General of the department’s Civil Rights Division, told reporters on Thursday. “The use of AI exacerbates the longstanding discrimination faced by disabled job seekers.”
Examples of popular work-related AI tools include resume scanners, employee monitoring software that ranks workers based on keystrokes, and video interviewing software that measures a person’s speech patterns or facial expressions. Such technology could potentially screen out people with speech impediments or various other disabilities.
The move reflects wider pressure from President Joe Biden’s administration to encourage positive advances in AI technology while reining in the opaque and potentially harmful AI tools used to make important decisions about people’s livelihoods.
“We are fully aware that there is tremendous potential to streamline things,” said Charlotte Burrows, chair of the EEOC, which is responsible for enforcing laws against workplace discrimination. “But we cannot allow these tools to become a high-tech path to discrimination.”
One scientist who studies bias in AI recruiting tools said holding employers accountable for the tools they use is a “great first step”, but added that more work is needed to rein in the vendors who make these tools. Ifeoma Ajunwa, a law professor at the University of North Carolina and founding director of the AI Decision Making Research Program, said doing so would likely be a job for another agency like the Federal Trade Commission.
“There is now recognition that these tools, which are often marketed as an anti-bias intervention, can actually produce more bias and also obscure it,” Ajunwa said.