
Google Sidelines Engineer Who Claims Its Artificial Intelligence Is Sentient


SAN FRANCISCO — Google placed an engineer on paid leave after dismissing his claim that its artificial intelligence is sentient, surfacing yet another controversy over the company’s most advanced technology.

Blake Lemoine, a senior software engineer in Google’s Responsible AI organization, said in an interview that he was placed on leave Monday. The company’s human resources department said he had violated Google’s confidentiality policy. The day before his suspension, Mr. Lemoine said, he handed over documents to a U.S. senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination.

Google said its systems imitated conversational exchanges and could riff on different topics, but did not have consciousness. “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” Brian Gabriel, a Google spokesman, said in a statement. “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.” The Washington Post first reported Mr. Lemoine’s suspension.

Mr. Lemoine had sparred for months with Google managers, executives and human resources over his surprising claim that the company’s Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul. Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Mr. Lemoine did. Most AI experts believe the industry is a very long way from computing sentience.

Some AI researchers have made optimistic claims about these technologies soon reaching sentience, but many others are quick to dismiss such claims. “If you used these systems, you would never say such things,” said Emaad Khwaja, a researcher at the University of California, Berkeley, and the University of California, San Francisco, who is exploring similar technologies.

Google’s research organization has spent the last few years mired in scandal and controversy while trying to stay at the forefront of AI. The division’s scientists and other employees have regularly feuded over technology and personnel matters in episodes that have often spilled into the public arena. In March, Google fired a researcher who had sought to publicly disagree with two of his colleagues’ published work. And the dismissals of two AI ethics researchers, Timnit Gebru and Margaret Mitchell, after they criticized Google’s language models, have continued to cast a shadow over the group.

Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict and an AI researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program’s consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company’s human resources department discriminated against.

“They have repeatedly questioned my sanity,” Mr. Lemoine said. “They said, ‘Have you been checked out by a psychiatrist recently?’” In the months before he was placed on administrative leave, the company had suggested he take mental health leave.

Yann LeCun, head of artificial intelligence research at Meta and a key figure in the rise of neural networks, said in an interview this week that such systems are not powerful enough to achieve real intelligence.

Google’s technology is based on what scientists call a neural network, a mathematical system that learns skills by analyzing large amounts of data. By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat.
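
To make the idea concrete, here is a minimal, illustrative sketch (not Google’s actual system) of a pretrained neural network recognizing a cat in a photo. It assumes the open-source PyTorch and torchvision libraries are installed, and the file name cat.jpg is a hypothetical stand-in for any input image.

    import torch
    from torchvision import models
    from PIL import Image

    # Load a network whose weights were learned by analyzing millions of labeled images.
    weights = models.ResNet50_Weights.DEFAULT
    model = models.resnet50(weights=weights)
    model.eval()

    # Apply the preprocessing the pretrained weights expect.
    preprocess = weights.transforms()
    image = Image.open("cat.jpg")            # hypothetical input photo
    batch = preprocess(image).unsqueeze(0)   # shape: (1, 3, H, W)

    with torch.no_grad():
        logits = model(batch)

    # Map the highest-scoring class index back to a human-readable label.
    labels = weights.meta["categories"]
    print(labels[logits.argmax(dim=1).item()])  # e.g. "tabby" for a cat photo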

In the past few years, Google and other leading companies have engineered neural networks that learn from enormous amounts of prose, including unpublished books and Wikipedia articles by the thousands. These “large language models” can be applied to many tasks. They can summarize articles, answer questions, generate tweets and even write blog posts.
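
A similarly simplified sketch of the “large language model” idea is possible with the open-source Hugging Face transformers library and the small public GPT-2 model (not LaMDA, which is internal to Google); the prompt text is invented purely for illustration.

    from transformers import pipeline

    # GPT-2 was trained to predict the next word over large amounts of web text,
    # so it continues a prompt by recreating patterns it saw during training.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "The engineer asked the chatbot how it was feeling, and it replied:"
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    print(result[0]["generated_text"])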

But they are deeply flawed. Sometimes they produce excellent prose. Sometimes they produce nonsense. The systems are very good at recreating patterns they have seen in the past, but they cannot reason like a human.


