Can a Machine Learn Morals?

Researchers at the Allen Institute for Artificial Intelligence, an artificial intelligence lab in Seattle, unveiled new technology last month that was designed to make moral judgments. They named it Delphi, after the religious oracle consulted by the ancient Greeks. Anyone can visit the Delphi website and ask for an ethical decree.

Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, tested the technology with a few simple scenarios. When he asked whether he should kill one person to save another, Delphi said he shouldn’t. When he asked whether it was right to kill one person to save 100 others, it said he should. Then he asked whether he should kill one person to save 101 others. This time, Delphi said he should not.

Morality, it seems, is as knotty for a machine as it is for humans.

With more than three million visits in the last few weeks, Delphi is an effort to address what some see as a major problem with modern AI systems: They can be just as flawed as the people who created them.

Facial recognition systems and digital assistants show bias against women and people of color. Social networks such as Facebook and Twitter fail to control hate speech, despite widely deploying artificial intelligence. Algorithms used by courts, parole offices and police departments make parole and sentencing recommendations that can seem arbitrary.

A growing number of computer scientists and ethicists are working to address these issues. And Delphi’s creators hope to create an ethical framework that can be installed on any online service, robot, or tool.

“This is the first step towards making AI systems more ethically informed, socially aware, and culturally inclusive,” said Yejin Choi, an Allen Institute researcher and University of Washington computer science professor who led the project.

Delphi is, by turns, fascinating, frustrating, and disturbing. It is also a reminder that the morality of any technological creation is a product of those who built it. The question is: Who gets to teach ethics to the world’s machines? AI researchers? Product managers? Mark Zuckerberg? Trained philosophers and psychologists? Government regulators?

While some technologists applauded Dr. Choi and her team for exploring an important and thorny area of technological research, others argued that the very idea of a moral machine is nonsense.

“It’s not something technology does very well,” said Ryan Cotterell, an artificial intelligence researcher at ETH Zurich, a university in Switzerland who stumbled upon Delphi in its early days online.

Delphi is what artificial intelligence researchers call a neural network, a mathematical system loosely modeled on the web of neurons in the brain. It is the same technology that recognizes the commands you speak into your smartphone and identifies pedestrians and street signs as self-driving cars speed down the highway.

A neural network learns skills by analyzing large amounts of data. By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat. Delphi learned its moral compass by analyzing more than 1.7 million ethical judgments made by real, living people.

After gathering millions of everyday scenarios from websites and other sources, the Allen Institute asked workers on an online service — ordinary people paid to do digital work at companies like Amazon — to judge each one as right or wrong. Then the researchers fed the data into Delphi.
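To make the mechanics concrete, here is a minimal sketch of how crowd-labeled judgments like these could be used to fine-tune an off-the-shelf text classifier. It is illustrative only: the model, the tiny inline dataset, and the labels are placeholder assumptions, not the Allen Institute’s actual Delphi pipeline.

```python
# Illustrative sketch only: a tiny text classifier trained on hand-labeled
# moral judgments (1 = "it's okay", 0 = "it's wrong"). The examples and the
# model choice are placeholders, not the Allen Institute's actual data or code.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

examples = {
    "text": [
        "helping a friend move",
        "telling a friend the truth even when it hurts",
        "killing one person to save one hundred people",
        "killing one person for no reason",
    ],
    "label": [1, 1, 1, 0],
}
dataset = Dataset.from_dict(examples)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

def tokenize(batch):
    # Convert each scenario into token ids the network can consume.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="moral-judge",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()

# After training, the model can score a new scenario (placeholder usage).
inputs = tokenizer("ignoring a phone call from a friend", return_tensors="pt")
print(model(**inputs).logits)
```

A toy set of four examples obviously teaches the model nothing useful; the point is only that “learning a moral compass” here means ordinary supervised training on labeled text, at a vastly larger scale.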

In an academic paper describing the system, Dr. Choi and her team said a group of human judges — again, digital workers — found that Delphi’s ethical judgments were up to 92 percent accurate. Once it was released to the open internet, many others agreed that the system was surprisingly smart.

When Patricia Churchland, a philosopher at the University of California, San Diego, asked whether it was right to “leave one’s body to science” or even to “leave one’s child’s body to science,” Delphi said it was. When she asked whether it was right to “convict a man accused of rape based on the testimony of a prostitute,” Delphi said it wasn’t — a contentious answer, to say the least. Still, she was somewhat impressed by its ability to respond, even though she knew that a human ethicist would ask for more information before making such pronouncements.

Others found the system woefully inconsistent, illogical, and offensive. A software developer who stumbled onto Delphi asked it whether he should die so he would not burden his friends and family. It said he should. Ask Delphi that question now and you may get a different answer from an updated version of the program. Delphi, as regular users have noticed, can change its mind from time to time. Technically, those changes happen because Delphi’s software has been updated.

AI technologies seem to mimic human behavior in some cases, but completely break down in others. Because modern systems learn from such a large amount of data, it is difficult to know when, how and why they will make mistakes. Researchers can refine and improve these technologies. However, this does not mean that a system like Delphi can master ethical behavior.

Dr. Churchland said that ethics are intertwined with emotion. “Attachments, especially the bonds between parents and children, are the platform on which morality is built,” she said. But a machine lacks emotion. “Neural networks don’t feel anything,” she added.

Some may see this as a strength – the ability of a machine to create unbiased ethical rules – but systems like Delphi reflect the motivations, opinions, and biases of the people and companies that create them.

“We can’t hold machines accountable for their actions,” said Zeerak Talat, an artificial intelligence and ethics researcher at Simon Fraser University in British Columbia. “They are not unguided. There are always people guiding them and using them.”

Delphi reflected the choices made by its creators. This included the ethical scenarios they chose to feed into the system and the online workers they chose to judge those scenarios.

In the future, researchers can improve the system’s behavior by training it with new data or by manually coding rules that override learned behavior at key moments. But no matter how they set up and change the system, it will always reflect their worldview.
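As a rough illustration of the “manually coded rules” approach, the sketch below wraps a learned model with hand-written overrides that fire before the network is consulted. The rule list and the model stand-in are hypothetical; the article does not describe how Delphi’s own safeguards are implemented.

```python
# Hypothetical override layer: hand-written rules take precedence over the
# trained network. The rule list and the model stub below are placeholders,
# not Delphi's actual safeguards.
from typing import Callable

HARD_RULES = {
    "kill one person": "It's wrong.",  # assumed rule, for illustration only
}

def judge(scenario: str, learned_model: Callable[[str], str]) -> str:
    """Return a hand-coded verdict if a rule matches; otherwise defer to the model."""
    lowered = scenario.lower()
    for phrase, verdict in HARD_RULES.items():
        if phrase in lowered:
            return verdict
    return learned_model(scenario)

# Example use with a trivial stand-in for a trained classifier.
if __name__ == "__main__":
    placeholder_model = lambda s: "It's okay."  # stands in for a real neural network
    print(judge("Should I kill one person to save 101 others?", placeholder_model))
```

The trade-off is visible even in this toy: the override is predictable but brittle, and deciding which phrases deserve a hard-coded verdict is itself a judgment made by the system’s builders.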

Some might argue that if you train the system on enough data that represents the opinions of enough people, it will adequately represent societal norms. But societal norms are often in the eye of the beholder.

“Morality is subjective. We can’t just write down all the rules and give them to a machine,” said Kristian Kersting, a computer science professor at TU Darmstadt in Germany who has explored a similar technology.

When the Allen Institute released Delphi in mid-October, it described the system as a computational model for moral judgments. When asked whether you should have an abortion, it replied definitively: “Delphi says: you should.”

But after many complained about the system’s obvious limitations, the researchers modified the website. They now call Delphi “a research prototype designed to model people’s moral judgments.” It no longer “says.” It “speculates.”

It also comes with a disclaimer: “Model outputs should not be used for advice to humans, and could be potentially offensive, problematic, or harmful.”
