Meta has built a massive new language AI, and it’s giving it away for free

Pineau has helped transform the way research is published at several of the largest conferences, introducing a checklist of things researchers must submit alongside their results, including code and details about how experiments were run. Since joining Meta (then Facebook) in 2017, she has championed this culture in its AI lab.

“My commitment to open science is why I’m here,” she says. “I wouldn’t be here under any other circumstances.”

Ultimately, Pineau wants to change how we judge artificial intelligence. “What we call cutting-edge technology today can’t just be about performance,” she says. “It should also be state-of-the-art in terms of responsibility.”

Still, releasing a major language model is a bold move for Meta. “I can’t tell you that this model won’t produce language we’re not proud of,” Pineau says. “It will.”

Weigh the risks

Margaret Mitchell, one of the AI ethics researchers Google forced out in 2021, now at Hugging Face, sees the release of OPT as a positive move. But she thinks transparency has limits. Has the language model been tested with sufficient rigor? Do the foreseeable benefits outweigh the foreseeable harms, such as the generation of misinformation or of racist and misogynistic language?

“Releasing a large language model to the world, where a broad audience will use it or be affected by its output, comes with responsibilities,” she says. Mitchell notes that the model can generate harmful content not only on its own but also through the downstream applications that researchers build on top of it.

Meta AI audited OPT to remove some harmful behaviors, but the point, Pineau says, is to release a model that researchers can learn from, warts and all.

“There has been a lot of discussion about how to do this in a way that lets us sleep at night, knowing that there is a non-zero risk in terms of reputation, a non-zero risk in terms of harm,” she says. She rejects the idea that you shouldn’t release a model because it’s too dangerous, which is the reason OpenAI gave for not releasing GPT-3’s predecessor, GPT-2. “I understand the weaknesses of these models, but that’s not a research mindset,” she says.
