Artificial Intelligence Masters Language. Should We Trust What It Says?


But the large language model approach has also drawn significant criticism over the past few years, even as GPT-3's fluency has dazzled many observers. Some skeptics argue that the software is capable only of blind mimicry: it imitates the syntactic patterns of human language but cannot generate its own ideas or make complex decisions, a limitation that separates it from genuine human intelligence. For these critics, GPT-3 is just the latest shiny object in a long history of AI hype, diverting research money and attention to what will ultimately prove a dead end and keeping other, more promising approaches from maturing. Other critics believe that software like GPT-3 will remain forever compromised by the biases, propaganda, and misinformation in the data it was trained on, which means that using it for anything other than parlor tricks will always be irresponsible.

Wherever you stand in this debate, the pace of recent improvement in large language models makes it hard to imagine that they won't be deployed commercially in the coming years. And that raises the question of exactly how they, and the other dizzying advances in AI, should be released to the world. In the rise of Facebook and Google, we have seen how dominance in a new technology space can quickly lead to astonishing power over society, and AI threatens to be even more transformative in its ultimate effects than social media. What is the right kind of organization to build and own something so big and ambitious, with so much promise and so much potential for abuse?

Or should we build it at all?

Origins of OpenAI

Rewind to July 2015, when a small group of tech intellectuals gathered for a private dinner at the Rosewood Hotel on Sand Hill Road, the emblematic heart of Silicon Valley. The dinner took place against the backdrop of two recent developments in the tech world, one positive and one more troubling. On the one hand, radical advances in computing power, along with new breakthroughs in the design of neural networks, had created a palpable sense of excitement in the field of machine learning; there was a feeling that the long "AI winter", the decades in which the field failed to live up to its early promise, was finally beginning to thaw. A group at the University of Toronto had trained a program called AlexNet to identify classes of objects in photographs (dogs, castles, tractors, tables) with far greater accuracy than any previous neural network. Google quickly stepped in to hire the AlexNet creators, while also acquiring DeepMind and launching a research group of its own called Google Brain. The mainstream adoption of smart assistants like Siri and Alexa had shown that even scripted agents could be consumer hits.

But around the same time, public attitudes toward Big Tech were undergoing a seismic shift, as once-celebrated companies like Google and Facebook came under criticism for their near-monopoly power, their amplification of conspiracy theories, and their relentless pull of our attention toward algorithmic feeds. Long-term fears about the dangers of AI were appearing on op-ed pages and on the TED stage. Nick Bostrom of the University of Oxford had published his book "Superintelligence", laying out a series of scenarios in which advanced AI could deviate from humanity's interests with potentially disastrous results. In late 2014, Stephen Hawking told the BBC that "the development of full artificial intelligence could spell the end of the human race." The corporate consolidation cycle that characterized the social media era already seemed to be underway with AI, but this time the algorithms might not just polarize us or sell our attention to the highest bidder; they might eventually destroy humanity itself. And once again, all the evidence suggested that this power would be controlled by a few Silicon Valley megacorporations.

The agenda for the dinner on Sand Hill Road that July night was nothing if not ambitious: figuring out the best way to steer AI research toward the most positive outcome possible, avoiding both the short-term negative consequences that had marred the Web 2.0 era and the long-term existential threats. From that dinner, a new idea began to take shape, one that would soon become a full-time obsession for Sam Altman of Y Combinator and Greg Brockman, who had recently left Stripe. Interestingly, the idea was not so much technological as organizational: if artificial intelligence was to be released into the world in a safe and beneficial way, it would require innovation on the level of governance, incentives, and stakeholder involvement. The technical path to what the field calls artificial general intelligence, or AGI, was not yet clear to the group. But the troubling predictions of Bostrom and Hawking convinced them that AI's attainment of human-like intelligence would concentrate a staggering amount of power, and moral burden, in whoever eventually managed to invent and control it.

In December 2015, the group announced the creation of a new entity called OpenAI. Altman had signed on to be the organization's CEO, with Brockman overseeing the technology; another attendee at the dinner, the AlexNet co-creator Ilya Sutskever, was recruited from Google as head of research. (Elon Musk, who was also present at the dinner, joined the board of directors but left in 2018.) In a blog post, Brockman and Sutskever outlined the scope of their goals: "OpenAI is a non-profit artificial intelligence research company," they wrote. "Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." They added: "We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible."

Three years later, the OpenAI founders published a public charter spelling out the basic principles behind the new organization. From the start, the document read as a not-so-subtle dig at Google's slogan of "Don't be evil", an acknowledgment that maximizing the social benefits of new technology, and minimizing its harms, was not always a simple calculation. While Google and Facebook had achieved global dominance through closed-source algorithms and proprietary networks, the OpenAI founders promised to go in the other direction, sharing new research and code freely with the world.
