What does the history of AI tell us about its future?


But what computers have traditionally been bad at was strategy—the ability to consider the shape of a game over many moves in the future. This was where humans still excelled.

Or so Kasparov thought, until a Deep Blue move in Game 2 rattled him. It seemed so sophisticated that Kasparov began to worry: perhaps the machine was far better than he had thought! Convinced he had no way to win, he resigned the second game.

But he shouldn’t have. Deep Blue, it turned out, wasn’t actually that good. Kasparov had failed to spot a move that would have let the game end in a draw. He was psyching himself out: worried that the machine might be far more powerful than it really was, he had begun to see humanlike reasoning where none existed.

Having lost his rhythm, Kasparov kept playing worse and worse. He psyched himself out over and over again. Early in the sixth, winner-takes-all game, he made a move so poor that chess observers cried out in shock. “I wasn’t in the mood to play at all,” he later said at a press conference.

IBM made the most of its victory. In the press frenzy that followed Deep Blue’s success, the company’s market cap rose by $11.4 billion in a single week. Even more significant, IBM’s triumph felt like a thaw in the long AI winter. If chess could be conquered, what was next? The public’s imagination reeled.

“That’s what gets people’s attention,” Campbell tells me.


The fact is that it was not surprising that a computer beat Kasparov. Most people paying attention to AI and chess expected this to happen eventually.

Chess may seem like the pinnacle of human thought, but it’s not. In fact, it’s a mental task quite amenable to brute-force computation: the rules are clear, there’s no hidden information, and a computer doesn’t even need to keep track of what happened in previous moves. It just evaluates the current position of the pieces.
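To make the brute-force idea concrete, here is a minimal sketch of the kind of look-ahead search such engines perform, assuming the third-party python-chess library for move generation and a crude material count for scoring. It illustrates the general technique only; it is not Deep Blue’s actual algorithm.

```python
# Minimal brute-force search sketch (not Deep Blue's real code): look a fixed
# number of moves ahead and score positions by simple material count.
# Assumes the python-chess library (pip install chess) for move generation.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> int:
    """Score only the current position -- no history, no hidden information."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def minimax(board: chess.Board, depth: int, maximizing: bool) -> float:
    """Try every legal move `depth` plies deep and back up the best score."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = float("-inf") if maximizing else float("inf")
    for move in board.legal_moves:
        board.push(move)
        score = minimax(board, depth - 1, not maximizing)
        board.pop()
        best = max(best, score) if maximizing else min(best, score)
    return best

# From the starting position, a shallow search already explores thousands of lines.
print(minimax(chess.Board(), depth=3, maximizing=True))
```

Real engines add far better evaluation functions and pruning, but the core loop is exactly this kind of exhaustive look-ahead, which is why raw speed mattered so much.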

“There are very few problems out there where, as with chess, you have all the information you could possibly need to make the right decision.”

Everyone knew computers would overwhelm a human once they got fast enough. It was just a question of when. By the mid-’90s, “the writing was already on the wall, in a sense,” says Demis Hassabis, CEO of the artificial-intelligence company DeepMind, part of Alphabet.

Deep Blue’s victory was the moment that showed just how limited hand-coded systems could be. IBM had spent years and millions of dollars developing a computer to play chess. But it couldn’t do anything else.

“It didn’t lead to the breakthroughs that allowed the [Deep Blue] AI to have a huge impact on the world,” Campbell says. They hadn’t really discovered any principles of intelligence, because the real world doesn’t resemble chess. “There are very few problems out there where, as with chess, you have all the information you could possibly need to make the right decision,” Campbell adds. “Most of the time there are unknowns. There’s randomness.”

But even as Deep Blue was mopping the floor with Kasparov, a handful of scrappy upstarts were laboring on a radically more promising form of AI: the neural network.

With neural networks, the idea was not to patiently write rules for every decision an AI would make, as with expert systems. Instead, training and reinforcement strengthen internal connections, in rough emulation (so the theory goes) of how the human brain learns.
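As a rough illustration of that idea (and nothing more), the toy sketch below trains a single artificial neuron on made-up data: its connection weights start blank and are repeatedly nudged in whatever direction reduces the error, with no rules ever written by hand.

```python
# Toy illustration: a single artificial "neuron" whose connection weights are
# strengthened or weakened by repeated exposure to examples, rather than by
# hand-written rules. The data here is synthetic, purely for demonstration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # 200 examples, 2 input features
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # the pattern to be learned

w = np.zeros(2)                               # connection weights, initially blank
b = 0.0

for _ in range(500):                          # training: repeated exposure to the data
    pred = 1 / (1 + np.exp(-(X @ w + b)))     # sigmoid activation
    grad_w = X.T @ (pred - y) / len(y)        # how each connection contributed to error
    grad_b = np.mean(pred - y)
    w -= 0.5 * grad_w                         # strengthen/weaken connections accordingly
    b -= 0.5 * grad_b

print("learned weights:", w)                  # the rule emerges from data, not from a programmer
```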

1997: After Garry Kasparov defeated Deep Blue in 1996, IBM asked the world chess champion for a rematch in New York on an upgraded machine.

AP PHOTO / ADAM NADEL

The idea had been around since the 1950s. But training a large neural network in a useful way required lightning-fast computers, tons of memory, and lots of data. None of those were readily available at the time. Even into the ’90s, neural networks were widely seen as a waste of time.

“At the time, most people in artificial intelligence thought neural networks were just garbage,” says Geoff Hinton, a retired computer science professor at the University of Toronto and a pioneer in the field. “I have been called a ‘true believer’”—not a compliment.

But by the 2000s, the computer industry was evolving in ways that would make neural networks viable. Video gamers’ hunger for ever-better graphics created a huge industry in ultrafast graphics processing units, which turned out to be perfectly suited to neural-network math. Meanwhile, the internet was exploding, producing a flood of images and text that could be used to train the systems.

By the early 2010s, these technical leaps were allowing Hinton and his crew of true believers to take neural networks to new heights. They could now create networks with many layers of neurons (which is what the “deep” in “deep learning” means). In 2012 his team handily won the annual ImageNet competition, in which AIs compete to recognize elements in images. It stunned the world of computer science: self-learning machines were finally viable.
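To show what “deep” means in practice, the sketch below stacks several layers of neurons using PyTorch; the layer sizes are arbitrary and chosen purely for illustration, not taken from any system described here.

```python
# "Deep" simply means many stacked layers of neurons, each feeding the next.
# Layer sizes are arbitrary, purely for illustration.
import torch.nn as nn

deep_net = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),   # layer 1: raw inputs (e.g. pixels) in
    nn.Linear(512, 256), nn.ReLU(),   # layer 2: features built from layer 1
    nn.Linear(256, 128), nn.ReLU(),   # layer 3: features built from features
    nn.Linear(128, 10),               # layer 4: class scores out
)
print(deep_net)
```

Earlier networks typically had only one or two such layers; stacking more lets each layer build increasingly abstract patterns on top of the ones below, which is what made tasks like ImageNet recognition tractable.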

In the decade since the deep-learning revolution began, neural networks and their pattern-recognition abilities have colonized every corner of daily life. They help Gmail autocomplete your sentences, help banks detect fraud, let photo apps automatically recognize faces, and, in the case of OpenAI’s GPT-3 and DeepMind’s Gopher, write long, human-sounding prose and summarize texts. They are even changing how science is done; in 2020, DeepMind debuted AlphaFold2, an AI that can predict how proteins will fold, a superhuman skill that can help guide researchers in developing new drugs and treatments.

Meanwhile, Deep Blue vanished, leaving no useful inventions in its wake. Chess playing, it turned out, wasn’t a computer skill that was needed in everyday life. “What Deep Blue ended up showing was the shortcomings of trying to handcraft everything,” says DeepMind founder Hassabis.

IBM tried to remedy the situation with Watson, another specialized system, this one designed to tackle a more practical problem: getting a machine to answer questions. It used statistical analysis of massive amounts of text to achieve an understanding of language that was, for its time, state of the art; it was more than a simple if-then system. But Watson faced unlucky timing: only a few years later it was eclipsed by the deep-learning revolution, which brought in a generation of language-crunching models far more nuanced than Watson’s statistical techniques.

Because pattern recognition is “incredibly powerful,” deep learning has eclipsed old-school AI as the go-to approach, says Daphne Koller, a former Stanford professor who founded and runs Insitro, which uses neural networks and other forms of machine learning to investigate new drug treatments. The flexibility of neural networks, the sheer variety of ways pattern recognition can be used, is why there hasn’t yet been another AI winter. “Machine learning has actually delivered value,” she says, something the “previous waves of enthusiasm” in AI never did.

The flipped fortunes of Deep Blue and neural networks show how bad we were, for so long, at judging what is hard, and what is valuable, in AI.

For decades, people assumed that mastering chess would be important because chess is hard for humans to play at a high level. But chess turned out to be fairly easy for computers to master, because it is so logical.

Far harder for computers to learn were the mundane, unconscious mental tasks that humans perform all the time, like carrying on a lively conversation, piloting a car through traffic, or reading the emotional state of a friend. We do these things so effortlessly that we rarely realize how tricky they are, and how much fuzzy, grayscale judgment they require. Deep learning’s great utility came from being able to capture small pieces of this subtle, unheralded human intelligence.


Still, there is no final victory in AI. Deep learning may be riding high right now, but it is also drawing sharp criticism.

“For a very long time, there was a techno-chauvinistic enthusiasm that artificial intelligence would solve any problem!” says Meredith Broussard, a programmer turned journalism professor at New York University and author of Artificial Unintelligence. But as she and other critics have pointed out, deep-learning systems are often trained on biased data and absorb those biases. The computer scientists Joy Buolamwini and Timnit Gebru discovered that three commercially available visual AI systems were terrible at analyzing the faces of darker-skinned women. Amazon trained an AI to vet résumés, only to find that it ranked women low.

While computer scientists and many AI engineers are now aware of these bias problems, they are not always sure how to deal with them. On top of that, neural networks are also “big black boxes,” says Daniela Rus, an artificial-intelligence veteran who now leads MIT’s Computer Science and Artificial Intelligence Laboratory. Once a neural network is trained, its mechanics are not easily understood, even by its creator. It is not clear how it reaches its conclusions, or how it will fail.

“For a very long time, there was a techno-chauvinistic enthusiasm that artificial intelligence would solve any problem!”

Rus thinks it may be fine to rely on a black box for a task that isn’t “safety critical.” But what about a higher-stakes job, like autonomous driving? “It’s actually quite remarkable that we’ve been able to put so much trust and faith in them,” she says.

This is where Deep Blue had an advantage. The old-school style of handcrafted rules may have been brittle, but it was comprehensible. The machine was complex, but it was no mystery.
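A caricature of that old style, with invented thresholds, makes the trade-off easy to see: every decision is an explicit, human-readable rule, so the reasoning is transparent, but any case the rules don’t list falls through the cracks.

```python
# A caricature of hand-coded AI (thresholds are made up for illustration):
# every decision is an explicit rule a human wrote and can inspect.
# Brittle -- unlisted situations fall through -- but never mysterious.
def loan_decision(income: float, debt: float, has_defaulted: bool) -> str:
    if has_defaulted:
        return "reject: prior default"
    if debt > 0.5 * income:
        return "reject: debt-to-income ratio above 50%"
    if income < 20_000:
        return "reject: income below threshold"
    return "approve"

# The answer comes with its reason attached, unlike a trained neural network.
print(loan_decision(income=45_000, debt=10_000, has_defaulted=False))
```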


Ironically, as engineers and computer scientists grapple with the limitations of pattern matching, this old style of programming may be making a comeback.

Language generators like OpenAI’s GPT-3 or DeepMind’s Gopher can take a few sentences you have typed and keep going, writing pages of plausible-sounding prose. But despite some impressive mimicry, Gopher “still doesn’t really understand what it’s saying,” Hassabis says. “Not really.”
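GPT-3 and Gopher themselves are not freely runnable, but the same prompt-and-continue pattern can be tried with a small open model; the sketch below uses the Hugging Face transformers library with GPT-2 as a stand-in.

```python
# Prompt-and-continue, illustrated with a small open model (GPT-2) as a
# stand-in for GPT-3 or Gopher, which are not publicly runnable.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The history of artificial intelligence shows that"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])   # fluent continuation, with no understanding behind it
```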

Similarly, visual AI can make terrible mistakes when it encounters an edge case. Self-driving cars have slammed into fire trucks parked on highways, because in all the millions of hours of video they had been trained on, they had never encountered that situation. Neural networks have, in their own way, a version of the “brittleness” problem.
