AI is reinventing what computers are

Fall 2021: the season of pumpkins, pecan pies, and peachy new phones. Right on schedule, Apple, Samsung, Google, and others release their latest models. These fixtures in the consumer technology calendar no longer inspire the surprise and curiosity of those exciting early days. But behind all the marketing glare, something extraordinary is happening.

Google’s latest product, the Pixel 6, is the first phone to have a separate chip dedicated to AI sitting next to its standard processor. And the chip that has powered iPhones for the past few years contains something Apple calls a “neural engine,” also dedicated to AI. Both chips are better suited to the kinds of computation involved in training and running machine-learning models on our devices, such as the AI that powers your camera. Almost without us noticing, AI has become part of our daily lives. And it is changing how we think about computing.

What does that mean? Computers haven’t changed much in 40 or 50 years. They’re smaller and faster, but they’re still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they’re programmed, and how they’re used. Ultimately, it will change what they are for.

“The essence of computing is shifting from crunching numbers to decision making,” says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is breaking computers out of their boxes.

Haste makes waste

The first change concerns how computers, and the chips that control them, are made. Traditional computing gains came as machines got faster at carrying out one calculation after another. For decades the world benefited from chip speed-ups that arrived with metronomic regularity as chipmakers kept pace with Moore’s Law.

But the deep-learning models that power today’s AI applications require a different approach: they need vast numbers of less precise calculations to be carried out all at the same time. That calls for a new kind of chip, one that can move data around as quickly as possible, making sure it’s available when and where it’s needed. When deep learning exploded onto the scene a decade or so ago, there were already specialized chips that were pretty good at this: graphics processing units, or GPUs, which were designed to redraw an entire screen of pixels dozens of times a second.
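To make that trade-off concrete, here is a minimal sketch in Python with NumPy standing in for the specialized hardware: neural-network workloads are dominated by huge batches of multiply-adds that tolerate reduced precision, so dropping from 32-bit to 16-bit numbers barely changes the result while halving the amount of data that has to be moved around. The matrix sizes and the error figure it prints are illustrative, not measurements of any particular chip.

```python
# A rough sketch of low-precision arithmetic, the kind of math AI chips favor.
# NumPy stands in for the hardware; sizes here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((1024, 1024)).astype(np.float32)
b = rng.standard_normal((1024, 1024)).astype(np.float32)

exact = a @ b                                                  # full 32-bit result
approx = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

# The half-precision result is slightly off, but for a neural network that
# difference is usually lost in the noise of training.
rel_err = np.abs(exact - approx).max() / np.abs(exact).max()
print(f"worst-case relative error at half precision: {rel_err:.4f}")
```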

Anything can be a computer. Indeed, most household items, from toothbrushes to light switches and doorbells, already have a smart version.

Now chipmakers like Intel, Arm, and Nvidia, which supplied many of the first GPUs, are pivoting to make hardware tailored specifically for AI. Google and Facebook are also pushing into this industry for the first time, in a race to gain an AI edge through hardware.

For example, the chip inside the Pixel 6 is a new mobile version of Google’s tensor processing unit, or TPU. Unlike traditional chips, which are geared toward ultra-fast, precise calculations, TPUs are designed for the high-volume but low-precision calculations required by neural networks. Google has been using these chips in-house since 2015: they process people’s photos and natural-language search queries. Google’s sister company DeepMind uses them to train its AIs.

Over the last few years, Google has made TPUs available to other companies, and these chips – and similar ones developed by others – are becoming the default in the world’s data centers.

AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm – a type of AI that learns how to solve a task through trial and error – to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would have thought of – but they worked. This kind of AI could one day develop better, more efficient chips.
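As a rough illustration of that trial-and-error loop, the toy below (plain Python, written for this article rather than taken from Google’s system, with made-up layout names and rewards) has an agent choose among three hypothetical chip layouts using only noisy reward feedback, gradually settling on the best one. Google’s actual floorplanning work used deep reinforcement learning over placement actions; this sketch only shows the underlying explore-and-exploit idea.

```python
# A toy epsilon-greedy agent: learn which option is best purely from rewards.
# Layout names and reward values are invented for illustration.
import random

random.seed(0)

# Hidden "quality" of three candidate layouts; the agent never sees these
# directly, only noisy rewards from trying them.
true_reward = {"layout_a": 0.2, "layout_b": 0.5, "layout_c": 0.8}

estimates = {k: 0.0 for k in true_reward}
counts = {k: 0 for k in true_reward}

for step in range(1000):
    # Explore occasionally; otherwise exploit the current best estimate.
    if random.random() < 0.1:
        choice = random.choice(list(true_reward))
    else:
        choice = max(estimates, key=estimates.get)

    reward = true_reward[choice] + random.gauss(0, 0.1)   # noisy feedback
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print("best layout found:", max(estimates, key=estimates.get))
```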

Show, don’t tell

The second change has to do with how computers are told what to do. “We’ve been programming computers for the last 40 years; we will train them for the next 40,” says Chris Bishop, head of Microsoft Research in the UK.

Traditionally, to get a computer to do something like recognize speech or identify objects in an image, programmers first had to come up with rules for the computer to follow.

With machine learning, programmers no longer write rules. Instead, they create a neural network that learns these rules for itself. This is a fundamentally different way of thinking.
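A small sketch makes the contrast concrete. In the hypothetical example below (pure NumPy, invented for illustration), the “traditional” program encodes the decision rule by hand, while the machine-learning version is shown only labeled examples and a single perceptron works out an equivalent rule for itself.

```python
# Hand-written rules vs. a rule learned from examples: a minimal sketch.
# The task (separating 2D points by a line) is a made-up toy.
import numpy as np

rng = np.random.default_rng(0)

# --- Traditional approach: the programmer writes the rule explicitly. ---
def rule_based(point):
    x, y = point
    return 1 if y > 2 * x + 1 else 0          # a human decided this boundary

# --- Machine-learning approach: show labeled examples and let a tiny
#     model (a single perceptron) find the boundary itself. ---
points = rng.uniform(-5, 5, size=(200, 2))
labels = np.array([rule_based(p) for p in points])   # training data

w = np.zeros(2)
b = 0.0
for _ in range(50):                            # perceptron learning loop
    for p, t in zip(points, labels):
        pred = 1 if p @ w + b > 0 else 0
        w += 0.1 * (t - pred) * p              # nudge weights toward the answer
        b += 0.1 * (t - pred)

test = np.array([0.0, 4.0])
print("rule says: ", rule_based(test))
print("model says:", 1 if test @ w + b > 0 else 0)
```

The point is not the algorithm itself but the division of labor: in the first half a person wrote the boundary down; in the second half the program discovered it from data.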
