Tech companies stand to benefit from specialized processors that make AI faster and more efficient.
Here’s a toast to cellphone cameras, which made their first appearance 20 years ago and have altered the course of do-it-yourself photography ever since.
More impressive, our phones have become smart enough to look at our photos for themselves. Apps can accurately outline people, identify wildlife and even translate text in real time.
That’s thanks to machine learning, the process by which computers can organize and identify data — and even perform tasks — without being told specifically how to do so. But as they honed this style of computing, engineers and programmers realized it would benefit from specialized hardware. Enter what’s known as an artificial intelligence accelerator, a new kind of processing unit.
Traditional CPUs — central processing units — are all-rounders. They can perform a lot of tasks and are pretty decent at all of them. But there’s a tradeoff for being generally good: CPUs don’t excel at any one function. Machine learning is very specialized. It doesn’t need the precision or flexibility that CPUs offer, but it does require enormous numbers of simple calculations performed in parallel, and CPUs aren’t designed for that.
“You can think of a CPU like a van,” says Paul Li, a Capital Group equity analyst who covers Asian semiconductors. “It’s supposed to cover any need you can think of. You can transport a lot of people or things in a van, but it doesn’t excel at any given task.”
AI accelerators, by contrast, are designed for exactly that kind of parallel work. They’re also intended to be as efficient as possible, requiring less power because they dispense with many of the CPU bells and whistles. For consumer devices, that means longer battery life and less heat; for data centers, it means less electricity consumption without sacrificing processing power.
In that way, AI accelerators are like sports cars. “There are workloads you almost never have to do in terms of AI,” Li says. “If you never need to carry more than one passenger, you can get rid of the back seats — your goal is to make these chips as fast and inexpensive as possible.”
Of course, there are some wrinkles in the current technology. There’s no established architecture for AI accelerators, as there is for PCs and iPhones. That complicates efforts to develop and improve chips. Similarly, there’s no clear industry leader. Amazon, Google and Facebook all design their own processors, and that could complicate efforts to create nonproprietary, off-the-shelf hardware. And machine learning itself is still rapidly evolving, forcing the hardware to adapt quickly to keep up.
Nevertheless, AI accelerators present intriguing possibilities for the companies leading the way on this front. They’re key to streamlining and speeding machine learning, which holds promise in a variety of fields, from powering autonomous vehicles to improving medical scanning — not to mention enabling creations that have yet to be dreamed up. Companies that successfully develop this technology stand to benefit greatly.
In traditional programming, a computer doesn’t solve problems so much as apply rigid rules to the information fed into it. Say you give a standard computer a picture and ask it, “Does this picture contain the color yellow?” The CPU would check its definition of yellow, then see if any part of the picture met that definition. This kind of computing is great for well-defined, straightforward questions but is not very good for solving amorphous or vague problems.
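The rule-based approach described above can be sketched in a few lines. This is a hypothetical illustration, not code from the article: the RGB thresholds defining “yellow” are made-up values, and the point is that the programmer must spell the rule out in advance.

```python
def contains_yellow(pixels):
    """Rule-based check: does any pixel match a fixed definition of yellow?

    `pixels` is a list of (red, green, blue) tuples with values 0-255.
    The thresholds below (high red, high green, low blue) are an
    illustrative, hand-written definition -- the rigid rule the CPU
    applies to whatever data it is fed.
    """
    for red, green, blue in pixels:
        if red > 200 and green > 180 and blue < 100:
            return True
    return False
```

A question like “is this picture pleasant?” has no such threshold test, which is exactly where this style of programming breaks down.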
Machine learning inverts that approach. Programmers gather as many data points as possible and ask the machine to determine how they’re related. If you want a computer to identify ducklings, for example, you give it as many pictures as possible and label the ones with ducklings. The machine will process each picture’s features, then guess whether it contains a duckling. When it’s right, the machine records that mix of features as being more likely to contain a duckling; when it’s wrong, it registers the combination as being less likely to contain one.
After enough comparisons, the machine is able to make better-educated guesses. It’s discovered that ducklings tend to be yellow, with a round body and head, and usually an orange bill. That’s not how the machine thinks of it, of course; it doesn’t know what a bill is. But it can assign values to parts of the picture — a bill has hard edges, a triangular shape and an orange coloration. Given enough refinement, the machine can start confidently declaring that images contain ducklings.
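The guess-and-correct loop described above can be sketched as a simple perceptron, one of the oldest machine learning algorithms. Everything here is an illustrative assumption: the feature names (yellowness, roundness, bill color) are hand-picked stand-ins for the values a real system would extract from pixels, and real classifiers are far larger.

```python
def train(examples, epochs=20, learning_rate=0.1):
    """Learn weights from labeled examples via guess-and-correct.

    `examples` is a list of (features, label) pairs, where features is a
    list of scores in [0, 1] (hypothetically: yellowness, roundness,
    orange-bill-ness) and label is 1 for duckling, 0 for not.
    """
    weights = [0.0] * len(examples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            guess = predict(weights, bias, features)
            error = label - guess  # 0 if right; +/-1 if wrong
            # A wrong guess nudges each weight toward the correct answer;
            # a right guess leaves everything unchanged.
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, features)]
            bias += learning_rate * error
    return weights, bias

def predict(weights, bias, features):
    """Guess 1 (duckling) if the weighted feature sum clears the bias."""
    total = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if total > 0 else 0
```

After training, the weights encode what the article describes informally: features that co-occur with ducklings end up with higher weight, without anyone telling the machine what a bill is.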
Though it’s complex, this machine learning process is a powerful tool for solving intricate problems. Rather than asking a programmer to preemptively think of every possible outcome, the computer gets to cycle through virtually every outcome to figure out how they’re related to various factors.
A case in point: Speech recognition software used to be notoriously unreliable. A team of programmers simply couldn’t anticipate every accent, voice timbre and speech impediment. But engineers could use machine learning to scan millions of speech clips and figure out how sounds and letters were interrelated, and how people tended to speak. Today, smartphone personal assistants are far more reliable.
The explosion of easily accessible information has been rocket fuel for machine learning. That’s why tailored hardware is showing up now; until fairly recently, old CPUs were sufficient.
“What is clear is that the legacy architectures are not optimal for AI,” says Isaac Sudit, a Capital Group equity analyst with responsibility for U.S. and European semiconductors. “We’re already doing it. We’re just not doing it optimally.”
That optimization doesn’t have to be dramatic. The learning process is very data- and processing-heavy, so small gains can offer significant returns, and that can make previously infeasible projects economically viable. Consumers will see workaday improvements, such as better voice recognition, faster response times from smart devices and longer battery life.
And AI accelerators could also help empower the kind of tantalizing functionality that’s been just out of our grasp for years.
Autonomous vehicles, for example, must process a huge amount of video and spatial data to safely navigate streets. As breakthroughs continue in the lidar and camera hardware that the car uses to “see,” muscular chips will allow machine learning to keep up and run efficiently in the field.
In stores, chips trained to process 3D space could be paired with smart mirror technology to provide a real-time try-it-on experience. Shoppers could flick through dozens of outfits in a few moments, seeing how clothes would look on their real reflections, without having to try anything on. Similar technology could tailor other items, such as bicycles custom-designed to a rider’s body.
As the technology matures, novel uses are sure to appear. After all, it wasn’t that long ago that the idea of a camera phone seemed unfathomable.
The above article originally appeared in the Spring 2020 issue of Quarterly Insights magazine.