Google and Microsoft are racing against startups to create a computer chip powerful enough to support artificial intelligence
By now our future is clear: We are to be cared for, entertained, and monetised by artificial intelligence. Existing industries like healthcare and manufacturing will become much more efficient; new ones, built around augmented reality goggles and robot taxis, will become possible.
But as the tech industry busies itself with building out this brave new artificially intelligent, and profit-boosting, world, it’s hitting a speed bump: Computers aren’t powerful or efficient enough at the specific kind of math needed. While most of the attention on the AI boom is understandably focused on the latest exploits of algorithms beating humans at poker or piloting juggernauts, there’s a less obvious scramble going on to build a new breed of computer chip needed to power our AI future.
One data point shows how great that need is: software companies Google and Microsoft have become entangled in the messy task of creating their own chips. They’re being challenged by a new crop of startups peddling their own AI-centric silicon—and probably Apple, too. As well as transforming our lives with intelligent machines, the contest could shake up the established chip industry.
Microsoft revealed its AI chip-making project late on Sunday. At a computer vision conference in Hawaii, Harry Shum, who leads Microsoft’s research efforts, showed off a new chip created for the HoloLens augmented reality goggles. The chip, which Shum demonstrated tracking hand movements, includes a module custom-designed to efficiently run the deep learning software behind recent strides in speech and image recognition. Microsoft wants you to be able to smoothly reach out and interact with the virtual objects overlaid on your vision, and says nothing on the market could run machine learning software efficiently enough for a battery-powered device that sits on your head.
Microsoft’s project comes in the wake of Google’s own deep learning chip, announced in 2016. The TPU, for tensor processing unit, was created to make deep learning more efficient inside the company’s cloud. Google told WIRED earlier this year that the chip saved it from building 15 new data centres as demand for speech recognition soared. In May Google announced it had made a more powerful version of its TPU and that it would be renting out access to the chips to customers of its cloud computing business.
News that Microsoft has built a deep learning processor for HoloLens suggests Redmond wouldn’t need to start from scratch to prep its own server chip to compete with Google’s TPUs. Microsoft has spent several years making its cloud more efficient at deep learning using so-called field-programmable gate arrays, a kind of chip that can be reconfigured after it’s manufactured to make a particular piece of software or algorithm run faster. It plans to offer those to cloud customers next year. But when asked recently if Microsoft would make a custom server chip like Google’s, Doug Burger, the technical mastermind behind Microsoft’s rollout of FPGAs, said he wouldn’t rule it out. Pieces of the design and supply chain process used for the HoloLens deep learning chip could be repurposed for a server chip.
Google and Microsoft’s projects are the most visible part of a new AI-chip industry springing up to challenge established semiconductor giants such as Intel and Nvidia. Apple has for several years designed the processors for its mobile devices, and is widely believed to be working on creating a new chip to make future iPhones better at artificial intelligence. Numerous startups are working on deep learning chips of their own, including Groq, founded by ex-Google engineers who worked on the TPU. “Companies like Intel and Nvidia have been trying to keep on selling what they were already selling,” says Linley Gwennap, founder of semiconductor industry analysts the Linley Group. “We’ve seen these leading cloud companies and startups moving more quickly because they can see the need in their own data centres and the wider market.”
Graphics chip maker Nvidia has seen sales and profits soar in recent years because its chips are better suited than conventional processors to training deep learning software. But the company has mostly chosen to modify and extend its existing chip designs rather than making something tightly specialised to deep learning from scratch, Gwennap says.
You can expect the established chip companies to fight back. Intel, the world’s largest chipmaker, bought an AI chip startup called Nervana last summer and is working on a dedicated deep learning chip built on Nervana’s technology. Intel has the most sophisticated and expensive chip manufacturing operation on the planet. But the upstarts taking on the chip industry, large and small, say they have critical advantages. One is that they don’t have to make something that fits within an existing ecosystem of chips and software originally developed for something else.
“We’ve got a simpler task because we’re trying to do one thing and can build things from the ground up,” says Nigel Toon, CEO and co-founder of Graphcore, a UK startup working on a chip for artificial intelligence. Last week the company disclosed $30 million of new funding, including funds from Demis Hassabis, the CEO of Google’s DeepMind AI research division. Also in on the funding round: several leaders from OpenAI, the research institute co-founded by Elon Musk.
At the other end of the scale, the big cloud companies can exploit their considerable experience in running and inventing machine learning services and techniques.
“One of the things we really benefited from at Google was we could work directly with the application developers in, say, speech recognition and Street View,” says Norm Jouppi, the engineer who leads Google’s TPU project. “When you’re focused on a few customers and working hand in hand with them it really shortens the turnaround time to build something.”
Google and Microsoft built themselves up by inventing software that did new things with chips designed and built by others. As more is staked on AI, the silicon substrate of the tech industry is changing—and so is where it comes from.