Posted on April 21, 2018, 11:50 pm

  • Earlier this week Alibaba said it will make its own chip available for access through its cloud.
  • Google has developed chips for AI, and Facebook has a nascent chip effort.
(Pictured: Alibaba founder Jack Ma)

Chinese retailer and cloud infrastructure provider Alibaba is the latest company to design its own processor for running artificial intelligence software. It joins a crowded roster of companies already working on similar custom chips, including Alphabet, Facebook and Apple.

The trend could eventually threaten the traditional relationship between big buyers and big suppliers. In particular, chipmaker Nvidia, whose stock has surged as its graphics processing units have become the common choice for powering AI applications, could see its data center business erode as these roll-your-own-chip projects mature.

The companies are betting that their own chips can help their AI applications run better while lowering costs, as running hundreds of thousands of computers in a data center isn’t cheap. It could also reduce their dependency on the few vendors (like Nvidia) who make the types of graphics processors that excel at performing the functions modern AI applications require.

Nvidia still strong

On Thursday, a spokesman told CNBC that Alibaba’s recently formed research and development arm (dubbed the Academy for Discovery, Adventure, Momentum and Outlook) has been working on an AI chip called the Ali-NPU, and that the chip will become available for anyone to use through Alibaba’s public cloud.

The idea is to strengthen the Alibaba cloud and enable the future of commerce and a variety of AI applications within many industries, the spokesman said. In the fourth quarter Alibaba held 4 percent of the cloud infrastructure services market, meaning that it was smaller than Amazon, Microsoft, IBM and Google, according to Synergy Research Group.

Alibaba’s research academy has been opening offices around the world, including in Bellevue, Washington, near Microsoft’s headquarters. Last year Alibaba hired Liang Han away from Qualcomm to be an “AI chip architect” in the Silicon Valley city of Sunnyvale. Job listings show that Alibaba is looking to add more people to the effort at that location.

The activity bears a resemblance to Google-parent Alphabet’s efforts.

Internally, Alphabet engineers have been using Google’s custom-built tensor processing units, or TPUs, to accelerate their own machine learning tasks since 2015. Last year Google announced a second-generation TPU that could handle more challenging computing work, and in February it started letting the public use second-generation TPUs through its cloud.
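
For a sense of what making TPUs available “through its cloud” looks like in practice, here is a minimal sketch of attaching a TensorFlow training job to a Cloud TPU. The TPU name "my-tpu" is a placeholder for a real Cloud TPU resource, and the calls use TensorFlow’s later 2.x API, so treat this as illustrative rather than what Google shipped in 2018:

    import tensorflow as tf

    # Locate the Cloud TPU; "my-tpu" is a placeholder resource name.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)

    # Replicate model building and training across the TPU's cores.
    strategy = tf.distribute.TPUStrategy(resolver)
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
            tf.keras.layers.Dense(10),
        ])
        model.compile(
            optimizer="adam",
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        )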

The second-generation Google AI chip can do more than just run existing AI models; it can also handle the demanding work of training them, which means it can be used in place of graphics processing units from the likes of Nvidia.

The Alibaba and Google server chip programs are still in relative infancy, at least compared to Nvidia’s GPU business in data centers.

Indeed, Google and Nvidia remain partners, and Nvidia’s GPUs remain available on the Google cloud alongside the TPUs. Alibaba also offers Nvidia GPUs through its cloud and will continue to do so after the Ali-NPU comes out, the spokesman said.

In a note last July, analysts Matthew Ramsay and Vinod Srinivasaraghavan with Canaccord Genuity said that with the release of Nvidia’s latest GPUs, they have “increased confidence Nvidia will … more successfully defend pricing as data center sales scale and in-house and merchant ASIC [application-specific integrated circuit] offerings increase.”

You’ve got a chip, I’ve got a chip, everybody’s got a chip

Earlier this week it became clear that Facebook is also exploring chip development, an initiative that could one day lead the company to develop its own AI chips. That wasn’t a complete surprise, though, as last year Intel said it was working with Facebook on a new chip Intel had built for AI. But Intel hasn’t been involved in Google’s TPU or Alibaba’s Ali-NPU.

Facebook’s AI chip could improve operations for internal researchers, since training systems faster could mean more rapid experimentation, and boost the efficiency of systems doing calculations for the billions of people who use the company’s apps. The company’s push differs from Alibaba’s and Google’s in that it’s not primarily about giving customers an innovative type of hardware that could bring performance gains.

Meanwhile, Apple has built a “neural engine” into the chip inside its top-of-the-line iPhone X; Microsoft is working on an AI chip for the next version of its HoloLens mixed-reality headset; and Tesla has been developing an AI chip for its vehicles.

But all those devices are different from the servers that would house AI chips from the likes of Google and Alibaba. Data center servers would have more power, direct network connectivity and more data storage on board.
