Google's Bard Vs. Microsoft-Backed ChatGPT - What It Takes To Train AI

Alphabet Inc's (GOOG, GOOGL) Google on Tuesday elaborated on the supercomputers it uses to train its artificial intelligence models.

Google said the systems are faster and more power-efficient than comparable systems from Nvidia Corp (NVDA), Reuters reports.

Google designed its own custom chip, the Tensor Processing Unit (TPU), which handles more than 90% of the company's AI training work: feeding data through models so they can respond to queries with human-like text or generate images. The Google TPU is now in its fourth generation.

Google detailed how it strung more than 4,000 of the chips together into a supercomputer, using its custom-developed optical switches to connect the individual machines.

The rivalry between the company's Bard and ChatGPT, from Microsoft Corp (MSFT)-backed OpenAI, has intensified competition among companies that build AI supercomputers, as the so-called large language models that power these technologies have exploded in size.

Google trained PaLM, its largest publicly disclosed language model to date, by splitting it across two of the 4,000-chip supercomputers over 50 days.

Google said its chips are up to 1.7 times faster and 1.9 times more power-efficient than a comparably sized system based on Nvidia's A100 chip.

Google hinted at working on a new TPU that would compete with the Nvidia H100, with Google Fellow Norm Jouppi telling Reuters that Google has "a healthy pipeline of future chips."

In March, Microsoft disclosed how it had strung together tens of thousands of Nvidia's A100 graphics chips, the workhorse for training AI models.

OpenAI needed access to massive cloud computing resources for long periods as it trained an increasingly large set of AI programs called models.

Microsoft executive vice president Scott Guthrie said the project cost Microsoft several hundred million dollars.

Microsoft is already at work on the next generation of the AI supercomputer, part of an expanded deal with OpenAI under which Microsoft invested an additional $10 billion.

Microsoft is adding Nvidia's latest graphics chip for AI workloads, the H100, along with the newest version of Nvidia's InfiniBand networking technology to share data even faster.
