Google's next-gen TPUs promise a 4.7x performance boost
At its Google I/O developer conference, Google on Tuesday announced the next generation of its Tensor Processing Units (TPU), its data center AI chips. This sixth generation of chips, dubbed Trillium, will launch later this year.
“Google was built for this moment. We’ve been pioneering TPUs for more than a decade,” Google CEO Sundar Pichai said in a press briefing ahead of the conference.
Announcing the next generation of TPUs is something of a tradition at I/O, even though the chips only roll out later in the year. When they do arrive, though, they will offer a 4.7x boost in compute performance per chip compared to the fifth generation, according to Pichai.
In part, Google achieved this by expanding the chip’s matrix multiply units (MXUs) and by pushing the overall clock speed. In addition, Google also doubled the memory bandwidth for the Trillium chips.
Perhaps even more important, though, is that Trillium features the third generation of SparseCore, which Google describes as “a specialized accelerator for processing ultra-large embeddings common in advanced ranking and recommendation workloads.” This, the company argues, will allow Trillium TPUs to train models faster and serve them with lower latency.
Pichai also described the new chips as Google’s “most energy-efficient” TPUs yet, something that’s especially important as the demand for AI chips continues to increase exponentially. “Industry demand for ML compute has grown by a factor of 1 million in the last six years, roughly increasing tenfold every year,” he said. That’s not sustainable without investing in reducing the power demands of these chips. Google promises that the new TPUs are 67% more energy-efficient than the fifth-generation chips.
Google’s recent TPUs have tended to come in a number of variants. So far, Google hasn’t provided any additional details about the new chips or how much using them will cost in Google Cloud.
Earlier this year, Google also announced that it would be among the first cloud providers to offer access to Nvidia’s next-gen Blackwell processors. Even so, developers will have to wait until early 2025 to get access to those chips.
“We’ll continue to invest in the infrastructure to power our AI advances and we’ll continue to break new ground,” Pichai said.