Alphabet's Google has released new information about the systems used to power and train its AI supercomputers.
In contrast to most major software companies, which rely on Nvidia's A100 processors for AI and machine-learning workloads, Google has developed a custom chip, the Tensor Processing Unit (TPU), which it uses for more than 90 per cent of its AI training work.
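For a sense of what targeting this hardware looks like in practice, below is a minimal sketch using JAX, Google's open-source machine-learning framework. The calls shown (jax.devices, jax.jit) are standard JAX API, but the snippet is illustrative only and is not code from Google's post; it assumes a machine with a TPU runtime attached, and falls back to the default backend elsewhere.

```python
# Illustrative sketch: a JAX workload compiled for whatever accelerator
# backend is available (TPU on Google's hardware).
import jax
import jax.numpy as jnp

print(jax.devices())  # e.g. [TpuDevice(id=0), TpuDevice(id=1), ...]

@jax.jit  # compiled via XLA for the available backend
def predict(w, x):
    return jnp.tanh(x @ w)

w = jnp.ones((512, 512))
x = jnp.ones((8, 512))
print(predict(w, x).shape)  # (8, 512)
```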
The search giant has described in a blog post how this bespoke system reportedly outperforms Nvidia's processors in both speed and power efficiency.
Google's TPU is now in its fourth generation. The company has revealed how it connected more than 4,000 of the chips into a supercomputer, using custom-developed optical switches to link the individual machines.
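The pod itself is managed by Google's infrastructure, but the programmer-facing side of spreading one workload across many connected chips can be sketched in JAX. The mesh and sharding setup below is a hedged illustration under that assumption (JAX 0.4+ and a small slice of TPU devices), not code taken from Google's post.

```python
# Illustrative sketch: sharding one computation across several chips.
# At pod scale, the cross-chip communication XLA inserts here rides on
# the optical interconnect described above.
import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

devices = np.array(jax.devices())           # all chips JAX can see
mesh = Mesh(devices, axis_names=("data",))  # 1-D mesh over those chips

# Shard a large batch across the "data" axis of the mesh.
x = jnp.ones((len(devices) * 128, 512))
x = jax.device_put(x, NamedSharding(mesh, P("data", None)))

@jax.jit
def layer(x):
    return jnp.tanh(x @ jnp.ones((512, 512)))

print(layer(x).shape)
```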
Google claims that its supercomputers make it simple to reconfigure the links between processors on the fly. It also stressed that its chips are up to 1.7 times faster and 1.9 times more power-efficient than a comparable system based on Nvidia's A100 chip.