The TPU (Tensor Processing Unit) is Google’s custom AI hardware for its cloud platform, built with the objective of making machine learning systems faster, cheaper, and easier to run. In his Google I/O 2018 keynote, Sundar Pichai declared that TPU 3.0 will be 8x more powerful than its predecessor.
[embed]https://www.youtube.com/watch?v=ogfYd705cRs&t=5750[/embed]
A TPU 3.0 pod is expected to crunch numbers at approximately 100 petaflops, compared to the 11.5 petaflops delivered by a TPU 2.0 pod - a ratio of roughly 8.7x, consistent with the 8x claim. Pichai did not comment on the numerical precision behind these figures - something which can make a lot of difference in real-world applications.
Not many other TPU 3.0 features were disclosed. The chip remains exclusive to Google, powering both its own applications and customers’ workloads on the Google Cloud Platform.
An important takeaway from Pichai’s announcement is that TPU 3.0 is expected to be power-hungry - so much so that Google’s data centers deploying the chips now require liquid cooling to handle the heat dissipation. This is not necessarily a good thing, as the need for dedicated cooling systems will only grow as Google scale up their infrastructure. A few analysts and experts, including Patrick Moorhead, tech analyst and founder of Moor Insights & Strategy, have raised concerns about this on Twitter.
[embed]https://twitter.com/PatrickMoorhead/status/993905641390391296[/embed]
[embed]https://twitter.com/ryanshrout/status/993906310197506048[/embed]
[embed]https://twitter.com/LostInBrittany/status/993904650888724480[/embed]
The evolution of the Tensor Processing Unit has been rather interesting. When the TPU was initially released back in 2016, it was a relatively simple accelerator for inference, built around 8-bit integer math and a small instruction set of barely a dozen operations. However, Google needed more computing power to train the ever-growing neural networks behind their cloud applications. TPU 2.0 added floating-point support - making it suitable for training as well as inference - along with 8GB of HBM (High Bandwidth Memory) for faster performance.
With 3.0, the TPU has stepped up another notch, delivering the power and performance Google need to process data and run their AI models effectively. The jump in pod-level processing capability from 11.5 petaflops to more than 100 petaflops is a clear step in this direction. Optimized for TensorFlow - the most popular machine learning framework out there - TPU 3.0 will clearly have an important role to play as Google look to infuse AI into all their major offerings. Proof of this lies in some of the smart features announced at the conference - Smart Compose in Gmail, an improved Google Assistant, Gboard, Google Duplex, and more.
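For a sense of what that TensorFlow optimization looks like in practice, here is a minimal sketch of pointing a Keras model at a Cloud TPU. It uses the tf.distribute.TPUStrategy API from current TensorFlow releases (at the time of this announcement the equivalent lived in tf.contrib as the TPUEstimator); the TPU address, model, and data below are placeholders for illustration, not anything Google announced.

```python
import numpy as np
import tensorflow as tf

# Locate and initialize the TPU. On a Colab or Cloud TPU VM, an empty
# tpu='' argument resolves the attached TPU automatically; elsewhere you
# would pass your TPU's name or gRPC address (environment-specific).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Variables created inside the strategy scope are replicated across the
# TPU cores, and each training step then runs on the TPU.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer='adam',
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=['accuracy'],
    )

# Toy stand-in data; real TPU workloads typically stream input from
# Google Cloud Storage via tf.data pipelines.
x = np.random.rand(2048, 784).astype('float32')
y = np.random.randint(0, 10, size=(2048,)).astype('int32')
model.fit(x, y, batch_size=256, epochs=1)
```

Notably, the TPU’s matrix units do their heavy lifting in reduced precision (bfloat16 on TPU v2 and v3, with higher-precision accumulation) - which is exactly why the unspecified precision behind the petaflops figures above matters.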
It comes as no surprise that almost all the major tech giants are investing in cloud-ready AI technology. These companies are investing specifically in hardware to make machine learning faster and more efficient - to make sense of data at scale and to serve the intelligent predictions that improve their operations.
There are quite a few examples to demonstrate this. Facebook’s infrastructure is being optimized for the Caffe2 and PyTorch frameworks, designed to process the massive amounts of information it handles on a day-to-day basis. Intel have come up with their Nervana neural network processors in a bid to redefine AI. It is also common knowledge that cloud giants like Amazon want to build efficient cloud infrastructure powered by Artificial Intelligence. Just a few days back, Microsoft previewed their Project Brainwave at the Build 2018 conference, claiming super-fast Artificial Intelligence capabilities that rival Google’s very own TPU.
We can safely infer that Google needed hardware like the TPU 3.0 to stay on the elite list of prime enablers of Artificial Intelligence in the cloud, empowering efficient data management and processing.
Check out our coverage of Google I/O 2018 for some exciting announcements on other Google products in store for developers and Android fans.