‘Cloud TPU’ Bolsters Google’s ‘AI-First’ Strategy
Google fleshed out its artificial intelligence efforts during its annual developers conference this week with the rollout of an initiative called Google.ai that serves as a clearinghouse for machine learning research, tools and applications designed to help automate AI development.
Google (NASDAQ: GOOGL) executives also laid out the company’s shift to an “AI-first world” in which machine learning is applied to everything from speech recognition to Google Maps.
Among the AI development tools unveiled on Wednesday (May 17) is the latest iteration of the custom hardware behind the company’s TensorFlow machine learning framework, called the Cloud Tensor Processing Unit (TPU). The four-processor board is described as a machine-learning “accelerator” that can be accessed from the cloud and used to train machine-learning models. Each board delivers up to 180 teraflops of computing horsepower.
Cloud TPUs “are optimized for both training and inference,” explained Google CEO Sundar Pichai. The company said its AI accelerators, running on the Google Compute Engine, could be mixed and matched with the latest Volta GPU announced by Nvidia (NASDAQ: NVDA) last week or Skylake CPUs from Intel (NASDAQ: INTC).
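As a rough illustration of how a developer reaches this hardware, the sketch below trains a small model on a Cloud TPU using TensorFlow’s tf.distribute API. The TPU name and the MNIST model are placeholders for this example, not details from Google’s announcement, and the code assumes a provisioned Cloud TPU is reachable.

    # Illustrative sketch only: train a small Keras model on a Cloud TPU
    # via TensorFlow's distribution API. "my-tpu" is a hypothetical TPU
    # name; substitute the name of a provisioned Cloud TPU.
    import tensorflow as tf

    # Locate and initialize the TPU system, then build a TPU strategy.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)

    # Variables created inside the strategy scope are replicated across
    # the TPU cores; Keras drives the distributed training loop.
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

    # MNIST keeps the sketch self-contained; any tf.data pipeline works.
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
    model.fit(x_train, y_train, batch_size=1024, epochs=1)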
The AI initiative addresses the reality that building and training machine learning models is often an arduous and expensive process. Greater availability of tools like the TPU also responds to an underlying shortage of skilled developers, Pichai added.
Other machine learning tools in the AI initiative include APIs for speech and computer vision, the company added.
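To give a flavor of what those hosted APIs do, the following sketch labels an image with the Cloud Vision API’s Python client library; the image filename is a placeholder, and the call assumes credentials are already configured.

    # Illustrative sketch only: label an image with the Cloud Vision API
    # (pip install google-cloud-vision). "photo.jpg" is a hypothetical
    # local file; authentication is assumed to be set up beforehand.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open("photo.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    # Ask the service for label annotations (e.g. "dog", "cat").
    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(f"{label.description}: {label.score:.2f}")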
Google’s pivot to machine learning and custom hardware is also reflected in a range of products the search giant unveiled this week, nearly all incorporating AI. “In an AI-first world, we are rethinking all our products, and applying machine learning and AI to solve user problems,” Pichai stressed. For example, machine learning is now used to rank Google search results.
The drive to push AI into more platforms extends even to datacenters. Last year, Google launched its Tensor Processing Units as “custom hardware for machine learning,” the Google executive noted, asserting they are as much as 30 times faster than GPUs and CPUs. “We use TPUs across all our products.”
Cloud TPUs were designed to be ganged together in datacenters: 64 of the boards, stacked into what Google dubs a “TPU pod,” essentially form “one big supercomputer,” Pichai explained, with each pod capable of 11.5 petaflops.
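The pod figure is consistent with simple aggregation of the per-board number, as a quick back-of-envelope check shows:

    # Back-of-envelope check of the pod figure: 64 boards at up to 180
    # teraflops each works out to roughly 11.5 petaflops.
    TFLOPS_PER_BOARD = 180
    BOARDS_PER_POD = 64
    pod_petaflops = BOARDS_PER_POD * TFLOPS_PER_BOARD / 1000
    print(pod_petaflops)  # 11.52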
Google is promoting its new cloud processors as nothing short of “an important advance in technical infrastructure for the AI era.” Cloud TPUs are running now on the Google Compute Engine, Pichai added. In so doing, Google is attempting to differentiate its cloud service as best suited for deep learning and machine learning development.
The cloud-based Tensor processors announced this week are aimed at compute-intensive training of machine learning models as well as real-time tasks like inferring whether an image shows a dog or a cat.
Market watchers praised the Google.ai initiative while noting that few details have so far been released. “Building models today is still a massively time- and processor-intensive task which is mostly done manually and is very expensive,” said Richard Windsor, an analyst with Edison Investment Research. “If machines can build and train their own models, a whole new range of possibilities is opened up in terms of speed of development as well as the scope of tasks that AI can be asked to perform.”
–Editor’s note: This story has been updated.