
November 12, 2019

Intel Debuts New VPU Chip at AI Summit

Jonathan Ballon, Intel vice president in the Internet of Things Group, shows off a Keem Bay prototype at AI Summit on November 12, 2019

Intel today unveiled a new vision processing unit (VPU) designed to power neural networks on edge devices like cameras and drones. The company also used its AI Summit event in San Francisco to demonstrate two recently released neural network ASICs branded under its Nervana unit.

The new Movidius Myriad VPU, codenamed Keem Bay, is a diminutive device built to execute pre-trained neural network algorithms for “edge media” and computer vision applications. Intel says the chip, which is slated to ship in the first half of 2020, will offer the performance and efficiency needed to handle the exploding size of machine learning models and data sets.
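In practice, developers typically deploy pre-trained models to Movidius VPUs through Intel’s OpenVINO toolkit, whose Inference Engine exposes the chip as a “MYRIAD” device. The sketch below shows that flow in broad strokes; the model paths are placeholders, and exact API names vary across OpenVINO releases:

    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()

    # Load a pre-trained model in OpenVINO's IR format (paths are placeholders).
    net = ie.read_network(model="model.xml", weights="model.bin")
    input_name = next(iter(net.input_info))
    n, c, h, w = net.input_info[input_name].input_data.shape

    # "MYRIAD" is the Inference Engine's device name for Movidius VPUs.
    exec_net = ie.load_network(network=net, device_name="MYRIAD")

    # Run inference on a dummy NCHW frame; a real pipeline would feed camera frames.
    frame = np.zeros((n, c, h, w), dtype=np.float32)
    result = exec_net.infer(inputs={input_name: frame})

Because the model is compiled ahead of time for the device, the host does little more than feed frames and collect results.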

Jonathan Ballon, Intel vice president in the Internet of Things Group, says the bandwidth of Keem Bay’s 64-bit memory interface will help ensure that the processor stays fully utilized.

“The architecture is built specifically for the edge. It’s built for DL. It’s built for vision and media,” Ballon said today at AI Summit. “At the edge, performance is important, of course, but customers also care about power, size, and increasingly latency.”

In terms of power, Keem Bay will deliver 4x the raw throughput of Nvidia’s TX2 SoC while consuming just two-thirds of the power. That translates to a 6x advantage in inferences per second per watt over the TX2, according to Ballon.
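The 6x figure is just the ratio of the two claimed numbers, as a quick back-of-the-envelope check confirms (the inputs are Intel’s claims, not independent measurements):

    # Intel's claimed ratios for Keem Bay vs. Nvidia's TX2 SoC.
    relative_throughput = 4.0   # 4x the raw inference throughput
    relative_power = 2.0 / 3.0  # two-thirds the power draw

    # Performance per watt scales as throughput divided by power.
    perf_per_watt = relative_throughput / relative_power
    print(f"{perf_per_watt:.0f}x inferences per second per watt")  # -> 6x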

The small form factor of Keem Bay will also ensure that it finds its way into many different applications. That includes cameras, of course. But Keem Bay isn’t limited to cameras.

The Intel NNP-T chip

“It also can go into an M.2 form-factor card that can be used for robotics and kiosks, all the way up to full-power PCIe cards, where we can put multiple VPUs on a card and get parallel, high-density, scalable edge AI acceleration,” Ballon says. “This will really unlock the VPU for a wide range of applications that our customers are asking us for.”
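OpenVINO also offers a way to exploit the multi-VPU PCIe configurations Ballon describes: its MULTI device plugin load-balances inference requests across several accelerators. A minimal sketch, assuming the host enumerates at least one Myriad device (model paths and request counts are illustrative):

    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder paths

    # Enumerate attached Myriad devices; exact names vary by host and driver.
    myriads = [d for d in ie.available_devices if d.startswith("MYRIAD")]
    assert myriads, "no Myriad VPUs found on this host"

    # The MULTI plugin fans inference requests out across the listed devices.
    exec_net = ie.load_network(network=net,
                               device_name="MULTI:" + ",".join(myriads),
                               num_requests=2 * len(myriads))

Issuing more in-flight requests than devices keeps every VPU busy while the host is pre- and post-processing frames.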

Intel also used the event to showcase two previously announced pieces of dedicated AI hardware that recently started shipping. Specifically, it demoed two Intel Nervana Neural Network Processors (NNPs): the NNP-T1000 for training and the NNP-I1000 for inference. These new ASICs are intended for the most demanding deep learning workloads running in data centers and public cloud environments.

While the CPU remains the king of the data center and the centerpiece of Intel’s AI business, the company is ramping up development of a host of other chip types to close the processing gaps emerging in AI. To that end, it’s counting on the new ASICs and VPUs – not to mention FPGAs and a GPU – to provide the full solution.

“What we really shoot for is all those chips working in concert to solve a single problem,” said Naveen Rao, vice president and general manager of Intel’s AI Products Group. “All these capabilities – compute, memory, communication – integrated at the system level will really drive the next generation of AI solutions.”

Related Items:

Intel Aims to Simplify AI with ‘Neural Stick’ Update

Intel Unveils Nauta, a DL Framework for Containerized Clusters
