Intel Adds Memory to Deep Learning for Drug Discovery
A drug discovery technique called “high-content” screening analyzes microscopy images to determine the effects of genetic and chemical treatments on cell cultures. However, the processing-intensive analysis of sets of thousands of pre-defined features can take half a day.
In order to accelerate this big cell data process, Intel Corp. said it is working with the Swiss pharmaceutical giant Novartis to apply deep neural network acceleration to screening. Their approach combines the drug maker’s trove of microscopy images with Intel Xeon processors that include beefed-up memory capacity, running the TensorFlow machine learning framework.
The partners report they have so far managed to reduce the time required to train analysis models from 11 hours to 31 minutes, roughly a 21-fold speedup.
“The challenge that pharmaceutical companies like Novartis face is how to efficiently train models to process as many of these high-content images as possible every day—and they generate millions per week,” explained Kushal Datta, an Intel AI research scientist.
The objective “is to reduce human error and possibly reduce the time to discover a drug for a targeted disease,” Datta added.
Deep learning techniques were used to sort through relevant image features that distinguish among treatments, then “automatically” learn from the resulting data analysis. The hope is that applying deep neural networks will greatly accelerate the analysis of high-content screens, the partners said.
The image analysis demonstration used an eight-socket cluster of servers based on Intel’s Xeon Gold 6148 processors connected via a high-speed fabric network. The configuration ran a recent version of TensorFlow. Leveraging the large memory support of the server platform, the partners said they sustained throughput of more than 120 3.9-megapixel images per second, scaling across 32 TensorFlow “workers” that execute operations on the TensorFlow graph.
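For readers who want a concrete picture, the following is a minimal sketch of the kind of data-parallel, multi-worker training described above. It assumes TensorFlow 2’s tf.distribute API (the collaboration predates TF 2), and the toy image size, class count, and synthetic data are illustrative assumptions, not the partners’ configuration:

```python
import tensorflow as tf

# Data-parallel training across multiple workers, in the spirit of the
# Intel/Novartis setup (which used 32 TensorFlow workers). On a real
# cluster, each worker's address would be supplied via TF_CONFIG.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # A small convolutional classifier standing in for the much larger
    # model used on 3.9-megapixel microscopy images.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu",
                               input_shape=(256, 256, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(8, activation="softmax"),  # hypothetical treatment classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# Stand-in data; in practice this would be a tf.data pipeline over
# full-resolution microscopy images, sharded automatically across workers.
images = tf.random.uniform((64, 256, 256, 3))
labels = tf.random.uniform((64,), maxval=8, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((images, labels)).batch(16)

model.fit(dataset, epochs=1)
```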
Naveen Rao, general manager of Intel’s AI products group, noted during the chip maker’s recent developer conference that GPUs from rivals such as Nvidia (NASDAQ: NVDA) work well for deep learning training since their highly parallel architecture is suited to scientific workloads.
“But [GPUs] have severe memory constraints,” Rao argued in describing the company’s collaboration with Novartis (NYSE: NVS).
The chip maker asserts its latest Xeon Scalable processors address the complexity and hybrid workloads associated with deep learning. “We can now enable a new level of complexity with the memory capabilities of Xeon,” Rao claimed.
According to Intel (NASDAQ: INTC), deep convolutional neural network models used to scan microscopy images typically work on millions of pixels per image along with millions of model parameters and thousands of training images at a time. Deep neural network acceleration techniques would process multiple images in less time while extracting insights about treatments from image features learned by the model.
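To make that scale concrete, here is a back-of-the-envelope calculation. The 3.9-megapixel figure comes from the demonstration above; the batch size and parameter count are assumptions chosen only to illustrate the memory pressure Rao describes:

```python
# Illustrative scale of the problem described above. Only the 3.9-megapixel
# image size is from the article; the batch size and parameter count are
# hypothetical stand-ins for "millions of model parameters."
pixels_per_image = 3_900_000      # ~3.9 megapixels per microscopy image
images_per_batch = 32             # hypothetical training batch
model_parameters = 25_000_000     # hypothetical parameter count

pixels_per_batch = pixels_per_image * images_per_batch
print(f"pixels fed through the network per batch: {pixels_per_batch:,}")
print(f"trainable parameters held in memory:      {model_parameters:,}")

# At float32 (4 bytes each), the parameters alone occupy ~100 MB, and the
# intermediate activations for full-resolution images dwarf that -- which
# is why memory capacity, not just raw parallelism, becomes the bottleneck.
print(f"parameter memory: ~{model_parameters * 4 / 1e6:.0f} MB at float32")
```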
The partners said the next step is to tackle the large data sets used to train image classification models. The hope is that “unsupervised” deep learning methods could be applied to unlabeled microscopy images to further accelerate the process, extracting insights about cellular biology that could lead to the discovery of new drugs.
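One common form such “unsupervised” learning can take is a convolutional autoencoder, which learns compact image features with no labels at all. The sketch below illustrates that general technique only; it is not the partners’ stated method, and the small image size and layer sizes are assumptions:

```python
import tensorflow as tf

# A convolutional autoencoder: the network learns to compress an image into
# a small feature map and reconstruct it, so the features are learned from
# the pixels alone -- no treatment labels required.
inputs = tf.keras.Input(shape=(128, 128, 3))
x = tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu", padding="same")(inputs)
x = tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu", padding="same")(x)
encoded = tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu", padding="same")(x)

x = tf.keras.layers.Conv2DTranspose(64, 3, strides=2, activation="relu", padding="same")(encoded)
x = tf.keras.layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same")(x)
decoded = tf.keras.layers.Conv2DTranspose(3, 3, strides=2, activation="sigmoid", padding="same")(x)

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Training uses the images themselves as targets; synthetic data here
# stands in for unlabeled microscopy images.
images = tf.random.uniform((32, 128, 128, 3))
autoencoder.fit(images, images, epochs=1, batch_size=8)
```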