ChatGPT Puts AI At Inflection Point, Nvidia CEO Huang Says
It’s been 11 years since three AI researchers shocked the world with a breakthrough in computer vision, kickstarting the deep learning craze. But with the emergence of generative language models like ChatGPT over the past few months, we find ourselves at another inflection point in the history of AI, Nvidia CEO and founder Jensen Huang said during his keynote address at the GPU Technology Conference (GTC) yesterday.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the ImageNet competition back in 2012 by using a powerful (at least for its day) Nvidia GPU to train the convolutional neural network (CNN)-based computer vision model they had created, dubbed AlexNet. The Nvidia GPU they chose, a GeForce GTX 580, was a high-end graphics card favored by gamers. But instead of gaming, they used it to train their CNN on 14 million images. It was a colossal success, of course, and AlexNet won the challenge by a large margin, thus igniting “the Big Bang of AI,” Huang said.
“A decade later, the Transformer model was invented and Ilya, now at OpenAI, trained the GPT-3 large language model,” Huang continued. “Three hundred and twenty-three sextillion floating point operations were required to train GPT-3, one million times more floating point operations than those used to train AlexNet. The result this time? ChatGPT. The AI heard around the world. A new computing platform has been invented. The iPhone moment of AI has started. Accelerated computing and AI have arrived.”
ChatGPT is just the latest iteration in a long line of deep learning breakthroughs that have been powered by GPUs. As the dominant provider of GPUs, Nvidia naturally has benefited from the rapid development of deep learning, which Huang first referred to as a “Cambrian explosion” back in 2017. So it’s no surprise to see Huang using this kind of language again to highlight the immense progress that has been made in AI and the unique role Nvidia has played in it.
But Huang’s GTC 2023 keynote sounded different for a couple of reasons. For starters, there’s a new key breakthrough that led up to the current moment: The publication of Google’s Transformer model in 2017. That model set the stage for the new generative AI models like ChatGPT that have captured the world’s imagination. According to Huang, generative AI models are destined to change the world.
“Generative AI is a new computing platform like PC, Internet, mobile, and cloud,” Huang said. “And like in previous computing eras, first-movers are creating new applications and founding new companies to capitalize on generative AI’s ability to automate and co-create.”
Huang boasted that Nvidia has 50 early-access customers spanning multiple industries using GPUs to create generative AI applications. In just a few months, these services have already reached 100 million users across consumer Internet, software, healthcare, media and entertainment, and financial services, he said.
“ChatGPT is the fastest-growing application in history,” Huang said. “No training is necessary. Just ask these models to do something. The prompts can be precise or ambiguous. If not clear, through conversation, ChatGPT learns your intentions. The generated text is beyond impressive. ChatGPT can compose memos and poems, paraphrase a research paper, solve math problems, highlight key points of a contract, and even code software programs.”
What really sets these large language models (LLMs) apart from what preceded them is their capability to perform downstream tasks without explicit task-specific training, what has been dubbed one-shot or zero-shot learning. Combined with a new kind of computer vision model called a diffusion model (examples include DALL-E and Stable Diffusion), today’s AI tools can do amazing things, he said.
“In just over a decade, we went from trying to recognize cats to generating realistic images of a cat in a space suit walking on the moon,” Huang said. “Generative AI is a new kind of computer, one that we program in human language. This ability has profound implications. Everyone can direct a computer to solve problems. This was a domain only for computer programmers. Now everyone is a programmer.”
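To make the “no training is necessary” point concrete, here is a minimal sketch of the difference between zero-shot and one-shot prompting. The model_call() function is a hypothetical placeholder for whatever hosted LLM API an application would actually use, not a real Nvidia or OpenAI call; only the prompts themselves illustrate the idea.

# A minimal, hypothetical sketch of zero-shot vs. one-shot prompting.
# model_call() stands in for a request to a hosted large language model;
# it is not a real library call.

def model_call(prompt: str) -> str:
    """Placeholder for a request to a pretrained large language model."""
    raise NotImplementedError("swap in a real LLM client here")

# Zero-shot: the task is described in plain language, with no examples.
zero_shot_prompt = (
    "Classify the sentiment of this review as positive or negative:\n"
    "'The keynote ran long, but the demos were stunning.'"
)

# One-shot: a single worked example precedes the new input; the model
# generalizes from it with no gradient updates or retraining.
one_shot_prompt = (
    "Review: 'The battery died after an hour.' Sentiment: negative\n"
    "Review: 'The keynote ran long, but the demos were stunning.' Sentiment:"
)

# Either prompt goes to the same pretrained model; only the text of the
# prompt changes, which is why no task-specific training is required.
# print(model_call(zero_shot_prompt))

The prompts, written entirely in natural language, are the “program” in Huang’s sense: the same pretrained model handles a new task simply because it was asked to.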
Nvidia sees an opportunity here to sell more GPUs. As the world’s foremost GPU salesman, Huang delivers an impressive pitch (who can forget “The more you buy, the more you save”?). But give credit to Huang and company for realizing that the opportunity is bigger than just schlepping more silicon.
To that end, Nvidia is positioning itself as the indispensable middleman for a new supply chain of AI development services with its new Nvidia AI Foundations offering. According to Jaime Hampton’s coverage over at Datanami’s sister publication, EnterpriseAI, Nvidia AI Foundations includes Picasso, a service for creating images, videos, and 3D models; NeMo, a service for creating and running large language models; and BioNeMo, a service for biological research purposes such as generating protein structures.
Nvidia also launched DGX Cloud, which will allow organizations to rent an “AI supercomputer” to train their AI models. According to coverage of DGX Cloud by HPCwire’s Agam Shah, the offering provides access to a system with eight Nvidia H100 or A100 GPUs and 640GB of GPU memory, starting at $36,999 per instance per month. Oracle is the first cloud provider to host DGX Cloud.
Training generative models typically requires a large number of GPUs, which is one reason why many organizations simply use the pre-trained models from OpenAI, Google, and others. But once trained, those models also benefit from GPUs at inference time. To that end, the Santa Clara, California-based company also unveiled new GPU products for inference workloads, as we covered yesterday in Datanami.
“AI is at an inflection point as generative AI has started a new wave of opportunities driving a step function increase in inference workloads,” Huang said. “AI can now generate diverse data spanning voice, text, images, video, and 3D graphics to proteins and chemicals.”
Realistic chatbots and accurate language translators are just the start of what’s to come. Whether it’s designing new drug molecules, training robot helpers in Amazon warehouses, or generating a realistic video in Nvidia’s Omniverse, the technological breakthroughs occurring now in generative AI have the potential to shake up the status quo, Huang said.
“Generative AI will reinvent nearly every industry,” he said during his keynote. “We are at the iPhone moment of AI. Startups are racing to build disruptive products and business models, while incumbents are looking to respond. Generative AI has triggered a sense of urgency in enterprises worldwide to develop AI strategies. Customers need to access Nvidia AI easier and faster.”
Related Items:
Nvidia Unveils GPUs for Generative Inference Workloads like ChatGPT
GPT-4 Has Arrived: Here’s What to Know
Large Language Models in 2023: Worth the Hype?