If there was any doubt that Nvidia has arrived as an enterprise player, its deal with VMware should erase it.
The GPU developer and VMware announced at the recent VMworld 2020 conference that they plan to integrate their respective core technologies through a series of development and networking partnerships.
As part of the collaboration, Nvidia’s suite of AI software tools, available on the Nvidia NGC hub, will be integrated into VMware’s vSphere, Cloud Foundation, and Tanzu platforms. This is intended to accelerate AI adoption by enabling enterprises to extend existing infrastructure for AI, manage all applications with a single set of operations, and deploy AI-ready infrastructure where the data resides, whether in the data center, the cloud or at the edge.
To achieve this, the two firms are embarking on what they call Project Monterey, an architecture designed to improve AI performance using Nvidia’s SmartNIC technology and its latest programmable BlueField-2 data-processing units (DPUs) from Mellanox.
The new architecture will combine VMware Cloud Foundation and Nvidia’s BlueField-2 DPU to create a next-generation infrastructure built for AI, machine learning and other high-performance data-centric apps.
It will also deliver expanded application acceleration beyond AI to all enterprise workloads and provide an extra layer of security through a new architecture that offloads critical data-center services from the CPU to SmartNICs and programmable DPUs.
“We’re going to bring the Nvidia AI computing platform and our AI application frameworks onto VMware,” said Nvidia CEO Jen-Hsun Huang during a live-streamed discussion at the conference. “AI is really a supercomputing type of application; it’s a scale-out, distributed, accelerated computing application. In order to make that possible on VMware, fundamental computer science has to happen between our two companies.”
VMware CEO Pat Gelsinger, who made his name at Intel as a top engineer, acknowledged the arrival of the GPU in the enterprise. “We’ve always been a CPU-centric company. A GPU was something over there and maybe we virtualized or connected to it over the network,” he said.
Huang teasingly shook his head and said “Shame on you, Pat,” to laughter.
“But today we’re making the GPU a first-class compute citizen. Through our network fabric, through the VMware virtual layer, it is coming as an equal citizen in how we treat that compute fabric,” Gelsinger said.
To achieve these goals, AI needs to be easier for developers to use; it needs to work seamlessly anywhere, from the cloud to the edge to the data center; and it needs to provide safe storage for the huge amounts of data it requires, he said.
The integration of Nvidia’s NGC with VMware vSphere and VMware Cloud Foundation is designed to make it easier for more companies to deploy and manage AI for their most demanding workloads, according to the companies. The services are expected to be used by enterprises ranging from healthcare and financial services to retail and manufacturing, running containers and virtual machines on the same platform as their enterprise applications, at scale across the hybrid cloud.
The ability for existing VMware customers to do all this using the VMware tools they already know and use is what can really help drive the adoption of the technology, said Huang.
This really shows how much of a player Nvidia has become. This is the sort of announcement you’d expect from Intel or AMD, not a GPU maker. But Nvidia, which started out making first-person shooter games run faster, has grown into a force in the enterprise with its focus on AI.
Copyright © 2020 IDG Communications, Inc.