Data center operator Equinix has partnered with Nvidia to launch a fully hosted private cloud service, enabling businesses to quickly build and run their own large-scale artificial intelligence models.
Under the agreement, Equinix will install and operate privately owned Nvidia infrastructure in its International Business Exchange (IBX) data centers. Enterprise customers will purchase Nvidia systems and pay Equinix a fee to operate them on their behalf.
The service is now commercially available and is built on Nvidia DGX systems, Nvidia networking, and Nvidia AI software.
Equinix President and CEO Charles Meyers said that companies need adaptable, scalable hybrid infrastructure in local markets in order to bring artificial intelligence supercomputing to their data.
He said, “Our new service provides customers with a fast, cost-effective way to adopt advanced artificial intelligence infrastructure operated and managed by global experts. The new service enables customers to operate their artificial intelligence infrastructure near their data.”
The DGX systems located within Equinix data centers connect to the outside world over high-speed dedicated networks, and the company also provides high-bandwidth interconnection to cloud and enterprise service providers.
With support from Nvidia partners, the Equinix managed services team provides comprehensive guidance on how to build and operate artificial intelligence systems.
Jensen Huang, founder and CEO of Nvidia, said, “Generative artificial intelligence is changing every industry. Now, companies can have Nvidia’s artificial intelligence supercomputing and software in hundreds of data centers worldwide, combined with the operational efficiency of Equinix’s management.”
Without naming them, Equinix said that enterprise customers are already using the new managed service, many of them in industries such as biopharmaceuticals, financial services, software, automotive, and retail.
These customers are establishing AI centers of excellence to provide a strategic foundation for large language model (LLM) use cases, including accelerating the time to market for new drugs, developing AI copilots for customer-service agents, and building virtual productivity assistants.
In October, research firm IDC predicted that global businesses would spend nearly $16 billion on generative artificial intelligence (genAI) software, infrastructure hardware, and IT services in 2023, with spending growing to $143 billion by 2027.
The technology research firm says investment in generative AI will develop naturally over the coming years as organizations move from early experimentation to active build-out around target use cases, then to widespread adoption across business activities, eventually extending genAI to the edge.
As Equinix and Nvidia strike their hosted AI deal, companies in the Asia-Pacific region are showing interest in acquiring their own AI computing systems for privacy and security reasons.
Charlie Boyle, Vice President of NVIDIA DGX Systems, said at yesterday’s press conference, “When we talk to enterprise customers around the world today, one of their primary concerns and aspirations around artificial intelligence is being able to have their own models and truly own their own future.”
He added that many companies in the Asia-Pacific region are rapidly expanding their use of the technology but lack the in-house expertise to build their own large language models.
He noted that most companies need to be very close to the AI processing they are trying to accomplish.
Boyle said, “AI models and AI execution must be very close to the data. All of these factors combined mean customers want to do AI and want fast, secure access to their data. But often they lack either the data center space or the internal expertise to manage all of this.”