Oracle Cloud Infrastructure (OCI) is meeting the growing demand for generative AI applications and large language models (LLMs) by making Nvidia H100 Tensor Core GPUs available on its OCI Compute platform. Nvidia L40S GPUs will also be coming to the platform soon.
Nvidia H100 GPUs are offered on OCI Compute as bare-metal instances powered by the Nvidia Hopper architecture, delivering a significant performance leap for large-scale AI and high-performance computing applications. The Nvidia H100 GPU is designed for resource-intensive computing tasks, including training LLMs.
Oracle claims that organisations using Nvidia H100 GPUs can experience up to a 30x increase in AI inference performance and a 4x boost in AI training compared to Nvidia A100 Tensor Core GPUs. The BM.GPU.H100.8 OCI Compute shape includes eight Nvidia H100 GPUs, each with 80GB of HBM3 GPU memory.
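To put the shape's 8 × 80GB of GPU memory in context, here is a back-of-the-envelope sketch of how many such GPUs are needed just to hold a model's weights. The model sizes and the weights-only framing are illustrative assumptions, not Oracle or Nvidia figures, and real deployments also need memory for activations, optimizer state, and framework overhead.

```python
import math

H100_MEMORY_GB = 80  # per-GPU memory on the BM.GPU.H100.8 shape


def gpus_for_weights(params_billions: float, bytes_per_param: int = 2) -> int:
    """Minimum number of GPUs whose combined memory fits the raw model
    weights alone (FP16 = 2 bytes per parameter by default); ignores
    activations, optimizer state, and runtime overhead."""
    weights_gb = params_billions * bytes_per_param  # 1e9 params * bytes / 1e9
    return math.ceil(weights_gb / H100_MEMORY_GB)


# Illustrative model sizes, in billions of parameters (assumed, not from OCI):
for size in (7, 70, 175):
    print(f"{size}B params (FP16): needs >= {gpus_for_weights(size)} GPU(s)")
```

Under these assumptions, a 7B-parameter model fits on a single GPU, while larger models must be sharded across several of the shape's eight GPUs.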
OCI Compute bare-metal instances with Nvidia L40S GPUs will be available for early access later this year, with general availability expected in early 2024. The Nvidia L40S GPUs, based on the Nvidia Ada Lovelace architecture, serve as a universal GPU for the data center, providing multi-workload acceleration for LLM inference and training, visual computing, and video applications.
This move by Oracle Cloud Infrastructure demonstrates its commitment to meeting the evolving demands of customers in the field of AI and large language models. By providing access to advanced GPU hardware, OCI ensures that its users have the resources needed to drive innovation and achieve high-performance computing capabilities.
The availability of Nvidia H100 and upcoming Nvidia L40S GPUs on the OCI Compute platform opens up possibilities for organizations across various industries. From training complex AI models to accelerating large-scale computations, these GPUs allow businesses to harness the power of generative AI and large language models in their applications.
In conclusion, with Nvidia H100 Tensor Core GPUs available on OCI Compute today and Nvidia L40S GPUs on the way, organizations have the hardware to train and serve generative AI and LLM applications at scale, and to unlock new possibilities in their respective industries.