HPE has unveiled enhancements to its Private Cloud AI solution, incorporating Nvidia Blackwell graphics processing units (GPUs).
HPE and Nvidia have deepened their partnership, committing to provide customers with the latest Nvidia hardware. The collaboration aims to ease enterprise AI deployment and offers built-in investment protection as new Nvidia GPUs arrive.
Two new HPE ProLiant Compute servers will be made available: the HPE ProLiant DL385 Gen11 and the HPE ProLiant Compute DL380a Gen12. These servers are specialized for AI workloads and support Nvidia's latest AI hardware, including the RTX PRO 6000 GPUs.
The HPE ProLiant DL385 Gen11 is a 2U AI-optimized server built to power agentic and physical AI workloads such as robotics, digital twins, and real-time simulation. It is equipped with AMD EPYC processors and supports up to two Nvidia RTX PRO 6000 Blackwell GPUs, offering compute and GPU acceleration tailored for AI reasoning, generative design, and immersive visualization. It also features enterprise-grade security, cloud-native management, and compatibility with Nvidia AI Enterprise software and Blueprints, enabling efficient, scalable deployment of advanced AI applications.
The HPE ProLiant Compute DL380a Gen12, by contrast, is a larger 4U server supporting up to eight Nvidia RTX PRO 6000 GPUs, delivering far more GPU capacity per node. It is designed to work with HPE's Private Cloud AI platform to run cutting-edge AI models such as Nemotron (agentic AI) and Cosmos Reason (robotics), as well as advanced video analytics applications such as video search and summarization.
The Nvidia RTX PRO 6000 GPUs feature fifth-generation Tensor Cores and a second-generation Transformer Engine with FP4 precision, delivering up to six times faster AI inference than the previous GPU generation. These gains benefit workloads such as large-scale simulation, synthetic data generation, and robotics training, where speed and efficiency are critical. The RTX PRO 6000, previously available only in 4U servers, can now be deployed in a 2U form factor, making it easier for customers to fit into existing server racks and allowing it to run with air cooling.
Private Cloud AI, co-developed with Nvidia, combines HPE's private cloud offerings and GreenLake architecture with Nvidia's AI computing stack, including NIM microservices. Customers whose data privacy rules require strict controls can opt for an air-gapped Private Cloud AI deployment.
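For readers unfamiliar with NIM microservices: each one packages a model behind an OpenAI-compatible HTTP API, so applications talk to it like any chat-completions service. The sketch below illustrates that pattern; the endpoint URL and model name are placeholders, not values from the announcement, and a real deployment would supply its own.

```python
import json
# Standard library only; a real client might use the `openai` or `requests` packages.
from urllib import request

# Placeholder values -- substitute the endpoint and model identifier from
# your own deployment. NIM microservices expose an OpenAI-compatible
# /v1/chat/completions route.
NIM_URL = "http://nim.example.internal:8000/v1/chat/completions"
MODEL = "nvidia/llama-3.1-nemotron-70b-instruct"


def build_chat_payload(prompt: str, model: str = MODEL) -> dict:
    """Assemble an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.2,
    }


def ask_nim(prompt: str) -> str:
    """POST the prompt to the NIM endpoint and return the reply text."""
    body = json.dumps(build_chat_payload(prompt)).encode("utf-8")
    req = request.Request(
        NIM_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        reply = json.loads(resp.read())
    # OpenAI-compatible services return choices[0].message.content
    return reply["choices"][0]["message"]["content"]
```

Because the interface mirrors the OpenAI API, existing tooling built against that API can typically be pointed at a NIM endpoint by changing only the base URL and model name.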
The newest version of HPE Private Cloud will include support for the latest Nvidia hardware and AI models, such as Nvidia's Nemotron and Cosmos Reason. Nemotron is a set of open-source reasoning models based on Meta's Llama, while Cosmos Reason is a vision language model intended for robotics and physical AI devices.
The partnership and the introduction of these new servers mark a significant expansion of the Nvidia AI Computing by HPE portfolio, bringing Nvidia's latest AI-focused hardware into the lineup.