
Nvidia's GB300 Blackwell Ultra "desktop powerhouse" now available in workstations from Asus: Unassuming desktop boasts power surpassing many server racks, offering up to 784GB of synchronized memory, delivering 20 PFLOPS AI performance.

Asus has unveiled the ExpertCenter Pro ET900N G3, a desktop built around Nvidia's GB300 Grace Blackwell Ultra that delivers up to 20 PFLOPS of AI performance and 784 GB of memory. The launch marks Nvidia's expansion beyond DGX-exclusive systems, with GB300 silicon now reaching partner workstations.

Nvidia's GB300 Blackwell Ultra "desktop superchip" finds its way into workstations: A...
Nvidia's GB300 Blackwell Ultra "desktop superchip" finds its way into workstations: A plain-designed desktop boasts power surpassing many server racks, offering up to 784GB of coherent memory, and delivering a remarkable AI performance of 20 PFLOPS.

Nvidia's GB300 Blackwell Ultra "desktop powerhouse" now available in workstations from Asus: Unassuming desktop boasts power surpassing many server racks, offering up to 784GB of synchronized memory, delivering 20 PFLOPS AI performance.

Nvidia has announced that its new GB300 Blackwell Ultra servers are in production, marking a significant step in the evolution of AI hardware. The servers are expected to ramp to volume shipments by September 2025, promising powerful performance and making high-performance AI compute more accessible.

The first deployment of GB300 NVL72 clusters, with 72 Blackwell Ultra GPUs and 36 Grace CPUs per rack, has already been carried out by Dell in partnership with CoreWeave. The racks deliver 1.1 exaFLOPS of FP4 inference, roughly a 50% increase over the previous GB200 NVL72 platform.
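
As a quick back-of-envelope check on that claim, the short Python sketch below derives the implied GB200 baseline; it assumes the 50% uplift is measured against the same dense FP4 inference metric, which the article does not state.

    # Implied GB200 NVL72 baseline, assuming the 50% uplift refers to the
    # same FP4 inference figure (an assumption; the baseline metric is not
    # specified in the article).
    gb300_fp4_exaflops = 1.1                  # GB300 NVL72, per the article
    uplift = 0.50                             # stated generational increase
    implied_gb200 = gb300_fp4_exaflops / (1 + uplift)
    print(f"Implied GB200 NVL72 baseline: ~{implied_gb200:.2f} exaFLOPS FP4")
    # Prints roughly 0.73 exaFLOPS, i.e. about 730 petaFLOPS for the prior rack.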

Nvidia's decision to reuse the GB200 platform's motherboard design (the Bianca board) has eased supply chain bottlenecks and accelerated production. This rapid upgrade cadence in AI server hardware keeps Nvidia well ahead of its competitors, even as it pushes the supply chain to its limits.

Dell's GB300 NVL72 clusters at CoreWeave expand cloud capacity for large language model training, reasoning, and inference, improving both throughput and scalability for AI workloads. The momentum and demand behind the GB300 platform underscore Nvidia's dominance and its transformative effect on the AI hardware ecosystem in 2025 and beyond.

Asus has joined the fray with the ExpertCenter Pro ET900N G3 workstation, a desktop built around Nvidia's GB300 Blackwell Ultra "desktop superchip." It offers AI performance exceeding that of many server racks, with up to 784 GB of coherent memory and 20 petaFLOPS of AI compute. Asus has also introduced the smaller Ascent GX10 desktop, based on the GB10 Grace Blackwell platform, signaling Nvidia's push to bring high-performance AI hardware into more accessible desktop form factors.
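
To put the 784 GB figure in context, here is a rough, weights-only sizing sketch in Python; it ignores KV cache, activations, and framework overhead, so the results are theoretical upper bounds rather than vendor claims.

    # Approximate model sizes whose weights alone fit in 784 GB of coherent
    # memory at common inference precisions. All overheads are ignored.
    coherent_memory_gb = 784
    bytes_per_param = {"FP16": 2.0, "FP8": 1.0, "FP4": 0.5}

    for precision, nbytes in bytes_per_param.items():
        max_params_billion = coherent_memory_gb / nbytes
        print(f"{precision}: ~{max_params_billion:.0f}B parameters (weights only)")
    # FP16 -> ~392B, FP8 -> ~784B, FP4 -> ~1568B parameters.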

The ExpertCenter Pro ET900N G3 offers 20 PFLOPS of AI performance and can deliver up to 1,800W to the GPU alone. Future platforms like Rubin are expected to push GPU density, power efficiency, and thermal design to new levels.
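
Dividing those two headline numbers gives a crude peak-efficiency figure; this is only an illustration of the published specs, not a measured sustained value.

    # Peak AI throughput per watt of GPU power budget, from the article's
    # two figures. Real sustained efficiency under load will be lower.
    ai_pflops = 20
    gpu_power_watts = 1_800
    tflops_per_watt = ai_pflops * 1_000 / gpu_power_watts
    print(f"~{tflops_per_watt:.1f} TFLOPS per watt at the GPU power limit")
    # Roughly 11.1 TFLOPS/W at peak.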

Nvidia's Arm-based Grace processor is designed for AI and HPC workloads rather than as a direct x86 competitor. It enables unified, high-bandwidth memory and compute performance that traditional CPU-plus-discrete-GPU setups can't easily match. Even so, Nvidia's decision to build Grace may gradually erode AMD's and Intel's share of AI-focused servers and workstations.

The GB300 Desktop Superchip itself combines an Arm-based Grace CPU with Nvidia's new Blackwell Ultra GPU, the pairing that gives the ET900N G3 its unified memory pool and AI throughput.

In summary, the GB300 Blackwell Ultra servers are driving a strategic inflection point in AI infrastructure, enabling OEM partners like Dell and Asus to rapidly deliver powerful AI systems. This proliferation is helping the AI industry scale AI model training and inference workloads faster while offering workstation alternatives that broaden access to AI compute power outside large data centers.

  1. The deployment of GB300 NVL72 clusters has showcased impressive performance in data center and cloud computing applications and has the potential to reshape the AI industry.
  2. The strategic inflection point demonstrated by the GB300 Blackwell Ultra servers is transforming the AI hardware ecosystem, making high-performance AI compute more accessible through partnerships with OEMs like Asus.
