AI infrastructure daily update — news from ASUS, Block and more


Block will deploy the NVIDIA DGX SuperPOD and Cerebras expands AI inference data centers

In this regular update, RCR Wireless News highlights the top news and developments impacting the booming AI infrastructure sector.

ASUS showcases AI solutions at CloudFest 2025

ASUS unveiled a comprehensive lineup of AI infrastructure solutions at CloudFest 2025, integrating Intel Xeon 6 processors, NVIDIA GPUs and AMD EPYC chips. The company introduced the RS700-E12, RS720Q-E12 and ESC8000-E12P-series servers, optimized for scalable AI training and inference, and also debuted the Intel Gaudi 3 AI accelerator PCIe card, designed for efficient generative AI inferencing.

Block deploys NVIDIA DGX SuperPOD for open-source AI

Block said it will be the first North American company to deploy the NVIDIA DGX SuperPOD with DGX GB200 systems, hosted at an Equinix AI-ready data center. This high-performance infrastructure will be dedicated to open-source AI model research and training, focusing on generative AI innovations in underexplored fields. Block’s AI research team said it aims to push AI boundaries while maintaining a commitment to open-source development. The deployment will leverage Lambda 1-Click Clusters, offering rapid access to interconnected NVIDIA GPUs for efficient large-scale AI experimentation and innovation, the firm added.

Cerebras expands AI inference data centers

Cerebras Systems announced the launch of six new AI inference data centers across North America and Europe, equipped with thousands of Cerebras CS-3 systems. These facilities will deliver over 40 million Llama 70B tokens per second, making Cerebras the largest provider of high-speed AI inference globally, according to the firm. New locations include Minneapolis, Oklahoma City and Montreal, with additional sites in the Midwest, East Coast and Europe slated for the last quarter of 2025. The Oklahoma City facility will feature 300+ CS-3 systems in a Level 3+ data center, optimized with custom water-cooling solutions for high-efficiency AI processing, ensuring global access to sovereign AI infrastructure, the firm added.

What is a GPU cluster?

In another article, RCR Wireless News defines a GPU cluster and its role in AI infrastructure. A GPU cluster consists of interconnected computing nodes, each equipped with GPUs, CPUs, memory and storage. These nodes communicate via high-speed networking, enabling efficient data distribution and processing. GPU clusters are at the core of modern AI infrastructure, providing the computational power necessary for deep learning, NLP and advanced AI-driven applications. As AI continues to redefine industries, organizations that invest in scalable, efficient GPU clusters will be well-positioned to capitalize on the full potential of artificial intelligence.
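The node-and-interconnect structure described above can be sketched in a few lines of Python. This is purely illustrative: the node names, GPU counts and memory sizes below are hypothetical, not drawn from any of the announced deployments.

```python
from dataclasses import dataclass
from itertools import cycle

@dataclass
class Node:
    """One computing node in a GPU cluster: GPUs plus supporting CPU,
    memory and storage resources (values are illustrative)."""
    name: str
    gpus: int
    cpu_cores: int
    memory_gb: int

def total_gpus(cluster):
    """Aggregate GPU count across all nodes in the cluster."""
    return sum(node.gpus for node in cluster)

def distribute_batches(cluster, batches):
    """Assign training batches to nodes round-robin, a simple stand-in
    for the data-distribution role of the cluster's high-speed network."""
    nodes = cycle(node.name for node in cluster)
    return {batch: next(nodes) for batch in batches}

# A hypothetical two-node cluster with 8 GPUs per node
cluster = [Node("node-0", 8, 64, 512), Node("node-1", 8, 64, 512)]
print(total_gpus(cluster))                              # 16
print(distribute_batches(cluster, ["b0", "b1", "b2"]))  # alternates nodes
```

Real schedulers weigh interconnect topology, GPU memory and locality rather than simple round-robin, but the model above captures the basic idea: pooled nodes presenting their GPUs as one schedulable resource.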

Why these announcements matter

These developments highlight the rapid expansion of AI infrastructure, driven by growing enterprise demand for scalable, high-performance AI solutions. With increasing AI complexity, companies are investing in specialized AI hardware, liquid-cooled data centers and hyperscale computing to accelerate AI research, optimize model training and drive next-generation innovation.
