Telecom operators are no longer just moving data—they’re turning it into intelligence. Gigabyte’s latest lineup at MWC 2026 shows how far this shift has come, with platforms designed to handle everything from massive AI training clusters to compact edge workstations.

The core of the strategy is the GB300 NVL72, a liquid-cooled rack-scale system that packs 72 NVIDIA Blackwell Ultra GPUs and 36 Grace CPUs. It's built for telecom-grade AI workloads, where real-time analytics and automation are critical. For scale-out networking, the platform pairs with NVIDIA's Quantum-X800 InfiniBand or Spectrum-X Ethernet, keeping latency low as workloads grow beyond a single rack.

This isn’t just about raw power—it’s about efficiency. Gigabyte’s XN24-VC0-LA61, for example, uses direct liquid cooling to cram NVIDIA MGX architecture and Grace Blackwell NVL4 Superchips into a dense, energy-conscious design. Meanwhile, the G893-ZX1-AAX4 combines AMD EPYC 9005 CPUs with Instinct MI355X GPUs, balancing performance-per-watt for inference tasks while keeping costs in check.

Where the Data Meets AI

The real innovation lies in how these systems bridge the gap between raw data and actionable intelligence. Digital twins, a key focus for telecom operators, get a boost from Gigabyte's XL44-SX2-AAS1. This platform uses eight RTX PRO 6000 Blackwell Server Edition GPUs to simulate network conditions in real time, with 800 Gb/s of network bandwidth through NVIDIA ConnectX-8 SuperNICs and PCIe Gen6 connectivity.
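To put that link speed in perspective, here is a rough back-of-envelope sketch of what an 800 Gb/s port means for moving bulk data, such as a model checkpoint or a digital-twin state snapshot. The 500 GB payload size is a hypothetical example, and protocol overhead is ignored, so this is a theoretical ceiling rather than a measured figure:

```python
# Back-of-envelope estimate: time to move a bulk payload over one
# 800 Gb/s link (the ConnectX-8 per-port line rate). Protocol overhead
# is ignored, so this is an upper bound on achievable throughput.

line_rate_gbps = 800                       # gigabits per second
throughput_gb_per_s = line_rate_gbps / 8   # gigabytes per second (peak)

payload_gb = 500                           # hypothetical snapshot size, GB
transfer_s = payload_gb / throughput_gb_per_s

print(f"Peak throughput: {throughput_gb_per_s:.0f} GB/s")
print(f"Time to move {payload_gb} GB: {transfer_s:.1f} s")
```

Even under these idealized assumptions, a multi-hundred-gigabyte snapshot crosses the link in seconds, which is what makes near-real-time twin synchronization plausible at all.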

But the transformation doesn’t stop at the core. Gigabyte’s B683-Z80-LAS1 is a 6U blade server that pushes liquid cooling to its limits, removing over 90% of system heat while supporting AMD EPYC processors in a 1:1 CPU-to-NIC configuration. It’s built for AI cloud and neocloud services, where scalability and power efficiency are non-negotiable.


Edge AI: Bringing Intelligence to the Network Periphery

At the edge, Gigabyte’s W775-V10-L01 workstation stands out. Powered by NVIDIA’s GB300 Grace Blackwell Ultra Desktop Superchip, it supports up to 775 GB of coherent memory—enough for large-scale AI development right on a developer’s desk. The AI TOP ATOM, meanwhile, delivers a full petaFLOP of compute in a palm-sized form factor, perfect for rapid prototyping or deployment in constrained environments.
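A quick sizing sketch shows why 775 GB of coherent memory matters for on-desk AI development. Counting weights only (no KV cache, activations, or framework overhead) and using the standard bytes-per-parameter for each precision, the largest model that fits is roughly:

```python
# Rough model-sizing sketch: largest parameter count whose weights alone
# fit in 775 GB, by numeric precision. KV cache, activations, and
# framework overhead are ignored, so real capacity is lower.

coherent_memory_gb = 775
bytes_per_param = {"FP16": 2, "FP8": 1, "FP4": 0.5}

for precision, nbytes in bytes_per_param.items():
    max_params_b = coherent_memory_gb / nbytes  # billions of parameters
    print(f"{precision}: ~{max_params_b:.0f}B parameters")
```

In other words, a workstation with this memory pool can hold the weights of models in the several-hundred-billion-parameter class at reduced precision, which is what makes local development on frontier-scale models feasible.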

  • GB300 NVL72: 72 NVIDIA Blackwell Ultra GPUs + 36 Grace CPUs, liquid-cooled rack-scale platform
  • XN24-VC0-LA61: NVIDIA MGX architecture, Grace Blackwell NVL4 Superchips, direct liquid cooling
  • G893-ZX1-AAX4: AMD EPYC 9005 CPUs + Instinct MI355X GPUs, optimized for inference and modeling
  • XL44-SX2-AAS1: Eight RTX PRO 6000 Blackwell Server Edition GPUs, 800 Gb/s bandwidth for digital twins
  • B683-Z80-LAS1: 6U blade server, AMD EPYC processors, full-system liquid cooling (90%+ heat removal)
  • W775-V10-L01: NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, up to 775 GB coherent memory
  • AI TOP ATOM: Compact edge AI workstation, up to one petaFLOP of compute

These platforms aren’t just about pushing more compute—they’re about redefining how telecom networks operate. The challenge now is whether operators can integrate these capabilities without overloading their existing infrastructure. But with Gigabyte’s focus on liquid cooling and power efficiency, the path forward looks clearer than ever.

For buyers, the message is simple: the future of telecom AI isn’t just coming—it’s here, and it’s built to scale.