INFRASTRUCTURE ARCHITECTURE

NVIDIA GB200 NVL72 Infrastructure and MPO-8 APC Cabling for Scalable Units

Deconstructing the cabling architecture of a Blackwell Scalable Unit (SU), where 8 racks converge into 9,216 active fiber strands.

The DGX GB200 Scalable Unit (SU) represents a fundamental shift in data center architecture: a unified 576-GPU entity interconnected by 9,216 active fiber strands. ScaleFibre provides the precision-terminated trunks required to manage this density.
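
The headline figures follow from simple multiplication; here is a minimal sketch in Python, using only the rack and strand counts quoted in this article:

```python
# SU headline arithmetic, using the per-rack figures quoted in this article.
RACKS_PER_SU = 8
GPUS_PER_RACK = 72        # one GB200 NVL72 system per rack
FIBERS_PER_RACK = 1_152   # all fabrics combined (see Level A below)

gpus_per_su = RACKS_PER_SU * GPUS_PER_RACK       # 576 GPUs
fibers_per_su = RACKS_PER_SU * FIBERS_PER_RACK   # 9,216 active strands

print(f"{gpus_per_su} GPUs, {fibers_per_su:,} active fibers per SU")
```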

The 4 Physical SuperPOD Fabrics

NVIDIA segments the SU into distinct physical layers to isolate GPU traffic.

MN-NVL (NVLink 5)

Scale-Up

The ‘internal’ rack network connecting 72 GPUs at 1.8 TB/s per GPU.

  • Zero Optical Fiber
  • Passive Copper Backplane
  • Blind-mate connectors

Compute InfiniBand

Scale-Out

The primary ‘East-West’ fabric for massive multi-node training.

  • 4,608 active fibers per SU
  • Rail-optimized topology
  • Quantum-X800/Quantum-2

Storage & In-Band

Frontend

Ethernet-based fabric for high-speed data ingestion and provisioning.

  • 5:3 Blocking factor
  • BlueField-3 DPU offload
  • VXLAN/RoCE support

OOB Management

Control Plane

The isolated network for hardware telemetry, BMC, and PDU management.

  • RJ45/Cat6 Copper
  • SN2201 Switch tier
  • Physical air-gap security
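
For reference, the segmentation can be captured as a simple lookup table. This is an illustrative sketch only; the field names are ours, and the values are drawn from the descriptions above:

```python
# Illustrative summary of the four physical SuperPOD fabrics; field names
# are our own, values are taken from the descriptions above.
SU_FABRICS = {
    "MN-NVL (NVLink 5)":  {"role": "scale-up",
                           "medium": "passive copper backplane, zero fiber"},
    "Compute InfiniBand": {"role": "scale-out (East-West)",
                           "medium": "optical, 4,608 fibers/SU, rail-optimized"},
    "Storage & In-Band":  {"role": "frontend",
                           "medium": "Ethernet over fiber, 5:3 oversubscribed"},
    "OOB Management":     {"role": "control plane",
                           "medium": "RJ45/Cat6 copper, air-gapped"},
}

for fabric, props in SU_FABRICS.items():
    print(f"{fabric}: {props['role']} via {props['medium']}")
```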

Exascale SU Metrics

An 8-rack Scalable Unit represents the fundamental building block of the NVIDIA AI Factory.

  • 9,216 active fibers per SU
  • 4,608 compute-only strands
  • 5:3 storage blocking ratio
  • 400G/800G native port speeds
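
The compute-only figure can be reproduced with one assumption: each GPU exposes a single 400G NDR port carried over an 8-fiber MPO (4x Tx + 4x Rx). A hedged sketch:

```python
# Hedged derivation of the 4,608 compute-only strands, assuming one 400G
# NDR port per GPU carried over an 8-fiber MPO (4 Tx + 4 Rx lanes).
GPUS_PER_RACK = 72
RACKS_PER_SU = 8
FIBERS_PER_400G_PORT = 8

compute_fibers_per_rack = GPUS_PER_RACK * FIBERS_PER_400G_PORT  # 576
compute_fibers_per_su = compute_fibers_per_rack * RACKS_PER_SU  # 4,608
assert compute_fibers_per_su == 4_608  # matches the metric above
```
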
Featured Solution

High Fibre Count MPO Trunk Cables

High-count MPO trunk cables up to 288 fibres. Compact, lightweight, and perfect for backbone installs in hyperscale or enterprise datacentres.

View High Count MPO Trunk Details

The Three Levels of SU Connectivity

1. Level A: Server-to-Leaf

1,152 fibers per rack using high fibre count trunks or jumpers to connect NVL72 nodes to Leaf Switches.

2. Level B: Leaf-to-Spine

Aggregating rail-aligned traffic within the SU using 1:1 non-blocking links for compute (see the sketch after this list).

3. Level C: Spine-to-Core

Scaling beyond the SU to a centralized Core area using high-count trunks.
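
A 1:1 non-blocking Level B simply means that leaf uplink capacity matches server-facing capacity. A minimal check, with illustrative port counts rather than NVIDIA reference figures:

```python
# Minimal 1:1 non-blocking check for Level B (Leaf-to-Spine). Port counts
# here are illustrative, not taken from a NVIDIA reference design.
def is_non_blocking(downlink_ports: int, uplink_ports: int) -> bool:
    """True when every server-facing port has a matching spine uplink."""
    return uplink_ports >= downlink_ports

# e.g. a leaf with 32 x 400G server ports needs 32 x 400G spine uplinks
assert is_non_blocking(downlink_ports=32, uplink_ports=32)
assert not is_non_blocking(downlink_ports=32, uplink_ports=24)  # 4:3 blocking
```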

Legacy Patching (Point-to-Point)

  • Manual Complexity: Requires patching 9,216 individual fiber strands per 8-rack block.
  • Airflow Obstruction: Dense cable bundles block liquid-cooling exhaust paths.
  • Risk Profile: High probability of ‘crossed rails’ during manual 1:1 patching.
  • Deployment Time: 115+ hours for manual routing and labeling per SU.

Modular High Fibre Count Trunking

  • Plug-and-Play: Consolidates thousands of fibers into pre-terminated 128F/144F/256F/288F/576F tailored trunks.
  • Thermal Optimization: Small-diameter cables maximize airflow in dense racks.
  • Pathway Efficiency: Consolidates 1,152 active fibers per rack into high-count MPO backbones (see the sizing sketch after this list).
  • Installation Profile: Rapid deployment via pre-terminated factory-tested assemblies.
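
Trunk sizing then reduces to dividing each rack's 1,152 strands by the trunk sizes listed above; a quick sketch:

```python
import math

# Illustrative trunk sizing for the 1,152 active fibers leaving each rack,
# using a selection of the pre-terminated trunk sizes listed above.
FIBERS_PER_RACK = 1_152

for trunk_size in (144, 288, 576):
    trunks = math.ceil(FIBERS_PER_RACK / trunk_size)
    print(f"{trunk_size}F trunks per rack: {trunks}")
# 144F -> 8, 288F -> 4, 576F -> 2 trunks per rack
```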

Active Fiber Growth: Node to Full SuperPOD

Cabling Complexity
9,216 active fibers per SU require modular high fibre count trunks to avoid airflow-blocking 'cable chaos'.

Scalable Unit Visualized

The 8-Rack Compute Block

An NVIDIA GB200 SU (Scalable Unit) consists of 8 racks, each housing a DGX GB200 NVL72 system with 72 GPUs.

High Fibre Count Trunk Distribution

Consolidating thousands of rack fibers into high-density trunks for airflow clearance, rapid installation and minimal pathway usage.

Liquid Cooling

Liquid-cooled cold plates stabilize the tray environment, allowing OSFP transceivers to shed heat effectively via riding heat sinks.

Technical FAQ

How does the fiber count stay manageable at 9,216 per SU?
By using a tiered cabling hierarchy. High-fiber-count trunks replace thousands of individual MPO patch cords, reducing physical volume and preventing cooling obstructions.
What is the '5:3 Blocking Factor' in the storage fabric?
Unlike the non-blocking (1:1) compute fabric, the storage network is intentionally oversubscribed. This reduces fiber cost and complexity while still meeting the 40 GB/s per-node storage requirement. Deployment often utilizes NVIDIA-compatible MPO patch cables.
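
The ratio translates directly into an oversubscription figure. A worked example with illustrative port counts (the 400G port speed is this article's figure):

```python
# Worked example of the 5:3 blocking factor: 5 units of node-facing
# bandwidth share 3 units of uplink. Port counts are illustrative.
node_facing_gbps = 5 * 400  # e.g. 5 x 400G node-facing ports
uplink_gbps = 3 * 400       # e.g. 3 x 400G uplinks

print(f"Oversubscription: {node_facing_gbps / uplink_gbps:.2f}:1")  # 1.67:1
```
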
Why is the internal NVLink fabric fiber-free?
NVIDIA utilizes a passive copper backplane and cable cartridges within the NVL72 rack. This eliminates thousands of optical transceivers and fibers, significantly reducing power consumption and latency. Optical fiber is reserved for the scale-out compute fabric.
What happens when we scale to 16 Scalable Units?
At the 16-SU scale (9,216 GPUs), the total active fiber count for the compute fabric alone reaches 18,432 strands. Managing this density requires high-density housings designed specifically for high-count optical fiber, along with a centralized core switching architecture.
Why is MPO-8 used instead of the standard MPO-12?
Modern 400G NDR and 800G XDR transceivers use 4-lane or 8-lane parallel optics. An 8-fiber MPO matches the 4x Tx and 4x Rx lanes of a 400G port exactly. Using 8-fiber active MPO trunks eliminates ‘dark’ or wasted fibers within the cluster fabric.
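
To make the ‘dark fiber’ point concrete, here is a sketch of the common Base-12 position convention, in which a 4-lane port transmits on positions 1-4 and receives on positions 9-12:

```python
# Fiber utilization of a 4-lane (400G) port in a 12-fiber connector,
# following the common Base-12 convention: Tx on 1-4, Rx on 9-12.
tx_positions = {1, 2, 3, 4}
rx_positions = {9, 10, 11, 12}

dark = sorted(set(range(1, 13)) - tx_positions - rx_positions)
print(f"Dark positions in MPO-12: {dark}")  # [5, 6, 7, 8]
# An MPO-8 trunk provisions exactly 8 fibers, so utilization is 100%.
```
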
What is the importance of APC (Angled Physical Contact) polish?
High-speed 100G-PAM4 signaling is extremely sensitive to back-reflections. The 8-degree angle of an APC connector ensures reflected light is absorbed into the fiber cladding, maintaining the high Optical Return Loss (ORL) required for error-free AI training.
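
Return loss converts to reflected power via a simple decibel relation; the RL figures below are typical connector-grade values, not a vendor specification:

```python
# Return loss (RL, dB) -> fraction of launched power reflected back.
# The RL values below are typical connector-grade figures, not a vendor spec.
def reflected_fraction(return_loss_db: float) -> float:
    return 10 ** (-return_loss_db / 10)

for polish, rl_db in (("UPC", 50.0), ("APC", 65.0)):
    print(f"{polish} (~{rl_db:.0f} dB): {reflected_fraction(rl_db):.1e} reflected")
# APC reflects roughly 30x less power back toward the 100G-PAM4 transmitter.
```
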
How does fiber density impact liquid-cooled AI halls?
Even with liquid-cooled trays, air must still circulate to manage secondary heat. Using high-density SmartRibbon cables significantly reduces cable diameter, ensuring that the physical cabling does not obstruct airflow or liquid-cooling manifolds.
What are the distance limitations for SU-level cabling?
Multimode (OM4/OM5) is restricted to 50 meters for 400G/800G. For centralized Spine-to-Core links that exceed this, single-mode G.657.A1 fiber is mandatory to support longer reaches without signal degradation.
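
Whether a Spine-to-Core link closes is ultimately a loss-budget question. A simple planning sketch, with typical attenuation, connector-loss, and power-budget values that are assumptions rather than figures from this article:

```python
# Simple single-mode loss-budget sketch for Spine-to-Core links. The
# attenuation, connector loss, and power budget are typical planning
# assumptions, not figures from this article.
def link_loss_db(length_km: float, atten_db_per_km: float = 0.4,
                 connectors: int = 4, loss_per_conn_db: float = 0.35) -> float:
    return length_km * atten_db_per_km + connectors * loss_per_conn_db

BUDGET_DB = 4.0  # hypothetical transceiver power budget
for km in (0.5, 2.0, 10.0):
    loss = link_loss_db(km)
    print(f"{km} km: {loss:.2f} dB ({'OK' if loss <= BUDGET_DB else 'over budget'})")
```
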
Can I use standard outdoor cables for AI data center backbones?
No. Indoor AI halls require LSZH (Low Smoke Zero Halogen), Riser, or Plenum-rated cables to meet fire-safety codes, which vary by local jurisdiction. For high-density pathways, specialized SlimCORE indoor cables provide the necessary strand count in a reduced diameter.
What is the benefit of factory-terminated pigtails in the SU?
MPO optical fibre pigtails allow rapid mass-fusion splicing at the Spine or Core layer. Factory-controlled termination at one end provides the benefits of pre-termination, while the ‘blunt’ end gives the flexibility to splice off to the required length onsite.

Architect Your AI Factory

ScaleFibre delivers pre-terminated cabling solutions for NVIDIA DGX SuperPOD deployments.

Get in Touch

Get details on high fibre count trunks for your NVIDIA DGX SU.