

# NVIDIA GB200 NVL72 Infrastructure and MPO-8 APC Cabling for Scalable Units

The DGX GB200 Scalable Unit (SU) marks a fundamental shift in data center architecture: a unified 576-GPU entity interconnected by 9,216 active fiber strands. ScaleFibre provides the precision-terminated trunks required to manage this density.


---


## The Four Physical SuperPOD Fabrics
NVIDIA segments the SU into four distinct physical networks, each isolating a different class of traffic.

### MN-NVL (NVLink 5) [Scale-Up]

The 'internal' rack network connecting all 72 GPUs at 1.8 TB/s per GPU.

**Features:**

- Zero optical fiber
- Passive copper backplane
- Blind-mate connectors

### Compute InfiniBand [Scale-Out]

The primary 'East-West' fabric for massive multi-node training.

**Features:**

- 4,608 active fibers per SU
- Rail-optimized topology
- Quantum-2 / Quantum-X800 switching

### Storage & In-Band [Frontend]

The Ethernet-based fabric for high-speed data ingestion and provisioning.

**Features:**

- 5:3 blocking factor
- BlueField-3 DPU offload
- VXLAN/RoCE support

### OOB Management [Control Plane]

The isolated network for hardware telemetry, BMC, and PDU management.

**Features:**

- RJ45/Cat6 copper
- SN2201 switch tier
- Physical air-gap security


## Exascale SU Metrics
An 8-rack Scalable Unit represents the fundamental building block of the NVIDIA AI Factory.

| Metric | Value |
| :--- | :--- |
| Active Fibers per SU | **9,216** |
| Compute-Only Strands | **4,608** |
| Storage Blocking Ratio | **5:3** |
| Native Port Speeds | **400G/800G** |
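
The figures above are simple multiples of the rack-level counts. A quick back-of-envelope check, using only the numbers quoted in this article (treat them as the article's figures, not an NVIDIA specification):

```python
# Fiber math for one 8-rack GB200 Scalable Unit (SU),
# using the counts quoted in this article.
RACKS_PER_SU = 8
FIBERS_PER_RACK = 1_152        # Level A: server-to-leaf strands per rack
COMPUTE_FIBERS_PER_SU = 4_608  # compute InfiniBand fabric only

total_fibers_per_su = RACKS_PER_SU * FIBERS_PER_RACK
print(total_fibers_per_su)     # 9216 active fibers per SU

# Strands remaining for the storage/in-band fabrics
print(total_fibers_per_su - COMPUTE_FIBERS_PER_SU)  # 4608
```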

## The Three Levels of SU Connectivity
1. **Level A: Server-to-Leaf**: 1,152 fibers per rack using high fibre count trunks or jumpers to connect NVL72 nodes to Leaf Switches.
2. **Level B: Leaf-to-Spine**: Aggregating rail-aligned traffic within the SU using 1:1 non-blocking links for compute.
3. **Level C: Spine-to-Core**: Scaling beyond the SU to a centralized Core area using high-count trunks.

## Comparison: Legacy Patching (Point-to-Point) vs. Modular High Fibre Count Trunking

### Legacy Patching (Point-to-Point)
* Manual Complexity: Requires 9,216 individual patch cords per 8-rack block.
* Airflow Obstruction: Dense cable bundles block liquid-cooling exhaust paths.
* Risk Profile: High probability of 'crossed rails' during manual 1:1 patching.
* Deployment Time: 115+ hours for manual routing and labeling per SU.

### Modular High Fibre Count Trunking
* Plug-and-Play: Consolidates thousands of fibers into pre-terminated 128F/144F/256F/288F/576F tailored trunks.
* Thermal Optimization: Small-diameter cables maximize airflow in dense racks.
* Pathway Efficiency: Consolidates 1,152 active fibers per rack into high-count MPO backbones.
* Installation Profile: Rapid deployment via pre-terminated factory-tested assemblies.
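
The consolidation factor is easy to quantify. A minimal sketch, assuming the per-SU fiber total and the trunk sizes listed above:

```python
import math

# How many pre-terminated trunks replace 9,216 individual patch cords
# in an 8-rack SU? Trunk sizes are the counts listed above.
TOTAL_FIBERS = 9_216
for trunk_size in (144, 288, 576):
    trunks = math.ceil(TOTAL_FIBERS / trunk_size)
    print(f"{trunk_size}F trunks needed: {trunks}")
# 144F -> 64, 288F -> 32, 576F -> 16
```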

## Technical FAQ
**Q: How does the fiber count stay manageable at 9,216 strands per SU?**
A: By using a tiered cabling hierarchy. [High-fiber count trunks](/products/optical-cable-assemblies/mpo-trunks/high-fibre-count-mpo-trunks/) replace thousands of individual MPO patch cords, reducing the physical volume and preventing cooling obstructions.

**Q: What is the '5:3 Blocking Factor' in the storage fabric?**
A: Unlike the non-blocking (1:1) compute fabric, the storage network is intentionally oversubscribed. This reduces fiber costs and complexity while meeting the 40 GB/s per-node requirement for storage. Deployment often utilizes [NVIDIA compatible MPO patch cables](/products/optical-cable-assemblies/mpo-trunks/nvidia-compatible-mpo-patch-cable-apc/).
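
As a rough illustration of what a 5:3 blocking factor implies at a leaf switch (the port counts here are hypothetical, not from NVIDIA's reference design):

```python
# A 5:3 blocking factor means 5 units of node-facing bandwidth share
# 3 units of uplink bandwidth. Hypothetical leaf: 15 x 400G downlinks.
DOWNLINKS, LINK_GBPS = 15, 400
OVERSUB_DOWN, OVERSUB_UP = 5, 3

down_capacity = DOWNLINKS * LINK_GBPS                         # 6000 Gb/s
uplink_capacity = down_capacity * OVERSUB_UP // OVERSUB_DOWN  # 3600 Gb/s
per_node_worst_case = uplink_capacity / DOWNLINKS             # 240.0 Gb/s
print(down_capacity, uplink_capacity, per_node_worst_case)
```

Under full load each node's guaranteed share toward the spine drops to 3/5 of its line rate, which is the cost traded for fewer uplink fibers.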

**Q: Why is the internal NVLink fabric fiber-free?**
A: NVIDIA utilizes a passive copper backplane and cable cartridges within the NVL72 rack. This eliminates thousands of optical transceivers and fibers, significantly reducing power consumption and latency. Optical fiber is reserved for the [scale-out compute fabric](/products/optical-cable-assemblies/mpo-trunks/nvidia-compatible-mpo-splitter-ndr/).

**Q: What happens when we scale to 16 Scalable Units?**
A: At the 16-SU scale (9,216 GPUs), the total active fiber count for the compute fabric alone reaches 18,432 strands. Managing this density requires [high density housings](/products/housings/high-fibre-count-housings/highstack-fixed-housings-for-high-count-optical-fibre/) designed specifically for high-count optical fiber and centralized core group switching architectures.

**Q: Why is MPO-8 used instead of the standard MPO-12?**
A: Modern 400G NDR and 800G XDR transceivers use 4-lane or 8-lane parallel optics. An 8-fiber MPO alignment matches the 4x Tx and 4x Rx configuration perfectly. Using [8-fiber active MPO trunks](/products/optical-cable-assemblies/mpo-trunks/small-fibre-count-mpo-trunks/) eliminates 'dark' or wasted fibers within the cluster fabric.
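
A small sketch of the fiber-position bookkeeping, assuming the common SR4-style convention for running an 8-fiber link over an MPO-12 ferrule (Tx on positions 1-4, Rx on 9-12, middle four unused):

```python
# Fiber usage in an 8-fiber parallel link (4 Tx + 4 Rx), comparing a
# standard MPO-12 ferrule with an MPO-8. Mapping follows the common
# SR4-style convention and is illustrative.
mpo12 = {pos: ("Tx" if pos <= 4 else "Rx" if pos >= 9 else "dark")
         for pos in range(1, 13)}
mpo8 = {pos: ("Tx" if pos <= 4 else "Rx") for pos in range(1, 9)}

print(sum(v == "dark" for v in mpo12.values()))  # 4 dark fibers per MPO-12
print(sum(v == "dark" for v in mpo8.values()))   # 0 -- every fiber carries traffic
```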

**Q: What is the importance of APC (Angled Physical Contact) polish?**
A: High-speed 100G-PAM4 signaling is extremely sensitive to back-reflections. The 8-degree angle of an [APC connector](/products/optical-cable-assemblies/mpo-trunks/nvidia-compatible-mpo-patch-cable-apc/) ensures reflected light is absorbed into the fiber cladding, maintaining the high Optical Return Loss (ORL) required for error-free AI training.
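
The effect of return loss on reflected power is straightforward to quantify. The figures below (roughly 50 dB for UPC, 60 dB for APC) are typical illustrative values, not vendor specifications:

```python
# Fraction of incident power reflected back for a given
# optical return loss (ORL) in dB.
def reflected_fraction(orl_db: float) -> float:
    return 10 ** (-orl_db / 10)

print(reflected_fraction(50))  # ~1e-05 (typical UPC)
print(reflected_fraction(60))  # ~1e-06 (typical APC: 10x less reflected light)
```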

**Q: How does fiber density impact liquid-cooled AI halls?**
A: Even with liquid-cooled trays, air must still circulate to manage secondary heat. Using high-density [SmartRibbon cables](/products/fibre-optic-cables/indoor-cables/smartribbon-flame-retardant-optical-fibre-cables/) significantly reduces cable diameter, ensuring that the physical cabling does not obstruct airflow or liquid-cooling manifolds.

**Q: What are the distance limitations for SU-level cabling?**
A: Multimode (OM4/OM5) is restricted to 50 meters at 400G/800G. For centralized Spine-to-Core links that exceed this, [Single-mode G.657.A1 fiber](/products/fibre-optic-cables/indoor-cables/slimcore-indoor-optical-cables/slimcore-144-fibre-indoor-fibre-optic-cable/) is mandatory to support longer reaches without signal degradation.

**Q: Can I use standard outdoor cables for AI data center backbones?**
A: No. Indoor AI halls require [LSZH (Low Smoke Zero Halogen)](/products/fibre-optic-cables/indoor-cables/slimcore-indoor-optical-cables/), Riser, or Plenum rated cables, depending on local fire safety regulations. For high-density pathways, specialized [SlimCORE indoor cables](/products/fibre-optic-cables/indoor-cables/slimcore-indoor-optical-cables/slimcore-288-fibre-indoor-fibre-optic-cable/) provide the necessary strand count in a reduced diameter.

**Q: What is the benefit of factory-terminated pigtails in the SU?**
A: [MPO cord optical fibre pigtails](https://americas.scalefibre.com/en/products/optical-cable-assemblies/optical-fibre-pigtails/mpo-cord-optical-fibre-pigtails/) allow for rapid mass-fusion splicing at the Spine or Core layer. The factory-controlled termination at one end provides the benefits of pre-termination, while the unterminated 'blunt' end can be spliced to the exact required length on site.

