Data Center Leaf 400G / 800G Comparison: Cisco Nexus 9332D-GX2B vs HPE Aruba CX 10000-48Y6C vs Juniper QFX5220-32CD vs Arista 7060X6

Four data center leaf / spine platforms at the 400G and 800G tier — the Cisco Nexus 9332D-GX2B, the HPE Aruba Networking CX 10000-48Y6C with Pensando DPU, the Juniper QFX5220-32CD, and the Arista 7060X6 (64PE / 32PE) — compared on port density and breakout, forwarding ASIC and buffer architecture, EVPN-VXLAN and fabric approach, RoCEv2 lossless Ethernet, Ultra Ethernet Consortium (UEC) readiness for AI workloads, published port-to-port latency, management plane, MACsec scope, and federal certification posture.

WiFi Hotshots is a vendor-agnostic engineering firm serving enterprise customers, infrastructure architects, data center operators, and AI-platform buyers across Southern California and the broader US market.

Multi-CCIE engineering bench

Vendor-agnostic — Cisco, HPE Aruba, Juniper, Arista

Fixed-fee SOW — no T&M surprises

25 years of enterprise networking leadership

The 400G / 800G data center leaf and spine tier comprises multiple architectural approaches, and this comparison positions the four platforms by capability rather than forcing apples-to-apples. The Cisco Nexus 9332D-GX2B and Juniper QFX5220-32CD are direct 400G peers at 32 x 400G QSFP-DD in 1RU. The Arista 7060X6-64PE sits one tier above at 64 x 800G OSFP in 2RU on Broadcom Tomahawk 5 silicon with UEC-ready positioning for AI fabrics. The HPE Aruba Networking CX 10000-48Y6C is a different class entirely — a 25G / 100G top-of-rack leaf whose defining feature is two embedded AMD Pensando Elba P4-programmable DPUs delivering 800 Gbps of stateful services at every port, not raw 400G / 800G port density. See data center network services, AI-ready infrastructure engineering, the full services catalog, or browse adjacent comparisons in the vendor comparison library.

Why These Four Platforms, and Why They Are Not Perfect Peers

Cisco, HPE Aruba Networking, Juniper Networks, and Arista Networks all ship data center leaf and spine platforms that appear on Fortune 500 short-lists for EVPN-VXLAN fabric, RoCEv2 lossless Ethernet, and AI / HPC east-west scale. They are not peers on raw uplink speed. The Cisco Nexus 9332D-GX2B and the Juniper QFX5220-32CD are the closest true peers in this set: both are 32 x 400G QSFP-DD 1RU fixed leaf / spine switches at 12.8 Tbps forwarding (25.6 Tbps full-duplex), both target the same EVPN-VXLAN leaf-spine role, and both predate the Ultra Ethernet Consortium 1.0 specification. The Arista 7060X6-64PE is a generation ahead on silicon — 51.2 Tbps on Broadcom Tomahawk 5 in 64 x 800G OSFP with explicit Arista AI Etherlink positioning and UEC founding membership.

The HPE Aruba Networking CX 10000-48Y6C is architecturally distinct: with 48 x 25G downlinks plus 6 x 100G uplinks it is a server-facing ToR, and its differentiator is the two embedded Pensando DPUs running a distributed stateful firewall, L4 load-balancing, DDoS mitigation, and session telemetry at wire rate without an external appliance tier (NAT and encryption are flagged as future software releases on the CX 10000 datasheet). A buyer who treats this page as a raw 400G shoot-out will miss what each platform is actually for.

The Comparison Matrix: Specifications That Matter

Forwarding capacity and packet-per-second numbers in vendor datasheets are theoretical maxima gated by frame size, feature enablement, oversubscription, and real-world queue pressure — they should never substitute for a fabric capacity plan. Where a specification reads “not publicly documented” or “not claimed,” the vendor datasheet does not disclose that value.
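
Before the matrix, a minimal sketch of the kind of arithmetic a fabric capacity plan starts from: leaf oversubscription computed from downlink and uplink counts. This is an illustration only; the port splits below are placeholder examples, not a recommendation for any of the four platforms.

```python
# Hypothetical leaf oversubscription check: datasheet Tbps figures say nothing
# about how a specific downlink/uplink mix behaves under east-west load.

def oversubscription(downlink_count, downlink_gbps, uplink_count, uplink_gbps):
    """Ratio of server-facing capacity to fabric-facing capacity on one leaf."""
    down = downlink_count * downlink_gbps
    up = uplink_count * uplink_gbps
    return down / up

# Example: a 48 x 25G / 6 x 100G ToR (CX 10000-class port layout)
print(oversubscription(48, 25, 6, 100))   # 2.0 : 1

# Example: a 32 x 400G leaf with 8 native 400G uplinks and 24 ports broken
# out to 4 x 100G server links (96 x 100G down, 8 x 400G up)
print(oversubscription(96, 100, 8, 400))  # 3.0 : 1
```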

Port configuration
  • Cisco Nexus 9332D-GX2B: 32 x 400G QSFP-DD + 2 x 1/10G SFP+ management. 1RU.
  • HPE Aruba CX 10000-48Y6C: 48 x 1/10/25G SFP/SFP+/SFP28 downlinks + 6 x 40/100G QSFP+/QSFP28 uplinks. 1RU. 25G / 100G class — different architectural tier.
  • Juniper QFX5220-32CD: 32 x 400G QSFP-DD. 1RU.
  • Arista 7060X6-64PE: 64 x 800G OSFP + 2 x SFP+ (64PE, 2RU). 32PE variant is 32 x 800G in 1RU.

Max per-port speed & breakout
  • Cisco Nexus 9332D-GX2B: 400G per port; breakout to 4 x 100G, 2 x 200G, 8 x 50G, 4 x 25G, 4 x 10G per QSFP-DD.
  • HPE Aruba CX 10000-48Y6C: 100G uplink ceiling per port; 10/25G downlinks. Not a 400G platform.
  • Juniper QFX5220-32CD: 400G per port; breakout 4 x 100G / 4 x 25G / 4 x 10G / 2 x 200G / 8 x 50G.
  • Arista 7060X6-64PE: 800G per port; per-port 800 / 400 / 200 / 100 / 50 / 25 / 10 GbE. Max breakout on the 64PE-B is 256 x 100 GbE or 512 x 50 GbE.

Forwarding ASIC
  • Cisco Nexus 9332D-GX2B: Cisco Cloud Scale LS12800GX2B ASIC (7nm). Cisco-internal silicon; Cloud Scale predates Cisco G200 (UEC-targeted silicon).
  • HPE Aruba CX 10000-48Y6C: Not disclosed on the current datasheet; dual AMD Pensando Elba DPUs alongside the switch ASIC are the architectural distinction.
  • Juniper QFX5220-32CD: Broadcom Tomahawk 3 (JVD-attributed; Juniper does not brand the silicon on its datasheet).
  • Arista 7060X6-64PE: Broadcom Tomahawk 5 — explicitly confirmed on the 7060X6 datasheet.

Forwarding capacity
  • Cisco Nexus 9332D-GX2B: 25.6 Tbps switching (full-duplex); 4.17 Bpps forwarding.
  • HPE Aruba CX 10000-48Y6C: 3.6 Tbps bidirectional; 2,000 Mpps.
  • Juniper QFX5220-32CD: 12.8 Tbps switching; 8 Bpps.
  • Arista 7060X6-64PE: 51.2 Tbps (102.4 Tbps full-duplex); 21.2 Bpps.

Packet buffer architecture
  • Cisco Nexus 9332D-GX2B: 120 MB buffer per datasheet.
  • HPE Aruba CX 10000-48Y6C: 32 MB shared buffer across the switch pipeline.
  • Juniper QFX5220-32CD: 64 MB on-chip buffer (Tomahawk 3-class) per the Juniper Pathfinder HCT; the Juniper specs page publishes 128 MB for the same platform — flagged here as a Juniper documentation inconsistency.
  • Arista 7060X6-64PE: 165 MB fully shared packet buffer on 64PE; 84 MB on 32PE.

Port-to-port latency
  • Cisco Nexus 9332D-GX2B: Not publicly documented on the Nexus 9332D-GX2B datasheet.
  • HPE Aruba CX 10000-48Y6C: < 1 µs with no DPU redirect; < 5 µs when traffic is redirected through the Pensando DPU pipeline for stateful services.
  • Juniper QFX5220-32CD: 750 ns (published).
  • Arista 7060X6-64PE: From 700 ns per the 7060X6 datasheet.

Fabric approach
  • Cisco Nexus 9332D-GX2B: Dual-mode NX-OS (standalone / EVPN-VXLAN) or ACI (application-centric). EVPN-VXLAN Type 2 and Type 5, multi-site, anycast gateway.
  • HPE Aruba CX 10000-48Y6C: AOS-CX EVPN-VXLAN Type 2 / 3 / 5, symmetric IRB, EVPN multi-homing (ESI). HPE Aruba Networking Fabric Composer for fabric orchestration.
  • Juniper QFX5220-32CD: Juniper Apstra-driven 3-stage EVPN-VXLAN as spine (JVD validated). Type 2 / Type 5 via the Apstra reference design.
  • Arista 7060X6-64PE: Arista EOS EVPN-VXLAN (VXLAN Bridging today; VXLAN Routing not currently supported on 7060X6 in EOS per datasheet). Part of the Arista AI Etherlink portfolio.

RoCEv2 / lossless Ethernet
  • Cisco Nexus 9332D-GX2B: RoCEv2 with PFC + ECN + DCQCN (Cisco-branded congestion control).
  • HPE Aruba CX 10000-48Y6C: RoCEv2 with PFC and three lossless pools. ECN underlay-only — ECN marking is not carried into the VXLAN overlay.
  • Juniper QFX5220-32CD: RoCEv2 with PFC + ECN (Juniper uses "PFC + ECN" branding rather than DCQCN).
  • Arista 7060X6-64PE: RoCEv2 + PFC + ECN enhancements: latency-based, throughput-based, and dynamic marking. Headroom memory for PFC lossless classes. Source-interface-based RDMA load balancing.

Ultra Ethernet Consortium (UEC) 1.0 readiness
  • Cisco Nexus 9332D-GX2B: UEC 1.0 not claimed on Cloud Scale silicon. Cisco's UEC-aligned silicon is the G200 generation, not the Cloud Scale LS12800 family.
  • HPE Aruba CX 10000-48Y6C: UEC 1.0 is not a published member platform for the CX 10000-48Y6C (25G leaf predates UEC scope). HPE has publicly committed to UEC on future CX DC platforms.
  • Juniper QFX5220-32CD: UEC 1.0 not claimed on Tomahawk 3 silicon — TH3 predates the UEC specification.
  • Arista 7060X6-64PE: UEC-ready. Part of the Arista AI Etherlink portfolio and positioned as fully compatible with future UEC networks. Arista is a UEC founding member.

AI-specific features (where published)
  • Cisco Nexus 9332D-GX2B: Nexus Dashboard Insights for congestion visibility; RoCEv2 / DCQCN tuning guides for AI clusters.
  • HPE Aruba CX 10000-48Y6C: Distributed stateful services (FW / DDoS / L4 LB / NAT / session telemetry) at every ToR port via two Pensando DPUs — 800 Gbps aggregate DPU throughput.
  • Juniper QFX5220-32CD: PFC + ECN congestion management with Apstra intent-based operations.
  • Arista 7060X6-64PE: Dynamic Load Balancing, Cluster Load Balancing (RDMA-aware), Packet Spraying, Fast Link Failover under 500 ns, 128-way ECMP, 64-way MLAG, LANZ 1 ms polling, NetDL streaming telemetry, and AI Analyzer at 100 µs granularity (not currently in EOS per datasheet).

Minimum OS version
  • Cisco Nexus 9332D-GX2B: NX-OS 10.2(3)F or later (release coincident with the SKU introduction).
  • HPE Aruba CX 10000-48Y6C: AOS-CX 10.13 / 10.14 train.
  • Juniper QFX5220-32CD: Junos EVO 19.2R1 minimum (platform launch); 19.3R1 added JTI / gRPC / OpenConfig streaming telemetry on this platform.
  • Arista 7060X6-64PE: EOS 4.32.2 (64PE); EOS 4.34.2 (32PE).

Management plane
  • Cisco Nexus 9332D-GX2B: Nexus Dashboard Fabric Controller (NX-OS mode) or Cisco APIC (ACI mode). Nexus Dashboard Insights for assurance / telemetry.
  • HPE Aruba CX 10000-48Y6C: HPE Aruba Networking Fabric Composer + NetEdit + Aruba Central (≥ 2.5.6) for cloud-assisted operations.
  • Juniper QFX5220-32CD: Juniper Apstra (intent-based) + Paragon Automation. JTI gRPC / gNMI + OpenConfig from 19.3R1. Xeon D-1500 control-plane CPU; 100 GB SSD.
  • Arista 7060X6-64PE: Arista CloudVision on-prem or CVaaS. Field-replaceable supervisor (DCS-7001-SUP-A) — novel at this tier.

Power typical / max
  • Cisco Nexus 9332D-GX2B: 638 W typical / 1442 W max per datasheet. 1500 W AC PSU rating (PSI / PSE variants).
  • HPE Aruba CX 10000-48Y6C: Up to 750 W max per datasheet; 800 W PSU rating.
  • Juniper QFX5220-32CD: 730 W typical / 973 W max AC. 2 x 1600 W PSUs.
  • Arista 7060X6-64PE: 640 W typical / 2218 W max (64PE); 348 W typical / 1136 W max (32PE). PWR-2421 HV 2400 W at 96% efficiency. LPO optics reduce total system power consumption by up to 50% per Arista's AI Networking whitepaper.

Dimensions & weight
  • Cisco Nexus 9332D-GX2B: 1RU fixed form factor.
  • HPE Aruba CX 10000-48Y6C: 1RU fixed form factor.
  • Juniper QFX5220-32CD: 17.26 x 21.1 x 1.72 in; 24.5 lb. 1RU.
  • Arista 7060X6-64PE: 17.32 x 3.46 x 23.9 in; 46 lb; 2RU. The 32PE is 1RU.

Optics support
  • Cisco Nexus 9332D-GX2B: QSFP-DD 400G ecosystem (DR4, FR4, LR4, ZR/ZR+). QSFP28 100G via breakout.
  • HPE Aruba CX 10000-48Y6C: QSFP28 / QSFP+ 100G / 40G on uplinks. SFP28 / SFP+ / SFP on downlinks (10G / 25G).
  • Juniper QFX5220-32CD: QSFP-DD 400G ecosystem with full breakout tree (200G / 100G / 50G / 25G / 10G).
  • Arista 7060X6-64PE: 800G OSFP with LPO (Linear Pluggable Optics) support, plus the full 400G / 200G / 100G breakout tree down to 10G.

MACsec scope
  • Cisco Nexus 9332D-GX2B: MACsec / CloudSec at wire rate on the last 8 ports only — not all 32 x 400G ports.
  • HPE Aruba CX 10000-48Y6C: Check AOS-CX release notes per SKU; not confirmed on every port of the CX 10000-48Y6C in sources reviewed.
  • Juniper QFX5220-32CD: MACsec not listed in QFX5220-32CD spec sheets at the time of this review.
  • Arista 7060X6-64PE: MACsec not listed in the 7060X6 datasheet standards-compliance block at the time of this review.

Federal certifications (FIPS / CC / UL)
  • Cisco Nexus 9332D-GX2B: FIPS 140-3 / Common Criteria per-SKU not surfaced for the 9332D-GX2B in public sources reviewed — verify through the Cisco Trust Portal.
  • HPE Aruba CX 10000-48Y6C: AOS-CX Crypto Module FIPS 140-2 validated (NIST CMVP #3958). HPE lists 140-3 cert #4876 for the AOS-CX Crypto Module; a CX 10000-specific 140-3 listing was not confirmed on the NIST CMVP site in sources reviewed. JITC tactical interop certification dated 2024-09-06.
  • Juniper QFX5220-32CD: FIPS 140-3 / Common Criteria per-SKU not publicly documented for the QFX5220-32CD in sources reviewed — verify via the Juniper Pathfinder Compliance Advisor.
  • Arista 7060X6-64PE: UL 62368-1, IEC 62368-1 listed. FIPS and Common Criteria not listed on the 7060X6 datasheet standards-compliance block at the time of this review.

A 400G leaf refresh and an 800G AI fabric are two different projects with two different platforms. Send the rack elevations, east-west traffic profile, and RoCE / GPU cluster scope; WiFi Hotshots returns a fixed-fee SOW that picks the platform based on fit.

Per-Vendor Fact Summaries

Cisco Nexus 9332D-GX2B

A 1RU, 32 x 400G QSFP-DD fixed leaf / spine on the Cisco Cloud Scale LS12800GX2B ASIC (7nm) delivering 25.6 Tbps switching (full-duplex) and 4.17 Bpps forwarding, with a 120 MB packet buffer per datasheet. The platform is dual-mode: it runs standalone NX-OS with EVPN-VXLAN (Type 2 / Type 5, multi-site, anycast gateway, RoCEv2 with PFC + ECN + DCQCN) or operates as an ACI leaf under APIC. Minimum NX-OS is 10.2(3)F or later (the release coincident with the SKU introduction). Port-to-port latency is not publicly documented on the Nexus 9332D-GX2B datasheet.

MACsec / CloudSec operate at wire rate on the last 8 ports only, which constrains MACsec-everywhere postures. Ultra Ethernet Consortium 1.0 compliance is not claimed on Cloud Scale silicon — Cisco’s UEC-aligned silicon is the G200 generation on newer platforms. Management is Nexus Dashboard Fabric Controller (NX-OS mode) or APIC (ACI mode), with Nexus Dashboard Insights for assurance. Power draw is 638 W typical / 1442 W max per datasheet; 1500 W AC PSU rating (PSI / PSE variants).

HPE Aruba Networking CX 10000-48Y6C

The CX 10000-48Y6C is not a 400G leaf. It is a 1RU 25G top-of-rack with 48 x 10/25G SFP28 downlinks and 6 x 40/100G QSFP28 uplinks, 3.6 Tbps bidirectional forwarding, and a 32 MB shared buffer. Its architectural distinction is two embedded AMD Pensando Elba P4-programmable DPUs delivering 800 Gbps of stateful services at the rack: distributed firewall, DDoS mitigation, secure segmentation, and per-session telemetry are current-software capabilities; NAT and encryption are flagged as future software release per the CX 10000 datasheet. Latency is under 1 µs with no DPU redirect and under 5 µs when traffic is steered through the DPU pipeline. Fabric is AOS-CX EVPN-VXLAN (Type 2 / 3 / 5, symmetric IRB, ESI multi-homing) on the 10.13 / 10.14 train with RoCEv2 PFC and three lossless pools; ECN is underlay-only and not carried in VXLAN overlay.

AOS-CX Crypto Module holds FIPS 140-2 CMVP #3958 (Historical status); a CX 10000-specific FIPS 140-3 certificate was not publicly surfaced in the sources reviewed — verify the current certificate version with HPE Aruba before federal scoping. UEC 1.0 is not a published member platform for this SKU; HPE has committed to future UEC-aligned CX DC hardware. A buyer comparing this to a 400G or 800G leaf is comparing different categories — the DPU is the buying decision, not the port count.

Juniper QFX5220-32CD

A 1RU 32 x 400G QSFP-DD fixed switch with full breakout (4 x 100G / 4 x 25G / 4 x 10G / 2 x 200G / 8 x 50G) on Broadcom Tomahawk 3 silicon (JVD-attributed), 12.8 Tbps forwarding, 8 Bpps, and 64 MB of on-chip buffer. Port-to-port latency is published at 750 ns. Fabric is Apstra-driven EVPN-VXLAN (3-stage, JVD validated as spine) with PFC + ECN for RoCEv2 — Juniper avoids the “DCQCN” brand. Minimum Junos EVO is 19.2R1 (platform launch), with JTI gRPC / gNMI and OpenConfig streaming telemetry support added in 19.3R1.

Management is Juniper Apstra (intent-based) plus Paragon Automation; the control-plane CPU is a Xeon D-1500 with a 100 GB SSD. UEC 1.0 compliance is not claimed on Tomahawk 3. MACsec is not listed in the QFX5220-32CD spec sheets reviewed, and FIPS 140-3 / Common Criteria per-SKU listings were not publicly documented in the primary sources reviewed — verify via Juniper Pathfinder Compliance Advisor before a federal scope. Power is 730 W typical / 973 W max with dual 1600 W PSUs. Dimensions are 17.26 x 21.1 x 1.72 in at 24.5 lb.

Arista 7060X6-64PE (and 32PE)

Arista’s 7060X6-64PE is a generation ahead of the 400G peers in this comparison. It is 64 x 800G OSFP in 2RU on Broadcom Tomahawk 5 silicon, 51.2 Tbps (102.4 Tbps full-duplex), 21.2 Bpps, and 165 MB of fully shared packet buffer (32PE variant is 32 x 800G, 1RU, 84 MB buffer). Per-port speeds are 800 / 400 / 200 / 100 / 50 / 25 / 10 GbE, with a maximum breakout of 256 x 100 GbE or 512 x 50 GbE on the 64PE-B. Latency is from 700 ns per the datasheet, with roughly 650 ns typical under RFC 2544 methodology.

The platform is UEC-ready — it is part of Arista’s AI Etherlink portfolio and Arista is a founding member of the Ultra Ethernet Consortium. AI-fabric features on EOS include Dynamic Load Balancing, Cluster Load Balancing (RDMA-aware), Packet Spraying, Fast Link Failover under 500 ns, 128-way ECMP, 64-way MLAG, Headroom memory for PFC lossless classes, source-interface-based RDMA load balancing, LANZ 1 ms polling, and NetDL streaming telemetry.
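
To make the load-balancing distinction concrete, the toy sketch below contrasts per-flow ECMP hashing (every packet of a flow pinned to one uplink) with per-packet spraying across uplinks. It is a simplified model of the general idea, not Arista's DLB or Cluster Load Balancing implementation; the flow counts and sizes are made up.

```python
# Simplified contrast: per-flow ECMP hashing vs per-packet spraying.
import random
from collections import Counter

UPLINKS = 8
flows = [f"gpu-flow-{i}" for i in range(4)]                       # a few elephant RoCE flows
packets = [(random.choice(flows), 4096) for _ in range(100_000)]  # (flow, bytes)

# Per-flow ECMP: every packet of a flow hashes to the same uplink.
per_flow = Counter()
for flow, size in packets:
    per_flow[hash(flow) % UPLINKS] += size

# Per-packet spraying: packets round-robin across all uplinks.
sprayed = Counter()
for i, (_, size) in enumerate(packets):
    sprayed[i % UPLINKS] += size

print("per-flow ECMP bytes per uplink:", dict(per_flow))
print("per-packet spray bytes per uplink:", dict(sprayed))
# With only 4 large flows and 8 uplinks, per-flow hashing leaves uplinks idle
# and can stack two flows on one link; spraying spreads bytes evenly but
# requires reorder tolerance at the receiver, which UEC-style transports target.
```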

LPO optics support reduces power draw 40–50% versus retimed optics. VXLAN Bridging is supported today; VXLAN Routing is not currently supported on 7060X6 per EOS release notes. MACsec is not listed on the 7060X6 datasheet, and FIPS / Common Criteria are not in the standards-compliance block of the datasheet reviewed — UL 62368-1 and IEC 62368-1 are. Minimum EOS is 4.32.2 (64PE) / 4.34.2 (32PE). Management is CloudVision on-prem or CVaaS, with a field-replaceable supervisor (DCS-7001-SUP-A) that is novel at this tier.

When Each Platform Is Worth Evaluating First

These are routing heuristics, not recommendations. A production decision requires a fabric capacity plan, a written scope, and a bake-off matched to the actual east-west traffic profile. WiFi Hotshots engineers data center fabrics across all four vendors; the routing below reflects what the documented specifications favor for common scenarios.

  • AI / GPU fabric with UEC-forward posture (training clusters, RoCE at scale, sub-µs tail latency, 800G uplinks): Arista 7060X6 is the documented-strongest platform in this set. Tomahawk 5, 51.2 Tbps, 165 MB shared buffer, UEC founding-member status, Cluster Load Balancing, and Fast Link Failover under 500 ns are the features the AI use case names specifically.
  • Distributed stateful services at the rack (east-west segmentation, zero-trust, per-flow telemetry without an external appliance tier): HPE Aruba Networking CX 10000-48Y6C is the differentiated platform. The two Pensando DPUs deliver 800 Gbps of stateful services (firewall, DDoS, secure segmentation, session telemetry) at wire rate without redirecting traffic out of the ToR; NAT and encryption are flagged future software release per the CX 10000 datasheet.
  • Existing Cisco ACI or NX-OS estate with phased 400G leaf refresh: Cisco Nexus 9332D-GX2B preserves both control-plane options (NX-OS EVPN-VXLAN or ACI under APIC) on a single SKU. Nexus Dashboard Fabric Controller and Nexus Dashboard Insights operational tooling carry forward.
  • Intent-based fabric operations with explicit 400G leaf / spine and published sub-µs latency: Juniper QFX5220-32CD with Apstra is the documented choice. The 750 ns port-to-port figure is the only published number in the 400G direct-peer group.
  • MACsec-everywhere on every port of a 400G leaf: None of the four platforms as listed is a clean fit today. Cisco Nexus 9332D-GX2B documents MACsec / CloudSec on the last 8 ports only. MACsec is not listed on the QFX5220-32CD spec sheet or the 7060X6 datasheet. CX 10000 MACsec scope per SKU requires AOS-CX release-note verification. A MACsec-everywhere requirement should drive a platform reselection within each vendor’s family.
  • Federal or FedRAMP-adjacent data center scope: All four vendors maintain active FIPS programs. For this specific SKU group, HPE Aruba Networking publishes the clearest documentation (AOS-CX Crypto Module FIPS 140-2 CMVP #3958; HPE-listed 140-3 cert #4876; JITC 2024-09-06). Cisco, Juniper, and Arista SKU-specific FIPS and Common Criteria listings were not publicly confirmed in the sources reviewed — verify each vendor’s compliance registry before downselecting.

Frequently Asked Questions

Are these four platforms direct peers at the same tier?

No. Cisco Nexus 9332D-GX2B and Juniper QFX5220-32CD are direct 400G peers (both 32 x 400G QSFP-DD, 1RU, 12.8 Tbps). Arista 7060X6-64PE is an 800G-class platform on Broadcom Tomahawk 5 at 51.2 Tbps — one generation ahead on silicon. HPE Aruba Networking CX 10000-48Y6C is a 25G ToR with 100G uplinks whose architectural distinction is two embedded Pensando DPUs delivering 800 Gbps of stateful services. Comparing them as raw 400G peers obscures what each is actually built for.

Which of these is UEC 1.0 ready?

Of this set, only the Arista 7060X6 is positioned as UEC-ready. It is part of Arista’s AI Etherlink portfolio and Arista is a founding member of the Ultra Ethernet Consortium. Cisco Cloud Scale silicon (9332D-GX2B) and Broadcom Tomahawk 3 (QFX5220-32CD) predate UEC 1.0 and do not claim the specification. HPE Aruba Networking has publicly committed to UEC on future CX DC platforms, but the CX 10000-48Y6C 25G SKU is not a UEC 1.0 member platform.

What is the Pensando DPU doing on the CX 10000 and why does it matter?

The CX 10000-48Y6C has two AMD Pensando Elba P4-programmable DPUs delivering 800 Gbps aggregate throughput for stateful services at every ToR port: distributed firewall, L4 load-balancing, DDoS mitigation, and per-session telemetry today, with NAT and encryption flagged as future software releases per the CX 10000 datasheet. Workloads typically steered to an external firewall or service-node tier run on the rack switch itself. Latency is under 1 µs without DPU redirect and under 5 µs when traffic transits the DPU pipeline. The DPU service set is the CX 10000’s buying decision, not raw 400G port count.

What is the published port-to-port latency on each platform?

Juniper QFX5220-32CD publishes 750 ns. Arista 7060X6 publishes “from 700 ns,” with roughly 650 ns typical under RFC 2544 methodology. HPE Aruba CX 10000-48Y6C is under 1 µs without DPU redirect and under 5 µs with DPU redirect. Cisco Nexus 9332D-GX2B does not publish a port-to-port latency figure on the 9332D-GX2B datasheet reviewed. Federal and financial-trading buyers who require a published latency number in procurement should note the gap.

How does MACsec scope differ across these platforms?

Cisco Nexus 9332D-GX2B supports MACsec and CloudSec at wire rate on the last 8 of its 32 x 400G ports — not all 32. A MACsec-everywhere requirement should drive a platform reselection within the Cisco Nexus family. MACsec is not listed on the Juniper QFX5220-32CD spec sheet or the Arista 7060X6 datasheet reviewed. MACsec scope on the HPE Aruba CX 10000-48Y6C requires AOS-CX release-note verification per SKU. Buyers with a MACsec-everywhere requirement should confirm per-port scope with each vendor before downselecting.

What is the minimum OS version for each platform?

Cisco Nexus 9332D-GX2B requires NX-OS 10.2(3)F or later. HPE Aruba CX 10000-48Y6C runs on the AOS-CX 10.13 / 10.14 train. Juniper QFX5220-32CD requires Junos EVO 19.2R1 or later, with JTI gRPC / gNMI and OpenConfig streaming telemetry landing in 19.3R1. Arista 7060X6 requires EOS 4.32.2 on the 64PE and EOS 4.34.2 on the 32PE. All four should be paired with current operational tooling (Nexus Dashboard, Aruba Central or Fabric Composer, Apstra, CloudVision) rather than legacy CLI-only workflows.

What is the federal certification posture across this set?

HPE Aruba Networking publishes the clearest documentation in this set: AOS-CX Crypto Module FIPS 140-2 validated (NIST CMVP #3958); 140-3 cert #4876 listed by HPE; JITC tactical interop dated 2024-09-06. A CX 10000-specific 140-3 listing was not confirmed on the NIST CMVP site in sources reviewed. Cisco Nexus 9332D-GX2B, Juniper QFX5220-32CD, and Arista 7060X6 SKU-specific FIPS 140-3 and Common Criteria listings were not publicly documented in the primary sources reviewed — verify through the Cisco Trust Portal, the Juniper Pathfinder Compliance Advisor, and Arista’s product certification index respectively before a federal scope.

The Arista 7060X6 datasheet lists UL 62368-1 and IEC 62368-1.

Can I mix vendors in a single EVPN-VXLAN fabric?

Yes, per standards, but the operational cost is non-trivial. EVPN (RFC 7432) and VXLAN (RFC 7348) are standards; Type 2 and Type 5 EVPN route exchange between Cisco NX-OS, AOS-CX, Junos EVO, and Arista EOS is tested in lab scenarios and deployed in some multi-vendor brownfields. In production, the trade-off is operational: Apstra, Nexus Dashboard Fabric Controller, Fabric Composer, and CloudVision are single-vendor intent engines — mixing vendors forces CLI or OpenConfig operations, and feature parity (EVPN multi-homing ESI behavior, anycast-gateway scale, VXLAN Routing support) diverges across platforms.

Most Fortune 500 deployments standardize one vendor per fabric and bridge fabrics at the DCI layer.

Which platform should I evaluate first for an AI / GPU training fabric?

For a greenfield AI or GPU training fabric with 800G uplinks, RoCEv2 at scale, and UEC-forward posture, the Arista 7060X6 is the documented-strongest platform in this set: Tomahawk 5 at 51.2 Tbps, 165 MB shared buffer, Cluster Load Balancing (RDMA-aware), Packet Spraying, Fast Link Failover under 500 ns, 128-way ECMP, and UEC founding-member status. For a 400G east-west leaf in a non-AI estate, the Cisco Nexus 9332D-GX2B or Juniper QFX5220-32CD are the direct peers.

For east-west stateful services at every rack, the HPE Aruba CX 10000-48Y6C with Pensando DPUs is the differentiated choice.

Send the GPU cluster size, collective-communication profile, and uplink plan to produce a written fabric scope.

Where does this fit in the broader WiFi Hotshots services catalog?

Data center fabric design, EVPN-VXLAN build-outs, and AI / GPU network scoping fall under data center network services and AI-ready infrastructure engineering. Adjacent comparisons in the vendor comparison library cover the access-layer generations (Wi-Fi 6E and Wi-Fi 7 flagship APs) and campus infrastructure. Campus and branch scoping, structured cabling, SD-WAN, and wireless site survey engagements are in the broader services catalog.

What is the practical difference between QSFP-DD and OSFP form factors at 400G?

QSFP-DD cages are backwards-compatible with QSFP28 modules — a QSFP-DD port accepts a QSFP28 100G module (downshifting that port to 100G) in addition to 400G QSFP-DD modules. OSFP is not backwards-compatible with QSFP28 and has a slightly larger envelope with an integrated heat sink, designed for higher-power optics (typically about 15 W versus 12-14 W for QSFP-DD).

Cisco Nexus 9332D-GX2B uses QSFP-DD. Arista 7060X6-64PE uses OSFP or QSFP-DD variants (check the -PE vs -DE SKU suffix). Juniper QFX5220-32CD uses QSFP-DD. OSFP is more common in AI training spine roles where 800G and 1.6T modules require the larger heat-dissipation envelope; QSFP-DD is more common in enterprise DC leaf.

What is the reach difference between 400G-FR4, 400G-DR4, and 400G-LR4 optics?

400G-FR4 reaches 2 km on duplex singlemode (IEEE 802.3cu) — the common campus-to-DC or short DC-to-DC interconnect. 400G-DR4 reaches 500 m on parallel singlemode, carrying 100G PAM4 on each of four fiber pairs (IEEE 802.3bs) — the default intra-row or row-to-row DC optic. 400G-LR4 reaches 10 km on duplex singlemode.

400G-SR8 and 400G-SR4.2 are the multi-mode short-reach variants (100 m on OM4 or OM5). Optic choice drives fiber-plant cost and power: DR4 needs a parallel MPO-12 with 8 live fibers (4 transmit / 4 receive pairs), which cabling teams often prefer for AI-cluster density, while FR4 / LR4 use standard duplex LC.
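
A small sketch of how the optic choice translates into fiber-strand counts for plant planning. The link count is a made-up example; the per-link fiber counts reflect the DR4 parallel / FR4-LR4 duplex split described above.

```python
# Rough fiber-strand math for DR4 vs FR4/LR4 plant planning (illustrative only).
def strands_needed(link_count, optic):
    # DR4: parallel MPO-12, 8 fibers live per link (4 TX + 4 RX pairs).
    # FR4 / LR4: duplex LC, 2 fibers per link.
    per_link = {"DR4": 8, "FR4": 2, "LR4": 2}[optic]
    return link_count * per_link

links = 64  # example: 64 x 400G leaf-to-spine links in one row
print("DR4 strands:", strands_needed(links, "DR4"))   # 512
print("FR4 strands:", strands_needed(links, "FR4"))   # 128
```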

What is the difference between NRZ and PAM4 modulation at 400G, and why does it matter for latency?

NRZ (Non-Return-to-Zero) encodes 1 bit per symbol; PAM4 (Pulse Amplitude Modulation, 4-level) encodes 2 bits per symbol. 400G Ethernet uses PAM4 signaling at roughly 26.6 GBd per lane (8 x 50G PAM4 lanes) or roughly 53.1 GBd per lane (4 x 100G PAM4 lanes). 100G NRZ was the prior generation (4 x 25G NRZ lanes for 100G).

PAM4 gets higher throughput per fiber but has lower SNR margin than NRZ — it requires stronger FEC (RS FEC per IEEE 802.3cd / 802.3bs) which adds ~100 ns to the end-to-end latency budget. For ultra-low-latency trading fabrics, NRZ-based 100G is still preferred over PAM4 400G; for AI back-end and mainstream DC, PAM4 400G is the default.
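
To make the lane arithmetic concrete, a quick sketch with rounded figures. The FEC overhead factor is the nominal 544/512 line-rate expansion used as an approximation here; exact per-standard framing differs slightly.

```python
# Back-of-envelope lane math for NRZ vs PAM4 signaling rates (illustrative).
RS_FEC_OVERHEAD = 544 / 512   # ~1.0625 line-rate expansion with RS-544 FEC framing

def lane_baud(payload_gbps, bits_per_symbol, fec_overhead=1.0):
    """Per-lane symbol rate: encoded line rate divided by bits per symbol."""
    return payload_gbps * fec_overhead / bits_per_symbol

print(lane_baud(25, 1))                      # 25.0   -> 100G-era NRZ lane (actual ~25.8 GBd with 64b/66b)
print(lane_baud(50, 2, RS_FEC_OVERHEAD))     # 26.5625 GBd -> 50G PAM4 lane
print(lane_baud(100, 2, RS_FEC_OVERHEAD))    # 53.125 GBd  -> 100G PAM4 lane
# RS-544 encode/decode adds on the order of ~100 ns per link to the latency budget.
```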

Can each leaf platform break out a 400G port into 4x 100G or 8x 50G?

Cisco Nexus 9332D-GX2B: 400G QSFP-DD ports break out into 4x 100G (QSFP-DD-to-4x-QSFP28 breakout) per NX-OS 10.2+ config. Arista 7060X6-64PE: 800G ports break out into 2x 400G or 4x 200G; 400G ports break out into 4x 100G or 2x 200G per Arista 7060X6 data sheet. Juniper QFX5220-32CD: 400G breaks out into 4x 100G per Junos EVO documentation.

HPE Aruba CX 10000-48Y6C carries 6 x 100G QSFP28 uplinks plus 48 x 25G SFP28 downlinks — no 400G breakout on the 48Y6C SKU. Breakout flexibility matters at the leaf when mixed 100G / 400G clients attach to the same switch.
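
A minimal breakout-budgeting sketch for a 32 x 400G leaf: how many 100G client interfaces remain after reserving some physical ports as native 400G uplinks. The 8-uplink split is an example, not a design rule.

```python
# Illustrative breakout budgeting on a 32 x 400G leaf.
total_ports = 32
uplink_ports = 8                       # kept at native 400G toward the spine
breakout_ports = total_ports - uplink_ports
clients_100g = breakout_ports * 4      # each 400G QSFP-DD breaks out to 4 x 100G
print(clients_100g)                    # 96 x 100G client-facing interfaces
```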

What is DCQCN tuning on each leaf platform for RoCEv2 deployments?

DCQCN (Data Center Quantized Congestion Notification) is the reference congestion-control algorithm for RoCEv2 — ECN marking (RFC 3168) combined with a rate-adjustment loop. Cisco Nexus 9332D-GX2B supports DCQCN via NX-OS PFC / ECN configuration with recommended thresholds from the Cisco RoCE Deployment Guide.

Arista 7060X6 supports DCQCN plus adaptive-routing extensions per the Arista AI Networking reference design. Juniper QFX5220-32CD supports DCQCN-style PFC + ECN via Junos EVO QoS configuration. Outside this comparison set, NVIDIA’s SN5600 Spectrum-X adds switch-level telemetry and SuperNIC DDP on top of DCQCN to reach roughly 95 percent effective bandwidth per NVIDIA whitepapers.
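
For readers who have not looked inside DCQCN, a compressed sketch of the reaction-point rate update as described in the published DCQCN model: the sender cuts its rate on each CNP and probes back up between CNPs. The constants below are illustrative, not any vendor's shipped defaults.

```python
# Minimal sketch of DCQCN reaction-point behavior (simplified; illustrative constants).
g = 1 / 16          # gain for the congestion estimate alpha
alpha = 1.0         # current congestion estimate
rate = 400.0        # current sending rate in Gbps
target = rate       # rate to recover toward after a cut

def on_cnp():
    """A Congestion Notification Packet arrived: remember the target, cut rate."""
    global alpha, rate, target
    target = rate
    rate = rate * (1 - alpha / 2)
    alpha = (1 - g) * alpha + g

def on_interval_without_cnp():
    """No CNP this interval: decay alpha and recover halfway back to target."""
    global alpha, rate
    alpha = (1 - g) * alpha
    rate = (rate + target) / 2

on_cnp()
print(rate)                         # 200.0 after a full-alpha rate cut
for _ in range(5):
    on_interval_without_cnp()
print(round(rate, 1))               # ~393.8, recovering toward the 400 Gbps target
```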

How does buffer allocation differ between AI back-end and traditional east-west workloads on these platforms?

AI back-end fabrics push incast bursts during AllReduce — multiple GPUs simultaneously send to the same destination, filling leaf egress buffers. AI-optimized buffer tuning dedicates more buffer to priority queues carrying RoCEv2 traffic (typically PFC class 3) and enables adaptive routing to spread congestion across spine paths.

Cisco Nexus 9332D-GX2B on the Cloud Scale LS12800GX2B ASIC carries a 120 MB buffer per its datasheet. Arista 7060X6-64PE on Tomahawk 5 has a 165 MB shared buffer per the Arista 7060X6 data sheet. Juniper QFX5220-32CD on Tomahawk 3 has ~64 MB. HPE Aruba CX 10000-48Y6C lists a 32 MB shared buffer; verify the per-queue allocation at procurement.
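
A rough incast sketch of why those buffer sizes matter: how much egress buffering an AllReduce-style burst can demand, and how long it takes to drain at line rate. Sender count, burst size, and link rate are illustrative, not a sizing rule.

```python
# Rough incast math for an AllReduce-style burst toward one leaf egress port.
senders = 16                  # GPUs bursting to one destination simultaneously
burst_per_sender_kb = 256     # in-flight bytes each sender has toward the egress
egress_drain_gbps = 400       # egress link rate at the leaf

burst_mb = senders * burst_per_sender_kb / 1024
drain_time_us = burst_mb * 8 / (egress_drain_gbps / 1000)   # MB -> Mbits; Gbps/1000 = Mbits per microsecond

print(f"instantaneous burst: {burst_mb:.1f} MB")             # 4.0 MB
print(f"drain time at line rate: {drain_time_us:.0f} us")    # ~80 us
# Compare the burst against the shared buffer actually available to that queue
# (e.g., a slice of 165 MB on the 7060X6-64PE vs ~64 MB on the QFX5220-32CD).
```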

What EVPN Type 2, Type 3, and Type 5 route generation does each leaf produce?

Type 2 (MAC/IP Advertisement per RFC 7432): all four platforms generate Type 2 on every learned host MAC + ARP. Type 3 (Inclusive Multicast Ethernet Tag): all four generate Type 3 for each VNI for BUM (broadcast / unknown-unicast / multicast) handling. Type 5 (IP Prefix route per RFC 9136): all four support Type 5 for prefix-based routing across EVPN fabrics.

Cisco Nexus 9332D-GX2B on NX-OS 10.x: full Type 1-5 support. Arista 7060X6: full Type 1-5 on EOS 4.28+. Juniper QFX5220-32CD: full Type 1-5 on Junos EVO 19.3+. HPE Aruba CX 10000: full Type 2/3/5 on AOS-CX 10.13+. Type 5 is the most common route type for Layer 3 prefix advertisement in modern fabrics.
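
A ballpark sketch of what those route types mean for control-plane scale on one leaf. The host, VNI, and prefix counts are invented examples; real advertisement counts depend on ARP suppression, dual-homing, and policy.

```python
# Ballpark EVPN control-plane scale for one leaf (illustrative arithmetic only).
hosts_per_rack = 80        # learned MAC/IP endpoints behind this leaf
vnis_on_leaf = 24          # L2 VNIs stretched to this leaf
external_prefixes = 300    # prefixes advertised as Type 5 from border leafs

type2 = hosts_per_rack * 2          # MAC-only plus MAC/IP advertisement per host (common pattern)
type3 = vnis_on_leaf                # one inclusive-multicast route per VNI per VTEP
type5 = external_prefixes

print(f"Type 2: {type2}, Type 3: {type3}, Type 5: {type5}")
```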

How does BlueField-3 or Pensando DPU integration differ between these leaf platforms?

NVIDIA BlueField-3 is a PCIe SmartNIC (not integrated in the switch) that offloads services on the server side. Arista 7060X6 and Cisco Nexus 9332D-GX2B both work with BlueField-3 via standard RoCEv2 uplinks but do not integrate DPU silicon in the switch ASIC.

HPE Aruba CX 10000-48Y6C is the architectural outlier — two AMD Pensando Elba DPUs are embedded in the switch ASIC path, delivering 800 Gbps of P4-programmable stateful services (firewall, NAT, LB) at every ToR port. That is why the CX 10000 is not a drop-in 400G peer for Nexus / Arista / QFX — it is a different architectural category focused on east-west stateful services, not raw 400G port density.

Can these leaf switches handle NVMe-oF over TCP and RoCE storage traffic on the same fabric as AI workloads?

Yes, but PFC class separation is required. RoCEv2 AI traffic typically runs on PFC priority 3 (lossless, ECN-enabled); NVMe-oF over TCP runs on a best-effort class (standard TCP congestion control). Mixing the two on the same leaf works when PFC is correctly segmented by queue.

Cisco Nexus 9332D-GX2B, Arista 7060X6, and Juniper QFX5220-32CD all support multi-class PFC / ETS per IEEE 802.1Qaz. HPE Aruba CX 10000-48Y6C is commonly paired with distributed storage tiers where Pensando DPU-based storage offload (NVMe-oF / iSCSI) runs at the ToR itself rather than offloading to external storage arrays.

What is the PFC high-watermark tuning pattern for lossless RoCEv2 on these leafs?

Standard PFC tuning per Mellanox / NVIDIA RoCE tuning guides recommends high-watermark at approximately 80 percent of egress buffer and low-watermark (XON threshold) at approximately 50 percent. Headroom is sized to accommodate in-flight bytes during PFC pause assertion — typically 2x the cable-delay-bandwidth product per priority queue.

Under-tuned PFC causes pause storms (the PFC signal propagating upstream and inducing head-of-line blocking); thresholds set too high, or headroom sized too small, cause packet drops before the pause takes effect. Each vendor publishes a recommended template; outside this set, the NVIDIA SN5600 ships with auto-tuning, while Cisco / Arista / Juniper require explicit operator configuration validated against the specific ASIC buffer size.
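
The headroom rule above reduces to simple arithmetic. A minimal sketch follows, assuming a 400G link, a 30 m fiber run, and jumbo frames; these inputs are placeholders, not a vendor template.

```python
# Rough PFC headroom sizing per lossless queue (the ~2x delay-bandwidth idea).
link_gbps = 400
cable_m = 30
propagation_ns_per_m = 5            # ~5 ns per meter in fiber
mtu_bytes = 9216                    # a jumbo frame still in flight on each side

rtt_ns = 2 * cable_m * propagation_ns_per_m           # pause frame plus reaction round trip
in_flight_bytes = link_gbps * rtt_ns / 8              # Gbps * ns = bits; /8 -> bytes
headroom_bytes = in_flight_bytes + 2 * mtu_bytes      # plus one max-size packet each direction

print(f"headroom per lossless queue: ~{headroom_bytes / 1024:.0f} KiB")   # ~33 KiB
```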

How does each platform compare on raw port-to-port latency at 400G?

Arista 7060X6-64PE: ~650 ns typical under RFC 2544 methodology per Arista data sheet. Juniper QFX5220-32CD: 750 ns published per the Juniper data sheet. Cisco Nexus 9332D-GX2B: specific port-to-port latency not published in the primary sources reviewed. HPE Aruba CX 10000-48Y6C: under 1 microsecond without DPU redirect, under 5 microseconds when traffic transits the DPU pipeline.

For AI training, sub-microsecond port-to-port latency is standard across Tomahawk 4 / 5 class silicon; the gap matters at trading-floor tick-to-trade time scales but is noise at AI training AllReduce scale.
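
To show how those per-switch numbers compose, a one-way budget sketch for a 3-stage leaf-spine path. It assumes cut-through forwarding, a rough 100 ns RS-FEC contribution per inter-switch link, and an invented 60 m total fiber run.

```python
# Illustrative one-way latency budget across a leaf -> spine -> leaf path.
leaf_ns = 700            # "from 700 ns" class leaf port-to-port figure
spine_ns = 700
switch_hops = [leaf_ns, spine_ns, leaf_ns]   # ingress leaf, spine, egress leaf
inter_switch_links = 2
fec_ns_per_link = 100    # approximate RS-FEC contribution per PAM4 link
fiber_m = 60             # total fiber run in this example
fiber_ns_per_m = 5

total_ns = (sum(switch_hops)
            + fec_ns_per_link * inter_switch_links
            + fiber_m * fiber_ns_per_m)
print(f"one-way budget: ~{total_ns / 1000:.1f} us")   # ~2.6 us
```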

Does each leaf support FEC profile tuning (RS-FEC / low-latency FEC) for different optics?

Yes. 400G RS-FEC per IEEE 802.3bs is the default for PAM4 optics (100G-DR, 400G-DR4, 400G-FR4, 400G-LR4). Low-latency FEC options are specific to DAC / AOC links within a rack and do not change the standards-mandated RS-FEC on single-mode optics.

Cisco Nexus 9332D-GX2B, Arista 7060X6, and Juniper QFX5220-32CD all support FEC profile selection per port. In an AI fabric, matching FEC profile between both sides of the link is required — mismatched FEC triggers link-down or silent drops depending on the platform.

What is the platform-specific 800G roadmap from each vendor at the DC leaf tier?

Arista 7060X6-64PE is already 800G-capable (51.2 Tbps on Tomahawk 5) — the 64x 800G spec is the current shipping ceiling per the 7060X6 data sheet. Cisco Nexus 9000 series has the 9364D-GX2A and upcoming 800G SKUs on Silicon One and follow-on Cloud Scale silicon per Cisco roadmap announcements.

Juniper QFX5240 (successor to QFX5220) is positioned at 800G per Juniper AI cluster announcements (2024+). Follow-on HPE Aruba CX data-center platforms will carry 800G per HPE public roadmap. For greenfield AI leaf builds in 2026, 800G-capable leaf is the forward-looking purchase; 400G is the current cost-optimized floor.

What HBM tailroom or deep-buffer spec does each leaf carry for AI workload tuning?

HBM (High-Bandwidth Memory) is typically reserved for spine / deep-buffer router platforms (Arista 7800R3/R4, Cisco 8000-series), not the leaf tier. At the 1RU leaf tier, buffer is SRAM shared across ports. Arista 7060X6-64PE: 165 MB shared SRAM buffer. Cisco Nexus 9332D-GX2B on Cloud Scale: 120 MB buffer per datasheet.

Juniper QFX5220-32CD on Tomahawk 3: ~64 MB. HPE Aruba CX 10000-48Y6C: 32 MB shared buffer per datasheet. Deep-buffer (GB-scale, HBM-backed) leaf is not the typical 400G leaf pattern — the Arista 7280R4K and Arista 7800R4 are the deep-buffer family at the DC tier for storage / incast-sensitive traffic.

Which platforms are on the Ultra Ethernet Consortium (UEC) 1.0 compliant roadmap?

UEC 1.0 was released June 11, 2025. Arista 7060X6 is positioned as UEC-ready and forward-compatible with UET (Ultra Ethernet Transport) per Arista Etherlink positioning. Cisco is a UEC steering member; upcoming Cisco Nexus platforms on G200 silicon are positioned as UEC-compliant, but the 9332D-GX2B (existing Cloud Scale) is not.

Juniper is a UEC member; the QFX5220 predates UEC 1.0 and is not positioned as a UEC platform — the follow-on QFX on Broadcom Tomahawk 5 / NVIDIA silicon will carry UEC compliance. HPE Aruba has publicly committed to UEC on future CX DC platforms. NVIDIA Spectrum-X is the proprietary alternative to UEC; InfiniBand is outside UEC scope (IBTA-governed).

Does each platform support PTP (Precision Time Protocol, IEEE 1588) for time-sensitive workloads?

All four support PTP per IEEE 1588-2019. Cisco Nexus 9332D-GX2B supports PTP with hardware timestamping per NX-OS 10.x. Arista 7060X6 supports PTP with hardware timestamping per EOS 4.28+. Juniper QFX5220-32CD supports PTP per Junos EVO 19.3+.

HPE Aruba CX 10000-48Y6C supports PTP per AOS-CX 10.13+. For trading floors or time-sensitive networking (TSN) workloads, PTP Grandmaster clock selection and BMCA (Best Master Clock Algorithm) tuning matters more than raw platform support.
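
For context on what PTP hardware timestamping feeds, the standard IEEE 1588 offset and mean-path-delay arithmetic from one Sync / Delay_Req exchange is shown below; the four timestamps are example values, not measurements from any of these platforms.

```python
# Standard IEEE 1588 offset / mean-path-delay arithmetic (example nanosecond values).
t1 = 1_000_000_000      # Sync sent by the master
t2 = 1_000_000_150      # Sync received by the slave
t3 = 1_000_000_400      # Delay_Req sent by the slave
t4 = 1_000_000_530      # Delay_Req received by the master

offset = ((t2 - t1) - (t4 - t3)) / 2     # slave clock minus master clock
delay = ((t2 - t1) + (t4 - t3)) / 2      # mean one-way path delay

print(f"offset: {offset:.0f} ns, mean path delay: {delay:.0f} ns")   # 10 ns, 140 ns
```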

Buying a Fabric, Not a Spec Sheet

A 400G leaf refresh, an 800G AI training fabric, and a DPU-enabled stateful-services ToR are three different projects. The right platform for a 2,048-GPU training cluster is not the right platform for a 1,200-rack general-purpose east-west refresh, and neither is the right platform for a zero-trust segmentation rollout. Send rack elevations, east-west traffic profiles, RoCE / GPU cluster scope, existing fabric, and compliance posture — WiFi Hotshots returns a fixed-fee SOW that picks the platform based on fit.