Campus Core & Aggregation Switch Comparison: Cisco Catalyst 9500 vs HPE Aruba CX 8360 vs Juniper EX9200 vs Arista 7500R3 / 7280R3

Four campus core and aggregation platforms — the Cisco Catalyst 9500-40X and 9500-48Y4C, the HPE Aruba Networking CX 8360 v2, the Juniper EX9200 modular chassis, and the Arista 7500R3 / 7280R3 deep-buffer platforms — compared on forwarding capacity, buffer architecture, virtual-chassis and multi-homing capability, MACsec line-rate support, EVPN-VXLAN and SD-Access readiness, redundancy, licensing, certifications, and lifecycle posture.

WiFi Hotshots is a vendor-agnostic network engineering firm serving enterprise customers, architects, infrastructure buyers, and network engineering teams across Southern California and the broader US market.

Multi-CCIE engineering bench — R&S, Enterprise Infrastructure, Wireless

Campus LAN design on Cisco, Aruba, Juniper, Arista

Fixed-fee SOW — no T&M surprises

25 years of enterprise networking leadership

Campus core and aggregation sits above the wiring-closet access layer covered in the 48-port multigigabit access switch comparison. The four platforms below are chosen for different architectural bets: Cisco Catalyst 9500 pairs campus features with the UADP ASIC and Cisco-native fabrics, HPE Aruba CX 8360 v2 is a pizza-box core with VSX for active-active redundancy, Juniper EX9200 is a mature modular chassis for deterministic campus designs, and Arista 7500R3 / 7280R3 are deep-buffer data-center-class platforms that some enterprises deploy into a collapsed campus-core-plus-DC role. See the campus LAN services page for how WiFi Hotshots scopes these builds, or browse the full services portfolio or the vendor comparison library for adjacent deep-dives.

Why These Four Platforms, and What They Actually Compete On

Campus core buyers are not cross-shopping on port count alone. The selection criteria that matter at the core and distribution layers are forwarding capacity, line-rate MACsec, buffer depth under incast, fabric redundancy model (StackWise Virtual, VSX, Virtual Chassis, MLAG / EVPN multi-homing), operating-system maturity, and multi-vendor interoperability via EVPN-VXLAN. Cisco, HPE Aruba Networking, and Juniper Networks are positioned in the Leaders quadrant of the 2024 Gartner Magic Quadrant for Enterprise Wired and Wireless LAN Infrastructure (March 2024); Arista Networks is positioned as a Visionary in the same report; Extreme Networks and Dell Networking also sell competitive campus core platforms and appear in adjacent comparison pages. The Arista 7500R3 modular chassis is primarily a data-center spine platform per Arista’s own positioning; the 7280R3 fixed-form-factor is more commonly deployed at campus aggregation. Both are included here because enterprises with collapsed campus-plus-DC architectures evaluate them against Cisco 9500, Aruba CX 8360, and Juniper EX9200 at the same procurement step.

The Comparison Matrix: 16 Metrics That Matter for Campus Core

Vendor datasheet forwarding capacities are aggregate full-duplex maxima and assume line-rate conditions on every port with no feature-enabled penalty. Production forwarding is gated by ASIC resource allocation, feature set enabled (ACL depth, MACsec, NetFlow / sFlow sampling, EVPN), and buffer consumption under incast. Sizing at the core should reference per-slot throughput, not chassis-level sum. Where a specification reads “not published,” the vendor datasheet does not disclose that value in the primary source reviewed.
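As an illustration of sizing against demand rather than datasheet maxima, here is a hedged sketch with hypothetical closet counts and an assumed acceptable oversubscription ratio (all inputs are examples, not vendor guidance):

```python
# Hedged sizing sketch: compare realistic core demand against datasheet
# aggregates. All numbers below are hypothetical examples.

def required_core_tbps(closets: int, uplinks_per_closet: int,
                       uplink_gbps: int, oversubscription: float) -> float:
    """Aggregate access-uplink bandwidth divided by an assumed
    acceptable oversubscription ratio (e.g. 4:1 at distribution)."""
    aggregate_gbps = closets * uplinks_per_closet * uplink_gbps
    return aggregate_gbps / oversubscription / 1000  # Tbps

# Example: 40 closets, 2x 25G uplinks each, 4:1 oversubscription.
demand = required_core_tbps(40, 2, 25, 4.0)
print(f"{demand:.2f} Tbps")  # 0.50 Tbps: well inside a 3.2 Tbps 9500-48Y4C
```

The point of the sketch is that most campus builds are gated by feature-enabled forwarding and buffer behavior long before they approach the chassis-level aggregate.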

The matrix below lists each specification for the four platforms in order: Cisco Catalyst 9500-40X / 48Y4C, HPE Aruba CX 8360 v2, Juniper EX9200 (modular), Arista 7500R3 / 7280R3.

Port configuration
  • Cisco: 9500-40X: 40x 1/10G SFP+ plus 2x 40G or 8x 10G via C9500-NM-2Q / NM-8X uplink module. 9500-48Y4C: 48x 1/10/25G SFP28 + 4x 40/100G QSFP28.
  • Aruba: 1U fixed. Variants: 12C (12x 100G), 16Y2C, 24XF2C, 32Y4C (32x 25G + 4x 100G), 48Y6C (48x 25G + 6x 100G), 48XT4C. Smart Rate 1/2.5/5G on XT models.
  • Juniper: Modular chassis: EX9204 (5U, 4-slot, 3 line-card slots), EX9208 (8U, 8-slot), EX9214 (16U, 14-slot). Line cards range 10G / 40G / 100G; up to 120x 100G or 480x 10GbE wire-speed in EX9214.
  • Arista: 7500R3 modular: 7504R3 (76.8 Tbps, 96x 400G), 7508R3 (153 Tbps, 192x 400G), 7512R3 (230 Tbps, 288x 400G). 7280R3 fixed 1U/2U: 25G / 100G / 400G options up to 21.6 Tbps.

Forwarding capacity (Tbps / Bpps)
  • Cisco: 9500-40X: 960 Gbps / 720 Mpps. 9500-48Y4C: 3.2 Tbps / 1 Bpps.
  • Aruba: Up to 4.8 Tbps / 2,678 Mpps (48Y6C v2).
  • Juniper: Up to 1.5 Tbps per slot with EX9200-SF3 in redundant configuration; midplane supports a future 13.2 Tbps system.
  • Arista: 7500R3: up to 230 Tbps / 48 Bpps (7512R3). 7280R3: up to 21.6 Tbps; packet rate varies by platform variant.

ASIC / architecture
  • Cisco: 9500-40X: UADP 2.0. 9500-48Y4C: UADP 3.0.
  • Aruba: Merchant silicon (Broadcom Trident class) on AOS-CX; exact ASIC family not emphasized in the current datasheet.
  • Juniper: Juniper Trio / Express line-card silicon depending on card generation.
  • Arista: Jericho2 / Jericho2c+ merchant silicon with HBM deep-buffer memory; VOQ architecture.

Buffer architecture
  • Cisco: 9500-48Y4C documented at up to 80 MB dedicated buffer plus 8 GB high-bandwidth buffer; 9500-40X per UADP 2.0 allocation. Shallow-buffer class.
  • Aruba: Shallow-buffer merchant-silicon class; the datasheet does not publish a single aggregate buffer number.
  • Juniper: Per-PFE buffer on each line card; chassis-level capacity depends on card mix.
  • Arista: Deep-buffer VOQ. 7500R3 line cards: 8 GB (36CQ) to 16 GB (24D/24P). 7280R3 variants: 2 GB to 24 GB. Virtual output queueing eliminates head-of-line blocking.

Virtual chassis / multi-homing
  • Cisco: StackWise Virtual (SVL) — a pair of 9500 chassis operates as one logical switch with active-active data plane and stateful-switchover control plane.
  • Aruba: VSX — active-active pair with independent control planes, live-upgradeable. EVPN multi-homing also supported in current AOS-CX.
  • Juniper: Virtual Chassis for EX9200 supported on specific generations; EVPN multi-homing (ESI-LAG) for standards-based designs.
  • Arista: MLAG (multi-chassis LAG) for active-active; EVPN-VXLAN with EVPN ESI multi-homing for scalable standards-based fabrics.

SD-Access / EVPN-VXLAN support
  • Cisco: SD-Access (VXLAN over LISP + Catalyst Center / ISE) supported on both models. EVPN-VXLAN campus fabric available on IOS-XE 17.x Network Advantage.
  • Aruba: EVPN-VXLAN campus fabric native to AOS-CX; Aruba Fabric Composer available for orchestration. Not part of Cisco SD-Access.
  • Juniper: EVPN-VXLAN native to Junos for campus fabric; commonly orchestrated via Juniper Apstra. Not part of Cisco SD-Access.
  • Arista: EVPN-VXLAN native across EOS; CloudVision for orchestration. Not part of Cisco SD-Access.

MACsec (line-rate)
  • Cisco: Line-rate 256-bit MACsec (MKA / IEEE 802.1AE) across ports on both 9500-40X and 9500-48Y4C. WAN-MACsec supported.
  • Aruba: Line-rate MACsec on the designated MACsec-capable CX 8360 v2 variants (32Y4C and 48Y6C MACsec SKUs).
  • Juniper: Line-rate MACsec AES-256 on MACsec-capable EX9200 line cards; not every line card supports it — verify per card SKU.
  • Arista: Line-rate MACsec AES-256-GCM available on designated MACsec line cards. Deep-buffer platforms historically lagged MACsec availability; current 7280R3 and 7500R3 line-card families include MACsec variants.

Minimum OS version
  • Cisco: IOS-XE 16.x for first GA; current features require IOS-XE 17.x (17.9 / 17.12 / 17.15 trains).
  • Aruba: AOS-CX 10.06.x for initial 8360 GA; currently recommended 10.14 / 10.15 / 10.16 trains per 8360 release notes.
  • Juniper: Junos OS 14.1X53 and later; current releases on Junos 22.x / 23.x trains with EVPN-VXLAN and feature parity.
  • Arista: Arista EOS; 7500R3 and 7280R3 features continuously added through EOS-4.x releases. Verify feature-specific EOS version with Arista.

Redundancy — supervisor / PSU
  • Cisco: 9500-40X and 9500-48Y4C are fixed-form-factor 1U with no dual-supervisor option; dual hot-swap platinum-rated PSUs and redundant fans. StackWise Virtual pairs two chassis for chassis-level redundancy.
  • Aruba: 1U fixed; redundant hot-swap PSUs and redundant fans. No dual supervisor (single-CPU fixed platform). VSX pair gives chassis-level redundancy.
  • Juniper: Dual Routing Engines (REs) native to EX9208 / EX9214; dual Switch Fabric Boards (SFBs). Four PSU bays per chassis for N+1 or N+N redundancy per Juniper datasheet.
  • Arista: 7500R3: dual redundant supervisors with stateful failover, redundant fabric modules (six per chassis), grid-redundant PSUs. 7280R3 fixed: dual redundant PSUs, no dual supervisor.

PoE capability
  • Cisco: Core-layer switches; no front-panel PoE. PoE is an access-layer function; see the Catalyst 9300 / 9400 comparison.
  • Aruba: CX 8360 v2 core models: no front-panel PoE. CX 6300 / 6400 and CX 8100 handle PoE at access / distribution.
  • Juniper: EX9200 core platform: no front-panel PoE. EX4400 / EX4100 handle PoE at access.
  • Arista: 7500R3 / 7280R3: no front-panel PoE. 720XP / 750XPX are Arista's PoE access-layer platforms.

Management plane
  • Cisco: Cisco Catalyst Center (on-prem) + Cisco Cloud Monitoring for Catalyst (cloud); ThousandEyes agent optional. IOS-XE CLI + NETCONF / RESTCONF / YANG.
  • Aruba: HPE Aruba Networking Central (cloud or on-prem), Aruba Fabric Composer, AOS-CX NAE (Network Analytics Engine) database-driven telemetry. CLI + REST.
  • Juniper: Junos Space + Juniper Apstra (intent-based fabric orchestration) + Mist AI for Wired Assurance on newer EX platforms. CLI + NETCONF / JSON-RPC.
  • Arista: Arista CloudVision (CVaaS cloud or on-prem) with CVP (CloudVision Portal) for telemetry, configuration, and EVPN orchestration. EOS CLI + eAPI (JSON-RPC).

Licensing tiers
  • Cisco: Hardware: Network Essentials (C9500-40X-E, 48Y4C-E) vs Network Advantage (-A). Software: DNA Essentials vs DNA Advantage (now Cisco Catalyst subscription). SD-Access and advanced fabric require DNA Advantage.
  • Aruba: Perpetual base license with AOS-CX features included; Aruba Central subscription tiered (Foundation / Advanced). Aruba Fabric Composer separate.
  • Juniper: Base Junos OS + AFL (Advanced Feature License) or Premium feature license for EVPN, MPLS, and full L3 feature set.
  • Arista: EOS base license plus add-on licenses for VXLAN, MPLS, and advanced features per platform. CloudVision subscription separate.

Lifecycle posture
  • Cisco: 9500-40X: end-of-sale announced; last day to order April 30, 2024 per Cisco EoL bulletin. Migrate to 9500-48Y4C or 9500X high-performance models. 9500-48Y4C: active in the current Catalyst 9500 data sheet.
  • Aruba: CX 8360 v2 is the current-generation shipping platform (the original v1 8360 is superseded). Active in 2026 across AOS-CX 10.14 / 10.15 / 10.16 release trains.
  • Juniper: EX9200 reached EOL announcement with an EoS milestone logged; extended service life reported into 2027. Verify current chassis SKU EOS status with Juniper before new deployment.
  • Arista: 7500R3 and 7280R3 are current-shipping platforms under active EOS software development as of 2026. Some earlier line cards have been superseded by 7280R3K / 7500R3K variants with 400G and larger route tables.

5-year TCO framing
  • Cisco: Hardware list + DNA Advantage subscription (per switch, 3 / 5 / 7 year terms) + Smart Net Total Care or Solution Support + Catalyst Center software subscription. SD-Access adds ISE and Catalyst Center sizing.
  • Aruba: Hardware list + Aruba Foundation Care or Network Care support + Aruba Central subscription (per device, tiered). No separate overlay fabric license required in AOS-CX.
  • Juniper: Hardware list + Juniper Care support (Core, Next-Day, Same-Day) + Junos OS AFL / Premium feature license + Apstra subscription if used. EVPN-VXLAN orchestration optional.
  • Arista: Hardware list + Arista A-Care support (multi-tier) + CloudVision subscription (per device for CVaaS, perpetual for on-prem CVP). EOS base + feature licenses.

Common Criteria / FIPS / UL
  • Cisco: Catalyst 9500 Series holds FIPS 140-2 validations across multiple SKUs on CMVP #4525 (now Historical); FIPS 140-3 validations in progress across IOS-XE trains. Common Criteria NDcPP. Verify specific C9500-48Y4C status per SKU and IOS-XE train via the Cisco Trust Portal and the 140-3 Implementation-Under-Test list.
  • Aruba: AOS-CX cryptographic module FIPS 140-2 validated (NIST CMVP security policy #3958). DoDIN APL, NDcPP, USGv6 compliant per HPE Aruba datasheet. FIPS 140-3 transitions in progress.
  • Juniper: Junos OS FIPS mode available on EX9200; FIPS 140-2 validated on earlier Junos trains, FIPS 140-3 validations ongoing. Common Criteria NDcPP via Juniper Pathfinder Compliance Advisor.
  • Arista: EOS FIPS 140-2 validated modules exist for specific platforms; verify 7500R3 / 7280R3-specific FIPS 140-3 certificate status with the Arista compliance team. NDcPP Common Criteria available on designated EOS trains.

Deployment fit
  • Cisco: Campus core and distribution aligned to Cisco SD-Access fabric or traditional Cisco three-tier; collapsed core in mid-size enterprises via a StackWise Virtual pair.
  • Aruba: Campus core, distribution, and small-mid DC leaf / spine. A VSX pair is the most common campus core topology for new HPE Aruba builds. Native 1/10/25G access uplinks.
  • Juniper: Campus core for large enterprises and carriers that value modular chassis, deep MPLS / L3VPN feature depth, and the Junos operational model. Collapsed aggregation-core in high-port-count builds.
  • Arista: 7500R3: data-center spine and cloud-provider role per Arista positioning; used as a collapsed campus-plus-DC core where deep buffer and very high 400G density matter. 7280R3: data-center leaf / aggregation, and campus aggregation where deep buffer is a design driver.

Campus core is where design decisions are durable — a 5-to-8 year bet on one vendor’s fabric model, one operating system, one operational muscle memory. Send your building count, closet counts, fabric preference, and compliance scope; WiFi Hotshots returns a fixed-fee SOW that picks the core platform based on fit.

Per-Vendor Fact Summaries

Cisco Catalyst 9500-40X and 9500-48Y4C

The 9500-40X is a 1U fixed core with 40x 1/10G SFP+ plus an uplink module slot (C9500-NM-2Q for 2x 40G, or C9500-NM-8X for 8x additional 10G), 960 Gbps switching, 720 Mpps, UADP 2.0 ASIC. The 9500-48Y4C is the higher-end 1U fixed with 48x 1/10/25G SFP28 and 4x 40/100G QSFP28, 3.2 Tbps, 1 Bpps, UADP 3.0, and up to 80 MB dedicated buffer plus 8 GB high-bandwidth buffer per the current Catalyst 9500 datasheet. Both support Cisco StackWise Virtual (SVL) for active-active chassis pairing, line-rate 256-bit MACsec, SD-Access fabric role (DNA Advantage), and EVPN-VXLAN on IOS-XE 17.x. The 9500-40X is end-of-sale per Cisco’s EoL bulletin for the C9500-12Q/24Q/40X with last day to order April 30, 2024; new deployments should move to the 9500-48Y4C or to the newer Catalyst 9500X high-performance line for 100 G / 400 G.

HPE Aruba Networking CX 8360 v2

Six 1U variants: 12C (12x 100G), 16Y2C, 24XF2C, 32Y4C (32x 25G + 4x 100G), 48Y6C (48x 25G + 6x 100G), and 48XT4C (Smart Rate 1/2.5/5/10G-T plus 100G uplinks). Up to 4.8 Tbps and 2,678 Mpps on 48Y6C v2. VSX is the Aruba core redundancy model — two 8360 v2 chassis operate as an active-active pair with independent control planes and live upgrade capability, which is operationally different from Cisco StackWise Virtual’s single control plane. Line-rate MACsec is available only on the 32Y4C v2 MACsec (JL700C/JL701C) and 48Y6C v2 MACsec (JL704C/JL705C) SKUs; the 48XT4C v2 is not listed with the MACsec designation.

AOS-CX is a database-driven, microservices NOS with NAE (Network Analytics Engine) telemetry. The 8360 runs AOS-CX 10.06 and later, with current recommended trains 10.14 / 10.15 / 10.16 per HPE Aruba release notes. FIPS 140-2 validated (AOS-CX cryptographic module, NIST CMVP #3958 — now in Historical status; Aruba is transitioning to FIPS 140-3 via the AOS-CX Crypto Module / HPE OpenSSL 3 Provider). Managed via Aruba Central (cloud or on-prem) and optionally orchestrated via Aruba Fabric Composer.

Juniper EX9200 Series

The only true modular chassis in this comparison at the campus-core tier. EX9204 (4-slot, 5U), EX9208 (8-slot, 8U), EX9214 (14-slot, 16U). EX9200-SF3 switch fabric delivers up to 1.5 Tbps per slot in redundant configuration; the pass-through midplane supports a documented future capacity of 13.2 Tbps system throughput. A fully configured EX9214 supports up to 120x 100GbE or 480x 10GbE at wire speed. Dual Routing Engines on EX9208 and EX9214 give a chassis-level stateful-switchover model similar to legacy Cisco Catalyst 6500 / 6800 deployments.

Four PSU bays per chassis for N+1 or N+N redundancy. EVPN-VXLAN and MC-LAG / ESI-LAG native to Junos OS for open-standards campus fabric. Line-rate MACsec on designated MACsec line cards. Managed via Junos CLI, Junos Space, and Juniper Apstra for intent-based fabric design. EX9200 reached EOL announcement with extended service milestones into 2027; buyers considering EX9200 today should verify active SKU availability with Juniper and evaluate against newer EX4650 / QFX campus-core options.

Arista 7500R3 and 7280R3

7500R3 is the modular deep-buffer platform: 7504R3 (4-slot, 76.8 Tbps, 96x 400G), 7508R3 (8-slot, 153 Tbps, 192x 400G), 7512R3 (12-slot, 230 Tbps, 288x 400G). Deep-buffer VOQ architecture with 8 GB per 36-port 100G line card or 16 GB per 24-port 400G line card. Dual supervisors, six redundant fabric modules, grid-redundant PSUs. 7280R3 is the fixed-form-factor family: 800 Gbps to 9.6 Tbps on base 7280R3 and up to 21.6 Tbps on 7280R3A, in 1U / 2U with a 25G / 100G / 400G port mix. Deep buffer runs 2 GB to 16 GB on base 7280R3, extending to 24 GB on 7280R3A and 7280R3K.

FlexRoute engine supports up to 5M IPv4 routes. MLAG and EVPN-VXLAN with EVPN-ESI multi-homing are the standards-based fabric options; line-rate MACsec is available on designated MACsec line cards. Arista’s own positioning for 7500R3 is data-center spine / cloud-provider, not campus core; enterprises typically deploy 7500R3 when campus and data center are collapsed into one spine-leaf fabric, or when deep buffer is a requirement. 7280R3 is the more common Arista choice at campus aggregation. Managed via CloudVision (CVaaS or on-prem CVP). EOS is the common denominator across Arista platforms.

When Each Platform Is Worth Evaluating First

These are routing heuristics, not recommendations. A production decision requires a site assessment, fabric design workshop, and written scope. WiFi Hotshots engineers platforms across all four vendors; the routing reflects what the documented specifications favor for common scenarios, not a vendor preference.

  • Cisco SD-Access fabric with ISE-based segmentation: Catalyst 9500-48Y4C with DNA Advantage is the documented path. StackWise Virtual pair for active-active chassis redundancy without needing a second control plane. 9500-40X remains a valid refresh target only if existing spares and operational familiarity outweigh the post-EoS timeline.
  • Open-standards EVPN-VXLAN campus fabric with active-active core and simpler operational model: HPE Aruba CX 8360 v2 VSX pair is the typical design. 48Y6C v2 delivers 4.8 Tbps of capacity in a 1U footprint with MACsec and 25G / 100G flexibility. AOS-CX is widely considered the cleanest database-driven NOS in the campus segment.
  • Large campus or carrier-grade deployments needing deep MPLS / L3VPN feature sets and true modular chassis redundancy: Juniper EX9200 (EX9208 / EX9214) with dual Routing Engines. Junos CLI, Apstra orchestration, and EVPN-VXLAN are mature. Verify SKU-level availability given the platform’s EOL announcement timeline.
  • Collapsed campus-plus-data-center spine with very high 400G density and deep buffer requirements: Arista 7500R3 (7504R3 / 7508R3 / 7512R3) modular. EOS plus CloudVision. Not the right platform for a single-building campus refresh but strongly favored in campus-plus-DC consolidation.
  • Campus aggregation where deep buffer helps absorb incast from high-density Wi-Fi 6E / Wi-Fi 7 closets: Arista 7280R3 fixed-form-factor. Also an option for the distribution tier in builds where access is Catalyst 9300 / Aruba CX 6300 / Juniper EX4400 and the fabric is open EVPN-VXLAN.
  • Line-rate MACsec at the campus core for federal, healthcare, or financial data-in-motion compliance: Cisco Catalyst 9500 (both models), HPE Aruba CX 8360 v2 MACsec-capable variants (32Y4C v2 MACsec (JL700C/JL701C) and 48Y6C v2 MACsec (JL704C/JL705C)), Juniper EX9200 on designated MACsec line cards, and Arista 7280R3 / 7500R3 MACsec line cards all support line-rate 256-bit MACsec. Verify the specific SKU in each vendor’s compliance registry before procurement.
  • Multi-vendor interoperability at the core (wired today, wireless tomorrow): EVPN-VXLAN-native platforms (HPE Aruba CX 8360 v2, Juniper EX9200, Arista 7500R3 / 7280R3, and Cisco Catalyst 9500 on Network Advantage) all interoperate via open standards. Cisco SD-Access is a closed fabric; if multi-vendor is a requirement, evaluate open-EVPN options first.

Frequently Asked Questions

How is StackWise Virtual different from VSX and MLAG?

Cisco StackWise Virtual (SVL) merges two Catalyst 9500 chassis into one logical switch with a single control plane and active-active data plane, operated as one device. Aruba VSX pairs two CX 8360 v2 chassis with independent control planes (each chassis remains individually managed) and synchronizes state for active-active forwarding, enabling live upgrades of one chassis without dropping the pair.

Arista MLAG is a multi-chassis LAG model where two switches share a LAG to downstream devices with independent control planes; EVPN-ESI multi-homing extends the same idea to a standards-based fabric.

The operational tradeoff is single-pane-of-glass simplicity (SVL) versus independent upgradeability and blast-radius isolation (VSX / MLAG).

Is the Arista 7500R3 really a campus core switch?

Per Arista’s own positioning in the 7500R3 data sheet, the platform is optimized for data-center spine, cloud provider, content delivery, and AI / HPC leaf-and-spine designs. It appears in campus procurement when an enterprise is collapsing campus and data center into one fabric, or when deep buffer is required at the core (for example, storage-over-Ethernet traffic sharing the fabric with campus).

For a single-building campus refresh with no DC consolidation, Cisco 9500, Aruba CX 8360 v2, or Juniper EX9200 are more typical.

The 7280R3 fixed-form-factor appears more commonly at campus aggregation than the 7500R3 modular.

Which platform has the deepest buffers, and does that matter in a campus core?

Arista 7500R3 and 7280R3 are the only deep-buffer platforms in this comparison (up to 16 GB per 400G line card on 7500R3, up to 24 GB on select 7280R3 variants). Cisco Catalyst 9500, HPE Aruba CX 8360 v2, and Juniper EX9200 are shallow-buffer merchant-silicon or ASIC platforms. Deep buffer matters in a campus core when you have asymmetric speed transitions at scale (25G leaf to 100G spine under incast), storage traffic sharing the fabric, or high-fan-in from dense Wi-Fi closets.

For a traditional campus with voice / video / web traffic, shallow-buffer platforms are fine and often lower cost.
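To make the incast point concrete, here is a simplified worst-case queue-depth estimate; real shared-buffer and VOQ behavior is more nuanced, and all inputs are hypothetical:

```python
# Hedged incast sketch: how much buffer a many-to-one burst can demand.
# Assumes N senders each burst B bytes simultaneously toward one egress
# port; the egress drains at line rate while the burst arrives.
# Illustrative only; real shared-buffer/VOQ behavior is more complex.

def incast_buffer_bytes(senders: int, burst_bytes: int,
                        ingress_gbps: float, egress_gbps: float) -> float:
    """Worst-case queue build-up: arrival rate exceeds drain rate
    for the duration of the synchronized burst."""
    arrival_gbps = senders * ingress_gbps
    if arrival_gbps <= egress_gbps:
        return 0.0
    burst_seconds = burst_bytes * 8 / (ingress_gbps * 1e9)
    excess_bytes_per_sec = (arrival_gbps - egress_gbps) * 1e9 / 8
    return excess_bytes_per_sec * burst_seconds

# 64 x 25G senders bursting 256 KB each into one 100G port:
need = incast_buffer_bytes(64, 256 * 1024, 25, 100)
print(f"{need / 1e6:.1f} MB demanded at the egress")  # ~15.7 MB
```

Even this modest synthetic burst approaches the total shared buffer of a shallow-buffer ASIC, which is the scenario where the Arista deep-buffer argument applies.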

Can I run line-rate MACsec on every port of these core switches?

Cisco Catalyst 9500-40X and 9500-48Y4C support line-rate 256-bit MACsec across ports per the Catalyst 9500 data sheet. HPE Aruba CX 8360 v2 supports line-rate MACsec only on the designated 32Y4C v2 and 48Y6C v2 MACsec SKUs — the 12C, 16Y2C, 24XF2C, and 48XT4C v2 variants are not MACsec-capable.

Juniper EX9200 and Arista 7500R3 / 7280R3 support line-rate MACsec on designated MACsec line cards only — not every line card SKU is MACsec-capable.

Verify the specific SKU in the vendor configuration guide before committing to a design that depends on line-rate MACsec at the core.

How does SD-Access compare to EVPN-VXLAN for a campus fabric?

Cisco SD-Access is a closed-fabric campus architecture using VXLAN data plane over LISP control plane, orchestrated by Cisco Catalyst Center and segmented via Cisco ISE. It requires the Cisco Networking Subscription Advantage tier (formerly DNA Advantage; Catalyst Software Subscription for Switching is the component SKU) and ISE licensing. EVPN-VXLAN is the IETF open-standards alternative, implemented on HPE Aruba CX 8360 v2, Juniper EX9200, Arista 7500R3 / 7280R3, and on Cisco Catalyst 9500 under Network Advantage.

EVPN-VXLAN is the usual choice for multi-vendor shops, enterprises valuing open standards, or buyers wanting to avoid a single-vendor fabric lock-in.

SD-Access tends to win where Cisco ISE is already deployed, group-based policy (SGT) is the segmentation model, and Catalyst Center is the operational tool.

Does the Cisco 9500-40X end-of-sale mean I should not buy it?

Cisco announced end-of-sale for the 9500-40X (along with 9500-12Q and 9500-24Q) with last day to order April 30, 2024 per Cisco’s EoL bulletin. TAC support continues per the service-contract lifecycle in the EoL notice. New deployments in 2026 should target the 9500-48Y4C (current shipping) or the newer Catalyst 9500X (C9500X-28C8D and related) high-performance line for 100G / 400G requirements. Spares-level inventory of the 40X may still be appropriate for existing fleets; a greenfield design is hard to justify.

Which core is the easiest to manage day to day?

Operational simplicity is subjective and tied to the team’s existing muscle memory. HPE Aruba CX 8360 v2 on AOS-CX with Aruba Central is widely seen as the cleanest “single pane” experience for teams new to a vendor. Cisco Catalyst 9500 with Cisco Catalyst Center is strongest for shops already invested in IOS-XE, DNA / Catalyst subscriptions, and ISE.

Juniper EX9200 under Junos with Apstra is the strongest operational model for teams valuing deterministic config hierarchy and rollback.

Arista EOS with CloudVision is the strongest for teams valuing Linux-native tooling, eAPI, and open telemetry. All four vendors expose NETCONF / YANG and streaming telemetry; automation pipelines (Ansible, Nornir, Terraform) work across all four.

Where do I go for the matching access-layer comparison?

The sibling comparison is the 48-port multigigabit access switch comparison covering Cisco Catalyst 9300, HPE Aruba CX 6300, Juniper EX4400, and Arista 720XP at the wiring-closet tier, with PoE budgeting for Wi-Fi 6E / Wi-Fi 7 access points. The core / aggregation switches on this page sit directly above those access platforms in a three-tier or collapsed two-tier campus design. See campus LAN services for WiFi Hotshots’ full design scope or browse the comparison library for adjacent deep-dives.

When does a collapsed-core (two-tier) design make sense vs a traditional three-tier campus?

Collapsed-core merges distribution and core into one tier — typically a pair of Cisco Catalyst 9500 or Aruba CX 8360 running as StackWise Virtual / VSX. It fits single-building campuses under roughly 3,000 users and 300 APs, with aggregate core bandwidth demand under ~2 Tbps.

Three-tier (access -> distribution -> core) is the right pattern for multi-building campuses, campus-to-DC transitions with distinct failure domains, or deployments above ~3,000 users / 500+ APs where collapsing tiers forces the core into aggregation work it was not sized for. The decision hinges on blast-radius, maintenance-window policy, and future scale.
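The thresholds above can be expressed as a quick screening heuristic; this is a hedged sketch using the approximate cut-offs quoted in this answer (illustrative only, not a substitute for a design review):

```python
# Hedged screening heuristic mirroring the approximate thresholds in
# the text (users, APs, aggregate core bandwidth). Illustrative only.

def suggest_tiers(users: int, aps: int, core_demand_tbps: float) -> str:
    """Return a rough two-tier vs three-tier suggestion."""
    if users < 3000 and aps < 300 and core_demand_tbps < 2.0:
        return "collapsed-core (two-tier) candidate"
    return "three-tier (access / distribution / core)"

print(suggest_tiers(1200, 180, 0.6))   # collapsed-core candidate
print(suggest_tiers(5000, 650, 2.4))   # three-tier
```

The non-quantifiable factors in the text (blast radius, maintenance-window policy, future scale) still override the numeric screen in practice.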

How many 100G QSFP28 ports does each core platform deliver in 1 RU or chassis?

Cisco Catalyst 9500-48Y4C: 48 x 25G SFP28 + 4 x 100G QSFP28 in 1 RU. Cisco Catalyst 9500X-28C8D: 28 x 100G QSFP28 + 8 x 400G QSFP-DD in 1 RU per the 9500X data sheet. HPE Aruba CX 8360 v2 48Y6C: 48 x 25G + 6 x 100G QSFP28 in 1 RU.

Juniper EX9200 is a modular chassis supporting 100G line cards (EX9200-12C 12-port 100G QSFP28). Arista 7280R3 fixed-form: 7280CR3-96 has 96 x 100G QSFP28 in 2 RU. Arista 7500R3 modular chassis supports 36 x 400G QSFP-DD per line card at 14.4 Tbps per card.
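For quick shortlist comparisons, the quoted counts reduce to ports-per-rack-unit arithmetic. A small sketch using the figures above (verify current SKU counts against vendor datasheets before relying on them):

```python
# Density arithmetic from the figures quoted in this FAQ answer.
# Counts are as stated in the text; verify against current datasheets.

platforms = {
    "Catalyst 9500-48Y4C":  (4, 1),    # (100G ports, rack units)
    "Catalyst 9500X-28C8D": (28, 1),
    "CX 8360 v2 48Y6C":     (6, 1),
    "Arista 7280CR3-96":    (96, 2),
}

for name, (ports_100g, ru) in platforms.items():
    print(f"{name}: {ports_100g / ru:.0f} x 100G per RU")
```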

What 400G QSFP-DD port density does each platform deliver at the campus core tier?

Cisco Catalyst 9500X-28C8D: 8 x 400G QSFP-DD ports in 1 RU per the 9500X data sheet. Aruba CX 8360 v2 does not ship 400G at the 8360 tier; the 8400 chassis and upcoming CX 10000 platforms carry 400G. Juniper EX9200 modular chassis accepts 400G line cards (EX9200-2C-400GE in some deployments).

Arista 7500R3 supports 36 x 400G QSFP-DD per DCS-7500R3-36CQ-LC line card. Arista 7800R3 supports 36 x 400G or 144 x 100G per 7800R3-36CQ line card. For a campus building that is genuinely 400G-core-worthy today (rare outside hyperscale), Arista 7500R3 or 7800R3 or Cisco 9500X are the short list.

Which platforms support EVPN-VXLAN as the spine in a campus spine-leaf fabric?

All four support EVPN-VXLAN spine roles. Cisco Catalyst 9500 / 9500X run EVPN-VXLAN under Network Advantage plus DNA / Catalyst subscription. HPE Aruba CX 8360 v2 runs EVPN-VXLAN natively on AOS-CX 10.x with Aruba Fabric Composer orchestration.

Juniper EX9200 runs EVPN-VXLAN on Junos OS with Apstra intent-based orchestration or direct CLI. Arista 7500R3 and 7280R3 run EVPN-VXLAN on EOS with CloudVision Studios EVPN Studio. Campus spine-leaf is less common than data-center spine-leaf, but some large-campus deployments (university districts, multi-building headquarters) are adopting the pattern.

How is BGP EVPN anycast gateway configured across these core platforms?

Cisco 9500 / 9500X implement EVPN Distributed Anycast Gateway (DAG) via a shared virtual-MAC and virtual-IP on every leaf; the host’s default gateway is the anycast IP. Config uses router bgp + l2vpn evpn address-family. HPE Aruba CX 8360 v2 implements the same pattern via AOS-CX evpn vxlan config.

Juniper EX9200 implements DAG via Junos routing-instances type evpn with integrated routing and bridging (IRB). Arista 7500R3 / 7280R3 implement DAG via Arista Virtual ARP (VARP) per Arista EOS — VARP was the pre-EVPN Arista pattern and now coexists with EVPN Type 2 / Type 5 advertisement.

Which platforms support EVPN multi-homed leaf with ESI (RFC 7432)?

All four support EVPN Ethernet Segment Identifier (ESI) multi-homing per RFC 7432 (EVPN) plus RFC 8365 (network virtualization overlay). Cisco 9500 / 9500X: IOS-XE 17.x and later. HPE Aruba CX 8360 v2: AOS-CX 10.x. Juniper EX9200: Junos 18.x and later. Arista 7500R3 / 7280R3: EOS 4.20 and later.

ESI multi-homing enables all-active forwarding across two or more leaf switches sharing an ESI ID, replacing proprietary MLAG / VSX / StackWise Virtual with a standards-based multi-vendor alternative. For customers building toward a multi-vendor campus fabric, ESI is the right abstraction.
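As background for the ESI values referenced above: RFC 7432 defines the Ethernet Segment Identifier as a 10-octet value whose first octet is a type code (0x00 is the operator-assigned "arbitrary" type). A hedged Python sketch of the format, with illustrative values:

```python
# Sketch of the RFC 7432 ESI format: a 10-octet identifier whose first
# octet is the type (0x00 = arbitrary, operator-assigned). The example
# value below is illustrative, not any vendor's default.

def format_esi(type_byte: int, value9: bytes) -> str:
    """Render a 10-octet Ethernet Segment Identifier as colon-hex."""
    if not (0 <= type_byte <= 0xFF) or len(value9) != 9:
        raise ValueError("ESI is one type octet plus nine value octets")
    return ":".join(f"{b:02x}" for b in bytes([type_byte]) + value9)

# A type-0 (arbitrary) ESI shared by two leaves multi-homing one closet:
print(format_esi(0x00, bytes(8) + b"\x01"))
# 00:00:00:00:00:00:00:00:00:01
```

Both leaf switches attached to the same multi-homed segment advertise the same ESI, which is what lets remote PEs load-balance toward the segment in all-active mode.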

Does each platform support In-Service Software Upgrade (ISSU) at the campus core?

Cisco Catalyst 9500 supports ISSU on Network Advantage licensing with some caveats — specific software-version-pair restrictions apply per IOS-XE release notes. HPE Aruba CX 8360 v2 in VSX pair supports Live Upgrade — upgrading one peer while the other continues forwarding, with no data-plane interruption per AOS-CX VSX documentation.

Juniper EX9200 supports Unified In-Service Software Upgrade (Unified ISSU) on Junos per EX9200 documentation. Arista 7500R3 modular supports Smart System Upgrade (SSU) via EOS hitless restart. Verify the target software-version-pair is ISSU-eligible per each vendor’s release notes before counting on zero downtime.

Is FCoE relevant at the campus core anymore, and do these platforms support it?

No, FCoE is not relevant at the campus core in 2026. FCoE (Fibre Channel over Ethernet, defined in the T11 FC-BB-5 standard and dependent on IEEE Data Center Bridging features such as 802.1Qbb priority flow control) was a data-center SAN convergence pattern that has been largely replaced by NVMe-oF over TCP or RoCE, plus iSCSI for non-NVMe storage. Modern campus cores do not carry FC or FCoE traffic.

If a design genuinely needs FCoE today, it belongs on a data-center Nexus (Cisco Nexus 9000 or 5600) or Aruba CX 10000 class platform, not a campus core. Cisco Catalyst 9500 does not support FCoE. Aruba CX 8360, Juniper EX9200, and Arista 7500R3 do not position FCoE as a feature at this tier.

Is active-active HA or active-standby the right campus-core pattern?

Active-active is the default for modern campus cores. StackWise Virtual on Cisco 9500, VSX on Aruba CX 8360 v2, MLAG on Arista, and MC-LAG / ESI-LAG on Juniper all deliver active-active forwarding with sub-second convergence on link failure. Active-standby (HSRP / VRRP alone) leaves half the uplink capacity idle.

EVPN ESI multi-homing gives active-active with vendor-neutrality. For single-pane operations, StackWise Virtual or VSX is simpler. For blast-radius isolation (upgrade one core at a time with zero impact on the pair), VSX with independent control planes is the strongest operational model.

How is inter-VRF route leaking for shared services handled on each platform?

Cisco Catalyst 9500 implements inter-VRF leaking via import / export route-target under VRF config, plus route-map-based leaking. HPE Aruba CX 8360 v2 implements inter-VRF leaking via AOS-CX VRF static leaking and BGP route-leaking constructs.

Juniper EX9200 uses rib-groups and auto-export / instance-export for inter-VRF leaking in Junos. Arista 7500R3 / 7280R3 supports VRF leaking via EOS route-target import / export plus static leaking. For shared-services (DNS, DHCP, NTP) across multiple tenant VRFs, BGP EVPN Type 5 with VRF import / export is the cleanest modern pattern.
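To make the route-target pattern concrete, here is a hedged Cisco IOS-XE sketch of a tenant VRF importing a shared-services VRF. VRF names, ASN, and route-target values are invented for illustration; a production design would align them with the fabric's RT scheme.

```
! Cisco IOS-XE sketch -- VRF names and RT values are illustrative
vrf definition TENANT-A
 rd 65000:10
 address-family ipv4
  route-target export 65000:10
  route-target import 65000:10
  route-target import 65000:999   ! pull in shared-services routes
vrf definition SHARED-SVCS
 rd 65000:999
 address-family ipv4
  route-target export 65000:999
  route-target import 65000:10    ! return path to the tenant
```

Each additional tenant VRF repeats the same pair of imports, which is why RT-based leaking scales more cleanly than per-prefix static leaking.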

What IPFIX / NetFlow / sFlow telemetry does each core platform support natively?

Cisco Catalyst 9500 / 9500X supports Flexible NetFlow v9 (RFC 3954) and IPFIX (RFC 7011), plus AVC (Application Visibility and Control). HPE Aruba CX 8360 v2 supports sFlow (v5; the earlier v4 is documented in RFC 3176) and IPFIX per AOS-CX documentation.

Juniper EX9200 supports J-Flow (Juniper's NetFlow equivalent) plus IPFIX and sFlow. Arista 7500R3 / 7280R3 supports sFlow natively plus IPFIX. For a mixed-vendor core, IPFIX is the portable baseline; sFlow is widely supported but sampled-only, not full-flow.
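The sampled-only caveat matters when sFlow counters feed capacity planning. A minimal sketch of the extrapolation involved, with invented sample counts and sampling rate:

```python
# Rough illustration of why sampled sFlow is an estimate, not a full-flow record.
# With a 1:N packet sampling rate, observed samples are scaled up by N to
# estimate actual traffic; short or bursty flows can be missed entirely.

def estimate_traffic(sampled_packets: int, sampled_bytes: int, rate: int):
    """Extrapolate sampled counters to an estimated total (sampling rate 1:rate)."""
    return sampled_packets * rate, sampled_bytes * rate

# Example: 1:4096 sampling on a busy core uplink (numbers are illustrative)
pkts, byts = estimate_traffic(sampled_packets=250, sampled_bytes=312_500, rate=4096)
print(pkts, byts)  # 1024000 1280000000 -- an estimate, subject to sampling error
```

IPFIX / NetFlow exporters, by contrast, account for every flow they cache, at the cost of higher ASIC and collector load.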

How much MAC / ARP / BGP-EVPN scale does each platform carry at the campus core?

Cisco Catalyst 9500-48Y4C: 256,000 MAC / 128,000 ARP / 256,000 BGP-EVPN routes per the 9500 data sheet. HPE Aruba CX 8360 v2 48Y6C: 288,000 MAC / 64,000 ARP / 128,000 BGP-EVPN routes. Juniper EX9200 with EX9200-12C: 1 million MAC / 512,000 ARP / 2 million BGP-EVPN routes.

Arista 7500R3: 2 million MAC / 2 million ARP / 2 million BGP-EVPN routes per DCS-7500R3-36CQ line card. Arista 7280R3: 1 million MAC / 1 million ARP / 1 million BGP-EVPN. For campus-only cores, Cisco / Aruba scale is fine; for campus-plus-DC convergence, Juniper and Arista carry deeper tables.

What is the typical PSU / power-budget range on these campus-core chassis?

Cisco Catalyst 9500-48Y4C: 2x 650W PSUs in 1+1 redundancy (roughly 650W usable while maintaining full redundancy). Cisco Catalyst 9500X-28C8D: 2x 1100W PSUs per the 9500X data sheet. HPE Aruba CX 8360 v2 48Y6C: 2x 650W PSUs (redundant) per the AOS-CX 8360 data sheet.

Juniper EX9200 varies by chassis — EX9204 with two 2520W PSUs per chassis spec. Arista 7280R3 2RU: 2x 1500W PSUs. Arista 7500R3 modular chassis: 4x 3000W PSUs. PDU / UPS sizing at the core closet should budget 1.5x nameplate to leave headroom for peak draw and PSU aging.
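The 1.5x nameplate rule above can be sketched as a quick sizing helper. The wattages mirror the figures quoted in this section; treat them as examples to plug your own chassis counts into, not an authoritative power table.

```python
# Sketch of the "budget 1.5x nameplate" PDU/UPS sizing rule for the core closet.
# Nameplate figures below mirror the PSU counts quoted in the text (illustrative).

NAMEPLATE_W = {
    "catalyst_9500_48y4c": 2 * 650,    # 2x 650 W
    "aruba_cx_8360v2_48y6c": 2 * 650,  # 2x 650 W
    "arista_7280r3_2ru": 2 * 1500,     # 2x 1500 W
    "juniper_ex9204": 2 * 2520,        # 2x 2520 W
}

def budget_watts(nameplate_w: float, headroom: float = 1.5) -> float:
    """PDU/UPS budget: nameplate x headroom for peak draw and PSU aging."""
    return nameplate_w * headroom

for platform, watts in NAMEPLATE_W.items():
    print(f"{platform}: nameplate {watts} W -> budget {budget_watts(watts):.0f} W")
```

Summing the budgets for every chassis in a shared closet gives a first-pass UPS runtime and PDU breaker-sizing number before a detailed power study.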

How does Cisco Silicon One differ from Broadcom merchant silicon (as used by Arista) and Juniper Trio / Express?

Cisco Silicon One is Cisco’s in-house unified ASIC family (Q100, Q200, Q201, Q202) spanning webscale routing and AI back-end networks from ~3.2 Tbps to ~25.6 Tbps, announced in December 2019. Silicon One ships in Cisco 8000-series routers and select Nexus platforms; it does NOT appear in Catalyst 9500-series campus cores.

Arista uses Broadcom merchant silicon: deep-buffer Jericho-family silicon on 7500R3 / 7280R3, with Tomahawk and Trident families on its 7060X / 7050X data-center lines. Juniper EX9200 uses the Juniper Trio chipset; newer QFX and PTX platforms use the Express series. Aruba CX 8360 v2 uses Broadcom Trident 3. Campus-core silicon choice drives buffer depth, VXLAN scale, and feature parity with data-center silicon.

Why is Dell PowerSwitch not in the main comparison matrix on this page?

Dell PowerSwitch (S5248F, S5296F, Z9332F, Z9664F) is primarily positioned as data-center leaf / spine silicon rather than a campus-core SKU. Dell’s campus presence is thinner than Cisco, HPE Aruba, Juniper, and Arista at the 1,000-user-plus campus tier — most Fortune 500 campus-core procurements evaluate the four on this page first.

Dell PowerSwitch is a legitimate choice in hyperscale DC builds, Open Networking (SONiC or DENT) deployments, or cost-sensitive mid-scale campus. A customer actively evaluating PowerSwitch for campus core would want a sibling comparison on a different page — not this one.

How does buffer management differ for lossless (RoCE) vs lossy (standard) mixed traffic at campus core?

Campus cores rarely carry RoCE (lossless storage) traffic — that is a data-center pattern. Most campus cores are pure lossy: buffer management via tail-drop or WRED (Weighted Random Early Detection) on oversubscribed queues. Cisco 9500, Aruba CX 8360 v2, and Juniper EX9200 use shallow-buffer ASICs and rely on WRED + traffic shaping for congestion control.

Arista 7500R3 and 7280R3 are the deep-buffer exceptions (16 GB / 24 GB per line card) — useful when campus is being collapsed with data-center fabric, storage traffic crosses the core, or incast patterns from high-density closets saturate shallow buffers.
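Back-of-envelope arithmetic shows why deep buffers matter under incast. The sketch below assumes (optimistically) that the whole buffer is available to one congested egress port; real ASICs partition buffer per queue and port, so this is an upper bound, and the traffic figures are illustrative.

```python
# Back-of-envelope: how long a buffer can absorb incast before dropping.
# Assumes the full buffer serves one congested port -- an upper bound, since
# real ASICs carve buffer per queue/port.

def absorb_ms(buffer_gigabytes: float, drain_gbps: float, offered_gbps: float) -> float:
    """Milliseconds until the buffer fills when offered load exceeds drain rate."""
    excess_gbps = offered_gbps - drain_gbps
    if excess_gbps <= 0:
        return float("inf")  # no sustained congestion: buffer never fills
    buffer_gbit = buffer_gigabytes * 8
    return buffer_gbit / excess_gbps * 1000

# Illustrative: 16 GB line-card buffer, 100G egress, 300G incast burst
print(f"{absorb_ms(16, 100, 300):.0f} ms")  # 640 ms of absorption
```

A shallow-buffer campus ASIC with tens of megabytes absorbs the same burst for well under a millisecond, which is where WRED and shaping have to take over.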

What management platform drives each campus core for day-to-day operations?

Cisco Catalyst 9500 / 9500X: Catalyst Center (on-premises, formerly DNA Center) is the primary operational tool, with Meraki Dashboard cloud monitoring available on supported IOS-XE releases. HPE Aruba CX 8360 v2: Aruba Central (cloud or on-prem) plus Aruba Fabric Composer for EVPN-VXLAN fabric workflows.

Juniper EX9200: Junos Space Network Director is the legacy path; Apstra is the modern intent-based orchestration tool. Mist does not manage EX9200 (Mist manages EX4400 / EX4650 access class). Arista 7500R3 / 7280R3: CloudVision CVP (on-prem) or CVaaS plus CloudVision Studios for intent-driven config.

What is the realistic EoS/EoL lifecycle for each campus-core platform today?

Cisco Catalyst 9500-40X end-of-sale was April 30, 2024 (TAC support continues per EoL bulletin). Catalyst 9500-48Y4C is current-shipping. HPE Aruba CX 8360 v2 is current-shipping (late-2024 refresh of the v1 platform). Juniper EX9200 has a long-shipping history — specific SKU lifecycle status should be pulled from the Juniper Pathfinder EOL tool.

Arista 7500R3 and 7280R3 are current-shipping; the first-generation 7500R (non-R3) has already reached end-of-sale. A campus-core refresh today should target current-shipping SKUs with at least a 5-year EoL runway — not platforms near end-of-sale.

Buying a Campus Fabric, Not a Spec Sheet

A comparison table is a starting point. The right core for a 2,500-bed hospital with Cisco ISE already deployed is not the right core for a 35,000-student K-12 district going open-EVPN, which in turn is not the right core for a financial-services campus consolidating into one spine-leaf fabric. Send building counts, closet counts, fabric preference, existing identity platform, and compliance scope — WiFi Hotshots returns a fixed-fee SOW that picks the core and aggregation platform based on fit.