Data Center Fabric Controller Comparison: Cisco ACI APIC vs Juniper Apstra vs Arista CloudVision vs HPE Aruba Fabric Composer

Four enterprise data center fabric management and intent-based controllers — Cisco Application Centric Infrastructure (ACI) APIC, Juniper Apstra, Arista CloudVision (CVP / CVaaS), and HPE Aruba Networking Fabric Composer — compared on multi-vendor scope, EVPN-VXLAN reference designs, streaming telemetry architecture, zero-touch provisioning, day-2 assurance, pre-deploy validation and rollback, API depth, VMware / Kubernetes / OpenShift integration, and licensing.

WiFi Hotshots is a vendor-agnostic engineering firm serving enterprise customers, architects, data center teams, and network engineering leadership across Southern California and the broader US market.

Multi-CCIE engineering bench — Data Center, R/S, Security

EVPN-VXLAN spine-leaf production experience

Fixed-fee SOW — no T&M surprises

25 years of enterprise networking leadership

All four controllers deliver a fabric-manager abstraction above individual switches — they run spine-leaf EVPN-VXLAN fabrics, push intent-based configuration, and stream telemetry to a centralized analytics plane. The architectural differences are where procurement decisions are made: whether the controller is single-vendor (ACI APIC, Fabric Composer) or multi-vendor (Apstra, and to a narrower extent CloudVision), how root-cause analysis is wired into the assurance engine, the depth of pre-deploy validation and rollback, and how the control plane integrates with VMware vCenter, Kubernetes, and OpenShift. See data center engineering services, AI-ready infrastructure, or the broader services overview. Adjacent in this library, the 400G data center leaf comparison covers the switch hardware these controllers manage.

Why These Four Controllers, and Why the Scope Matters

These four controllers represent the full spectrum of modern data center fabric management philosophy. Cisco ACI APIC is the mature, tightly-coupled, single-vendor intent model built on the Cisco Nexus 9000 family with policy-centric Endpoint Groups and Contracts. Juniper Apstra is the vendor-neutral reference — the single intent plane that can operate Juniper QFX, Cisco Nexus, Arista EOS, Dell SONiC, and Dell EMC fabrics from one graph database. Arista CloudVision (CVP on-prem and CVaaS) is the telemetry-first approach — streaming state through TerminAttr into the Network Data Lake (NetDL), with Studios for intent and CloudVision APIs for integration. HPE Aruba Networking Fabric Composer manages AOS-CX leaf-spine EVPN-VXLAN fabrics with deep Pensando DPU integration for distributed stateful security. Kubernetes-native SDN controllers (Calico, Cilium), hyperscaler-specific fabric tooling (NVIDIA NetQ, SONiC orchestrators), and pure network-observability platforms are out of scope for this page.

The Comparison Matrix: Controller Capabilities That Matter

All four controllers support EVPN-VXLAN spine-leaf fabrics, intent-based configuration, streaming telemetry, and RESTful APIs. The details below reflect documented capability in current vendor primary sources — where a value reads “not verified,” the specific claim was not isolated in a primary source during research for this page. Feature tiers and exact SKU boundaries evolve; procurement teams should confirm current capability and licensing with each vendor before downselecting.

Vendor support scope
  • Cisco ACI APIC: Single-vendor — Cisco Nexus 9000 series only. Tight coupling of policy model to ACI-mode Nexus 9300 / 9500 hardware.
  • Juniper Apstra: Multi-vendor — Juniper QFX (Junos), Cisco Nexus (NX-OS), Arista EOS, Dell SONiC, Dell EMC Z-series. Third-party fabrics are a Premium-tier feature.
  • Arista CloudVision: Primarily Arista EOS. Multi-vendor telemetry via CV UNO (SNMP + flow from third-party devices, VMware vCenter API). Full intent and Studios are Arista-native.
  • HPE Aruba Fabric Composer: Single-vendor — HPE Aruba Networking AOS-CX switches (CX 8325, 8360, 10000, 9300, 8100 series). AMD Pensando DPU integration on CX 10000.

Intent-based networking
  • Cisco ACI APIC: Policy-centric: Tenants, VRFs, Bridge Domains, EPGs, Contracts. The EPG / Contract model is the canonical ACI abstraction.
  • Juniper Apstra: Graph-database-backed intent with continuous validation. Positioned as “the industry’s only fabric manager that provides true intent-based networking.”
  • Arista CloudVision: CloudVision Studios for intent-based workflows — built-in Studios for EVPN, underlay, L3 leaf-spine, plus user-defined Studios.
  • HPE Aruba Fabric Composer: Guided Setup wizards translate high-level intent (fabric topology, VNI ranges, underlay IGP) into per-switch AOS-CX configuration.

Reference designs shipped
  • Cisco ACI APIC: ACI stretched fabric, Multi-Pod (single APIC cluster across pods), Multi-Site (multiple APIC clusters via Nexus Dashboard Orchestrator / MSO), Remote Leaf.
  • Juniper Apstra: 3-stage IP Clos, 5-stage Clos, collapsed fabric, and Freeform architectures — all available across licensing tiers per Juniper product page.
  • Arista CloudVision: Leaf-spine EVPN-VXLAN reference designs via Studios (L3 leaf-spine, EVPN services); DCI and multi-site EVPN stitching via Studios.
  • HPE Aruba Fabric Composer: EVPN-VXLAN spine-and-leaf plus Two-Tier topologies; multi-fabric EVPN-VXLAN via Fabric Composer. Validated Solution Guides (VSGs) published.

EVPN-VXLAN control plane
  • Cisco ACI APIC: MP-BGP EVPN between spines (Multi-Pod) and across sites (Multi-Site). Data-plane VXLAN encapsulation end-to-end. Distributed anycast gateway on leaf.
  • Juniper Apstra: EVPN-VXLAN with integrated DCI — “integrated data center interconnect (DCI) with seamless VXLAN stitching.” RFC 7432 / 8365 conformant across managed NOSes.
  • Arista CloudVision: EVPN-VXLAN with symmetric IRB on Arista EOS; MLAG and EVPN multihoming. CloudVision Studios generate EVPN configuration.
  • HPE Aruba Fabric Composer: AOS-CX EVPN-VXLAN with underlay OSPF or eBGP; overlay iBGP EVPN or eBGP EVPN. CX 10000 with Pensando DPU adds distributed stateful firewall at the VTEP.

Controller architecture
  • Cisco ACI APIC: APIC cluster — 3, 5, or 7 active controllers (quorum-based; minimum 3). Virtual APIC supported on VMware ESXi. In the Nexus Dashboard era, ACI services run on ND.
  • Juniper Apstra: Single Apstra server (VM) or HA cluster. Graph database is the authoritative single source of truth for intent and operational state.
  • Arista CloudVision: CloudVision Portal (CVP) on-premises (single-node or 3-node HA cluster) or CloudVision-as-a-Service (CVaaS) multi-tenant SaaS.
  • HPE Aruba Fabric Composer: Self-contained ISO or OVA — single instance or 3-node high-availability cluster for virtual / physical hosts.

Telemetry architecture
  • Cisco ACI APIC: APIC health scores, fault codes. Nexus Dashboard Insights (NDI) adds streaming telemetry, software telemetry, flow analytics, and assurance.
  • Juniper Apstra: Apstra intent-time analytics plus streaming telemetry on Advanced / Premium tiers. Root-Cause Identification (RCI) on Advanced / Premium.
  • Arista CloudVision: Streaming state via TerminAttr agent (gRPC transport) into NetDL — real-time, not SNMP polling. CV UNO adds SNMP / flow / vCenter API for third-party visibility.
  • HPE Aruba Fabric Composer: Real-time streaming telemetry from AOS-CX switches; integration with HPE Aruba Central NetConductor for unified data-center + campus visibility.

Day-0 zero-touch provisioning
  • Cisco ACI APIC: APIC auto-discovers fabric nodes via LLDP and DHCP-based ZTP; fabric membership policy admits switches. PnP Connect is Catalyst-adjacent, not an ACI fabric mechanism.
  • Juniper Apstra: ZTP via Apstra — device onboarding pulls Apstra-rendered config, validates against intent before commit.
  • Arista CloudVision: ZTP as-a-Service in CloudVision Studios — image provisioning, initial configuration rendering, continuous state reconciliation.
  • HPE Aruba Fabric Composer: Fabric Composer Guided Setup handles day-0 underlay and overlay configuration; ZTP onboarding for AOS-CX switches.

Pre-deploy validation + commit/rollback
  • Cisco ACI APIC: APIC configuration snapshots and import/export; policy-model validation pre-commit. Nexus Dashboard adds pre-change analysis (NDI).
  • Juniper Apstra: Time Voyager — versioned rollback to any retained blueprint commit (5 most recent by default, up to 100 configurable, plus indefinitely-pinned revisions). Continuous validation ensures deployed state matches intent.
  • Arista CloudVision: CloudVision change control: network-wide change workflows, snapshot-based rollback, automated upgrades with rollback.
  • HPE Aruba Fabric Composer: Fabric Composer versioned config and rollback per-switch; pre-apply validation against the reference model.

Drift detection + assurance
  • Cisco ACI APIC: Continuous health scoring; Nexus Dashboard Insights adds assurance analytics, compliance checks, and epoch-based pre/post change diff.
  • Juniper Apstra: Continuous anomaly detection on intent deviations; Apstra flags configuration drift against the graph-DB source of truth automatically.
  • Arista CloudVision: CloudVision compliance and bug-exposure views against EOS releases; configuration drift visible via network snapshots and change review.
  • HPE Aruba Fabric Composer: Fabric Composer validates running state against the reference model; drift surfaces on dashboards.

Root-cause analysis / alert context
  • Cisco ACI APIC: NDI anomaly analytics with advisories, security advisories, compliance; topology-aware event correlation.
  • Juniper Apstra: Advanced-tier Root-Cause Identification (RCI) — built-in probes for L2 / L3 / EVPN / optical / BGP anomalies with traceable correlation to intent.
  • Arista CloudVision: CVP anomaly detection with state-streaming timeline; bug and CVE exposure by device and release.
  • HPE Aruba Fabric Composer: Fabric Composer event correlation within the AOS-CX fabric; integration with HPE InfoSight for cross-stack RCA.

Northbound API
  • Cisco ACI APIC: REST API (XML / JSON), NX-API on Nexus switches. Terraform / Ansible providers. gRPC / gNMI on modern NX-OS.
  • Juniper Apstra: REST API. Official Terraform provider and Ansible collection. Integration with third-party automation frameworks.
  • Arista CloudVision: REST plus gRPC APIs. OpenConfig data models. Official cloudvision-apis gRPC repo (github.com/aristanetworks/cloudvision-apis).
  • HPE Aruba Fabric Composer: REST API. Integration Packs for VMware vSphere, Nutanix, HPE iLO Amplifier, Pensando PSM.

VMware vCenter integration
  • Cisco ACI APIC: ACI Virtual Machine Manager (VMM) domain integration with vCenter — DVS push, port-group mapping to EPGs, micro-segmentation via AVE / AVS.
  • Juniper Apstra: Apstra integrates with virtualization inventories via Apstra Cloud Services; Juniper vDC designs document vCenter workflows.
  • Arista CloudVision: CV UNO integrates with vCenter APIs for cross-domain inventory and flow visibility.
  • HPE Aruba Fabric Composer: Fabric Composer native vCenter integration — DVS and PVLAN policy automation pushed from the fabric.

Kubernetes / OpenShift integration
  • Cisco ACI APIC: ACI CNI plugin for Kubernetes and OpenShift — distributed routing / switching with VXLAN overlay, hardware-accelerated load balancing, VMM domain per cluster.
  • Juniper Apstra: Apstra is fabric-centric; Kubernetes workloads consume the fabric as underlay. Juniper Cloud-Native Contrail Networking is the separate CNI product.
  • Arista CloudVision: CloudVision integrates with Kubernetes observability via CV UNO and streaming flow; Arista also ships Cluster Load Balancing for K8s via EOS.
  • HPE Aruba Fabric Composer: Fabric Composer surfaces K8s workloads through infrastructure integrations; Pensando DPU provides distributed policy at the VTEP.

SOC 2 / ISO 27001 posture
  • Cisco ACI APIC: Cisco Trust Portal publishes SOC 2 / ISO 27001 / FedRAMP / C5 attestations across the product portfolio; specific APIC / Nexus Dashboard certificates verifiable in the Trust Portal.
  • Juniper Apstra: Juniper publishes compliance via Juniper Pathfinder Compliance Advisor; Apstra-specific certificate coverage to be confirmed with the Juniper compliance team.
  • Arista CloudVision: Arista publishes SOC 2 and other compliance documentation for CVaaS via the Arista product certifications index — verify CVaaS-specific attestation per procurement requirement.
  • HPE Aruba Fabric Composer: HPE maintains enterprise-scale SOC 2 and ISO 27001 programs; Fabric Composer certificate scope verifiable through the HPE Trust Center.

Licensing model
  • Cisco ACI APIC: Tiered: DCN Essentials, DCN Advantage, DCN Premier. Multi-Site requires Advantage or higher. Subscription (3 / 5 / 7 yr) or perpetual (Essentials / Advantage only, not Premier).
  • Juniper Apstra: Three-tier Flex: Standard, Advanced, Premium. Third-party vendor fabrics require Premium. Per managed device, 1-, 3-, 5-, or 7-year terms.
  • Arista CloudVision: CloudVision per-device subscription for CVP and CVaaS; CV UNO is a premium feature on CVaaS. Feature tiering distinguishes core CVP from add-on modules.
  • HPE Aruba Fabric Composer: Annual per-switch software subscription for Fabric Composer; separate AOS-CX switch subscription licensing (Foundation / Advanced).

Choosing a fabric controller without validating how it handles pre-deploy checks and rollback is how data center migrations stall. Send current topology, switch inventory, and workload mix — WiFi Hotshots returns a fixed-fee design SOW.

Per-Controller Fact Summaries

Cisco ACI APIC

The mature, tightly-coupled, single-vendor reference. APIC runs on a 3-, 5-, or 7-node cluster (quorum-based; minimum 3 active) tied to ACI-mode Cisco Nexus 9000 leaf and spine switches. The policy model — Tenants, VRFs, Bridge Domains, Endpoint Groups (EPGs), Contracts — is the canonical ACI abstraction, and is distinct from the CLI-config mental model of every other controller on this page. Multi-Pod extends a single APIC cluster across pods via MP-BGP EVPN over the IPN; Multi-Site uses Nexus Dashboard Orchestrator (NDO, formerly MSO) to push policy across independent APIC clusters.

ACI CNI integration with Kubernetes and OpenShift is production-class: distributed routing and switching with VXLAN overlays, hardware-accelerated load balancing, VMM domain per cluster. Licensing: DCN Essentials, Advantage, or Premier (Multi-Site requires Advantage or higher; Nexus Dashboard Insights is tiered by DCN license since Cisco Q2 FY24). Weakness: the single-vendor tie — ACI does not manage non-Cisco switches, so organizations with a multi-vendor strategy must operate ACI alongside a separate controller for the non-Cisco estate.
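As a concrete illustration of the API depth, a minimal Python sketch of the APIC REST workflow is shown below: authenticate against the aaaLogin endpoint, then read the fvTenant class. The controller hostname and credentials are placeholders, and TLS verification is disabled only for brevity; confirm behavior against the current APIC REST API documentation.

```python
import requests

APIC = "https://apic.example.com"  # placeholder controller address

session = requests.Session()
session.verify = False  # lab-only; use proper CA validation in production

# Authenticate: APIC returns a session token and sets the APIC-cookie.
login = session.post(
    f"{APIC}/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
)
login.raise_for_status()

# Query the policy model: list all tenants (managed-object class fvTenant).
tenants = session.get(f"{APIC}/api/class/fvTenant.json").json()
for obj in tenants["imdata"]:
    print(obj["fvTenant"]["attributes"]["name"])
```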

Juniper Apstra

The multi-vendor intent engine. Apstra’s graph database holds the fabric model as the authoritative source of truth; device-specific configuration is rendered from the intent, deployed via device drivers, and continuously validated. Apstra 5.x qualified versions include Arista EOS 4.28.7.1M, Cisco Nexus 93600CD-GX as a supported Device Profile, Dell EMC Z9432F-ON as a leaf, plus Dell SONiC — enabling vendor-neutral EVPN-VXLAN deployments across mixed fabrics. Cisco Nexus deployments require NX-OS TCAM carving before EVPN works. Reference designs: 3-stage IP Clos, 5-stage Clos, collapsed fabric, Freeform. Time Voyager rollback and Root-Cause Identification (RCI, Advanced tier) are the differentiators against ACI’s policy model. Licensing: three Flex tiers — Standard, Advanced, Premium, per managed device, 1- / 3- / 5- / 7-year. Third-party (non-Juniper) fabrics require Premium. Weakness: Kubernetes / OpenShift CNI integration is not native — Juniper’s Cloud-Native Contrail is a separate product.
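The graph-backed intent is exposed over the same REST API the Terraform provider and Ansible collection use. The Python sketch below shows the general shape of a token-authenticated blueprint query; the /api/user/login and /api/blueprints paths and the AuthToken header reflect the commonly documented Apstra REST interface but should be treated as assumptions and confirmed against the API reference for the deployed release.

```python
import requests

APSTRA = "https://apstra.example.com"  # placeholder server address

session = requests.Session()
session.verify = False  # lab-only; use proper CA validation in production

# Authenticate and capture the API token (endpoint path is an assumption --
# confirm against the Apstra REST API reference for your release).
resp = session.post(
    f"{APSTRA}/api/user/login",
    json={"username": "admin", "password": "password"},
)
resp.raise_for_status()
session.headers["AuthToken"] = resp.json()["token"]

# List blueprints -- each blueprint is a staged or deployed fabric intent model.
for bp in session.get(f"{APSTRA}/api/blueprints").json().get("items", []):
    print(bp.get("id"), bp.get("label"))
```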

Arista CloudVision (CVP / CVaaS)

Telemetry-first. TerminAttr, the streaming state agent, publishes all EOS operational state to CloudVision over gRPC into the Network Data Lake (NetDL) — real-time, not SNMP-polled. CloudVision Studios are the intent layer: built-in Studios for L3 leaf-spine, EVPN, Streaming Telemetry Agent config, plus user-defined Studios for custom workflows. Deployment is CVP on-prem (1-node or 3-node HA) or CVaaS multi-tenant SaaS. Multi-vendor reach is through CV UNO (Universal Network Observability), which adds SNMP and flow data from third-party devices and VMware vCenter API integration — but intent and Studios remain Arista-native. CloudVision APIs are published as gRPC (OpenConfig-aligned) and REST; the official cloudvision-apis repo is on GitHub. Licensing: per-device subscription for CVP and CVaaS with CV UNO as a premium CVaaS feature. Weakness: full-intent and Studios are bound to Arista EOS — organizations needing a single-pane intent model across Juniper, Cisco, and Arista should evaluate Apstra instead.
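For programmatic access alongside the gRPC resource APIs, Arista also maintains cvprac, a Python client for the CloudVision REST interface. A minimal sketch follows, assuming cvprac is installed (pip install cvprac); hostname and credentials are placeholders, and CVaaS normally uses service-account token authentication instead, so check the cvprac documentation for the deployment in question.

```python
from cvprac.cvp_client import CvpClient

# Connect to an on-prem CVP node (CVaaS uses a service-account token instead
# of username/password -- see the cvprac documentation for token auth).
client = CvpClient()
client.connect(["cvp.example.com"], "cvpadmin", "password")

# Pull the provisioned device inventory and print hostname and model.
for device in client.api.get_inventory():
    print(device.get("hostname"), device.get("modelName"))
```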

HPE Aruba Networking Fabric Composer

The AOS-CX fabric automation plane. Fabric Composer’s Guided Setup walks teams through baseline switch configuration, underlay addressing and routing, and overlay EVPN-VXLAN — suited to teams doing their first spine-leaf fabric without handrolling per-switch config. Topologies: EVPN-VXLAN spine-and-leaf, Two-Tier, and multi-fabric. The CX 10000 with AMD Pensando DPU is the distinguishing story — distributed stateful firewall and microsegmentation at the VTEP, managed through Fabric Composer’s integration with the Pensando Policy and Services Manager (PSM). Native VMware vCenter integration pushes DVS and PVLAN policy automatically. Deployment: self-contained ISO or OVA, single instance or 3-node HA cluster. Licensing: annual per-switch subscription, separate from AOS-CX switch subscription. Weakness: single-vendor scope — Fabric Composer manages AOS-CX switches only; a heterogeneous Nexus / Arista / Juniper estate needs a different tool or separate controllers.

When Each Platform Is Worth Evaluating First

These are routing heuristics from documented capability and field patterns, not vendor preferences. A production decision requires topology review, workload mapping, and a written design. WiFi Hotshots engineers across all four controllers; the routing reflects what the documented architecture favors for common scenarios.

  • Cisco-native data center estate with policy-centric segmentation: Cisco ACI APIC remains the reference — EPGs, Contracts, VMM integration with vCenter, and ACI CNI for Kubernetes / OpenShift are the most mature single-vendor intent stack, provided the organization commits to a Nexus 9000 ACI-mode hardware footprint.
  • Multi-vendor fabric today or mixed-vendor strategy tomorrow: Juniper Apstra is the documented-strongest multi-vendor intent engine — Juniper QFX, Cisco Nexus, Arista EOS, Dell SONiC, Dell EMC Z-series managed from one graph database. Premium tier is required for non-Juniper fabrics.
  • Telemetry-first operations and streaming state as the source of truth: Arista CloudVision with TerminAttr / NetDL is the documented-strongest streaming telemetry architecture of the four — relevant for teams building observability pipelines, flow analytics, and real-time network data consumption via gRPC / OpenConfig.
  • Distributed stateful security at the VTEP (PCI, healthcare, financial-services microsegmentation): HPE Aruba CX 10000 + Fabric Composer + Pensando PSM is the documented integration for line-rate distributed firewall enforcement at the leaf, avoiding hairpin trombone through a centralized appliance.
  • Kubernetes / OpenShift tight integration at the fabric: Cisco ACI APIC with the ACI CNI plugin is the most documented path — distributed routing, hardware-accelerated load balancing, VMM domain per cluster. Other controllers reach K8s through separate CNI products or observability integrations.
  • Continuous validation and time-traveled rollback as a design constraint: Juniper Apstra’s Time Voyager plus Root-Cause Identification (Advanced tier) is the documented-strongest rollback story. CloudVision and Fabric Composer support rollback via snapshots; APIC does so via config export / import.
  • SaaS-only operations (no on-prem controller appliance): Arista CloudVision-as-a-Service (CVaaS) is the native multi-tenant SaaS option. ACI APIC, Apstra, and Fabric Composer are primarily on-prem controllers (APIC has virtual APIC; Apstra runs as a VM; Fabric Composer as ISO or OVA).

Frequently Asked Questions

Which of these controllers manage non-native switches from other vendors?

Juniper Apstra is the only controller in this comparison designed as a multi-vendor intent engine. Apstra 5.x qualified versions include Juniper QFX (Junos), Cisco Nexus (NX-OS, with TCAM carving prerequisite), Arista EOS 4.28.7.1M, Dell SONiC, and Dell EMC Z9432F-ON. Third-party (non-Juniper) fabrics require the Premium license tier.

Cisco ACI APIC manages only Cisco Nexus 9000 ACI-mode switches.

HPE Aruba Fabric Composer manages only AOS-CX. Arista CloudVision’s full intent and Studios are Arista-native; multi-vendor reach is through CV UNO streaming SNMP and flow data from third-party devices for observability only.

What EVPN-VXLAN reference designs do these controllers support?

Juniper Apstra ships 3-stage IP Clos, 5-stage Clos, collapsed fabric, and Freeform across licensing tiers per Juniper product documentation. Cisco ACI APIC supports stretched fabric, Multi-Pod (single APIC across pods via MP-BGP EVPN on the IPN), Multi-Site (multiple APIC clusters via Nexus Dashboard Orchestrator), and Remote Leaf.

Arista CloudVision Studios include built-in L3 leaf-spine and EVPN Studios plus user-defined Studios for custom topologies. HPE Aruba Fabric Composer supports EVPN-VXLAN spine-and-leaf, Two-Tier, and multi-fabric EVPN-VXLAN per the HPE Aruba Validated Solution Guides.

How does each controller handle pre-deploy validation and rollback?

Juniper Apstra’s Time Voyager allows versioned rollback to any previous intent state, with continuous validation ensuring deployed state matches intent. Arista CloudVision offers network-wide change control workflows, snapshot-based rollback, and automated upgrades with rollback. Cisco ACI APIC supports configuration snapshots via export / import, plus pre-change analysis through Nexus Dashboard Insights. HPE Aruba Fabric Composer supports versioned config and rollback per switch plus pre-apply validation against the reference model.

Which controllers have native Kubernetes and OpenShift integration?

Cisco ACI APIC is the most mature — the ACI CNI plugin delivers distributed routing and switching with VXLAN overlays, hardware-accelerated load balancing for external LoadBalancer services, and a per-cluster VMM domain. The ACI CNI is supported on Red Hat OpenShift and vanilla Kubernetes. Juniper’s container networking is handled by Cloud-Native Contrail Networking, a separate product from Apstra.

Arista integrates with Kubernetes observability via CV UNO and offers EOS-native Cluster Load Balancing.

HPE Aruba Fabric Composer surfaces K8s workloads through infrastructure integration packs and provides distributed policy at the VTEP via Pensando DPU on CX 10000.

What is the streaming telemetry architecture on each controller?

Arista CloudVision is telemetry-first — TerminAttr streams all EOS operational state over gRPC into the Network Data Lake (NetDL), eliminating SNMP polling. Cisco ACI exposes APIC health scores and fault codes, with streaming telemetry and flow analytics added by Nexus Dashboard Insights (NDI). Juniper Apstra provides intent-time analytics across all tiers, with streaming telemetry and Root-Cause Identification on Advanced and Premium. HPE Aruba Fabric Composer streams real-time telemetry from AOS-CX, with unified data-center + campus visibility through HPE Aruba Central NetConductor.
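Outside each controller's native pipeline, the switches themselves can also be read over gNMI/OpenConfig, which is how third-party collectors typically consume the same state. Below is a minimal sketch using the open-source pygnmi library; the target, credentials, path, and port 6030 (EOS's usual gNMI port) are assumptions, and other NOSes use their own gNMI ports and enablement steps.

```python
from pygnmi.client import gNMIclient

# Placeholder leaf switch; 6030 is the usual EOS gNMI port (assumption --
# NX-OS, Junos, and AOS-CX each have their own gNMI enablement and port).
TARGET = ("leaf1.example.com", 6030)

with gNMIclient(target=TARGET, username="admin", password="admin", insecure=True) as gc:
    # One-shot GET of interface counters; the same client also supports
    # streaming subscriptions for continuous telemetry collection.
    result = gc.get(path=["/interfaces/interface[name=Ethernet1]/state/counters"])
    print(result)
```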

How do these controllers cluster for high availability?

Cisco ACI APIC runs as a 3-, 5-, or 7-node quorum-based cluster (minimum 3 active) of physical appliances, with Virtual APIC as an option on VMware ESXi. Juniper Apstra runs as a single Apstra server or HA cluster. Arista CloudVision Portal (on-prem) runs single-node or 3-node HA; CVaaS is multi-tenant SaaS without customer-managed clustering. HPE Aruba Fabric Composer installs as self-contained ISO or OVA, single instance or 3-node HA cluster on virtual or physical hosts.

Do these controllers have SOC 2 and ISO 27001 attestations?

SOC 2 and ISO 27001 are typically controller-plane attestations rather than per-SKU certificates. Cisco’s Trust Portal publishes SOC, FedRAMP, ISO, and C5 attestations across the product portfolio — specific APIC and Nexus Dashboard scope is verifiable there. Juniper publishes compliance via the Pathfinder Compliance Advisor; Apstra-specific certificate coverage should be confirmed with Juniper’s compliance team.

Arista publishes compliance documentation for CloudVision-as-a-Service via its product certifications index — procurement teams should request the CVaaS-specific attestation.

HPE maintains enterprise-scale SOC 2 and ISO 27001 programs; Fabric Composer scope is verifiable through HPE Trust Center. FedRAMP, StateRAMP, and FIPS scopes vary by product and should be verified on a per-controller basis before procurement.

How do licensing models compare across the four controllers?

Cisco ACI uses tiered DCN licenses — Essentials, Advantage, Premier. Multi-Site requires Advantage or higher. Subscription (3 / 5 / 7 years) or perpetual (Essentials / Advantage only, not Premier). Juniper Apstra uses a three-tier Flex model — Standard, Advanced, Premium — per managed device, 1 / 3 / 5 / 7 years; non-Juniper fabrics require Premium.

Arista CloudVision is per-device subscription for CVP and CVaaS; CV UNO is a premium CVaaS feature.

HPE Aruba Fabric Composer is an annual per-switch subscription, separate from AOS-CX switch licensing. Procurement teams should request current pricing and the controller’s dependency chain on base switch subscription licenses for total cost comparison.

Which controller is right for an AI-ready or AI back-end fabric?

AI back-end networks are typically non-blocking (1:1 subscription) RoCE or InfiniBand fabrics with different operational requirements than a general-purpose east-west enterprise data center. For the front-end and storage networks that sit alongside the AI back-end, all four controllers manage the EVPN-VXLAN underlay competently. Arista CloudVision is deployed by multiple hyperscalers and AI cloud operators. Juniper Apstra brings vendor-neutral flexibility. See AI-ready infrastructure for platform-specific guidance on GPU cluster fabric design, and the 400G data center leaf comparison for the switch hardware underneath.

What is the difference between Cisco NDFC and ACI APIC for data-center fabric management?

Cisco Nexus Dashboard Fabric Controller (NDFC, formerly DCNM) is a multi-fabric, multi-vendor-capable controller for standards-based EVPN-VXLAN, classic LAN, and FCoE / SAN fabrics running NX-OS. It manages the customer-visible, standards-based configuration on standard Nexus 9000 platforms running NX-OS.

ACI APIC manages only Cisco Nexus 9000 switches in ACI-mode (a Cisco-proprietary fabric mode using VXLAN data plane with ACI policy engine). APIC policy uses EPGs (Endpoint Groups), contracts, and application profiles — a declarative model. NDFC and APIC are different architectural approaches — choose based on whether the fabric is ACI-mode (APIC) or NX-OS EVPN-VXLAN (NDFC).

What is the difference between Arista CloudVision CVP (on-prem) and CloudVision-as-a-Service (CVaaS)?

CloudVision Portal (CVP) is Arista’s on-premises deployment — installed on customer hardware (single-node or 3-node HA) as a VM or container. CVP is the right fit for air-gapped environments, sovereignty-controlled deployments, or customers whose policy prohibits cloud egress.

CloudVision-as-a-Service (CVaaS) is the Arista-hosted SaaS deployment with Arista running the operational plane. CV UNO (Universal Network Observability) is a premium CVaaS feature surfacing third-party device data. CVaaS removes the customer-operated controller burden but requires internet egress from the managed fleet. Federal / sovereignty buyers typically land on CVP on-prem.

What intent-based fabric patterns does Juniper Apstra support natively?

Juniper Apstra 5.x ships 3-stage IP Clos, 5-stage Clos, collapsed fabric, and Freeform design patterns per Juniper Apstra documentation. The Clos patterns are the standard EVPN-VXLAN spine-leaf templates; Freeform is the unrestricted topology mode for customers with non-Clos requirements.

Apstra is the only controller in this comparison designed ground-up as a multi-vendor intent engine. Qualified NOS targets in Apstra 5.x include Juniper QFX (Junos), Cisco Nexus (NX-OS, with TCAM carving prereq), Arista EOS 4.28.7.1M, Dell SONiC, and Dell EMC Z9432F-ON. Third-party (non-Juniper) fabrics require the Apstra Premium license tier.

What is the relationship between NVIDIA Cumulus Linux, NetQ, and NVIDIA Air?

NVIDIA Cumulus Linux is the open-Linux NOS running on NVIDIA Spectrum switches and supported third-party silicon. NetQ is NVIDIA’s telemetry and fabric validation tool — it collects Cumulus agents’ data into a time-series database for query and anomaly detection.

NVIDIA Air is the digital-twin sandbox — a cloud-hosted lab that models a Cumulus fabric topology for pre-deployment testing, training, and what-if scenarios. The three are complementary: Cumulus Linux is the NOS, NetQ is the operational plane, Air is the pre-deployment validation lab. For NVIDIA Spectrum-X AI fabrics, all three typically appear together in the operational workflow.

Which controller has the most mature Terraform provider today?

Arista CloudVision Terraform provider (registry.terraform.io/providers/aristanetworks/cvp) has been in active maintenance since 2020, with resources for studios, configlets, device onboarding, and change control workflows. Cisco ACI Terraform provider (ciscodevnet/aci) has strong maturity for EPG, contract, tenant, and application-profile resources.

Cisco NDFC Terraform provider is available and supports fabric deployment, VRF, network, and switch inventory resources. Juniper Apstra Terraform provider (Juniper/apstra) ships comprehensive blueprint, rack, and device lifecycle resources. NVIDIA Cumulus has community Terraform modules via the NetQ API. Ansible module maturity generally matches Terraform maturity across all four ecosystems.

What is the difference between intent-based and imperative configuration on a DC fabric controller?

Imperative config directly instructs the device: “configure VLAN 100 on port 1/1/1 as access.” The controller becomes a CLI pusher. Intent-based config declares the desired state: “this rack should be in VRF Red with tenant segmentation” — the controller computes the device-level config needed to achieve that intent.

Intent-based: Juniper Apstra (native intent), Cisco ACI APIC (EPG declarative model), Arista CloudVision Studios (intent studios). Imperative: legacy DCNM / NDFC fabric builder mode, CLI + Ansible playbooks. The operational advantage of intent is drift detection and automatic remediation; the downside is a steeper learning curve for teams accustomed to CLI-centric operations.
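A toy sketch of the distinction (illustrative only, not any vendor’s actual data model): the imperative form is an ordered list of device commands the operator must get right, while the intent form is a declared end state the controller reconciles against observed state.

```python
# Imperative: ordered device commands the operator is responsible for.
imperative = [
    "vlan 100",
    "interface Ethernet1/1",
    "  switchport access vlan 100",
]

# Declarative intent: the desired end state; the controller renders the
# per-device configuration needed to satisfy it.
intent = {"rack": "rack1", "vrf": "Red", "vlan": 100, "ports": ["Ethernet1/1"]}

def reconcile(intended: dict, observed: dict) -> dict:
    """Return the drift between intended and observed state (toy example)."""
    return {k: v for k, v in intended.items() if observed.get(k) != v}

observed = {"rack": "rack1", "vrf": "Red", "vlan": 200, "ports": ["Ethernet1/1"]}
print(reconcile(intent, observed))  # {'vlan': 100} -> the delta the controller pushes
```

Drift detection falls out of this model for free: whatever the reconcile step reports is, by definition, the deviation from intent.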

Does Apstra Time Voyager actually roll back hardware config, or only the intent database?

Time Voyager rolls back the intent state in the Apstra database and then computes the device-level config delta needed to bring the fabric to that earlier state. Apstra pushes those device-level changes via standard NETCONF / gNMI / CLI — so yes, it rolls back hardware config, not just the database record.

The rollback is not a transactional commit on the switch itself — it is a re-derive + re-push workflow. This is why Apstra’s rollback is slower than a local Junos rollback commit (which commits the last candidate config atomically on a single device). For fabric-wide recovery, Time Voyager’s advantage is the intent model; for per-device fast recovery, native Junos rollback commit is faster.

What multi-site fabric scope does each controller cover?

Cisco ACI APIC supports stretched fabric, Multi-Pod (single APIC cluster across pods via MP-BGP EVPN on the inter-pod network), Multi-Site (multiple APIC clusters via Nexus Dashboard Orchestrator), and Remote Leaf. Cisco NDFC supports Multi-Site Domain (MSD) for stretched EVPN-VXLAN.

Juniper Apstra supports multi-blueprint with inter-fabric peering via DCI. Arista CloudVision supports multi-fabric management natively across a single CVP / CVaaS instance. For federated / multi-cluster scope, architecture differs per vendor — designs with more than 3 geographically distributed fabrics benefit from explicit DCI + BGP EVPN inter-fabric peering regardless of controller choice.

What simulation or dry-run mode does each controller provide before a config push hits production switches?

Juniper Apstra computes a delta between intent and live fabric state, simulates the config push, and shows anomalies without applying; Apstra’s staging environment lets operators preview intent-driven changes. Cisco Nexus Dashboard Insights (NDI) pre-change analysis models the impact of a proposed ACI config modification before APIC commits it.

Arista CloudVision Studios compile phase validates the rendered configlet against the fabric schema before any switch config changes. NVIDIA Air provides a full digital-twin sandbox — operators build the proposed change in Air, validate, then promote to production NetQ-managed Cumulus. Cisco NDFC runs fabric-consistency validation but does not offer a full simulation environment natively.

What is the difference between ACI EPG policy model and Apstra / NDFC segmentation models?

Cisco ACI EPG (Endpoint Group) is a logical grouping of endpoints in an application profile with a policy contract specifying allowed traffic between EPGs. Security is built into the fabric policy plane — EPGs communicate only when a contract exists.

Apstra segmentation uses VRF / virtual-network / security-zone primitives with explicit routing-policy. Cisco NDFC on NX-OS uses standard VRF / VLAN / anycast-gateway constructs plus VXLAN EVPN Type 5 for L3 segmentation. Arista CloudVision uses EVPN VRF + Arista Multi-Domain Segmentation Services (MSS) for tag-based segmentation. The four models differ in abstraction level, not in what traffic ultimately gets allowed or blocked.
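To make the abstraction difference concrete, the sketch below shows the rough shape of an ACI tenant / EPG / contract payload expressed as a Python dict. The class names (fvTenant, fvAp, fvAEPg, vzBrCP) follow the ACI object model, but the attribute sets are trimmed and the contract is not yet bound to the EPGs, so treat it as illustrative rather than a deployable policy.

```python
# Illustrative ACI policy document (trimmed); class names per the ACI object model.
epg_policy = {
    "fvTenant": {
        "attributes": {"name": "Prod"},
        "children": [
            {"vzBrCP": {"attributes": {"name": "web-to-db"}}},        # contract
            {"fvAp": {
                "attributes": {"name": "three-tier"},                 # application profile
                "children": [
                    {"fvAEPg": {"attributes": {"name": "web"}}},      # EPG
                    {"fvAEPg": {"attributes": {"name": "db"}}},       # EPG
                ],
            }},
        ],
    }
}
# An authenticated POST of a document like this to {APIC}/api/mo/uni.json creates
# the tenant tree; EPGs communicate only once the contract is provided and consumed.
```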

How does Ansible, PyEZ, pyATS, and NAPALM integration maturity compare across these controllers?

Cisco: Ansible (cisco.nxos, cisco.aci, cisco.dcnm collections) is mature; pyATS (Cisco-native test automation framework) is first-class on NX-OS / ACI. Juniper: PyEZ (official Juniper Python library) plus Ansible juniper.device collection for Junos; Apstra Ansible collection for intent-level operations.

Arista: pyeapi (Arista-native Python client) plus Ansible arista.eos collection; extensive EOS SDK. NAPALM (Network Automation and Programmability Abstraction Layer with Multivendor support) covers Cisco NX-OS, Juniper Junos, and Arista EOS as first-class driver targets. NVIDIA Cumulus: Ansible cumulus_linux module plus NetQ CLI automation.
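As an illustration of the multivendor abstraction layer, the NAPALM sketch below retrieves normalized device facts and BGP neighbor state with the same calls regardless of whether the driver is eos, nxos, or junos. Hostname and credentials are placeholders.

```python
from napalm import get_network_driver

# Swap "eos" for "nxos" or "junos" -- the getter calls stay the same.
driver = get_network_driver("eos")
device = driver(hostname="leaf1.example.com", username="admin", password="admin")

device.open()
facts = device.get_facts()              # normalized hostname / model / OS version
neighbors = device.get_bgp_neighbors()  # normalized per-VRF BGP peer state
device.close()

print(facts["hostname"], facts["model"], facts["os_version"])
print(list(neighbors.get("global", {}).get("peers", {})))
```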

What RBAC granularity does each fabric controller expose?

Cisco ACI APIC: security-domain-based RBAC with per-tenant / per-VRF / per-EPG granularity, plus read-only / admin / custom roles. Cisco NDFC: role-based access with fabric / switch / object-level permissions, integrated with LDAP / RADIUS / TACACS+.

Juniper Apstra: resource-level RBAC with per-blueprint, per-rack, per-design permissions. Arista CloudVision: studio-level, configlet-level, and device-level RBAC with SAML / LDAP integration. NVIDIA NetQ: per-role access to fabric views and API endpoints. All four support SAML federation for SSO integration with corporate IdPs.

What API-driven CMDB integration patterns exist for NetBox or Nautobot?

NetBox (open-source IPAM/DCIM) integrates with all four controllers via the NetBox REST API — the most common pattern is NetBox-as-source-of-truth driving fabric controller config via Ansible or Terraform. Nautobot is the Network to Code fork of NetBox with a plug-in architecture for direct controller integration (including Nautobot plug-ins for Arista CloudVision).

For Cisco ACI, the Nautobot device-onboarding plug-in (nautobot-plugin-device-onboarding) and custom Ansible workflows are common; for Apstra, the REST API exports blueprint data consumable by NetBox. Custom CMDB integrations typically use Ansible or Terraform as the pipeline glue between the CMDB and the fabric controller.
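A minimal sketch of the NetBox-as-source-of-truth side of that pipeline, using the pynetbox client: the URL, token, and the role/site filter slugs are placeholders, and the records pulled here would typically feed an Ansible inventory or Terraform variables that in turn drive the fabric controller.

```python
import pynetbox

# Placeholder NetBox instance and API token.
nb = pynetbox.api("https://netbox.example.com", token="0123456789abcdef")

# Pull the leaf switches for one site; role and site slugs are assumptions.
for device in nb.dcim.devices.filter(role="leaf", site="dc1"):
    ip = device.primary_ip4.address if device.primary_ip4 else None
    print(device.name, device.device_type.model, ip)
```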

How does upgrade orchestration work across these fabric controllers?

Cisco ACI APIC orchestrates switch firmware upgrades per-maintenance-group with staged rollout; APIC itself upgrades as a controller cluster. Cisco NDFC supports ZTP switch onboarding plus staged image push. Juniper Apstra orchestrates OS upgrades per-rack / per-fabric with pre-change snapshots and rollback hooks.

Arista CloudVision runs automated image compliance, staged upgrade per-device / per-container, with hitless upgrade on supported platforms. NVIDIA NetQ validates post-upgrade state but does not directly orchestrate Cumulus Linux upgrades — that runs via Ansible or per-device apt upgrade with NetQ-validated post-checks.

What is the published scalability ceiling (max switches / max routes) per fabric controller?

Cisco ACI APIC: up to 1,200 leaf switches in a single fabric per the APIC scalability guide; 500 tenants, 24,000 EPGs, 500,000 endpoints. Cisco NDFC: up to 1,000 switches per fabric per the NDFC scalability guide. Juniper Apstra: up to 500 devices per blueprint per Apstra 5.x release notes.

Arista CloudVision CVP single instance: 2,500 devices per the Arista CloudVision sizing guide (single-node); higher scale via CVaaS multi-tenant. NVIDIA NetQ: scales horizontally per NetQ deployment architecture. Customers approaching ceilings should engage vendor sizing teams with specific workload data — published ceilings are upper bounds, not operational sweet spots.

Which controllers offer native Kubernetes CNI integration for DC fabric + container networking?

Cisco ACI is the most mature — the ACI CNI plugin delivers distributed routing and switching with VXLAN overlays, hardware-accelerated load balancing for external LoadBalancer services, and per-cluster VMM domain. Supported on Red Hat OpenShift and vanilla Kubernetes.

Juniper’s container networking is handled by Cloud-Native Contrail Networking — a separate product from Apstra. Arista integrates with Kubernetes observability via CV UNO and offers EOS-native Cluster Load Balancing. NVIDIA Cumulus pairs with NVIDIA AI Enterprise stack for GPU-cluster networking. HPE Aruba Fabric Composer integrates via infrastructure integration packs and Pensando DPU at the VTEP.

How is fabric observability retained at scale — days of flow data, and query response times on large fabrics?

Arista CloudVision Network Data Lake (NetDL) retains multi-year telemetry history at scale, with per-device retention policies configurable by operator; query response times on large fabrics (>500 devices) depend on NetDL hardware sizing — Arista publishes reference sizing for CVP on-prem.

Cisco Nexus Dashboard Insights typically retains 30-90 days of flow data by default with extended retention via tiered storage. Juniper Apstra retains intent-time analytics history per the deployed storage class. NVIDIA NetQ retention defaults are typically 30 days with configurable extensions. At multi-thousand-switch scale, retention planning drives storage sizing more than compute sizing.

Buying a Fabric, Not a Controller

A controller comparison is a starting point. The right fabric-management plane for a 10-rack co-location POP is not the right plane for a 96-rack enterprise campus DC, and neither is the right plane for a 400G AI back-end cluster. Send current topology, switch inventory, workload mix, and compliance scope — WiFi Hotshots returns a fixed-fee design SOW that picks the platform based on fit.