Chapter 12: Orchestration — Nexus Dashboard, APIC, Hyperfabric, and Intersight

Learning Objectives

Section 1: Nexus Dashboard for AI Fabric Management

Pre-Quiz: Nexus Dashboard

1. What major change occurred with Nexus Dashboard version 4.1?

A) DCNM was renamed to Nexus Dashboard
B) NDFC, NDO, and Insights were consolidated into a single-image installation
C) It was migrated to a cloud-only SaaS model
D) Fabric Controller was deprecated in favor of Terraform

2. Which NDFC feature pre-configures QoS policies for lossless RoCEv2 transport?

A) Nexus-as-Code templates
B) Built-in AI fabric templates
C) Ansible playbooks
D) VXLAN Multi-Site wizard

3. What does Nexus Dashboard Insights use to detect sub-millisecond traffic bursts?

A) SNMP polling
B) NetFlow v9 sampling
C) Microburst detection via streaming telemetry
D) Syslog message correlation

4. What is the primary role of NDO (Nexus Dashboard Orchestrator)?

A) Single-site fabric provisioning
B) Multi-site and multi-fabric policy management
C) GPU workload scheduling
D) Firmware compliance auditing

5. Which IaC tool uses a declarative approach to define the entire NDFC fabric configuration as code?

A) Ansible
B) Python REST API scripts
C) Nexus-as-Code (NaC)
D) PowerShell SDK

Key Points

Platform Architecture and Unified Services

Cisco Nexus Dashboard is the unified operations platform for data center network management. With version 4.1, three previously separate services -- Fabric Controller (NDFC), Orchestrator (NDO), and Insights -- were consolidated into a single-image installation. Exam questions may reference them individually, but they operate within one platform.

graph TD
    ND["Nexus Dashboard 4.1<br/>(Single Image Installation)"]
    ND --> NDFC["Fabric Controller (NDFC)<br/>Provisioning & Automation"]
    ND --> NDO["Orchestrator (NDO)<br/>Multi-Site Policy Management"]
    ND --> INS["Insights<br/>Telemetry & Analytics"]
    NDFC --> T1["AI Fabric Templates"]
    NDFC --> T2["QoS Config (PFC/ECN)"]
    NDFC --> T3["Config Drift Detection"]
    NDO --> M1["Cross-Site VLAN/VRF"]
    NDO --> M2["ACI-to-VXLAN EVPN Interop"]
    INS --> A1["Microburst Detection"]
    INS --> A2["In-Band Network Telemetry"]
    INS --> A3["Anomaly Correlation"]
    style ND fill:#1a5276,color:#fff
    style NDFC fill:#2e86c1,color:#fff
    style NDO fill:#2e86c1,color:#fff
    style INS fill:#2e86c1,color:#fff

| Functional Area | Purpose | Key AI-Relevant Capabilities |
|---|---|---|
| Fabric Controller (NDFC) | Provisioning and automation of NX-OS and MDS fabrics | AI fabric templates, QoS configuration for PFC/ECN, config drift detection |
| Orchestrator (NDO) | Multi-site and multi-fabric policy management | Cross-site VLAN/VRF consistency, ACI-to-VXLAN EVPN interoperability |
| Insights | Telemetry, analytics, and anomaly detection | Microburst detection, INT, anomaly correlation and remediation |

NDFC for AI Fabric Provisioning

NDFC is the comprehensive management and automation solution for Cisco Nexus and MDS platforms running NX-OS. For AI workloads it provides built-in AI fabric templates that pre-configure QoS for lossless RoCEv2 transport (PFC and ECN), and configuration drift detection that raises an alarm whenever a device deviates from the intended configuration.

Analogy: Think of NDFC as a building contractor's project management software. Rather than visiting each site (switch) with paper blueprints, the contractor loads the approved blueprint (fabric template) into the system, which pushes instructions to every subcontractor (device) simultaneously and flags deviations.

Multi-Site Orchestration with NDO

NDO provides centralized network and policy management across ACI, Cloud ACI, and VXLAN EVPN sites, enforcing cross-site VLAN/VRF consistency and enabling ACI-to-VXLAN EVPN interoperability.

Telemetry and Insights for AI Fabrics

Nexus Dashboard Insights collects streaming telemetry from every fabric node. For AI workloads -- where a single congested link can stall training across thousands of GPUs -- this visibility is critical.

| Capability | Description |
|---|---|
| Anomaly Detection | Learns baseline behavior and flags deviations automatically |
| Event Correlation | Links related anomalies to identify root causes |
| Suggested Remediation | Actionable recommendations for detected issues |
| Microburst Detection | Sub-millisecond traffic bursts causing packet drops |
| In-Band Network Telemetry (INT) | Telemetry embedded in live traffic for real-time path visibility |
| Tail Timestamping | Precise packet departure times for latency measurement |
| Congestion Signaling | ECN-marked traffic pattern detection across fabric |
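The microburst capability is worth internalizing: bursts lasting well under a millisecond are invisible to SNMP polling intervals yet long enough to overflow a switch queue. The following is a simplified sketch of the detection idea, using invented queue-depth telemetry samples and thresholds; Insights' actual algorithm is internal to the product.

```python
# Illustrative sketch: flag microbursts in high-frequency queue-depth
# telemetry (timestamps in microseconds, depth in bytes). Thresholds
# and sample data are made up for demonstration.

def detect_microbursts(samples, depth_threshold=500_000, max_duration_us=1000):
    """Return (start_us, end_us, peak_depth) for each burst that exceeds
    depth_threshold and lasts less than max_duration_us (sub-millisecond)."""
    bursts = []
    start = peak = None
    for ts, depth in samples:
        if depth >= depth_threshold:
            if start is None:
                start, peak = ts, depth
            peak = max(peak, depth)
        elif start is not None:
            if ts - start <= max_duration_us:
                bursts.append((start, ts, peak))
            start = peak = None
    return bursts

# 10 us sampling: a 40 us burst peaking at 800 KB -- far too brief for
# SNMP polling to see, but enough to drop RoCEv2 packets.
samples = [(t, 800_000 if 100 <= t < 140 else 10_000) for t in range(0, 300, 10)]
print(detect_microbursts(samples))  # → [(100, 140, 800000)]
```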

Infrastructure-as-Code with NDFC

NDFC exposes full-featured REST APIs supporting multiple IaC frameworks:

| IaC Tool | How It Works with NDFC |
|---|---|
| Terraform | terraform plan validates intent; terraform apply pushes changes; terraform destroy removes infrastructure |
| Ansible | Playbooks automate fabric deployment, interface management, vPC setup, and policy config |
| Nexus-as-Code (NaC) | Declarative approach defining entire fabric config as code, enabling version control and CI/CD |
| Python / REST API | Direct API calls for custom automation and third-party integration |

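As a concrete illustration of the Python / REST API row, here is a minimal sketch that assembles a fabric-creation request body. The host name is a placeholder, and the endpoint path and payload fields are assumptions based on NDFC 12.x conventions; verify them against the API reference for your release.

```python
# Hedged sketch of NDFC REST automation. The base URL, endpoint path,
# and payload keys below are assumptions for illustration -- consult
# the NDFC API reference for your release before using them.
import json

NDFC_BASE = "https://nd.example.com/appcenter/cisco/ndfc/api/v1"  # placeholder host

def build_fabric_payload(name, asn, template="Easy_Fabric"):
    """Assemble the JSON body for a fabric-creation request."""
    return {
        "fabricName": name,
        "templateName": template,        # could be a built-in AI fabric template
        "nvPairs": {"BGP_AS": str(asn)},
    }

payload = build_fabric_payload("AI-Backend", 65001)
print(json.dumps(payload))
# A real client would POST this to
#   f"{NDFC_BASE}/lan-fabric/rest/control/fabrics"
# using an authenticated session (Nexus Dashboard login token).
```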
Animation Slot: Nexus Dashboard 4.1 unified platform walkthrough -- showing NDFC template-driven provisioning flow from template selection through PFC/ECN configuration push to drift detection alerts.
Post-Quiz: Nexus Dashboard

1. In Nexus Dashboard 4.1, NDFC, NDO, and Insights are delivered as:

A) Three separate virtual machines
B) A single-image installation with integrated features
C) Separate SaaS microservices
D) Kubernetes pods managed by Intersight

2. What two QoS mechanisms do NDFC AI fabric templates configure for lossless RoCEv2?

A) DSCP remarking and traffic shaping
B) Priority Flow Control (PFC) and Explicit Congestion Notification (ECN)
C) Weighted Fair Queuing and policing
D) PAUSE frames and tail drop

3. Which Nexus Dashboard component embeds telemetry data within live traffic for real-time path-level visibility?

A) NDFC configuration compliance
B) NDO policy sync
C) Insights with In-Band Network Telemetry (INT)
D) Fabric discovery via LLDP

4. What happens when a device deviates from the intended configuration in NDFC?

A) The device is automatically rebooted
B) Configuration drift detection raises an alarm
C) The change is silently reverted
D) NDO takes over management of the device

5. Which statement about DCNM is correct for the exam?

A) DCNM and NDFC are interchangeable terms
B) DCNM is the preferred platform for new AI deployments
C) DCNM has reached End of Life; all modern capabilities are exclusive to NDFC
D) DCNM was merged into Intersight

Section 2: APIC and ACI for AI Workloads

Pre-Quiz: APIC and ACI

1. In the ACI policy model, what is the correct hierarchy from top to bottom?

A) VRF > Tenant > EPG > Bridge Domain
B) Tenant > VRF > Bridge Domain > Subnet
C) Application Profile > Tenant > Contract > EPG
D) EPG > Application Profile > VRF > Tenant

2. What ACI construct defines allowed communication between Endpoint Groups?

A) Bridge Domain
B) VRF
C) Contract
D) Application Profile

3. For AI training workloads, which traffic pattern is dominant?

A) North-South (external API requests)
B) East-West (inter-GPU gradient exchange)
C) Management plane traffic
D) Multicast replication traffic

4. How many APIC controllers are typically deployed for redundancy?

A) One
B) Two
C) Three
D) Five

5. Why are separate VRFs used for GPU backend traffic and storage traffic in ACI?

A) ACI does not support multiple subnets in one VRF
B) To isolate high-bandwidth lossless GPU traffic from storage I/O
C) VRFs are required for VLAN assignment
D) APIC cannot manage more than one Bridge Domain per VRF

Key Points

ACI Fabric Architecture for AI

Cisco ACI is the SDN solution for data centers, providing application-driven policy management through a declarative, object-oriented framework. It uses a spine-leaf topology where all policy is defined declaratively and enforced by APIC.

The ACI Policy Model

Understanding this hierarchy is essential for the exam:

graph TD
    T["Tenant"] --> VRF["VRF<br/>(L3 Forwarding Domain)"]
    T --> AP["Application Profile"]
    VRF --> BD["Bridge Domain"]
    BD --> SUB["Subnet"]
    AP --> EPG["Endpoint Group (EPG)"]
    EPG --> EP["Endpoints<br/>(Servers, VMs, Containers)"]
    EPG --> CON["Contracts<br/>(Inter-EPG Communication)"]
    style T fill:#1a5276,color:#fff
    style VRF fill:#2874a6,color:#fff
    style AP fill:#2874a6,color:#fff
    style BD fill:#2e86c1,color:#fff
    style EPG fill:#2e86c1,color:#fff
    style SUB fill:#5dade2,color:#fff
    style EP fill:#85c1e9,color:#000
    style CON fill:#85c1e9,color:#000

| Component | Definition | AI Infrastructure Relevance |
|---|---|---|
| Tenant | Logical container for policies; unit of isolation | Separate tenants for AI training vs. inference, or per business unit |
| VRF | Unique L3 forwarding and policy domain | Isolates GPU backend traffic from general DC traffic |
| Bridge Domain | Forwarding policy providing VLAN-like behavior | Maps to specific AI cluster segments needing L2 adjacency |
| Application Profile | Container for EPGs within a tenant | Groups all EPGs of a single AI app (e.g., training pipeline) |
| EPG | Named logical entity containing endpoints | GPU servers in one EPG, storage in another, management in a third |
| Contract | Policy enabling inter-EPG communication | Permits RoCEv2 between GPU EPGs; allows storage access for checkpointing |

Worked Example: AI Training Tenant Design

Tenant: AI-Training-Prod
  VRF: Backend-GPU-VRF
    Bridge Domain: GPU-Cluster-A-BD (subnet 10.100.1.0/24)
    Bridge Domain: GPU-Cluster-B-BD (subnet 10.100.2.0/24)
  VRF: Storage-VRF
    Bridge Domain: NFS-Storage-BD (subnet 10.200.1.0/24)
  Application Profile: LLM-Training-App
    EPG: GPU-Workers (bound to GPU-Cluster-A-BD)
    EPG: Parameter-Servers (bound to GPU-Cluster-B-BD)
    EPG: Training-Storage (bound to NFS-Storage-BD)
    Contract: GPU-to-GPU (permits RoCEv2, ICMP)
    Contract: GPU-to-Storage (permits NFS/iSCSI)
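The same design can be expressed as an APIC REST payload. APIC accepts JSON posted to /api/mo/uni.json, and fvTenant, fvCtx (VRF), fvBD, fvRsCtx, and fvSubnet are standard ACI object-model classes; the EPGs and contracts from the worked example are omitted here for brevity.

```python
# Sketch: the worked-example tenant as an APIC REST payload. A real
# client would POST this JSON to https://<apic>/api/mo/uni.json with
# an authenticated session. EPGs and contracts omitted for brevity.
import json

def bridge_domain(name, vrf, gateway):
    """Bridge domain bound to a VRF, with one gateway subnet."""
    return {"fvBD": {"attributes": {"name": name}, "children": [
        {"fvRsCtx": {"attributes": {"tnFvCtxName": vrf}}},   # BD-to-VRF binding
        {"fvSubnet": {"attributes": {"ip": gateway}}},
    ]}}

tenant = {"fvTenant": {"attributes": {"name": "AI-Training-Prod"}, "children": [
    {"fvCtx": {"attributes": {"name": "Backend-GPU-VRF"}}},
    {"fvCtx": {"attributes": {"name": "Storage-VRF"}}},
    bridge_domain("GPU-Cluster-A-BD", "Backend-GPU-VRF", "10.100.1.1/24"),
    bridge_domain("GPU-Cluster-B-BD", "Backend-GPU-VRF", "10.100.2.1/24"),
    bridge_domain("NFS-Storage-BD", "Storage-VRF", "10.200.1.1/24"),
]}}

print(json.dumps(tenant, indent=2))
```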

AI/ML Traffic Patterns in ACI

flowchart LR
    subgraph EW["East-West Traffic (GPU Backend)"]
        direction LR
        GPU1["GPU Worker<br/>EPG"] -- "RoCEv2<br/>Lossless" --> GPU2["Parameter Server<br/>EPG"]
        GPU2 -- "Gradient Updates<br/>All-Reduce" --> GPU1
    end
    subgraph NS["North-South Traffic (Storage/External)"]
        direction TB
        GPU3["GPU Worker<br/>EPG"] -- "NFS/iSCSI<br/>Checkpointing" --> STOR["Training Storage<br/>EPG"]
        EXT["External API<br/>Clients"] -- "Inference<br/>Requests" --> GPU3
    end
    style EW fill:#f9e79f,color:#000
    style NS fill:#aed6f1,color:#000

East-West (Inter-GPU): The dominant pattern during training. GPUs exchange gradient updates via all-reduce operations. Requires ultra-low latency and lossless delivery -- a single dropped packet can stall the entire collective across thousands of GPUs.

North-South (Storage/External): Data ingestion, model checkpointing, and inference API serving. High-bandwidth but more tolerant of minor latency variations.
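The scale of East-West traffic is easy to quantify: in a ring all-reduce, each GPU transmits roughly 2(N-1)/N times the gradient buffer size per iteration. A quick back-of-the-envelope calculation, with illustrative model size and GPU count:

```python
def ring_allreduce_tx_bytes(gradient_bytes, n_gpus):
    """Bytes each GPU transmits per ring all-reduce: a reduce-scatter
    phase plus an all-gather phase, each moving (N-1)/N of the buffer."""
    return 2 * (n_gpus - 1) / n_gpus * gradient_bytes

# Illustrative numbers: 7B parameters, FP16 gradients (2 bytes each), 1024 GPUs
per_iteration = ring_allreduce_tx_bytes(7e9 * 2, 1024)
print(f"{per_iteration / 1e9:.1f} GB transmitted per GPU per iteration")  # → 28.0 GB
# With iterations every few hundred milliseconds, sustained per-GPU demand
# reaches hundreds of Gbps -- hence the dedicated lossless backend fabric.
```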

APIC as the SDN Controller

The APIC cluster (typically three controllers for redundancy) serves as the centralized policy engine: it discovers spine and leaf switches automatically via LLDP, translates declarative policy into device configuration, and pushes that configuration to every switch in the fabric.

Network Requirements for AI/ML in ACI

| Requirement | ACI Implementation |
|---|---|
| Lossless RoCEv2 transport | ECN and PFC configured on leaf switches |
| Dedicated queues for small packets | QoS policies prioritize latency-sensitive ACK/response packets |
| Multiple no-drop queues | Separate queues prevent head-of-line blocking |
| East-West optimization | Spine-leaf ensures every GPU is 2 hops from every other |
| North-South storage access | Separate bridge domains and contracts with appropriate QoS |

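The ECN half of the lossless-transport row typically follows a WRED-style curve: no marking below a minimum queue threshold, a linear ramp up to a maximum threshold, and marking of everything beyond it. A sketch with illustrative thresholds (not Cisco defaults):

```python
def ecn_mark_probability(queue_depth, min_th, max_th, max_prob=1.0):
    """WRED-style ECN marking: 0 below min_th, linear ramp to max_prob
    at max_th, and mark everything above max_th."""
    if queue_depth <= min_th:
        return 0.0
    if queue_depth >= max_th:
        return 1.0
    return max_prob * (queue_depth - min_th) / (max_th - min_th)

# Thresholds in KB -- illustrative values, not Cisco defaults
print(ecn_mark_probability(150, 100, 300))  # → 0.25
```

Marked packets tell RoCEv2 senders to slow down before the queue overflows, which is what keeps the no-drop queues from ever needing to drop.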
Animation Slot: ACI policy model hierarchy builder -- interactive drag-and-drop showing how Tenants, VRFs, Bridge Domains, Application Profiles, EPGs, and Contracts relate, with an AI training tenant example.
Post-Quiz: APIC and ACI

1. In an AI training ACI design, GPU backend traffic should be placed in a separate VRF because:

A) ACI requires one VRF per physical switch
B) It isolates high-bandwidth lossless RoCEv2 traffic from other data center traffic
C) VRFs are only used for external routing
D) Bridge Domains cannot span multiple VRFs

2. A Contract in ACI performs which function?

A) Assigns IP addresses to endpoints
B) Defines allowed communication between EPGs via filters and subjects
C) Configures VXLAN tunnel endpoints
D) Provisions physical switch ports

3. How does APIC discover spine and leaf switches in the ACI fabric?

A) Manual IP address registration
B) Automatic discovery via LLDP
C) DHCP snooping
D) BGP peering auto-negotiation

4. Why is a single dropped packet in East-West GPU traffic so impactful?

A) It causes the entire ACI fabric to reconverge
B) It triggers retransmission that can stall the entire collective operation across thousands of GPUs
C) It forces APIC to reprogram all leaf switches
D) It disables PFC on the affected link

5. In a spine-leaf ACI topology, how many hops separate any two GPUs?

A) One
B) Two
C) Three
D) It varies based on fabric size

Section 3: Hyperfabric Deployment

Pre-Quiz: Hyperfabric

1. What distinguishes Hyperfabric's operational model from Nexus Dashboard and ACI?

A) It uses on-premises APIC controllers
B) It uses a cloud controller managed by Cisco
C) It requires manual CLI configuration
D) It only supports NX-OS switches

2. Hyperfabric full stack AI infrastructure is compliant with which NVIDIA architecture?

A) NVIDIA DGX BasePOD
B) NVIDIA Enterprise Reference Architecture (ERA)
C) NVIDIA HGX Blueprint
D) NVIDIA SuperPOD

3. What provisioning model does Hyperfabric use for Day-1 deployment?

A) Manual CLI bootstrap
B) POAP with DHCP
C) Zero-touch plug-and-play provisioning
D) Ansible push from local server

4. How many distinct fabric tiers does the Hyperfabric AI cluster architecture provide?

A) One unified fabric
B) Two (compute and storage)
C) Three (backend, frontend, storage)
D) Four (backend, frontend, storage, management)

5. What management approach does Hyperfabric use during Day-2 operations?

A) SNMP-based polling
B) Assertion-based management
C) Manual health checks
D) Syslog-driven automation

Key Points

Architecture and Cloud-Managed Model

Cisco Nexus Hyperfabric uses a cloud controller managed by Cisco to design, deploy, and manage fabrics located anywhere -- primary data centers, colocation facilities, and edge sites. This eliminates the operational burden of maintaining on-premises controllers.

Analogy: The difference between self-hosting your email server (Nexus Dashboard/ACI) versus using a managed email service (Hyperfabric). Both deliver email, but the managed service eliminates platform maintenance overhead.

Hyperfabric Full Stack AI Infrastructure

The full stack option is a turnkey, vertically integrated AI platform that is NVIDIA ERA-compliant:

| Component | Specifics |
|---|---|
| Networking | Cisco Silicon One switches (6000 Series and N9100/N9300 Series) |
| Compute | Cisco UCS C885A M8 Rack Servers with NVIDIA HGX GPUs |
| Storage | Integrated high-throughput storage systems |
| Management | Nexus Hyperfabric cloud controller for end-to-end lifecycle |
| AI Software | Pre-validated AI software stack |

Day-0 Through Day-2 Operations

flowchart LR
    subgraph D0["Day-0: Design & Order"]
        D0A["Select AI Cluster<br/>Template"] --> D0B["Capacity Planning<br/>via Cloud Controller"] --> D0C["Place Order"]
    end
    subgraph D1["Day-1: Deploy & Provision"]
        D1A["Hardware<br/>Arrives On-Site"] --> D1B["Power On<br/>Switches"] --> D1C["Zero-Touch<br/>Plug-and-Play"] --> D1D["Cloud Controller<br/>Claims & Configures"]
    end
    subgraph D2["Day-2: Operate & Scale"]
        D2A["Continuous<br/>Monitoring"] --> D2B["Assertion-Based<br/>Validation"]
        D2B --> D2C["Firmware<br/>Upgrades"]
        D2B --> D2D["Add Nodes<br/>(Auto-Provisioned)"]
    end
    D0 --> D1 --> D2
    style D0 fill:#f5cba7,color:#000
    style D1 fill:#abebc6,color:#000
    style D2 fill:#aed6f1,color:#000

Day-0 (Design and Order): Preconfigured templates for AI clusters, cloud-assisted capacity planning, integrated ordering workflow.

Day-1 (Deployment): Hardware arrives, switches power on and auto-connect to the cloud controller via zero-touch plug-and-play. Fully operational fabric in minutes -- no manual CLI.

Day-2 (Operations): Continuous loss/latency monitoring, assertion-based validation against defined intent, cloud-managed firmware upgrades, and auto-provisioned scaling.
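Assertion-based validation can be pictured as a continuous diff between declared intent and observed operational state. A minimal sketch, with invented assertion names and state fields:

```python
# Toy sketch of assertion-based management: compare operational state
# against declared intent and report violations. The assertion names
# and state fields are invented for illustration.

def check_assertions(intent, state):
    """Return a list of (assertion, expected, actual) violations."""
    violations = []
    for key, expected in intent.items():
        actual = state.get(key)
        if actual != expected:
            violations.append((key, expected, actual))
    return violations

intent = {"link_speed_gbps": 400, "pfc_enabled": True, "ecn_enabled": True}
state  = {"link_speed_gbps": 400, "pfc_enabled": False, "ecn_enabled": True}
print(check_assertions(intent, state))  # → [('pfc_enabled', True, False)]
```

In Hyperfabric this loop runs continuously in the cloud controller, so drift surfaces as a flagged assertion rather than a silent misconfiguration.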

Three-Tier AI Cluster Network Architecture

graph TD
    CC["Hyperfabric Cloud Controller<br/>(Managed by Cisco)"]
    CC --> BF
    CC --> FF
    CC --> SF
    subgraph BF["Backend Fabric"]
        BS["Backend Spine"] --- BL1["Backend Leaf"]
        BS --- BL2["Backend Leaf"]
        BL1 --- GPU1["GPU Node"]
        BL1 --- GPU2["GPU Node"]
        BL2 --- GPU3["GPU Node"]
        BL2 --- GPU4["GPU Node"]
    end
    subgraph FF["Frontend Fabric"]
        FS["Frontend Spine"] --- FL["Frontend Leaf"]
        FL --- MGMT["Management / Scheduling"]
        FL --- DATA["Data Ingestion"]
    end
    subgraph SF["Storage Fabric"]
        SS["Storage Spine"] --- SL["Storage Leaf"]
        SL --- ST1["Storage Array"]
        SL --- ST2["Storage Array"]
    end
    style CC fill:#1a5276,color:#fff
    style BF fill:#fadbd8,color:#000
    style FF fill:#d5f5e3,color:#000
    style SF fill:#d6eaf8,color:#000

| Fabric Tier | Purpose | Characteristics |
|---|---|---|
| Backend Network | GPU-to-GPU communication | Low-latency, lossless, optimized for collective operations (all-reduce, all-gather) |
| Frontend Fabric | Management and data ingestion | Standard DC connectivity for job scheduling, monitoring, data loading |
| Storage Fabric | High-throughput data access | Dedicated bandwidth for training data retrieval and model checkpointing |

Integration with Cisco Ecosystem

Hyperfabric does not operate in isolation in the full stack AI model: the cloud controller manages Silicon One-based switching together with Cisco UCS C885A M8 rack servers carrying NVIDIA HGX GPUs, integrated high-throughput storage, and the pre-validated AI software stack as a single end-to-end lifecycle.

Animation Slot: Hyperfabric Day-0 to Day-2 lifecycle animation -- showing the progression from template selection and ordering, through zero-touch provisioning with cloud controller auto-claim, to assertion-based monitoring and node scaling.
Post-Quiz: Hyperfabric

1. Hyperfabric's cloud controller is:

A) Deployed on-premises as a VM cluster
B) Managed by Cisco in the cloud
C) A feature within Nexus Dashboard
D) An APIC running in public cloud

2. The purpose of the three-tier fabric architecture (backend, frontend, storage) is to:

A) Reduce the number of switches needed
B) Eliminate contention between GPU collective operations and storage I/O
C) Provide redundancy for the cloud controller
D) Enable multi-tenant isolation

3. During Day-1 Hyperfabric deployment, what happens after switches are powered on?

A) An engineer SSHs into each switch for initial config
B) They auto-connect to the cloud controller via zero-touch plug-and-play
C) NDFC discovers them via LLDP
D) Ansible runs a bootstrap playbook

4. Assertion-based management in Hyperfabric means:

A) Engineers write unit tests for switch configurations
B) The system continuously validates that operational state matches defined intent
C) SNMP traps trigger automated remediation scripts
D) Configuration changes require approval assertions from two admins

5. Which compute platform is included in the Hyperfabric full stack AI infrastructure?

A) Cisco UCS B-Series blade servers
B) Cisco UCS C885A M8 Rack Servers with NVIDIA HGX GPUs
C) Cisco HyperFlex HX-Series
D) Third-party GPU servers only

Section 4: Intersight for Infrastructure Management

Pre-Quiz: Intersight

1. What type of platform is Cisco Intersight?

A) On-premises-only fabric controller
B) SaaS infrastructure management platform
C) Cloud-managed network switch OS
D) GPU workload scheduler

2. Which Intersight deployment model is designed for fully air-gapped environments?

A) SaaS
B) Connected Virtual Appliance (CVA)
C) Private Virtual Appliance (PVA)
D) Hybrid Cloud Appliance

3. Which IaC tools does Intersight support? (Select the best answer)

A) Only Terraform
B) Only Ansible and Python
C) Terraform, Ansible, PowerShell SDK, and Python SDK
D) Only REST API with no pre-built integrations

4. What does Intersight check server components against to ensure compatibility?

A) Cisco TAC case database
B) Hardware Compatibility Lists (HCL)
C) NVIDIA GPU driver matrix only
D) VMware compatibility guides

5. Intersight's primary management focus is on:

A) Network fabric orchestration
B) Compute and cross-domain infrastructure lifecycle
C) Application deployment and containers
D) DNS and load balancing

Key Points

Platform Overview

Cisco Intersight provides unified, intelligent management of Cisco UCS compute infrastructure from core to edge. While Nexus Dashboard focuses on network fabric and Hyperfabric provides cloud-managed full-stack solutions, Intersight concentrates on compute and cross-domain infrastructure lifecycle management.

Core Capabilities

| Capability Domain | What Intersight Manages |
|---|---|
| Compute | All UCS server form factors: rack, blade, modular (including AI-optimized UCS C885A with NVIDIA HGX) |
| Networking | Nexus 9000 switches in NX-OS mode with inventory views and switch configuration |
| Storage | HyperFlex, NetApp, Pure Storage, Hitachi integration |
| Virtualization | VMware vSphere, Microsoft Hyper-V |
| Automation | Drag-and-drop workflow designer with auto Python/PowerShell code generation |

Deployment Options

flowchart TD
    INT["Cisco Intersight Platform"]
    INT --> SAAS["SaaS<br/>(Cloud-Hosted at intersight.com)"]
    INT --> CVA["Connected Virtual Appliance<br/>(On-Prem + Cloud Analytics)"]
    INT --> PVA["Private Virtual Appliance<br/>(Fully Air-Gapped)"]
    SAAS --> S1["Standard Enterprise<br/>with Internet"]
    CVA --> C1["Local Data Processing<br/>+ Cloud Features"]
    PVA --> P1["Government / Defense<br/>No External Connectivity"]
    SAAS -.->|"Full cloud features"| CLOUD["Cisco Cloud Services"]
    CVA -.->|"Selective connectivity"| CLOUD
    PVA -.->|"No connection"| AIR["Air-Gapped Network"]
    style INT fill:#1a5276,color:#fff
    style SAAS fill:#2e86c1,color:#fff
    style CVA fill:#2e86c1,color:#fff
    style PVA fill:#2e86c1,color:#fff

| Deployment Model | Description | Use Case |
|---|---|---|
| SaaS | Cloud-hosted at intersight.com | Standard enterprise with internet connectivity |
| CVA | On-premises appliance with cloud connectivity | Local data processing with cloud-based analytics |
| PVA | Fully air-gapped on-premises deployment | Government, defense, highly regulated environments |

Infrastructure-as-Code with Intersight

Intersight provides full API access with pre-packaged clients: a Terraform provider, Ansible modules, a PowerShell SDK, and a Python SDK.

Terraform Server Profile Example

resource "intersight_server_profile" "ai_gpu_node" {
  name            = "AI-GPU-Worker-01"
  target_platform = "FIAttached"

  organization {
    object_type = "organization.Organization"
    moid        = data.intersight_organization_organization.default.moid
  }

  policy_bucket {
    object_type = "bios.Policy"
    moid        = intersight_bios_policy.ai_optimized.moid
  }

  policy_bucket {
    object_type = "boot.PrecisionPolicy"
    moid        = intersight_boot_precision_policy.pxe_boot.moid
  }
}

Firmware and Compliance Management

Intersight checks server firmware and driver combinations against Cisco's Hardware Compatibility Lists (HCL) to confirm they match tested combinations, surfaces the results in compliance dashboards, and orchestrates firmware upgrades across the server fleet.

Intersight and Nexus Dashboard Integration

Intersight provides global management of UCS, HyperFlex, APIC, and Nexus Dashboard from a single pane of glass. This cross-domain visibility is essential for AI troubleshooting: when a training job slows, the root cause could be network congestion, compute thermal throttling, or storage I/O bottleneck. Unified visibility across all three domains accelerates root cause analysis.
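One way to picture that cross-domain root cause analysis: group anomalies from different domains that land close together in time, and surface only the groups that span more than one domain. A toy sketch with invented events:

```python
# Toy sketch of cross-domain anomaly correlation -- the kind of view the
# Intersight / Nexus Dashboard integration enables. Events are invented;
# each is (timestamp_s, domain, description).

def correlate(events, window_s=60):
    """Group events within window_s of each other; keep only groups
    that span more than one domain (likely related symptoms)."""
    events = sorted(events, key=lambda e: e[0])
    groups, current = [], [events[0]]
    for ev in events[1:]:
        if ev[0] - current[-1][0] <= window_s:
            current.append(ev)
        else:
            groups.append(current)
            current = [ev]
    groups.append(current)
    return [g for g in groups if len({e[1] for e in g}) > 1]

events = [
    (1000, "network", "ECN marks spike on leaf-3"),
    (1020, "compute", "GPU node thermal throttle"),
    (5000, "storage", "NFS latency normal"),
]
# The network and compute events are grouped as a cross-domain incident;
# the lone storage event is filtered out.
print(correlate(events))
```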

Animation Slot: Intersight deployment model comparison -- interactive selector showing SaaS, CVA, and PVA deployments with data flow diagrams, highlighting which features are available in each model and when to choose each for AI infrastructure.
Post-Quiz: Intersight

1. An organization handling classified defense data needs Intersight but cannot have any external network connectivity. Which deployment model should they use?

A) SaaS
B) Connected Virtual Appliance (CVA)
C) Private Virtual Appliance (PVA)
D) Hyperfabric cloud controller

2. Which Intersight feature validates that server firmware and drivers match Cisco's tested combinations?

A) Compliance dashboards
B) Hardware Compatibility List (HCL) checking
C) Rolling firmware upgrades
D) Terraform server profiles

3. How does Intersight's integration with Nexus Dashboard benefit AI infrastructure troubleshooting?

A) It replaces Nexus Dashboard Insights
B) It provides cross-domain visibility across network, compute, and storage for faster root cause analysis
C) It enables Intersight to configure ACI policies
D) It migrates all management to a single APIC cluster

4. What unique automation feature does the Intersight workflow designer provide?

A) Natural language configuration input
B) Drag-and-drop design with automatic Python/PowerShell code generation
C) AI-driven auto-remediation without human input
D) Direct GPU kernel compilation

5. When comparing orchestration platforms, which one is best suited for greenfield AI-first deployments?

A) Nexus Dashboard / NDFC
B) ACI / APIC
C) Hyperfabric
D) Intersight

Platform Comparison Summary

| Criterion | Nexus Dashboard / NDFC | ACI / APIC | Hyperfabric | Intersight |
|---|---|---|---|---|
| Primary Focus | NX-OS fabric management and telemetry | SDN policy-driven fabric | Cloud-managed turnkey fabric | Compute and cross-domain lifecycle |
| Controller Location | On-premises | On-premises (APIC cluster) | Cloud-hosted by Cisco | SaaS, CVA, or PVA |
| AI Fabric Templates | Yes (built-in) | No (manual policy design) | Yes (preconfigured) | No (server profiles) |
| Zero-Touch Provisioning | Partial (POAP) | Fabric discovery via LLDP | Full zero-touch plug-and-play | Server profile auto-deploy |
| Best For | Brownfield NX-OS + AI | Existing ACI extending to AI | Greenfield AI-first | Server fleet management |

Answer Explanations