1. Which HTTP method and endpoint obtains a Catalyst Center API token?
GET /dna/intent/api/v1/auth/token
POST /dna/system/api/v1/auth/token
POST /dna/intent/api/v1/token
PUT /dna/system/api/v1/auth/token
2. What does every mutating Catalyst Center API call (POST/PUT/DELETE) return instead of a direct result?
A JSON object with the completed operation data
An HTTP 202 Accepted status with no body
A taskId that must be polled for completion
A deploymentId specific to the resource type
3. What constraint applies to the Command Runner API?
It requires a separate SSH credential stored per device
It can only execute commands on one device at a time
It is strictly read-only — only show commands are permitted
It requires a minimum of IOS-XE 17.x on target devices
4. Which step in the Catalyst Center template lifecycle is most commonly skipped, causing deployment failure?
Creating the project before the template
Committing the template version before deployment
Specifying device types in the template payload
Including variable bindings in targetInfo
5. On Catalyst Center's health scoring scale, what score range indicates a "Good" (green) device health?
1–3
4–7
8–10
10 only
Authentication Architecture
Catalyst Center uses token-based authentication layered on HTTP Basic Auth. You POST credentials to the auth endpoint and receive a token valid for one hour. Every subsequent request carries that token in the X-Auth-Token header.
Authentication endpoint: POST /dna/system/api/v1/auth/token
Credentials are sent via HTTPBasicAuth. The response body is {"Token": "<value>"}. In long-running scripts, catch 401 Unauthorized and re-authenticate automatically. Never hardcode credentials — use environment variables or a secrets manager.
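A minimal sketch of the authentication exchange using only the standard library. The controller hostname passed in is a placeholder; in practice, read the username and password from environment variables or a secrets manager as noted above:

```python
import base64
import json
import urllib.request

AUTH_PATH = "/dna/system/api/v1/auth/token"

def build_auth_request(host: str, username: str, password: str) -> urllib.request.Request:
    """Build the POST to the token endpoint with HTTP Basic credentials."""
    creds = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        url=f"https://{host}{AUTH_PATH}",
        method="POST",
        headers={"Authorization": f"Basic {creds}"},
    )

def parse_token(body: str) -> str:
    """Extract the token from the {"Token": "<value>"} response body."""
    return json.loads(body)["Token"]

def api_headers(token: str) -> dict:
    """Headers carried by every subsequent Intent API call."""
    return {"X-Auth-Token": token, "Content-Type": "application/json"}
```

Sending the request (for example with `urllib.request.urlopen`) and catching a 401 to re-authenticate is left to the calling script; the helpers above isolate the parts worth unit-testing.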
sequenceDiagram
participant Script as Python Script
participant Auth as POST /auth/token
participant API as Intent API Endpoint
Script->>Auth: HTTP POST with HTTPBasicAuth (username, password)
Auth-->>Script: 200 OK — {"Token": ""}
Note over Script: Store token; set 1-hour expiry timer
Script->>API: GET/POST with X-Auth-Token header
API-->>Script: JSON response data
Note over Script,API: Token reused for all subsequent calls
Script->>Auth: Re-authenticate after 401 or expiry
Auth-->>Script: New token issued
Token Authentication Flow
Python Script → POST /auth/token → Intent API
Credentials presented once; token reused for all subsequent requests (1-hour TTL)
Device Inventory API
The inventory API is the foundation of almost every workflow. Before pushing a template, running a command, or checking compliance, you need the device's UUID — the unique identifier Catalyst Center assigns every managed device.
Endpoint: GET /dna/intent/api/v1/network-device
Filter by management IP with a managementIpAddress query parameter. Key response fields include id (device UUID), hostname, reachabilityStatus, platformId, softwareVersion, and role.
| Field | Description |
| --- | --- |
| id | Device UUID — required for all subsequent API calls |
| hostname | Device hostname as known to Catalyst Center |
| managementIpAddress | IP address used for management communication |
| reachabilityStatus | Reachable, Unreachable, or PingReachable |
| role | ACCESS, DISTRIBUTION, CORE, or BORDER ROUTER |
| softwareVersion | IOS-XE or NX-OS version string |
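Extracting the UUID from the inventory response is the first step of nearly every script. A small sketch, assuming the standard response shape where devices sit under a top-level "response" key:

```python
def device_uuid_by_ip(inventory_response: dict, mgmt_ip: str) -> str:
    """Return the UUID of the device whose managementIpAddress matches.

    inventory_response is the parsed body of
    GET /dna/intent/api/v1/network-device?managementIpAddress=<ip>.
    """
    for device in inventory_response.get("response", []):
        if device.get("managementIpAddress") == mgmt_ip:
            return device["id"]
    raise LookupError(f"No managed device with management IP {mgmt_ip}")

# Example response body (trimmed to the fields in the table above)
sample = {"response": [{
    "id": "uuid-1",
    "hostname": "sw1",
    "managementIpAddress": "10.0.0.1",
    "reachabilityStatus": "Reachable",
}]}
```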
Asynchronous Task Architecture
Every mutating API call (POST, PUT, DELETE) returns a task ID, not a result. Poll the task endpoint until endTime is set (success) or isError is True (failure). The failureReason field explains what went wrong.
Poll endpoint: GET /dna/intent/api/v1/task/{taskId}
flowchart TD
A([Mutating API Call\nPOST / PUT / DELETE]) --> B[Response: taskId]
B --> C[GET /dna/intent/api/v1/task/taskId]
C --> D{Check task state}
D -->|isError == True| E[Raise RuntimeError\nwith failureReason]
D -->|endTime is set| F([Task Completed\nReturn result])
D -->|Still running| G[Sleep poll_interval seconds]
G --> H{Max retries\nexceeded?}
H -->|No| C
H -->|Yes| I[Raise TimeoutError]
style E fill:#ff6b6b,color:#fff
style F fill:#51cf66,color:#fff
style I fill:#ff6b6b,color:#fff
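The polling loop in the flowchart above can be sketched as a reusable helper. The HTTP GET is injected as a callable (a hypothetical `fetch_task` returning the parsed task object) so the loop logic can be tested without a live controller:

```python
import time

def poll_task(fetch_task, task_id: str, poll_interval: float = 2.0,
              max_retries: int = 30) -> dict:
    """Poll a Catalyst Center task until endTime is set or isError is True.

    fetch_task(task_id) should GET /dna/intent/api/v1/task/{task_id} and
    return the "response" object from the body.
    """
    for _ in range(max_retries):
        task = fetch_task(task_id)
        if task.get("isError"):
            # failureReason explains what went wrong
            raise RuntimeError(f"Task failed: {task.get('failureReason', 'unknown')}")
        if task.get("endTime"):
            return task  # completed successfully
        time.sleep(poll_interval)
    raise TimeoutError(f"Task {task_id} did not finish after {max_retries} polls")
```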
Command Runner API
Command Runner executes read-only show commands on managed devices without SSH or stored credentials. Catalyst Center handles the secure connection using its own credential vault.
Endpoint: POST /dna/intent/api/v1/network-device-poller/cli/read-request
The workflow is a three-step chain: submit the job (receive a taskId), poll the task (the fileId arrives embedded in the progress JSON string), then download the output via GET /dna/intent/api/v1/file/{fileId}.
sequenceDiagram
participant Script as Python Script
participant CR as POST /read-request
participant Task as GET /task/{taskId}
participant File as GET /file/{fileId}
Script->>CR: POST payload: commands[], deviceUuids[], name
CR-->>Script: {"response": {"taskId": ""}}
loop Poll until endTime set
Script->>Task: GET /task/{taskId}
Task-->>Script: {isError, endTime, progress}
end
Note over Script: Parse fileId from task.progress JSON
Script->>File: GET /file/{fileId}
File-->>Script: [{"deviceUuid": "...", "commandResponses": {...}}]
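The two parsing steps in the chain above are worth isolating. A sketch, assuming the usual file layout where each device entry buckets results under SUCCESS/FAILURE keys in commandResponses (verify against your release):

```python
import json

def extract_file_id(task: dict) -> str:
    """The completed Command Runner task embeds the file ID as a JSON
    string in its progress field, e.g. '{"fileId": "..."}'."""
    return json.loads(task["progress"])["fileId"]

def command_output(file_body: list, device_uuid: str, command: str) -> str:
    """Pull one command's output from the downloaded file content.

    file_body is the parsed body of GET /file/{fileId}: a list of
    per-device entries.
    """
    for entry in file_body:
        if entry["deviceUuid"] == device_uuid:
            return entry["commandResponses"]["SUCCESS"][command]
    raise LookupError(f"No output for device {device_uuid}")
```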
Template Types and Scripting Languages
Catalyst Center templates are parameterized CLI documents. At deployment time, variable values are bound per device, and Catalyst Center renders and pushes the completed configuration.
| Template Type | Use Case | Trigger |
| --- | --- | --- |
| Onboarding (PnP) | Day-0 initial provisioning of new devices | Plug and Play (PnP) event |
| Day-N | Ongoing configuration management for existing inventory | Manual or API-triggered deployment |
Two variable substitution engines are supported:
- Velocity — legacy engine using $variableName syntax; widely documented
- Jinja2 — modern engine using {{ variable }} syntax with full conditional and loop support; preferred for new development due to alignment with Python tooling (Ansible, Nornir)
Four-Phase Template Lifecycle
Template deployment follows a strict sequence. Skipping any phase — especially the commit — results in deployment failure.
flowchart TD
A([Start]) --> B["Phase 1: Create Project\nPOST /template-programmer/project"]
B --> B2[Poll task → get projectId]
B2 --> C["Phase 2: Create Template\nPOST /project/{projectId}/template\nLanguage: VELOCITY or JINJA"]
C --> C2[Poll task → get templateId]
C2 --> D{"Phase 3: Commit Version\nPOST /template/version\n⚠ Required before deploy"}
D --> D2[Poll task → version created]
D2 --> E["Phase 4: Deploy to Devices\nPOST /template/deploy\ntargetInfo: [{id, type, params}]"]
E --> F[Get deploymentId]
F --> G[Poll deploy status endpoint]
G --> H{Status?}
H -->|SUCCESS| I([Deployment Complete])
H -->|FAILURE| J([Deployment Failed\nCheck per-device errors])
H -->|In Progress| G
style D fill:#f59f00,color:#fff
style I fill:#51cf66,color:#fff
style J fill:#ff6b6b,color:#fff
Phase Details
Phase 1 — Create Project: POST /dna/intent/api/v1/template-programmer/project. Returns a taskId; the projectId is embedded in the task's progress JSON.
Phase 2 — Create Template: POST /dna/intent/api/v1/template-programmer/project/{projectId}/template. Specify language as VELOCITY or JINJA, along with deviceTypes and softwareType.
Phase 3 — Commit Version: POST /dna/intent/api/v1/template-programmer/template/version. Each commit creates an immutable version snapshot. An uncommitted template cannot be deployed.
Phase 4 — Deploy: POST /dna/intent/api/v1/template-programmer/template/deploy. The payload uses a targetInfo array where each entry has an id (device UUID), type: "MANAGED_DEVICE_UUID", and a params dict with per-device variable values. Deployment returns a deploymentId (not a taskId) polled at a separate status endpoint.
The deployment response contains a deploymentId, not the standard taskId. Poll the deployment status at GET /template-programmer/template/deploy/status/{deploymentId} — not the generic /task/ endpoint.
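The Phase 4 payload shape described above can be built with a small helper. A sketch; the function name and the `device_params` mapping (device UUID to per-device variable values) are illustrative:

```python
def build_deploy_payload(template_id: str, device_params: dict) -> dict:
    """Build the POST /template-programmer/template/deploy body.

    device_params maps each target device UUID to the dict of template
    variable values to bind for that device.
    """
    return {
        "templateId": template_id,
        "targetInfo": [
            {"id": uuid, "type": "MANAGED_DEVICE_UUID", "params": params}
            for uuid, params in device_params.items()
        ],
    }
```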
Health Scoring Model
Catalyst Center Assurance collects streaming telemetry from every managed device and client, processing it through ML and rule-based engines to produce health scores on a 0–10 scale.
| Score Range | Classification |
| --- | --- |
| 8–10 | Good (green) |
| 4–7 | Fair (yellow) |
| 1–3 | Poor (red) |
| 0 | No data / Idle |
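The banding in the table above maps directly to a classifier, useful when rolling your own dashboard from Assurance API data:

```python
def classify_health(score: int) -> str:
    """Map a 0-10 Assurance health score to its dashboard band."""
    if not 0 <= score <= 10:
        raise ValueError(f"Health score must be 0-10, got {score}")
    if score >= 8:
        return "Good (green)"
    if score >= 4:
        return "Fair (yellow)"
    if score >= 1:
        return "Poor (red)"
    return "No data / Idle"
```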
Sample Network Health Dashboard
Legend: Green 8–10 · Yellow 4–7 · Red 1–3. All scores returned from Catalyst Center Assurance APIs.
Three Core Health Endpoints
Network Device Health: GET /dna/intent/api/v1/network-health — returns a rolled-up score across all network infrastructure, broken down by device role. Accepts an optional timestamp parameter (Unix epoch milliseconds) for historical queries.
Client Health: GET /dna/intent/api/v1/client-health — tracks health of wired workstations, wireless laptops, mobile devices, and IoT. Returns goodCount, fairCount, poorCount, and idleCount per client type.
Site Health: GET /dna/intent/api/v1/site-health — maps health data to the site hierarchy (AREA, BUILDING, or FLOOR). Returns per-site breakdowns with device role averages, wired/wireless client counts by category, and application health metrics. Filter by siteType query parameter.
Path Trace API
Path Trace is Catalyst Center's most powerful troubleshooting API. It uses the controller's complete topology model to compute the hop-by-hop path between two IPs — including ACL evaluation results, QoS markings, and interface statistics at every node. Unlike traceroute, it does not rely on ICMP TTL expiry that firewalls often block.
Path Trace is asynchronous with its own ID type:
POST /dna/intent/api/v1/flow-analysis → flowAnalysisId
GET /dna/intent/api/v1/flow-analysis/{id} → results when status == COMPLETED
DELETE /dna/intent/api/v1/flow-analysis/{id} → clean up after use
Poll the GET endpoint until request.status equals COMPLETED. Results are in networkElementsInfo — a list of hop objects each containing ingress/egress interface names, ACL analysis results, and device statistics. Always DELETE the trace after use to free resources.
| Inclusion | Data Collected at Each Hop |
| --- | --- |
| INTERFACE-STATS | Input/output rates, error counters per interface |
| ACL-TRACE | ACL name and permit/deny result at ingress interfaces |
| QOS-STATS | DSCP markings and QoS policy actions |
| DEVICE-STATS | CPU and memory utilization at each hop device |
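A sketch of summarizing a completed trace. The nesting assumed here (interface names under ingressInterface.physicalInterface.name) is common but varies by device and hop position, so defensive .get() defaults are used:

```python
def summarize_hops(trace_result: dict) -> list:
    """List (device name, ingress, egress) per hop from a completed trace.

    trace_result is the parsed "response" object of
    GET /flow-analysis/{flowAnalysisId} once request.status == COMPLETED.
    """
    hops = []
    for element in trace_result.get("networkElementsInfo", []):
        ingress = (element.get("ingressInterface", {})
                   .get("physicalInterface", {}).get("name", "-"))
        egress = (element.get("egressInterface", {})
                  .get("physicalInterface", {}).get("name", "-"))
        hops.append((element.get("name", "?"), ingress, egress))
    return hops
```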
Compliance Framework Overview
Configuration drift is the gradual divergence between a device's running configuration and its intended policy-defined state. Catalyst Center's built-in compliance framework continuously compares device running configs against defined network profiles and software image baselines.
| Compliance Category | What It Checks |
| --- | --- |
| RUNNING_CONFIG | Running config vs. assigned network profile/template |
| STARTUP_CONFIG | Whether running config matches startup config (unsaved changes detection) |
| IMAGE | Whether the running software image matches the approved image baseline |
| NETWORK_PROFILE | Whether device assignment and config match its network profile |
Compliance API Operations
Trigger a check: POST /dna/intent/api/v1/compliance — body can specify deviceUuids (omit for all devices) and complianceType (omit for all categories). Returns a taskId.
Query results per device: GET /dna/intent/api/v1/compliance/{deviceUuid}
Fleet-wide summary with filter: GET /dna/intent/api/v1/compliance?complianceStatus=NON_COMPLIANT — valid status values are COMPLIANT, NON_COMPLIANT, IN_PROGRESS, NOT_APPLICABLE.
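A sketch of building the trigger body. The field names used here (triggerFull, deviceUuids, categories) are assumptions drawn from the public API reference, not from this section; verify them against your Catalyst Center release:

```python
def compliance_request_body(device_uuids=None, categories=None) -> dict:
    """Build the POST /dna/intent/api/v1/compliance request body.

    Omitting device_uuids triggers a fleet-wide check; omitting
    categories runs every compliance category.
    """
    body = {"triggerFull": device_uuids is None}
    if device_uuids is not None:
        body["deviceUuids"] = list(device_uuids)
    if categories is not None:
        body["categories"] = list(categories)
    return body
```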
Drift Detection with Configuration Archive
The Configuration Archive API stores historical snapshots of running and startup configurations accessible at GET /dna/intent/api/v1/network-device-archive/cleartext?deviceId={uuid}. Python's difflib module can generate unified diffs between the archived snapshot and the current running config retrieved via Command Runner — producing a git-style change report showing exactly what drifted.
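The difflib approach described above fits in a few lines. Both configs are plain-text strings: one from the archive endpoint, one from Command Runner output:

```python
import difflib

def drift_report(archived: str, running: str, device: str) -> str:
    """Produce a git-style unified diff between the archived snapshot
    and the current running configuration."""
    diff = difflib.unified_diff(
        archived.splitlines(keepends=True),
        running.splitlines(keepends=True),
        fromfile=f"{device} (archived)",
        tofile=f"{device} (running)",
    )
    return "".join(diff)
```

An empty report means no drift; anything else shows exactly which lines changed, prefixed with - (removed) and + (added).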
Automated Remediation Pipeline
A complete compliance automation loop combines all four API domains covered in this chapter:
flowchart TD
A([Scheduled Trigger\nor Manual Run]) --> B["Phase 1: Trigger Compliance Check\nPOST /dna/intent/api/v1/compliance"]
B --> C[Poll task until complete]
C --> D["Phase 2: Query Non-Compliant Devices\nGET /compliance?complianceStatus=NON_COMPLIANT"]
D --> E{Any non-compliant\ndevices found?}
E -->|No| F([All Devices Compliant\nExit pipeline])
E -->|Yes| G[Filter: complianceType == RUNNING_CONFIG]
G --> H["Phase 3: Re-deploy Remediation Template\nPOST /template/deploy per device"]
H --> I[Wait for deployment and sync delay]
I --> J["Phase 4: Verify — Re-trigger Compliance Check\nfor remediated devices only"]
J --> K[Query compliance results]
K --> L{Remaining\nnon-compliant?}
L -->|None| M([Remediation Successful\nAll devices compliant])
L -->|Some remain| N([Escalate to Operations\nManual review required])
style F fill:#51cf66,color:#fff
style M fill:#51cf66,color:#fff
style N fill:#ff6b6b,color:#fff
The pipeline follows four phases: trigger compliance check, query non-compliant devices, re-deploy the appropriate remediation template, then re-run the compliance check to verify. A time.sleep(30) delay between deployment and the verification check is necessary to allow Catalyst Center to sync post-deployment state.
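The control flow of the four phases can be sketched as below. Each API interaction is injected as a callable with a hypothetical signature (run_check triggers a compliance check and blocks until it completes, get_noncompliant returns the non-compliant entries, deploy_template pushes the remediation template to one device), so the loop itself runs without a controller:

```python
import time

def remediation_pipeline(run_check, get_noncompliant, deploy_template,
                         sync_delay=30, sleep=time.sleep):
    """Four-phase remediation loop; returns a final pipeline status."""
    run_check(None)                                   # Phase 1: fleet-wide check
    drifted = [d for d in get_noncompliant()          # Phase 2: config drift only
               if d.get("complianceType") == "RUNNING_CONFIG"]
    if not drifted:
        return "compliant"
    for device in drifted:                            # Phase 3: re-deploy template
        deploy_template(device["deviceUuid"])
    sleep(sync_delay)                                 # allow post-deployment sync
    uuids = [d["deviceUuid"] for d in drifted]
    run_check(uuids)                                  # Phase 4: verify remediated set
    still_bad = {d["deviceUuid"] for d in get_noncompliant()} & set(uuids)
    return "remediated" if not still_bad else "escalate"
```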
1. In a Command Runner workflow, where is the fileId found after the task completes?
In the top-level JSON response body of the POST request
Embedded as a JSON string in the task's progress field
In the endTime field of the task response
Returned by the GET /network-device endpoint
2. A Python script deploys a Catalyst Center template but gets an error saying the template cannot be deployed. What is the most likely cause?
The targetInfo array is missing device UUIDs
The template uses Jinja2 instead of Velocity
The template version was never committed
The project name contains special characters
3. Which endpoint is used to poll Path Trace results, and what status value indicates completion?
GET /task/{taskId} — status field equals SUCCESS
GET /flow-analysis/{flowAnalysisId} — request.status equals COMPLETED
GET /flow-analysis/{flowAnalysisId} — endTime field is set
GET /task/{taskId} — isError field is False
4. A script calls POST /dna/intent/api/v1/compliance with no request body. What is the effect?
The call fails — deviceUuids is a required field
It checks RUNNING_CONFIG compliance only for all devices
It triggers a compliance check across all categories for all managed devices
It returns the last known compliance status without triggering a new check
5. In the template deployment payload, what value is used for the type field in each targetInfo entry?
DEVICE_UUID
MANAGED_DEVICE_UUID
INVENTORY_DEVICE
CATALYST_DEVICE_ID
6. Which Catalyst Center API endpoint retrieves health scores broken down by geographic site location?
GET /dna/intent/api/v1/network-health
GET /dna/intent/api/v1/client-health
GET /dna/intent/api/v1/site-health
GET /dna/intent/api/v1/topology/site-topology
7. What Python library can be used to generate a unified diff between an archived configuration and the current running config for drift detection?
hashlib
difflib
textwrap
deepdiff
8. What compliance category checks whether a device's running software image matches the approved baseline?
RUNNING_CONFIG
STARTUP_CONFIG
IMAGE
NETWORK_PROFILE
9. After a template deployment via API, what identifier is returned — and why does it matter which polling endpoint you use?
A taskId is returned; poll at GET /task/{taskId} like all other operations
A deploymentId is returned; poll at the deploy-status endpoint, not the generic task endpoint
A flowAnalysisId is returned; poll at the flow-analysis endpoint
A templateId is returned; poll at the template version endpoint
10. Which HTTP header carries the Catalyst Center token for all API calls after authentication?
Authorization: Bearer <token>
X-Auth-Token: <token>
API-Key: <token>
Cookie: session=<token>