Build Ansible playbooks to automate Cisco Catalyst Center using the cisco.dnac collection, including device inventory, site hierarchy, PnP provisioning, and compliance management
Implement Ansible automation for Cisco Meraki using the cisco.meraki collection to manage networks, devices, SSIDs, VLANs, and firewall rules
Automate Cisco SD-WAN operations with purpose-built Ansible modules and URI-based REST API calls to vManage
Design multi-controller Ansible automation workflows using roles, structured inventories, ansible-vault, and import_playbook orchestration
Pre-Quiz — Test Your Prior Knowledge
1. The cisco.dnac Ansible collection communicates with Catalyst Center using which transport?
2. What is the correct order for provisioning a new device in Catalyst Center using Ansible workflow manager modules?
3. When using the cisco.meraki collection, the Ansible control node communicates directly with which endpoint?
4. Which Ansible module is commonly used to automate vManage REST API calls when a purpose-built collection is not available?
5. What is the primary purpose of ansible-vault in a multi-controller automation project?
Section 1: Ansible for Catalyst Center (cisco.dnac Collection)
The cisco.dnac collection is Cisco's official Ansible interface for Catalyst Center (formerly DNA Center). Every module communicates exclusively over HTTPS REST, using the Cisco Catalyst Center Python SDK (dnacentersdk) as its transport — no SSH or NETCONF is involved. This means tasks always target localhost, and the control node must have the SDK installed.
A minimum Catalyst Center version of 2.3.5.3 is required for workflow manager modules; enhanced features require 2.3.7.9+.
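Taken together, a minimal task shape looks like this — a sketch, with hostname and credential values as placeholders and network_device_info standing in for any cisco.dnac module:

```yaml
# Minimal cisco.dnac play shape — values are placeholders.
- name: Gather Catalyst Center device inventory
  hosts: localhost            # modules call the REST API, never the devices
  connection: local
  gather_facts: false
  tasks:
    - name: Get the device list via the dnacentersdk transport
      cisco.dnac.network_device_info:
        dnac_host: "{{ dnac_host }}"        # e.g. catalyst-center.example.com
        dnac_username: "{{ dnac_username }}"
        dnac_password: "{{ dnac_password }}"
        dnac_verify: false                  # lab only; verify TLS in production
      register: device_inventory
```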
Workflow Manager Modules
The collection's *_workflow_manager modules are idempotent lifecycle managers. They compare desired state against live configuration and make only necessary changes. Running the same playbook twice with state: merged is always safe.
Before any device can be provisioned, the three-level site hierarchy must exist. The parent_name field takes a slash-delimited path from the Global root (for example, Global/New-York/HQ); provision_workflow_manager references the same path when assigning devices to sites.
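As a sketch of what that looks like with site_workflow_manager — site names are illustrative, optional fields such as the building address and floor RF model are omitted for brevity, and the exact config shape should be verified against your installed collection version:

```yaml
- name: Build the site hierarchy before provisioning
  cisco.dnac.site_workflow_manager:
    dnac_host: "{{ dnac_host }}"
    dnac_username: "{{ dnac_username }}"
    dnac_password: "{{ dnac_password }}"
    state: merged                      # idempotent: create or update
    config:
      - site:
          area:
            name: New-York
            parent_name: Global        # slash-delimited path from the Global root
        site_type: area
      - site:
          building:
            name: HQ
            parent_name: Global/New-York
        site_type: building
      - site:
          floor:
            name: Floor-1
            parent_name: Global/New-York/HQ
        site_type: floor
```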
Key Points — Section 1: cisco.dnac
All cisco.dnac modules communicate via HTTPS REST using the dnacentersdk Python SDK — never SSH or NETCONF
Tasks must target localhost; all three connection vars (dnac_host, dnac_username, dnac_password) are required per task
The site hierarchy (Area → Building → Floor) must exist before devices can be provisioned — this order is enforced by Catalyst Center
All *_workflow_manager modules are idempotent: state: merged creates or updates; state: deleted removes; safe to run repeatedly
PnP supports three modes — ZTP (auto-boot), Planned (pre-staged), and Unclaimed (dynamic discovery) — controlled by pnp_workflow_manager
Section 2: Ansible for Meraki (cisco.meraki Collection)
Meraki is cloud-managed: Ansible never connects to individual devices. Every task is an HTTPS call to api.meraki.com from localhost. Authentication uses a Dashboard API key generated from Organization > Settings > Dashboard API access.
```shell
ansible-galaxy collection install cisco.meraki

# For full Dashboard API v1.33.0+ surface:
ansible-galaxy collection install meraki.dashboard
```
State Model
| State | Action |
| --- | --- |
| present | Create if absent; update if exists |
| absent | Delete the resource |
| query | Read and return current resource info |
Core Modules
| Module | Manages |
| --- | --- |
| cisco.meraki.meraki_network | Networks (create, update, delete, query) |
| cisco.meraki.meraki_device | Devices (claim, remove, rename) |
| cisco.meraki.meraki_mr_ssid | Wireless SSIDs (auth, encryption, VLAN) |
| cisco.meraki.meraki_mx_vlan | MX appliance VLANs (subnet, DHCP) |
| cisco.meraki.meraki_mx_site_to_site_firewall | Site-to-site VPN firewall rules |
Performance: Use Numeric IDs
Always prefer numeric org_id and net_id over name-based parameters. Name-based resolution requires additional API round-trips, which compounds at scale.
Query-Then-Act Pattern with selectattr()
Meraki API responses return lists, not keyed dictionaries. Use Jinja2's selectattr() filter to extract items by attribute:
```yaml
- name: Extract target network ID by name
  set_fact:
    target_net_id: >-
      {{ network_list.data
         | selectattr('name', 'equalto', 'Branch-Office-NYC')
         | map(attribute='id')
         | list
         | first }}
```
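The network_list variable filtered above would typically come from a preceding query task — for example (a sketch; the API key and org ID are assumed to be defined in vars):

```yaml
- name: Query all networks in the organization
  cisco.meraki.meraki_network:
    auth_key: "{{ meraki_api_key }}"
    org_id: "{{ meraki_org_id }}"   # numeric ID avoids an extra name-lookup round-trip
    state: query
  register: network_list            # network_list.data holds the list of network objects
```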
SSID Configuration
SSIDs are numbered 0–14 per Meraki MR network. Set auth_mode to psk for WPA-PSK or open for open networks; control client IP handling with ip_assignment_mode ("Bridge mode" places clients on the LAN, "NAT mode" isolates them behind the AP).
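A sketch of a meraki_mr_ssid task tying these parameters together — the SSID name and PSK variable are illustrative:

```yaml
- name: Configure SSID 0 as WPA-PSK in bridge mode (illustrative values)
  cisco.meraki.meraki_mr_ssid:
    auth_key: "{{ meraki_api_key }}"
    org_id: "{{ meraki_org_id }}"
    net_id: "{{ target_net_id }}"
    number: 0                        # SSIDs are numbered 0-14
    name: Corp-WiFi
    enabled: true
    auth_mode: psk
    psk: "{{ vault_wifi_psk }}"      # secret pulled from an ansible-vault file
    encryption_mode: wpa
    ip_assignment_mode: Bridge mode
    state: present
```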
```mermaid
sequenceDiagram
    participant PB as Ansible Playbook (localhost)
    participant DASH as Meraki Dashboard API (api.meraki.com)
    participant MDEV as Meraki Devices (cloud-managed)
    Note over PB,DASH: All communication is HTTPS from localhost
    PB->>DASH: GET /organizations/{orgId}/networks (state: query)
    DASH-->>PB: 200 OK — list of network objects
    Note over PB: selectattr extracts net_id from list
    PB->>DASH: POST /networks (state: present)
    DASH-->>PB: 201 Created
    PB->>DASH: PUT /networks/{netId}/wireless/ssids/0
    DASH-->>PB: 200 OK
    PB->>DASH: PUT /networks/{netId}/appliance/vlans
    DASH-->>PB: 200 OK
    DASH->>MDEV: Push config changes (cloud-managed channel)
    MDEV-->>DASH: Acknowledgement
```
Key Points — Section 2: cisco.meraki
Meraki automation never connects to devices directly — all API calls go from localhost to the cloud-hosted Dashboard at api.meraki.com
Protect the Dashboard API key with ansible-vault encrypt_string; never hardcode it in playbooks
Prefer numeric org_id / net_id over names — name-based params require extra API round-trips to resolve IDs
Meraki API responses are lists — use Jinja2's selectattr() filter to extract specific resources by attribute value
The query state is unique to Meraki; use it for the query-then-act pattern before any state-changing operation
Section 3: Ansible for SD-WAN (URI Module)
Cisco SD-WAN is managed through the vManage REST API. Because purpose-built modules may not cover every endpoint, the ansible.builtin.uri module is the primary tool — and mastering it teaches the underlying REST pattern that all controller automation relies on.
vManage API Categories
| Category | Base Path | Purpose |
| --- | --- | --- |
| Monitoring | /dataservice/device | Device health, reachability, interface stats |
| Real-Time Monitoring | /dataservice/device/bfd/... | Live BFD, OMP, tunnel state |
| Configuration | /dataservice/template/ | Feature templates, device templates, policy |
| Administration | /dataservice/admin/ | Users, certificates, cluster management |
Session-Cookie Authentication (Two-Step)
vManage uses session-cookie authentication. Step 1 POSTs credentials to /j_security_check to obtain a session cookie. Step 2 includes that cookie in the Cookie header of all subsequent requests.
All POST/PUT operations require a CSRF token retrieved from GET /dataservice/client/token. Include it as an X-XSRF-TOKEN request header.
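The two authentication steps map directly onto ansible.builtin.uri tasks — a sketch, with vmanage_host and the credentials assumed to be defined in (vaulted) vars:

```yaml
# Step 1 — authenticate and capture the session cookie
- name: POST credentials to j_security_check
  ansible.builtin.uri:
    url: "https://{{ vmanage_host }}/j_security_check"
    method: POST
    body_format: form-urlencoded
    body:
      j_username: "{{ vmanage_user }}"
      j_password: "{{ vmanage_password }}"
    validate_certs: false          # lab only
    status_code: 200
  register: login_result

- name: Store the session cookie for later requests
  ansible.builtin.set_fact:
    vmanage_session: "{{ login_result.cookies_string }}"

# Step 2 — retrieve the CSRF token required by POST/PUT calls
- name: GET the CSRF token
  ansible.builtin.uri:
    url: "https://{{ vmanage_host }}/dataservice/client/token"
    headers:
      Cookie: "{{ vmanage_session }}"
    return_content: true
    validate_certs: false
  register: token_result

- name: Store the token for the X-XSRF-TOKEN header
  ansible.builtin.set_fact:
    vmanage_token: "{{ token_result.content }}"
```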
Building Idempotency Manually
The uri module has no built-in idempotency. Use a check-before-act pattern: query existing resources, then use a when condition to act only if the resource is absent.
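For example, creating a feature template only when one with the same name does not already exist — a sketch in which template_payload and template_name are hypothetical variables:

```yaml
- name: Query existing feature templates
  ansible.builtin.uri:
    url: "https://{{ vmanage_host }}/dataservice/template/feature"
    headers:
      Cookie: "{{ vmanage_session }}"
    validate_certs: false
  register: existing_templates

- name: Create the template only if it is absent
  ansible.builtin.uri:
    url: "https://{{ vmanage_host }}/dataservice/template/feature"
    method: POST
    headers:
      Cookie: "{{ vmanage_session }}"
      X-XSRF-TOKEN: "{{ vmanage_token }}"
    body_format: json
    body: "{{ template_payload }}"   # hypothetical var holding the template JSON
    validate_certs: false
    status_code: 200
  when: >-
    existing_templates.json.data
    | selectattr('templateName', 'equalto', template_name)
    | list | length == 0
```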
```mermaid
sequenceDiagram
    participant PB as Ansible Playbook (uri module)
    participant VM as vManage REST API
    Note over PB,VM: Step 1 — Authenticate
    PB->>VM: POST /j_security_check {j_username, j_password}
    VM-->>PB: 200 OK + Set-Cookie: JSESSIONID=...
    Note over PB: set_fact: vmanage_session = cookies_string
    Note over PB,VM: Step 2 — Retrieve CSRF token
    PB->>VM: GET /dataservice/client/token (Cookie: JSESSIONID)
    VM-->>PB: 200 OK — {token: "xsrf-token-value"}
    Note over PB,VM: Step 3 — Read operation
    PB->>VM: GET /dataservice/device (Cookie: JSESSIONID)
    VM-->>PB: 200 OK — {data: [...devices...]}
    Note over PB,VM: Step 4 — State-changing operation
    PB->>VM: POST /dataservice/template/feature (Cookie + X-XSRF-TOKEN)
    VM-->>PB: 200 OK — template created
```
Key Points — Section 3: SD-WAN / URI Module
vManage authentication is a two-step session-cookie flow: POST to /j_security_check → store cookie → include in every subsequent request header
POST and PUT operations additionally require a CSRF token (X-XSRF-TOKEN header) from GET /dataservice/client/token
The uri module is not inherently idempotent — you must build query-first, act-only-if-absent patterns manually with when conditions
The four vManage API categories are: Monitoring, Real-Time Monitoring, Configuration (/dataservice/template/), and Administration (/dataservice/admin/)
The uri module approach generalizes to any REST API — a critical skill for the ENAUTO exam and real-world deployments
Section 4: Multi-Controller Automation Patterns
Orchestrating three controllers from a single Ansible project requires deliberate architectural discipline. The core principle: treat each controller as a domain with clear boundaries enforced by Ansible's role and inventory structures.
All three groups use ansible_connection=local — all communication is HTTPS REST from the control node, not SSH to remote hosts. Meraki uses localhost because there is no on-premises Meraki server.
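One way to lay out such an inventory — hostnames are placeholders, and the group names follow this project's convention:

```ini
# inventory/hosts.ini — one group per controller domain
[catalyst_center]
dnac01 ansible_host=catalyst-center.example.com

[meraki_cloud]
localhost            ; no on-prem server — API calls go to api.meraki.com

[sdwan_vmanage]
vmanage01 ansible_host=vmanage.example.com

[all:vars]
ansible_connection=local   ; everything is HTTPS REST from the control node
```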
ansible-vault Credential Management
Store all secrets in an AES-256 encrypted vault file. Never commit plain-text API keys or passwords to version control.
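For example, keeping every secret in a single group_vars/all/vault.yml — the variable names here are illustrative:

```yaml
# group_vars/all/vault.yml
# Encrypt the whole file with:  ansible-vault encrypt group_vars/all/vault.yml
# Then run playbooks with --ask-vault-pass or --vault-password-file.
vault_dnac_password: "CHANGE_ME"       # illustrative names — adapt to your project
vault_meraki_api_key: "CHANGE_ME"
vault_vmanage_password: "CHANGE_ME"
```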
Use import_playbook (static, parse-time) for known sequences. Use include_tasks (dynamic, runtime) within roles when task selection depends on variables or conditions.
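The top-level orchestration playbook then becomes a short static sequence (filenames are illustrative):

```yaml
# site.yml — parsed in full before execution (static import)
- import_playbook: playbooks/catalyst_center.yml
- import_playbook: playbooks/meraki.yml
- import_playbook: playbooks/sdwan.yml
```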
Error Handling with block/rescue/always
Multi-controller workflows can fail partway through. Use block/rescue/always — analogous to try/catch/finally — for graceful rollback and notifications regardless of outcome.
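A sketch of that pattern — the included task-file names are hypothetical:

```yaml
- name: Provision across controllers with rollback
  block:
    - name: Apply Catalyst Center changes
      ansible.builtin.include_tasks: tasks/dnac_provision.yml
    - name: Apply Meraki changes
      ansible.builtin.include_tasks: tasks/meraki_provision.yml
  rescue:
    # Runs only if a task in the block failed
    - name: Roll back partial changes
      ansible.builtin.include_tasks: tasks/rollback.yml
  always:
    # Runs on success or failure — like finally
    - name: Report workflow outcome
      ansible.builtin.debug:
        msg: "Workflow finished: {{ 'FAILED' if ansible_failed_task is defined else 'OK' }}"
```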
For Red Hat Ansible Automation Platform at scale, build a custom Execution Environment container bundling all collections and SDK dependencies. AAP Workflow Templates then chain Catalyst Center → Meraki → SD-WAN jobs with conditional branching, RBAC, and event-driven automation.
Key Points — Section 4: Multi-Controller Patterns
Group inventory by controller domain (catalyst_center, meraki_cloud, sdwan_vmanage); all groups use ansible_connection=local
All secrets belong in group_vars/all/vault.yml encrypted with ansible-vault — never in plain-text playbooks or inventory files
import_playbook (static/parse-time) is for orchestration; include_tasks (dynamic/runtime) is for conditional task loading within roles
block/rescue/always provides rollback on partial failures — rescue runs only on failure, always runs regardless of outcome
Red Hat AAP Execution Environments bundle cisco.dnac, cisco.meraki, and dnacentersdk for consistent, portable multi-controller automation at enterprise scale
Post-Quiz — Test What You Learned
1. The cisco.dnac Ansible collection communicates with Catalyst Center using which transport?
2. What is the correct order for provisioning a new device in Catalyst Center using Ansible workflow manager modules?
3. When using the cisco.meraki collection, the Ansible control node communicates directly with which endpoint?
4. Which Ansible module is commonly used to automate vManage REST API calls when a purpose-built collection is not available?
5. What is the primary purpose of ansible-vault in a multi-controller automation project?
6. Which Jinja2 filter is essential for extracting a specific resource from a Meraki API list response?
7. In a multi-controller Ansible inventory, why is Meraki listed under a localhost host rather than an actual server address?
8. The vManage session-cookie authentication flow begins with a POST to which endpoint?
9. You need to deploy templates to vManage using a POST request. In addition to the session cookie, what else must you include in the request headers?
10. In the multi-controller block/rescue/always error-handling pattern, when does the always section execute?
11. Which cisco.dnac module should you use to detect configuration drift across devices managed by Catalyst Center?
12. What is the key difference between import_playbook and include_tasks in multi-controller orchestration?