Chapter 10: Incident Response, Security Policies, and SOC Operations

Learning Objectives

Section 1: Security Management Concepts

1.1 Asset Management and Configuration Management

Effective security begins before any attack occurs. Asset management is the systematic process of identifying, cataloging, and tracking every hardware, software, and data asset within an organization — you cannot protect what you do not know you have. Configuration management ensures assets are deployed and maintained in a known, approved state so analysts can distinguish legitimate changes from attacker activity.

| Practice | Description |
| --- | --- |
| Hardware inventory | Catalog of all endpoints, servers, network devices, and IoT equipment |
| Software inventory | Licensed and approved applications; shadow IT detection |
| Configuration baselines | Approved secure configurations (CIS Benchmarks, STIGs) |
| Change management | Formal approval process for any configuration change |
| Asset classification | Tiering assets by sensitivity: public, internal, confidential, restricted |

1.2 Mobile Device Management and BYOD Policies

MDM (Mobile Device Management) platforms allow security teams to enforce policies on mobile endpoints — requiring encryption, PIN codes, remote wipe capability, and app whitelisting. BYOD (Bring Your Own Device) policies must define which corporate resources a personal device may access, whether MDM manages only a corporate container, and how devices are wiped on departure or theft.

A well-designed BYOD deployment typically places personal devices on a guest VLAN isolated from corporate assets, with MDM creating an encrypted corporate container that can be selectively wiped without touching personal data.

1.3 Patch Management and Vulnerability Management

Patch management is the structured process of testing and deploying security patches to reduce exposure windows. WannaCry (2017) exploited EternalBlue — a vulnerability patched two months prior. Vulnerability management is broader: scanning, assessing, prioritizing, and remediating across the entire attack surface.

Patch management lifecycle: Identify → Assess → Test → Deploy → Verify → Document

| Priority | CVSS Range | Target Patch Window |
| --- | --- | --- |
| Critical | 9.0–10.0 | 24–72 hours |
| High | 7.0–8.9 | 7–14 days |
| Medium | 4.0–6.9 | 30–60 days |
| Low | 0.1–3.9 | 90+ days or risk-accepted |
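The priority tiers map directly to a lookup function. The sketch below assumes the CVSS v3 base-score ranges from the table; the function name and return strings are illustrative, and a base score of 0.0 (rated "None" in CVSS) has no patch window in the table, so it is handled separately.

```python
def patch_priority(cvss: float) -> tuple[str, str]:
    """Map a CVSS v3 base score to (priority, target patch window).

    Illustrative helper based on the tier table above.
    """
    if not 0.0 <= cvss <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if cvss < 0.1:
        return ("None", "no action required")   # CVSS 0.0 is rated None
    if cvss >= 9.0:
        return ("Critical", "24-72 hours")
    if cvss >= 7.0:
        return ("High", "7-14 days")
    if cvss >= 4.0:
        return ("Medium", "30-60 days")
    return ("Low", "90+ days or risk-accepted")

patch_priority(9.4)  # → ("Critical", "24-72 hours")
```

This answers Pre-Check question 1 directly: a 9.4 falls in the Critical tier, so the target window is 24–72 hours.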

1.4 Policy Frameworks

A security policy framework provides the organizational and legal foundation for all controls. Policies derive authority from senior leadership; standards, guidelines, and procedures are subordinate documents implementing policy.

Key Points — Section 1

Pre-Check — Section 1: Security Management

1. A CVSS score of 9.4 is assigned to a newly disclosed vulnerability. What is the target patch window?

2. An employee's personal phone is enrolled in a corporate MDM with a container policy. The employee resigns. What is the correct action?

3. Which step in the patch management lifecycle confirms that a patch was successfully applied?

Section 2: NIST SP 800-61 Incident Response

NIST Special Publication 800-61 defines a four-phase incident response lifecycle. Revision 3 (2024) aligns this to the NIST CSF 2.0 functions, but the r2 four-phase model remains the primary exam reference.

2.1 The Four-Phase IR Lifecycle

```mermaid
flowchart LR
    P["Preparation\n─────────\nPolicies · CSIRT\nPlaybooks · Tools\nTabletop Exercises"]
    D["Detection &\nAnalysis\n─────────\nMonitor · Triage\nCorrelate · Confirm\nClassify · Notify"]
    C["Containment,\nEradication &\nRecovery\n─────────\nIsolate · Remove\nRestore · Validate"]
    A["Post-Incident\nActivity\n─────────\nLessons Learned\nPlaybook Updates\nThreat Intel Share"]
    P --> D --> C --> A
    A -->|"Continuous\nImprovement"| P
    style P fill:#1a3a5c,stroke:#58a6ff,color:#e6edf3
    style D fill:#1a3a5c,stroke:#58a6ff,color:#e6edf3
    style C fill:#1a3a5c,stroke:#58a6ff,color:#e6edf3
    style A fill:#1a3a5c,stroke:#58a6ff,color:#e6edf3
```

2.2 Phase 1: Preparation

Preparation is the most important phase — it determines how effectively an organization will perform during a real incident. Key activities include establishing the CSIRT, deploying SIEM/IDS/EDR, writing playbooks, conducting tabletop exercises, documenting baselines, and mapping legal notification obligations (HIPAA: 60 days; GDPR: 72 hours).

2.3 Phase 2: Detection and Analysis

The central challenge is distinguishing true positives from false positives while avoiding false negatives.

Analysis steps: Monitor → Triage → Correlate → Confirm → Classify → Document → Notify

```mermaid
flowchart TD
    M["1. Monitor — SIEM · IDS/IPS · EDR · NetFlow"]
    T["2. Triage — Apply Severity Criteria"]
    FP["False Positive → Document & Close"]
    CO["3. Correlate — Link events across data sources"]
    CF["4. Confirm — Validate incident"]
    CL["5. Classify — Incident type & severity (P1–P4)"]
    DC["6. Document — Begin timeline, timestamp actions"]
    N["7. Notify — Escalate per matrix; engage legal if PII/PHI"]
    M --> T
    T -->|"Low confidence"| FP
    T -->|"Warrants investigation"| CO
    CO --> CF --> CL --> DC --> N
    style M fill:#1a3a5c,stroke:#58a6ff,color:#e6edf3
    style T fill:#1a3a5c,stroke:#58a6ff,color:#e6edf3
    style FP fill:#2d1a1a,stroke:#f85149,color:#e6edf3
    style CO fill:#1a3a5c,stroke:#58a6ff,color:#e6edf3
    style CF fill:#1a3a5c,stroke:#58a6ff,color:#e6edf3
    style CL fill:#1a3a5c,stroke:#58a6ff,color:#e6edf3
    style DC fill:#1a3a5c,stroke:#58a6ff,color:#e6edf3
    style N fill:#1a2d1a,stroke:#3fb950,color:#e6edf3
```

2.4 Phase 3: Containment, Eradication, and Recovery

Containment limits spread without necessarily eliminating the threat. Short-term containment isolates affected systems; long-term containment patches vulnerabilities and deploys workarounds. Eradication removes malware, unauthorized accounts, and persistence mechanisms. Recovery restores from clean backups, reimages critical systems, and monitors closely post-recovery.

| Term | Definition | Example |
| --- | --- | --- |
| RPO | Max acceptable data loss — how old can the restore point be? | RPO of 4 hours → backups every 4 hours |
| RTO | Max acceptable downtime — how long can the service be offline? | RTO of 2 hours → systems restored within 2 hours |
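The two metrics support a quick feasibility check: worst-case data loss equals the backup interval, and worst-case downtime equals the restore time, so both must fit within the stated objectives. A minimal sketch (the function name is illustrative):

```python
def meets_objectives(backup_interval_hrs: float, restore_time_hrs: float,
                     rpo_hrs: float, rto_hrs: float) -> bool:
    """Worst-case data loss = backup interval; worst-case downtime = restore
    time. The plan is acceptable only if both fit the objectives."""
    return backup_interval_hrs <= rpo_hrs and restore_time_hrs <= rto_hrs

# 4-hourly backups satisfy an RPO of 4 hours; a 90-minute restore
# satisfies an RTO of 2 hours.
meets_objectives(4, 1.5, rpo_hrs=4, rto_hrs=2)  # → True
```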

2.5 Phase 4: Post-Incident Activity

NIST recommends a lessons-learned meeting within two weeks of incident resolution. The output — a post-incident report — feeds back into Preparation, updating policies, playbooks, detection rules, and training. Post-incident activities also include sharing threat intel with sector ISACs and filing regulatory reports (SEC 8-K, HHS OCR for HIPAA breaches).

Key Points — Section 2

Post-Check — Section 2: NIST SP 800-61 Incident Response

4. During which NIST SP 800-61 phase should an analyst begin documenting a timestamped incident timeline?

5. An organization's database must be restored within 6 hours of an incident, and no more than 2 hours of transaction data can be lost. Which metrics describe these requirements?

6. NIST SP 800-61 recommends that a lessons-learned meeting be held within what timeframe after incident resolution?

Section 3: NIST SP 800-86 Digital Forensics

NIST SP 800-86 provides guidance on collecting, preserving, and analyzing digital evidence in a forensically sound manner — ensuring evidence is admissible in legal proceedings.

3.1 The Evidence Collection Order of Volatility

Evidence must be collected from most volatile (lost when power is removed) to least volatile. RFC 3227 defines the standard order. Never shut down a system before capturing volatile data — pulling the power cord destroys encryption keys, active connections, and process trees.

RFC 3227 — Evidence Collection Order of Volatility (collect priority 1 first):

| Priority | Data Type | Notes | Volatility |
| --- | --- | --- | --- |
| 1 (most volatile) | CPU registers, cache | Lost when the process stops | Extremely high — nanoseconds |
| 2 | Routing/ARP tables, process table | Lost on reboot/shutdown | High — lost on any reboot |
| 3 | RAM (running processes, connections) | Encryption keys, open network sockets | High — lost on shutdown |
| 4 | Temp files, swap/page file | May persist briefly after shutdown | Medium — volatile on power-off |
| 5 | Disk (file system, logs, installed software) | Persists until overwritten | Low — persists on disk |
| 6 | Remote logs & monitoring data | Semi-permanent; may rotate | Low — rotation may purge |
| 7 (least volatile) | Physical config, network topology, documentation | Essentially permanent | Permanent |
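The RFC 3227 ordering can be encoded as a simple lookup that sorts a planned collection most-volatile-first. The source names below are illustrative labels, not the output of any forensic tool.

```python
# RFC 3227 priorities (1 = collect first); keys are illustrative labels
VOLATILITY_ORDER = {
    "cpu_registers_cache": 1,
    "routing_arp_process_tables": 2,
    "ram": 3,
    "temp_swap_files": 4,
    "disk": 5,
    "remote_logs": 6,
    "physical_config_docs": 7,
}

def collection_plan(sources: list[str]) -> list[str]:
    """Return the evidence sources sorted most-volatile-first."""
    return sorted(sources, key=VOLATILITY_ORDER.__getitem__)

collection_plan(["disk", "ram", "cpu_registers_cache"])
# → ["cpu_registers_cache", "ram", "disk"]
```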

3.2 Volatile Data — What It Reveals

| Volatile Data Source | Forensic Value |
| --- | --- |
| Running process list | Identifies malicious processes (e.g., mimikatz.exe, cmd.exe spawned by a service) |
| Network connection table (netstat) | Shows active C2 connections; remote IPs and ports |
| ARP cache | Maps IP addresses to MACs — reveals previously connected devices |
| Logged-in users | Shows active sessions including remote logins |
| Encryption keys in memory | Critical for decrypting malware communications or payloads |
| Loaded kernel modules | May reveal rootkit modules loaded into the kernel |

3.3 Data Integrity and Chain of Custody

Cryptographic hashing (SHA-256 preferred; MD5 still appears in legacy tooling but is collision-prone) proves evidence has not been modified: hash the image at acquisition and again after analysis — matching digests confirm integrity. Always work from forensic copies, never originals.
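The hash-before/hash-after check can be sketched with Python's standard `hashlib`; `sha256_of` is an illustrative helper that streams the file so multi-gigabyte disk images are never loaded into memory at once.

```python
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Record the digest at acquisition; recompute after analysis.
# Matching digests demonstrate the evidence was not altered in between.
```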

Chain of custody documents every person who handled evidence, when, and why — including transfer signatures, storage conditions, and unique evidence identifiers. An undocumented gap in custody can derail a prosecution.

3.4 Preservation and Legal Considerations

Investigators must have legal authority (warrant, consent, or organizational policy) before collecting evidence. For PII, PHI, PSI, or IP, forensic investigation must balance evidence preservation with data protection obligations (HIPAA, GDPR). Cloud evidence may span multiple legal jurisdictions.

Key Points — Section 3

Post-Check — Section 3: NIST SP 800-86 Digital Forensics

7. An analyst arrives at a compromised server that is still running. According to RFC 3227, what should be captured FIRST?

8. After imaging a hard drive for forensic analysis, what must an investigator do before beginning analysis to prove evidence integrity?

9. Which type of volatile data can reveal an attacker's command-and-control server IP address on a running compromised host?

Section 4: Network and Server Profiling

Before analysts can detect anomalies, they must define what "normal" looks like. Profiling establishes behavioral baselines for network traffic and host systems. Deviations from baseline are the primary signal that an intrusion may have occurred.

4.1 Network Profiling Elements

| Network Profile Element | Description | Why It Matters |
| --- | --- | --- |
| Total throughput | Average bandwidth utilization over time | Spikes may indicate data exfiltration or DDoS |
| Session duration | Typical length of sessions by protocol | Unusually long sessions may indicate C2 dwell |
| Port usage | Expected vs. unexpected port activity | HTTPS on port 4444 is anomalous |
| Critical asset addresses | IPs/MACs of key servers, domain controllers | Unauthorized comms to critical assets = high priority |
| Protocols in use | Expected protocols by segment | DNS tunneling, protocol misuse are red flags |
| Peer-to-peer connections | Expected vs. unexpected host-to-host traffic | Lateral movement appears as new P2P connections |
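As a sketch of how a profile element becomes a detection, the snippet below compares an observed DNS query count against a profiled hourly baseline. The five-times multiplier is an arbitrary illustrative threshold, not a standard; real deployments tune thresholds per segment.

```python
from statistics import mean

def dns_volume_alert(hourly_counts: list[int], observed: int,
                     factor: float = 5.0) -> bool:
    """Flag an observed hourly DNS query count that exceeds the profiled
    baseline mean by `factor`; a large multiple at an odd hour suggests
    DNS tunneling or exfiltration."""
    return observed > factor * mean(hourly_counts)

# ~200x the profiled baseline, as in Pre-Check question 10
dns_volume_alert([90, 110, 100, 95], observed=20_000)  # → True
```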

4.2 Server Profiling Elements

| Server Profile Element | Description |
| --- | --- |
| Listening ports | Which ports the server legitimately listens on (documented at deployment) |
| Running processes | Expected process tree; baseline of normal services |
| User accounts | Authorized accounts; expected login times and source IPs |
| Service availability | Expected uptime, scheduled maintenance windows |
| Network connections | Expected peers, protocols, and connection patterns |
| CPU/Memory/Disk | Normal operational ranges; anomalies may indicate cryptomining or ransomware |

A web server suddenly listening on port 4444 and spawning cmd.exe processes is far outside its server profile — a clear indicator of compromise.
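That comparison can be automated as a set difference against the documented profile. The baseline values and observed state below are hypothetical, chosen to mirror the port 4444 / cmd.exe example.

```python
# Hypothetical server profile; real baselines come from deployment docs
BASELINE = {
    "listening_ports": {80, 443},
    "processes": {"nginx", "sshd", "systemd"},
}

def profile_deviations(observed_ports: set, observed_processes: set) -> dict:
    """Return anything observed that the server profile does not allow."""
    return {
        "unexpected_ports": observed_ports - BASELINE["listening_ports"],
        "unexpected_processes": observed_processes - BASELINE["processes"],
    }

profile_deviations({80, 443, 4444}, {"nginx", "sshd", "systemd", "cmd.exe"})
# → {'unexpected_ports': {4444}, 'unexpected_processes': {'cmd.exe'}}
```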

4.3 Protected Data Categories

| Data Category | Definition | Governing Regulation |
| --- | --- | --- |
| PII — Personally Identifiable Information | Name, SSN, address, email, phone | GDPR, CCPA, state breach laws |
| PHI — Protected Health Information | Health records linked to an individual | HIPAA, HITECH Act |
| PSI — Payment/Sensitive Information | Cardholder data, card numbers, CVV, PINs | PCI DSS |
| IP — Intellectual Property | Trade secrets, source code, proprietary research | Trade secret law, NDA controls |

Key Points — Section 4

Pre-Check — Section 4: Network and Server Profiling

10. A SIEM alert fires showing 200x the normal DNS query volume from a single workstation at 2:00 AM. Based on network profiling, what attack pattern does this most likely indicate?

11. A healthcare organization experiences a data breach involving patient records. Which regulation requires breach notification to HHS and affected patients?

Section 5: Security Models and SOC Metrics

5.1 The Cyber Kill Chain

The Cyber Kill Chain is a seven-phase model developed by Lockheed Martin. The fundamental insight: attackers must successfully complete all seven phases to achieve their objective — defenders only need to break the chain once.

```mermaid
flowchart LR
    R["1. Reconnaissance"]
    W["2. Weaponization"]
    D["3. Delivery"]
    E["4. Exploitation"]
    I["5. Installation"]
    C2["6. C2"]
    A["7. Actions on\nObjectives"]
    R --> W --> D --> E --> I --> C2 --> A
    style R fill:#1a3a5c,stroke:#58a6ff,color:#e6edf3
    style W fill:#1a3a5c,stroke:#58a6ff,color:#e6edf3
    style D fill:#1a3a5c,stroke:#58a6ff,color:#e6edf3
    style E fill:#1a3a5c,stroke:#58a6ff,color:#e6edf3
    style I fill:#1a3a5c,stroke:#58a6ff,color:#e6edf3
    style C2 fill:#1a3a5c,stroke:#58a6ff,color:#e6edf3
    style A fill:#3a1a1a,stroke:#f85149,color:#e6edf3
```
Each phase offers a defensive disruption point:

| Kill Chain Phase | Typical Attacker Activity | Defensive Disruption |
| --- | --- | --- |
| 1. Reconnaissance | OSINT, port scanning | Minimize exposure, honeypots, scan detection |
| 2. Weaponization | Build exploit, macro documents | Threat intel, patch before weaponization |
| 3. Delivery | Phishing, USB, web exploit | Email filtering, web proxy, user training |
| 4. Exploitation | Trigger vulnerability, execute on target | Patching, ASLR/DEP, application controls |
| 5. Installation | RAT/backdoor, persistence | EDR behavioral detection, application whitelisting |
| 6. Command & Control (C2) | Encrypted channel, remote control | DNS sinkhole, TLS inspection, egress filtering |
| 7. Actions on Objectives | Exfiltration, ransomware, lateral movement, destruction | Last resort: DLP tools, honeypots, anomaly detection, network segmentation |

SOC time metrics (detailed in Section 5.4):

| Metric | Interval Measured | Typical Target |
| --- | --- | --- |
| MTTD | Incident start → detection | Elite: <1 hr; industry average: 197 days |
| MTTC | Detection → containment | <72 hrs (critical systems: <4 hrs) |
| MTTR | Detection → full resolution | P1: <1 hr; P2: <2 hrs; P3: <4 hrs; P4: 24–72 hrs |

5.2 The Diamond Model of Intrusion Analysis

The Diamond Model (2013) models relationships between intrusion elements rather than attack sequence. Four vertices: Adversary (threat actor), Capability (tools/TTPs), Infrastructure (C2 servers, VPNs, bulletproof hosting), and Victim (targeted org/system).

Intelligence pivoting: if two incidents share the same C2 infrastructure, they may share the same adversary — even if victims appear unrelated. Event threading links Diamond Model events into an activity thread revealing a full adversary campaign.
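Infrastructure pivoting reduces to grouping incident records on the shared vertex. The incident records and field names below are hypothetical; in practice they would come from a threat-intelligence platform.

```python
from collections import defaultdict

# Hypothetical incident records sharing a C2 IP (the infrastructure vertex)
incidents = [
    {"id": "INC-001", "victim": "org-a", "c2_ip": "203.0.113.7"},
    {"id": "INC-002", "victim": "org-b", "c2_ip": "203.0.113.7"},
    {"id": "INC-003", "victim": "org-c", "c2_ip": "198.51.100.9"},
]

def pivot_on_infrastructure(incidents: list[dict]) -> dict:
    """Group incidents by shared C2 infrastructure; a shared node across
    unrelated victims suggests a common adversary."""
    by_c2 = defaultdict(list)
    for inc in incidents:
        by_c2[inc["c2_ip"]].append(inc["id"])
    return {ip: ids for ip, ids in by_c2.items() if len(ids) > 1}

pivot_on_infrastructure(incidents)
# → {'203.0.113.7': ['INC-001', 'INC-002']}
```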

| Aspect | Cyber Kill Chain | Diamond Model |
| --- | --- | --- |
| Focus | Sequential attack phases | Relationships between intrusion elements |
| Best for | Defensive planning; disruption points | Attribution, threat hunting, intel correlation |
| Key question | "Where in the attack can we intervene?" | "Who is attacking and how are campaigns connected?" |
| Origin | Lockheed Martin — commercial defense | US DoD / intelligence community |
| Temporal structure | Linear — phase 1 through 7 | Event-based; phases as meta-features |

5.3 CMMC — Cybersecurity Maturity Model Certification

CMMC 2.0 requires US DoD contractors to demonstrate cybersecurity maturity before being awarded contracts involving Controlled Unclassified Information (CUI).

| Level | Name | Requirements |
| --- | --- | --- |
| Level 1 (Foundational) | Basic Cyber Hygiene | 17 practices from NIST SP 800-171 |
| Level 2 (Advanced) | Advanced Cyber Hygiene | 110 practices from NIST SP 800-171 |
| Level 3 (Expert) | Expert | 110+ practices + NIST SP 800-172 |

5.4 SOC Metrics: MTTD, MTTC, and MTTR

The three core time-based metrics, MTTD (mean time to detect), MTTC (mean time to contain), and MTTR (mean time to respond/resolve), quantify detection and response speed: the primary levers for reducing incident impact.

SOAR platforms automate repetitive response tasks (IP blocking, account disabling, ticket creation, evidence collection), reducing MTTC and MTTR by 60–90% vs. fully manual processes.
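Each metric is an arithmetic mean over per-incident intervals, so computing one is straightforward; the sketch below uses the figures from Post-Check question 14.

```python
from statistics import mean

def mttd_minutes(detection_delays_min: list[float]) -> float:
    """MTTD is the mean of (detection time - incident start) per incident.
    MTTC and MTTR are computed the same way over their own intervals."""
    return mean(detection_delays_min)

# Four incidents detected 30, 60, 90, and 60 minutes after they began
mttd_minutes([30, 60, 90, 60])  # → 60
```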

Key Points — Section 5

Post-Check — Section 5: Security Models and SOC Metrics

12. A threat analyst discovers that two separate incidents at different organizations used the same C2 server IP. Which framework supports pivoting on this shared infrastructure to link the incidents?

13. According to Lockheed Martin's Cyber Kill Chain, at which phase does an attacker establish an encrypted outbound channel for remote control?

14. A SOC tracked 4 incidents last month. Detection times from incident start: 30 min, 60 min, 90 min, 60 min. What is the MTTD?

15. CMMC Level 2 requires a defense contractor to implement how many security practices from NIST SP 800-171?


Answer Explanations