Chapter 4: End-to-End IP Traffic Flow and Forwarding Architectures

Learning Objectives

Pre-Study Assessment

1. What fundamental weakness of fast switching did CEF resolve?

Fast switching could not perform per-packet load balancing
Fast switching required dedicated hardware ASICs on every line card
Fast switching used a demand-driven cache that was constantly invalidated by topology changes
Fast switching could not support IPv6 forwarding

2. In a CEF-enabled router, what two data structures work together to forward packets at wire speed?

The routing table (RIB) and the ARP cache
The Forwarding Information Base (FIB) and the adjacency table
The LFIB and the Label Information Base (LIB)
The CAM table and the MAC address table

3. On a NAT-enabled router, an inbound ACL applied to the outside interface evaluates packets using which addresses?

Post-NAT (translated, private) addresses
Pre-NAT (original, public) addresses
Both pre-NAT and post-NAT addresses simultaneously
It depends on whether CEF or process switching is active

4. What is load-balancing polarization and why is it a design concern?

It occurs when per-packet load balancing causes TCP reordering, degrading application performance
It occurs when routers along a path use the same hash algorithm, causing all traffic to converge onto a single link
It occurs when ECMP paths have unequal bandwidth, causing oversubscription on the smaller link
It occurs when IPv4 and IPv6 traffic are hashed differently, causing asymmetric paths

5. What distinguishes TCAM from CAM in hardware-based forwarding platforms?

TCAM is faster but more expensive; CAM is slower but higher capacity
TCAM supports three matching states per bit (0, 1, don't-care) enabling longest-prefix matching, while CAM supports only exact-match lookups
TCAM is used for Layer 2 forwarding; CAM is used for Layer 3 forwarding
TCAM stores adjacency information; CAM stores FIB entries

6. In distributed CEF (dCEF), what role does the Route Processor play in packet forwarding?

It forwards all packets that require policy routing or NAT
It handles only control plane functions and distributes FIB updates to line cards
It performs the initial FIB lookup, then delegates L2 rewrite to line cards
It acts as a backup forwarding engine when line card FIBs are full

7. During a packet walk through inter-VLAN routing, which statement is true?

Both IP addresses and MAC addresses remain unchanged throughout the journey
IP addresses change at every Layer 3 hop, but MAC addresses remain constant
IP addresses remain unchanged (assuming no NAT), but MAC addresses change at every Layer 3 hop
MAC addresses remain constant within the same broadcast domain but IP addresses change at VLAN boundaries

8. Why is TCAM capacity planning critical when deploying a full Internet BGP table?

BGP routes cannot be stored in software FIB tables, so TCAM is the only option
If TCAM is exhausted, routes are punted to software, destroying wire-speed forwarding performance
TCAM entries expire after a timeout, causing periodic forwarding black holes
BGP routes require TCAM entries for both inbound and outbound directions, doubling capacity needs

9. In the MQC framework, what is the correct order of components from classification to interface application?

service-policy, policy-map, class-map
policy-map, class-map, service-policy
class-map, policy-map, service-policy
class-map, service-policy, policy-map

10. What is penultimate hop popping (PHP) in MPLS, and why is it used?

The ingress PE removes the label before forwarding to reduce header overhead in the core
The second-to-last P router pops the label so the egress PE can perform a direct IP FIB lookup without a label operation
The egress PE pops both VPN and transport labels simultaneously to save processing cycles
Transit P routers pop and re-push labels at each hop to refresh the TTL field

11. In a dual-stack deployment, what is the risk of a single link that lacks IPv6 forwarding support?

It forces all traffic to use IPv4, doubling the load on IPv4 forwarding tables
It creates a forwarding black hole for IPv6 traffic that is invisible to IPv4 monitoring tools
It causes the entire IGP domain to fall back to single-stack IPv4 operation
It triggers automatic IPv6-to-IPv4 tunneling, which degrades performance

12. An engineer writes an output ACL on the WAN interface referencing private IP addresses (10.x.x.x). NAT is configured on the same router. What happens?

The ACL matches correctly because output ACLs are evaluated before NAT
The ACL never matches because NAT has already translated the private addresses to public addresses before the output ACL evaluates the packet
The ACL matches intermittently depending on whether CEF or process switching handles the packet
The ACL causes the router to punt all NATted packets to the CPU for process switching

13. What is a Forwarding Equivalence Class (FEC) in MPLS?

A set of routers that share the same OSPF area and exchange labels via LDP
A group of packets that receive identical forwarding treatment through the MPLS network, mapped to a common label
A QoS classification applied to labeled packets to determine their scheduling priority
A redundancy mechanism that provides backup labels when the primary LSP fails

14. Why must overlay encapsulators copy DSCP markings from the inner header to the outer header?

Because underlay routers inspect both inner and outer headers for QoS decisions
Because underlay routers only see the outer header, so QoS treatment is lost unless markings are copied
Because overlay tunnels strip DSCP markings during encapsulation by default
Because DSCP markings in the inner header are used for tunnel selection, not QoS

15. Which adjacency type indicates packets that CEF cannot forward and must be sent to the CPU?

Glean
Null
Punt
Discard

4.1 IP Forwarding Fundamentals at Scale

Every network design decision ultimately manifests in how packets move from source to destination. The difference between a well-designed network and a poorly designed one often comes down to how efficiently and predictably the forwarding process operates at scale.

The Evolution of Cisco Switching Methods

Process Switching is the oldest and simplest forwarding mechanism. The router's general-purpose CPU handles every single packet: receiving an interrupt, performing a full routing table lookup, constructing a new Layer 2 header, recalculating checksums, and transmitting the packet. It supports per-packet load balancing but is extremely slow and CPU-intensive.

flowchart LR
    A[Packet Arrives] --> B[CPU Interrupt]
    B --> C[Full Routing\nTable Lookup]
    C --> D[Build New\nL2 Header]
    D --> E[Recalculate\nChecksum]
    E --> F[Transmit on\nEgress Interface]
    style B fill:#f96,stroke:#333
    style C fill:#f96,stroke:#333
    style D fill:#f96,stroke:#333
    style E fill:#f96,stroke:#333

Fast Switching introduced a demand-driven cache: the first packet to a new destination is process-switched, and the forwarding decision is cached for subsequent packets. The problem is that cache entries are frequently invalidated in dynamic networks -- every routing change, link flap, or topology update can flush the cache, forcing packets back to process switching.

CEF (Cisco Express Forwarding) resolved these weaknesses by pre-computing the entire forwarding table from the RIB before any traffic arrives. The FIB contains all known routes, eliminating cache maintenance. Wire-speed forwarding is achievable with minimal CPU involvement, and both per-packet and per-destination load balancing are supported.

| Characteristic | Process Switching | Fast Switching | CEF |
|---|---|---|---|
| Lookup Method | Full routing table per packet | Cache (demand-driven) | FIB (topology-driven) |
| CPU Involvement | Every packet | First packet per flow + cache misses | Minimal (punted exceptions only) |
| Speed | Slowest | Moderate | Wire-speed capable |
| Load Balancing | Per-packet only | Per-destination only | Per-packet or per-destination |
| Stability Under Churn | Unaffected (always slow) | Degrades with frequent changes | Stable; updates are incremental |
Animation: Side-by-side comparison showing packets flowing through process switching (slow, CPU-bound), fast switching (cache hits/misses), and CEF (consistent wire-speed) forwarding paths

Key Points: Switching Methods

CEF Architecture: FIB and Adjacency Tables

The FIB (Forwarding Information Base) is a one-to-one mirror of the RIB, reorganized for the fastest possible prefix lookup. It contains destination prefixes, next-hop IP addresses, outgoing interfaces, and recursively resolved next hops. The FIB is updated dynamically and incrementally -- no wholesale cache invalidation.

The adjacency table stores the Layer 2 rewrite information for each directly connected next hop: next-hop MAC address, egress interface source MAC, VLAN tags, and a pre-built encapsulation header string. This avoids per-packet ARP lookups and header construction.

flowchart TD
    A[Incoming Packet] --> B[Extract Destination IP]
    B --> C[FIB Longest-Match\nPrefix Lookup]
    C -->|Match Found| D[Retrieve Next-Hop\nfrom FIB Entry]
    C -->|No Match| E[Drop Packet]
    D --> F[Adjacency Table\nLookup]
    F --> G[Pre-built L2\nEncapsulation String]
    G --> H[Rewrite L2 Header\nin Single Operation]
    H --> I[Transmit on\nEgress Interface]
    style C fill:#4a9,stroke:#333,color:#fff
    style F fill:#4a9,stroke:#333,color:#fff
    style E fill:#f66,stroke:#333,color:#fff
| Adjacency Type | Description | Design Implication |
|---|---|---|
| Null | Routes to Null0 interface | Used for route filtering, blackhole routing |
| Drop | Encapsulation errors or unresolved routes | Indicates a forwarding failure |
| Discard | Packets dropped by ACL or policy | Expected behavior when filtering is applied |
| Punt | Packets CEF cannot forward | Sent to process switching; monitor for volume |
| Glean | Directly connected destination; triggers ARP | Normal for connected subnets; watch for ARP storms |
Animation: Packet arriving, FIB lookup finding a match, adjacency table providing the pre-built L2 header, and the single-operation rewrite before transmission
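The lookup-then-rewrite sequence can be sketched in a few lines. This is a toy model, not platform code: the prefixes, next hops, MAC addresses, and interface names are invented for illustration.

```python
import ipaddress

# Hypothetical FIB: prefix -> (next hop, egress interface)
FIB = {
    ipaddress.ip_network("0.0.0.0/0"):    ("192.0.2.1",   "Gi0/0"),
    ipaddress.ip_network("10.1.20.0/24"): ("10.1.20.200", "Vlan20"),
    ipaddress.ip_network("10.1.0.0/16"):  ("192.0.2.9",   "Gi0/1"),
}
# Hypothetical adjacency table: next hop -> pre-resolved L2 rewrite (MAC)
ADJ = {
    "192.0.2.1":   "aa:bb:cc:00:00:01",
    "10.1.20.200": "aa:bb:cc:00:02:0b",
    "192.0.2.9":   "aa:bb:cc:00:01:09",
}

def fib_lookup(dst: str):
    """Longest-prefix match in the FIB, then adjacency lookup for the L2 rewrite."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in FIB if addr in net]
    if not matches:
        return None                              # no route: drop
    best = max(matches, key=lambda n: n.prefixlen)
    nh, egress = FIB[best]
    return nh, egress, ADJ[nh]                   # MAC already resolved: single-op rewrite

print(fib_lookup("10.1.20.55"))   # ('10.1.20.200', 'Vlan20', 'aa:bb:cc:00:02:0b')
```

The key property the sketch captures: because the adjacency entry is pre-built, the data path never waits on ARP or per-packet header construction.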

Key Points: CEF Data Structures

Hardware Implementation: CAM and TCAM

CAM (Content-Addressable Memory) stores Layer 2 information and performs exact-match lookups in a single clock cycle by comparing the search key against every stored entry in parallel.

TCAM (Ternary Content-Addressable Memory) stores Layer 3 and policy information using three matching states per bit: 0, 1, and X (don't care). This ternary logic enables longest-prefix matching, wildcard ACL evaluation, and multi-field packet classification -- all in hardware, all in a single lookup cycle. TCAM entries use the VMR format: Value, Mask, Result.
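The VMR idea can be modeled in software. In this sketch (invented prefixes and interface names), a mask bit of 0 marks a "don't care" position, and entries are ordered longest-prefix-first, as a real TCAM program would be; actual hardware evaluates every row in parallel rather than looping.

```python
def ip_to_int(ip: str) -> int:
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

# (value, mask, result) rows -- the VMR format; longest prefixes first
TCAM = [
    (ip_to_int("10.1.20.0"), 0xFFFFFF00, "Vlan20"),   # 10.1.20.0/24
    (ip_to_int("10.1.0.0"),  0xFFFF0000, "Gi0/1"),    # 10.1.0.0/16
    (0x00000000,             0x00000000, "Gi0/0"),    # 0.0.0.0/0: every bit don't-care
]

def tcam_lookup(dst: str):
    key = ip_to_int(dst)
    for value, mask, result in TCAM:      # hardware checks all rows at once
        if key & mask == value & mask:
            return result
    return None

print(tcam_lookup("10.1.20.7"))    # Vlan20
print(tcam_lookup("203.0.113.9"))  # Gi0/0
```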

Centralized CEF vs. Distributed CEF (dCEF)

Centralized CEF: A single Route Processor maintains the FIB and performs all forwarding decisions. Packets travel through the RP, creating a throughput bottleneck.

Distributed CEF (dCEF): Each line card maintains its own identical FIB and adjacency table copy. Packets are forwarded directly on the ingress line card without involving the RP. The RP handles only control plane functions. dCEF scales linearly with the number of installed line cards.

CEF Load Balancing

Per-destination (default): hashes the source/destination IP pair so all packets of a flow follow the same path, preserving ordering.
Per-packet: round-robin across equal-cost paths -- better utilization, but causes out-of-order delivery.
Per-source: all packets from the same source use the same path.

Load-balancing polarization occurs when multiple routers use the same hash algorithm, causing traffic to converge onto a single link at every hop. Mitigation: unique hash seeds per router or algorithms that include Layer 4 port numbers.
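A small simulation shows why identical hash functions polarize. The hash, the flow addresses, and the two-link topology are all hypothetical; the point is only that without a per-router seed, every hop makes the same decision for a given flow.

```python
import hashlib

PATHS = 2  # two equal-cost uplinks at every hop

def path_index(src: str, dst: str, seed: int = 0) -> int:
    """Toy ECMP hash over the src/dst pair, optionally salted per router."""
    digest = hashlib.md5(f"{seed}:{src}:{dst}".encode()).digest()
    return digest[0] % PATHS

flows = [(f"10.0.0.{i}", "198.51.100.10") for i in range(1, 9)]

# Unseeded: hop 2 repeats hop 1's choice for every flow, so the subset of
# flows hop 1 placed on link 0 all land on link 0 again -- polarization.
hop1 = [path_index(s, d) for s, d in flows]
hop2 = [path_index(s, d) for s, d in flows]
print(hop1 == hop2)   # True: identical decisions at every hop

# A per-router seed restores independent decisions at each hop.
hop2_seeded = [path_index(s, d, seed=2) for s, d in flows]
print(hop2_seeded)
```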

Dual-Stack IPv4/IPv6 Forwarding

CEF maintains separate FIB tables for each address family. IPv6 CEF requires IPv4 CEF to be active first. A critical constraint: if any single link lacks forwarding for an address family, it creates a forwarding black hole invisible to the other address family's monitoring.

Key Points: Hardware and Scaling

4.2 Feature Interaction and Traffic Flow Analysis

The real complexity emerges when multiple features -- ACLs, NAT, QoS, encryption, policy routing -- are applied simultaneously. Each feature has a defined position in the packet processing pipeline, and their interactions can produce unexpected behavior.

The Packet Processing Pipeline: Order of Operations

Ingress Processing Sequence

  1. Packet arrives, stored in buffer memory
  2. Layer 2 header is stripped
  3. CEF fast path check
  4. Decryption/decompression
  5. Inbound ACL evaluation (pre-NAT, pre-routing addresses)
  6. Input QoS classification and policing
  7. NAT outside-to-inside translation
  8. Policy routing evaluation
  9. FIB lookup (CEF longest-match)
  10. Packet switched to egress interface

Egress Processing Sequence

  1. MTU check (fragment if permitted, or send ICMP Fragmentation Needed / ICMPv6 Packet Too Big)
  2. NAT inside-to-outside translation
  3. Output ACL evaluation (post-NAT, post-routing addresses)
  4. Output QoS (classification, marking, shaping, queuing)
  5. Policing / CAR
  6. Encryption
  7. Layer 2 header rewrite from adjacency table
  8. Transmission
flowchart TD
    subgraph Ingress ["Ingress Processing"]
        direction TB
        I1[Packet Arrives\non Interface] --> I2[Strip L2 Header]
        I2 --> I3[Decryption /\nDecompression]
        I3 --> I4[Inbound ACL\n-- pre-NAT addresses --]
        I4 --> I5[Input QoS\nClassify and Police]
        I5 --> I6[NAT Outside\nto Inside]
        I6 --> I7[Policy Routing]
        I7 --> I8[FIB Lookup\n-- CEF --]
    end
    subgraph Egress ["Egress Processing"]
        direction TB
        E1[MTU Check /\nFragmentation] --> E2[NAT Inside\nto Outside]
        E2 --> E3[Output ACL\n-- post-NAT addresses --]
        E3 --> E4[Output QoS\nShape and Queue]
        E4 --> E5[Encryption]
        E5 --> E6[L2 Header Rewrite\nfrom Adjacency Table]
        E6 --> E7[Transmit]
    end
    I8 --> E1
    style I4 fill:#f90,stroke:#333,color:#fff
    style I6 fill:#69f,stroke:#333,color:#fff
    style I8 fill:#4a9,stroke:#333,color:#fff
    style E2 fill:#69f,stroke:#333,color:#fff
    style E3 fill:#f90,stroke:#333,color:#fff
Animation: A packet traversing the full ingress-to-egress pipeline, with each stage highlighting the addresses and headers visible at that point

Key Points: Packet Processing Pipeline

NAT Order of Operations

Inside-to-Outside (Outbound): Routing lookup first (uses original pre-NAT source), then NAT translation, then outbound ACL (sees post-NAT addresses).

Outside-to-Inside (Inbound): Inbound ACL first (sees pre-NAT public addresses), then NAT translation, then routing lookup (uses translated private address).
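The practical consequence of this ordering is which address family an ACL must reference. A toy model (all addresses and ACL contents invented) makes the two directions explicit:

```python
# inside-local -> inside-global translations (hypothetical)
NAT = {"10.0.0.5": "203.0.113.5"}
NAT_REV = {v: k for k, v in NAT.items()}

OUTBOUND_ACL = {"203.0.113.5"}   # output ACL: must reference POST-NAT (public) sources
INBOUND_ACL = {"203.0.113.5"}    # inbound ACL: must reference PRE-NAT (public) destinations

def inside_to_outside(src: str) -> bool:
    translated = NAT[src]              # routing and NAT have already happened
    return translated in OUTBOUND_ACL  # output ACL sees the public address

def outside_to_inside(dst: str):
    permitted = dst in INBOUND_ACL     # inbound ACL sees the public address first
    return permitted, NAT_REV.get(dst) # then NAT, then routing on the private address

print(inside_to_outside("10.0.0.5"))      # True
print(outside_to_inside("203.0.113.5"))   # (True, '10.0.0.5')
```

An output ACL written against 10.0.0.5 would never match here, which is exactly the trap described in assessment question 12.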

QoS Order of Operations (MQC Framework)

Classification, Marking, Policing/Metering, Queuing, Scheduling/Shaping. The MQC uses three components: class-map (classify), policy-map (define actions), service-policy (apply to interface).
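A minimal MQC configuration sketch tying the three components together (the class name, policy name, bandwidth figure, and interface are illustrative, not prescriptive):

```
class-map match-any VOICE
 match dscp ef
!
policy-map WAN-EDGE
 class VOICE
  priority percent 20
 class class-default
  fair-queue
!
interface GigabitEthernet0/1
 service-policy output WAN-EDGE
```

The flow reads bottom-up at apply time: the service-policy attaches the policy-map to the interface, and the policy-map's actions fire per class-map match.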

Packet Punting: When CEF Cannot Forward

| Punt Code | Cause | Design Concern |
|---|---|---|
| No_adj | Incomplete adjacency (ARP not resolved) | Transient; excessive = ARP issues |
| Receive | Packet destined for the router itself | Normal for control plane traffic |
| Options | IP header options present | Rare; can be used for DDoS |
| Frag | Fragmentation required | Ensure proper MTU configuration |

Traffic Flow Analysis Methodologies

Key Points: Feature Interactions

4.3 Packet Walk Through Complex Networks

The "packet walk" is the most powerful analytical technique available to a network designer. It reveals design flaws before they become production outages.

Layer 2 to Layer 3 Boundary Transitions

Every time a packet crosses a L2/L3 boundary, the Ethernet frame is stripped and rebuilt. The critical point: IP addresses remain unchanged throughout the journey (assuming no NAT), but MAC addresses change at every Layer 3 hop.

sequenceDiagram
    participant A as Host A<br/>VLAN 10<br/>10.1.10.100
    participant SW as L3 Switch<br/>SVI 10 + SVI 20
    participant B as Host B<br/>VLAN 20<br/>10.1.20.200
    Note over A: IP src=10.1.10.100<br/>IP dst=10.1.20.200
    A->>SW: Eth Frame: dst MAC=SVI10 MAC<br/>src MAC=Host A MAC
    Note over SW: Strip L2 header<br/>FIB lookup: 10.1.20.200<br/>directly connected VLAN 20
    Note over SW: Adjacency table:<br/>Host B MAC via VLAN 20
    Note over SW: Decrement TTL<br/>Recalculate checksum
    SW->>B: NEW Eth Frame: dst MAC=Host B MAC<br/>src MAC=SVI20 MAC
    Note over B: Same IP addresses<br/>Different MAC addresses

MPLS and CEF: The Overlay-Underlay Relationship

MPLS extends CEF by adding a label-based forwarding plane. At the ingress PE, an unlabeled IP packet undergoes a FIB lookup; if the destination matches an MPLS-enabled prefix, a label is imposed. Transit P routers perform only LFIB lookups -- they never examine the IP header. At the egress PE (often via penultimate hop popping), normal IP forwarding resumes.

A Forwarding Equivalence Class (FEC) groups packets that receive identical forwarding treatment through the MPLS network, mapped to a common label and Label Switched Path.

flowchart LR
    CE1[CE Router] -->|IP Packet| PE1[Ingress PE]
    PE1 -->|FIB Lookup then\nImpose Label 300| P1[Transit P]
    P1 -->|LFIB Lookup then\nSwap 300 to 200| P2[Penultimate P]
    P2 -->|LFIB Lookup then\nPop Label PHP| PE2[Egress PE]
    PE2 -->|IP FIB Lookup then\nForward IP Packet| CE2[CE Router]
    style PE1 fill:#69f,stroke:#333,color:#fff
    style P1 fill:#f90,stroke:#333,color:#fff
    style P2 fill:#f90,stroke:#333,color:#fff
    style PE2 fill:#69f,stroke:#333,color:#fff
Animation: End-to-end MPLS packet walk showing label imposition at ingress PE, label swaps through the core, PHP at the penultimate router, and IP forwarding at egress PE
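The label operations along the LSP reduce to a small state machine. This sketch uses the same invented labels (300, 200) and router names as the diagram; real LFIBs also carry outgoing interfaces and next hops, omitted here.

```python
# Per-router toy LFIB: incoming label -> (action, outgoing label).
LFIB = {
    "P1": {300: ("swap", 200)},
    "P2": {200: ("pop", None)},   # penultimate hop popping (implicit-null advertised)
}

def lsp_walk(label, routers):
    """Follow a label through the transit routers; return the operations performed."""
    ops = []
    for r in routers:
        action, out = LFIB[r][label]
        ops.append((r, action, out))
        if action == "pop":
            return ops, None      # egress PE receives a plain IP packet: one FIB lookup
        label = out
    return ops, label

ops, final_label = lsp_walk(300, ["P1", "P2"])
print(ops)          # [('P1', 'swap', 200), ('P2', 'pop', None)]
print(final_label)  # None -- PHP means no label operation remains for the egress PE
```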

Overlay and Underlay Traffic Flow Interactions

MTU Impact: VXLAN adds 50 bytes, GRE adds 24 bytes, IPsec tunnel mode adds 50-70 bytes. Reduced effective MTU can cause fragmentation or black holes if PMTUD fails.

Forwarding Plane Interaction: Underlay routers see only the outer header. To preserve QoS across overlays, the encapsulator must copy DSCP markings from inner to outer header.
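Both constraints -- shrinking effective MTU and the need to copy DSCP outward -- can be captured in one sketch. The overhead figures come from the text (IPsec modeled at its 70-byte worst case); the header fields are a simplification.

```python
# Encapsulation overhead in bytes, per the figures above (IPsec worst case).
OVERHEAD = {"vxlan": 50, "gre": 24, "ipsec_tunnel": 70}

def encapsulate(inner: dict, encap: str, underlay_mtu: int = 1500):
    """Build a toy outer header: copy the inner DSCP and enforce effective MTU."""
    effective_mtu = underlay_mtu - OVERHEAD[encap]
    if inner["size"] > effective_mtu:
        return None                        # must fragment -- or black-hole if PMTUD fails
    return {
        "dscp": inner["dscp"],             # copied outward, or underlay QoS is blind
        "size": inner["size"] + OVERHEAD[encap],
    }

print(encapsulate({"dscp": 46, "size": 1400}, "vxlan"))  # {'dscp': 46, 'size': 1450}
print(encapsulate({"dscp": 0, "size": 1480}, "vxlan"))   # None: exceeds 1450-byte effective MTU
```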

Design-Time Verification Techniques

| Command | Purpose |
|---|---|
| show ip cef | Display FIB table entries |
| show adjacency summary | Quick adjacency table overview |
| show cef not-cef-switched | List packets bypassing CEF with reasons |
| show ip cef exact-route &lt;src&gt; &lt;dst&gt; | Determine CEF path for a specific flow |
| show platform tcam utilization | Verify hardware table capacity |

Key Points: Packet Walk and Complex Networks

Post-Study Assessment

1. What fundamental weakness of fast switching did CEF resolve?

Fast switching could not perform per-packet load balancing
Fast switching required dedicated hardware ASICs on every line card
Fast switching used a demand-driven cache that was constantly invalidated by topology changes
Fast switching could not support IPv6 forwarding

2. In a CEF-enabled router, what two data structures work together to forward packets at wire speed?

The routing table (RIB) and the ARP cache
The Forwarding Information Base (FIB) and the adjacency table
The LFIB and the Label Information Base (LIB)
The CAM table and the MAC address table

3. On a NAT-enabled router, an inbound ACL applied to the outside interface evaluates packets using which addresses?

Post-NAT (translated, private) addresses
Pre-NAT (original, public) addresses
Both pre-NAT and post-NAT addresses simultaneously
It depends on whether CEF or process switching is active

4. What is load-balancing polarization and why is it a design concern?

It occurs when per-packet load balancing causes TCP reordering, degrading application performance
It occurs when routers along a path use the same hash algorithm, causing all traffic to converge onto a single link
It occurs when ECMP paths have unequal bandwidth, causing oversubscription on the smaller link
It occurs when IPv4 and IPv6 traffic are hashed differently, causing asymmetric paths

5. What distinguishes TCAM from CAM in hardware-based forwarding platforms?

TCAM is faster but more expensive; CAM is slower but higher capacity
TCAM supports three matching states per bit (0, 1, don't-care) enabling longest-prefix matching, while CAM supports only exact-match lookups
TCAM is used for Layer 2 forwarding; CAM is used for Layer 3 forwarding
TCAM stores adjacency information; CAM stores FIB entries

6. In distributed CEF (dCEF), what role does the Route Processor play in packet forwarding?

It forwards all packets that require policy routing or NAT
It handles only control plane functions and distributes FIB updates to line cards
It performs the initial FIB lookup, then delegates L2 rewrite to line cards
It acts as a backup forwarding engine when line card FIBs are full

7. During a packet walk through inter-VLAN routing, which statement is true?

Both IP addresses and MAC addresses remain unchanged throughout the journey
IP addresses change at every Layer 3 hop, but MAC addresses remain constant
IP addresses remain unchanged (assuming no NAT), but MAC addresses change at every Layer 3 hop
MAC addresses remain constant within the same broadcast domain but IP addresses change at VLAN boundaries

8. Why is TCAM capacity planning critical when deploying a full Internet BGP table?

BGP routes cannot be stored in software FIB tables, so TCAM is the only option
If TCAM is exhausted, routes are punted to software, destroying wire-speed forwarding performance
TCAM entries expire after a timeout, causing periodic forwarding black holes
BGP routes require TCAM entries for both inbound and outbound directions, doubling capacity needs

9. In the MQC framework, what is the correct order of components from classification to interface application?

service-policy, policy-map, class-map
policy-map, class-map, service-policy
class-map, policy-map, service-policy
class-map, service-policy, policy-map

10. What is penultimate hop popping (PHP) in MPLS, and why is it used?

The ingress PE removes the label before forwarding to reduce header overhead in the core
The second-to-last P router pops the label so the egress PE can perform a direct IP FIB lookup without a label operation
The egress PE pops both VPN and transport labels simultaneously to save processing cycles
Transit P routers pop and re-push labels at each hop to refresh the TTL field

11. In a dual-stack deployment, what is the risk of a single link that lacks IPv6 forwarding support?

It forces all traffic to use IPv4, doubling the load on IPv4 forwarding tables
It creates a forwarding black hole for IPv6 traffic that is invisible to IPv4 monitoring tools
It causes the entire IGP domain to fall back to single-stack IPv4 operation
It triggers automatic IPv6-to-IPv4 tunneling, which degrades performance

12. An engineer writes an output ACL on the WAN interface referencing private IP addresses (10.x.x.x). NAT is configured on the same router. What happens?

The ACL matches correctly because output ACLs are evaluated before NAT
The ACL never matches because NAT has already translated the private addresses to public addresses before the output ACL evaluates the packet
The ACL matches intermittently depending on whether CEF or process switching handles the packet
The ACL causes the router to punt all NATted packets to the CPU for process switching

13. What is a Forwarding Equivalence Class (FEC) in MPLS?

A set of routers that share the same OSPF area and exchange labels via LDP
A group of packets that receive identical forwarding treatment through the MPLS network, mapped to a common label
A QoS classification applied to labeled packets to determine their scheduling priority
A redundancy mechanism that provides backup labels when the primary LSP fails

14. Why must overlay encapsulators copy DSCP markings from the inner header to the outer header?

Because underlay routers inspect both inner and outer headers for QoS decisions
Because underlay routers only see the outer header, so QoS treatment is lost unless markings are copied
Because overlay tunnels strip DSCP markings during encapsulation by default
Because DSCP markings in the inner header are used for tunnel selection, not QoS

15. Which adjacency type indicates packets that CEF cannot forward and must be sent to the CPU?

Glean
Null
Punt
Discard


Answer Explanations