Container Internals Deep Dive 03: Network Namespaces and CNI
How container networking works from veth pairs to CNI plugin chains in Kubernetes.
Container Internals Deep Dive — this post is part of a series
- Part 1: Container Internals Deep Dive 00
- Part 2: Container Internals Deep Dive 01: Cgroups
- Part 3: Container Internals Deep Dive 02: Namespaces
- Part 4: Container Internals Deep Dive 03: Network Namespaces and CNI
- Part 5: Container Internals Deep Dive 04: containerd Internals
- Part 6: Container Internals Deep Dive 05: OCI Standard
- Part 7: Container Internals Deep Dive 06: runc vs crun
- Part 8: Container Internals Deep Dive 07: Rootless Containers with Podman
- Part 9: Container Internals Deep Dive 08: Kata Containers
- Part 10: Container Internals Deep Dive 09: Firecracker microVM
Series: 4/10. In part 02 we covered namespaces. In this part we focus on network isolation and CNI.
Container networking starts with Linux primitives, then orchestration layers make it usable at scale.
The basic packet path #
- The container runs in its own network namespace.
- A veth pair links the container namespace to the host namespace.
- The host-side interface is attached to a bridge, an overlay, or a routing fabric.
- NAT, policy, and routing decide how the packet is forwarded.
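The first three steps above can be reproduced by hand with iproute2. A minimal sketch (requires root; the namespace name `demo`, the interface names, and the 10.42.0.0/24 range are arbitrary choices for illustration):

```shell
# Create an isolated network namespace to stand in for a container
sudo ip netns add demo

# Create a veth pair: one end stays on the host, the other moves
# into the namespace
sudo ip link add veth-host type veth peer name veth-ctr
sudo ip link set veth-ctr netns demo

# Address and bring up both ends
sudo ip addr add 10.42.0.1/24 dev veth-host
sudo ip link set veth-host up
sudo ip netns exec demo ip addr add 10.42.0.2/24 dev veth-ctr
sudo ip netns exec demo ip link set veth-ctr up
sudo ip netns exec demo ip link set lo up

# The "container" can now reach the host end of the pair
sudo ip netns exec demo ping -c 1 10.42.0.1

# Clean up (deleting the namespace also removes the veth pair)
sudo ip netns del demo
```

This is essentially what a bridge-style CNI plugin automates per Pod, plus attaching the host end to a bridge and installing routes and NAT rules.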
Where CNI fits #
CNI (Container Network Interface) is a plugin contract used by Kubernetes runtimes.
When a Pod is created, the container runtime (driven by kubelet through CRI) invokes CNI plugins to:
- allocate IP
- create links/interfaces
- attach routes
- apply network policy hooks (plugin-dependent)
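On disk, the contract is a JSON config per network. An illustrative (not taken from any real cluster) config for the reference `bridge` plugin with `host-local` IPAM might look like:

```json
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.42.0.0/24",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

The `type` field names the plugin binary to execute; `ipam` delegates address allocation to a second plugin, which is how the plugin-chain model composes.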
Useful debugging commands #
```shell
# List network namespaces registered under /var/run/netns (runtimes
# that do not register handles there will not show up here)
ip netns list

# Show all veth interfaces visible on the host
ip link show type veth

# Inspect the host routing table
ip route

# Run `ip addr` inside a container's network namespace
sudo nsenter -t <container-pid> -n ip addr
```
On Kubernetes nodes, also inspect CNI state:
```shell
ls /etc/cni/net.d/   # plugin configuration; the lexicographically first file wins
ls /opt/cni/bin/     # plugin binaries
```
Common failure modes #
- IPAM exhaustion (no available Pod CIDR).
- MTU mismatch between overlay and underlay, causing fragmentation or silently dropped large packets.
- Firewall/NAT rules drift after node upgrades.
- CNI plugin mismatch across node pools.
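The MTU case is quick to check: encapsulating dataplanes (VXLAN adds 50 bytes of overhead) need the Pod-side interfaces to be smaller than the uplink, and every device on the path must agree. A one-liner to eyeball it:

```shell
# Print each interface name with its MTU; mismatches between the
# uplink, the bridge/overlay device, and veth ends are suspect
ip -o link show | awk '{print $2, $4, $5}'
```

To probe the actual path MTU toward a Pod or service IP, `ping -M do -s 1472 <dst>` sends a 1500-byte packet (1472 bytes of payload plus 28 bytes of headers) with fragmentation prohibited; a "message too long" error means the path MTU is smaller.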
Design choices to make early #
- Overlay vs routed network model
- eBPF-based dataplane vs iptables-heavy path
- Network policy model and default deny posture
Takeaway #
Container networking is Linux networking plus automation. If you can trace a packet, you can debug most incidents.
Next: Container Internals Deep Dive 04: containerd Internals