Service Mesh and NetworkPolicy: Why I Run Both

NetworkPolicy and service mesh aren’t alternatives. They’re different tools at different layers catching different threats. The question isn’t which one — it’s why both.

The defense-in-depth post covers how Lattice runs three enforcement layers. This post is about the reasoning behind the two network layers specifically: why L4 and L7 enforcement are both necessary, and why I chose Cilium and Istio to fill those roles.

“Running both Cilium and Istio sounds expensive.”

It is. Two control planes, two sets of CRDs, two things to upgrade. I wouldn’t do it if one tool could cover both layers well. Today, neither can.

L4 sees packets. Source IP, destination IP, protocol, port. It’s fast, kernel-level (with Cilium’s eBPF), and hard to bypass from userspace. But it’s blind to request content. A GET /healthz and a DELETE /admin/everything look identical at L4 — same source, same destination, same port.

L7 terminates TLS, extracts the caller’s cryptographic identity, and evaluates the request — method, path, headers. It can distinguish read from write, admin from user. But it runs in userspace, can crash, and only sees traffic enrolled in the mesh.

```mermaid
flowchart LR
  subgraph "L4 — Cilium (kernel)"
    P[Packet] --> F{IP + Port\nfilter}
    F -->|allowed| PASS1[Pass]
    F -->|denied| DROP1[Drop]
  end
  subgraph "L7 — Istio (userspace)"
    R[Request] --> A{Identity +\nMethod + Path}
    A -->|authorized| PASS2[Pass]
    A -->|denied| DROP2[Deny]
  end
```

This isn’t a shortcoming of either tool. It’s physics. Packet filtering and request authorization operate at different points in the stack, see different data, and provide different guarantees. Asking Cilium to evaluate HTTP methods is like asking a firewall to read your email. Asking Istio to enforce policy when its proxy is down is like asking a stopped traffic light to manage an intersection.

Default-deny CiliumClusterwideNetworkPolicy. Every pod has no ingress and no egress unless a per-service CiliumNetworkPolicy explicitly allows it. Enforcement in eBPF — kernel-level, not bypassable from a container.
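A minimal sketch of that backstop, assuming Cilium's documented empty-rule semantics (the policy name is illustrative):

```yaml
# Illustrative cluster-wide default-deny. Selecting every endpoint and
# including ingress/egress sections switches all pods into default-deny
# for both directions; per-service CiliumNetworkPolicies then add allows.
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: default-deny
spec:
  endpointSelector: {}   # selects every endpoint in the cluster
  ingress:
  - {}                   # unlike native NetworkPolicy, an empty Cilium rule
  egress:                # allows nothing -- it only activates default-deny
  - {}                   # enforcement for that direction
```

Note the inversion from the native API: in a Kubernetes NetworkPolicy an empty rule allows all traffic, while here it grants nothing, which is exactly what a backstop wants.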

I chose Cilium over Calico or native NetworkPolicy for three reasons: eBPF enforcement is faster than iptables, Cilium’s endpoint identity model survives pod rescheduling (numeric identities, not IP-based matching), and CiliumNetworkPolicy supports FQDN-based egress and DNS-aware rules the native API can’t express.
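A per-service allow carved out of that default deny might look like this sketch (the service, namespace, and FQDN are invented for illustration; the DNS visibility rule is what lets Cilium resolve `toFQDNs` entries):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: payments-egress        # hypothetical service
  namespace: payments
spec:
  endpointSelector:
    matchLabels:
      app: payments
  egress:
  # Allow DNS to kube-dns, with DNS inspection enabled so Cilium
  # learns which IPs the FQDN below currently resolves to.
  - toEndpoints:
    - matchLabels:
        io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
  # FQDN-based egress the native NetworkPolicy API can't express.
  - toFQDNs:
    - matchName: "api.example.com"   # hypothetical external dependency
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
```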

This is the backstop. If the mesh is down, if the waypoint proxy crashed, if someone misconfigured an AuthorizationPolicy — L4 still blocks unauthorized network paths. The packets never reach the application.

Default-deny AuthorizationPolicy. Every request is denied unless a policy permits the caller’s SPIFFE identity for the specific method and path. Ztunnel handles L4 mTLS transparently; waypoint proxies handle L7 authorization for namespaces that need it.
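As a sketch, that pair of policies looks roughly like this (the namespace and identities are invented; an empty ALLOW policy matches no requests, which is Istio's documented default-deny pattern):

```yaml
# Deny every request in the namespace by default: an AuthorizationPolicy
# with an empty spec matches nothing, so nothing is permitted.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: default-deny
  namespace: payments            # hypothetical namespace
spec: {}
---
# Permit one SPIFFE identity to call specific read endpoints.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-checkout-reads
  namespace: payments
spec:
  selector:
    matchLabels:
      app: payments
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/checkout/sa/checkout"]  # hypothetical caller
    to:
    - operation:
        methods: ["GET"]
        paths: ["/balances/*"]
```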

I chose ambient over sidecar Istio because it doesn’t inject containers into application pods. The compiler produces clean pod specs. Mesh upgrades don’t require pod restarts. Namespaces without complex authorization rules don’t pay the waypoint proxy overhead.

“Why not Cilium’s service mesh?” Two reasons. First, Cilium’s WireGuard-based mesh encryption is not FIPS-validated (Cilium also supports IPsec with FIPS-approved ciphers, but its L7 mesh capabilities use WireGuard). If your platform serves government, finance, or healthcare — or if there’s any chance it will — that’s a hard blocker. Istio uses standard TLS with pluggable crypto backends — when FIPS is required, you swap to Istio’s FIPS-validated Envoy build. That path doesn’t exist with WireGuard. Second, Istio’s AuthorizationPolicy is more expressive for request-level authorization — SPIFFE identity, method/path matching, deny-by-default semantics. Cilium’s L7 capabilities are evolving, but the FIPS issue is structural, not a maturity gap. I’m not going to weaken L7 enforcement or accept non-FIPS encryption to simplify the stack.

Two independent enforcement layers from one dependency declaration. A developer writes type: service, direction: outbound. The compiler produces both a CiliumNetworkPolicy and an Istio AuthorizationPolicy. The developer doesn’t choose between L4 and L7 security — they get both, because the platform understands that both are necessary.
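I can only vouch for the two fields quoted above, but the shape of the flow is one declaration in, two policies out (field names beyond `type` and `direction` are illustrative, not Lattice's actual schema):

```yaml
# Developer-facing dependency declaration:
dependencies:
- name: ledger          # hypothetical upstream service
  type: service
  direction: outbound
# The compiler expands this into both enforcement layers:
#   - a CiliumNetworkPolicy allowing L4 egress to ledger's endpoints
#   - an Istio AuthorizationPolicy permitting this service's SPIFFE
#     identity on ledger's waypoint at L7
```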

The layers are configured through independent derivation paths. A bug in the Istio policy compiler doesn’t affect the Cilium policy compiler. If one layer fails, the other still enforces. That’s the point of running both — not redundancy for its own sake, but independent coverage of threats that no single layer can see.

  • L4 and L7 see different things. Packets and requests are different data. No single tool inspects both with kernel-level guarantees and request-level semantics.
  • Running both is expensive. Not running both is a gap. Two control planes cost operational overhead. One control plane leaves a class of threats unaddressed. Pick your tradeoff honestly.
  • Compilation makes dual enforcement practical. One service spec produces both a CiliumNetworkPolicy and an Istio AuthorizationPolicy. Developers don’t maintain two policy sets — the platform does.