After writing two articles about shifting down, I realized that I did not have a good post on installing and performing a basic setup of Cilium. Cilium is a Kubernetes networking, security, and observability platform built on eBPF. Rather than relying on traditional Linux networking primitives such as iptables, netfilter chains, and overlay routing, Cilium programs networking and security logic directly into the Linux kernel at runtime. This architectural approach allows Cilium to operate with lower latency, greater scalability, and significantly richer context than most conventional Container Network Interface (CNI) implementations.
In traditional Kubernetes networking, packet handling and policy enforcement are typically performed using iptables rules that grow in number as clusters scale and as policies become more complex. This model works, but it introduces performance bottlenecks, complicates troubleshooting, and limits policy enforcement to basic Layer 3 and Layer 4 semantics. Observability is usually bolted on through sidecars, packet mirroring, or external tooling, which further increases operational complexity. Cilium addresses these challenges by using eBPF programs attached to kernel hook points such as socket operations, XDP, and traffic control. These programs can inspect, filter, and redirect traffic while maintaining full awareness of Kubernetes identity and context.
Before deploying Cilium, the underlying cluster must meet several baseline requirements. A modern Kubernetes version is strongly recommended, along with a sufficiently recent Linux kernel that supports the necessary eBPF helpers. While Cilium can operate on kernels as old as 5.4, newer kernels generally unlock better performance and more advanced features. The cluster must also allow privileged workloads, as Cilium runs kernel-level programs, and administrators should have full access to deploy custom resource definitions and DaemonSets.
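A quick pre-flight check covers most of these prerequisites. The commands below are generic and assume nothing beyond shell access to a node and a working kubeconfig; the exact minimum versions depend on the Cilium release you plan to deploy:
uname -r                  # kernel version on the node
kubectl version           # client and server Kubernetes versions
mount | grep /sys/fs/bpf  # BPF filesystem (Cilium mounts it itself if missing)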
The recommended way to install Cilium is via the Cilium CLI. This tool simplifies installation, validation, and upgrades, and provides insight into cluster readiness. The CLI can be installed locally by downloading the latest release and placing the binary on the system path:
curl -L --fail --remote-name \
  https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
tar xzvf cilium-linux-amd64.tar.gz
sudo mv cilium /usr/local/bin/
Once installed, the CLI can be used to confirm the binary works and to report version information for both the CLI and any Cilium deployment it can reach:
cilium version
Installing Cilium into a Kubernetes cluster requires only a single command for a default configuration. This deploys the Cilium agent as a DaemonSet on every node and installs the Cilium operator as a centralized controller:
cilium install
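By default the CLI installs a stable Cilium release matched to the CLI version. If reproducible installs matter, a specific version can be pinned with the --version flag; the version number here is only an example:
cilium install --version 1.16.1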
After installation, the deployment can be validated by waiting for all components to become ready:
cilium status --wait
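For a deeper validation, the CLI also ships an end-to-end connectivity test. It deploys temporary workloads into a dedicated test namespace and exercises pod-to-pod, pod-to-service, and egress paths, which takes a few minutes on most clusters:
cilium connectivity test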
At this stage, Cilium is functioning as the cluster’s CNI, but many of its more advanced capabilities are optional and must be explicitly enabled.
Under the hood, Cilium consists of a per-node agent and a cluster-scoped operator. The agent is responsible for attaching and managing eBPF programs, tracking endpoints, and enforcing policy locally. The operator handles responsibilities that must be coordinated globally, such as identity allocation, IP address management, and garbage collection. eBPF maps act as shared data structures inside the kernel, allowing programs to exchange state efficiently for tasks such as connection tracking, policy lookups, and load balancing.
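Both components are visible as ordinary Kubernetes workloads, assuming the default installation into the kube-system namespace:
kubectl -n kube-system get daemonset cilium
kubectl -n kube-system get deployment cilium-operator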
One of the most impactful features Cilium provides is full kube-proxy replacement. In a traditional Kubernetes cluster, kube-proxy implements service load balancing with iptables, a mechanism designed for firewalls rather than the high-churn environment of Kubernetes: the rule list grows linearly as Services and endpoints are added, and packet processing slows down accordingly. Cilium replaces this mechanism with an eBPF-based load balancer that runs entirely in the kernel and resolves services through hash-table lookups, so lookup time stays nearly constant regardless of scale. This eliminates iptables rule churn, reduces latency, and enables more advanced load-balancing strategies.
When installing Cilium with kube-proxy replacement, the feature is enabled through a single Helm value at install time (older releases used the value strict instead of true). In this mode Cilium itself handles ClusterIP, NodePort, and externalIP services:
cilium install \
  --set kubeProxyReplacement=true
After installation, kube-proxy should be disabled or removed, and the cluster should be carefully validated. The status output confirms whether Cilium has fully assumed responsibility for service handling:
cilium status | grep KubeProxyReplacement
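How kube-proxy is removed depends on how the cluster was provisioned. On a kubeadm-style cluster where kube-proxy runs as a DaemonSet, removal typically looks like the sketch below; the node-level iptables flush mirrors the approach in the upstream kube-proxy-free guide and should be run on every node:
kubectl -n kube-system delete daemonset kube-proxy
kubectl -n kube-system delete configmap kube-proxy
iptables-save | grep -v KUBE | iptables-restore   # clear rules left behind by kube-proxy (iptables mode)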
IP address management is another area where Cilium offers flexibility. By default, Cilium allocates pod CIDR ranges per node, similar to many other CNIs. In environments where tighter integration with Kubernetes is required, Cilium can use Kubernetes-managed IPAM instead:
cilium install --set ipam.mode=kubernetes
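Kubernetes-managed IPAM relies on the control plane assigning a podCIDR to every node (for example via the controller manager's --allocate-node-cidrs flag), which is easy to confirm before switching modes:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'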
In cloud-native environments, Cilium can integrate directly with cloud provider networking APIs. For example, in AWS, Cilium can allocate pod IPs directly from VPC subnets using ENIs, avoiding overlays and simplifying routing:
cilium install \
  --set ipam.mode=eni \
  --set eni.enabled=true
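In ENI mode, the per-node allocation state is tracked in the CiliumNode custom resource, which records the interfaces and addresses Cilium has claimed from the VPC; the node name below is a placeholder:
kubectl get ciliumnodes
kubectl describe ciliumnode <node-name>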
Policy enforcement is where Cilium’s identity-based model becomes most visible. Rather than referencing IP addresses, policies select endpoints based on Kubernetes labels. This ensures that policies remain stable even as pods are rescheduled or IPs change. A simple ingress policy allowing traffic from frontend pods to backend pods might look like this:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
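The policy is applied like any other Kubernetes object (the file name here is just an example), and CiliumNetworkPolicy resources can then be listed under their short name, cnp:
kubectl apply -f allow-frontend.yaml
kubectl get cnp -A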
Cilium extends this model to Layer 7 by allowing protocol-aware rules to be expressed in the same policies and enforced in the datapath. For HTTP traffic, policies can restrict access based on request methods and paths:
ingress:
  - fromEndpoints:
      - matchLabels:
          app: frontend
    toPorts:
      - ports:
          - port: "80"
            protocol: TCP
        rules:
          http:
            - method: "GET"
              path: "/health"
This type of enforcement typically requires sidecars or service mesh proxies. Cilium instead handles it with a node-local Envoy proxy that it manages itself, using eBPF to redirect only the flows that require Layer 7 inspection. While L7 inspection adds some overhead, it enables fine-grained security controls without altering application deployments.
Observability is provided through Hubble, which integrates directly with Cilium’s data plane. Because Hubble observes traffic at the same kernel hook points where policy decisions are made, it provides precise visibility into allowed and denied flows. Hubble can be enabled during installation:
cilium install \
  --set hubble.enabled=true \
  --set hubble.relay.enabled=true
Once enabled, live traffic can be inspected and connectivity verified with the separate Hubble CLI, which is installed in much the same way as the Cilium CLI. The Hubble CLI talks to the Hubble Relay, most easily reached by letting the Cilium CLI open a local port-forward:
cilium hubble port-forward &
hubble status
hubble observe
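In practice, hubble observe is usually run with filters. The namespace and pod name below are only examples carried over from the earlier policy:
hubble observe --namespace default --verdict DROPPED
hubble observe --pod frontend --protocol http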
As clusters scale, performance tuning becomes increasingly important. Cilium allows operators to tune eBPF map sizing to better accommodate large numbers of connections or endpoints. For example, dynamic map sizing can be adjusted through Helm values:
bpf:
  mapDynamicSizeRatio: 0.05
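The same setting can be passed at install time or applied to an existing installation. The release name, namespace, and Helm repository below assume the defaults used by cilium install and the official chart:
cilium install --set bpf.mapDynamicSizeRatio=0.05

helm repo add cilium https://helm.cilium.io
helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
  --set bpf.mapDynamicSizeRatio=0.05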
Cilium can also extend its policy model to the node itself by enabling host firewalling. This allows node-level processes to be governed by the same identity-based rules as pods:
hostFirewall:
  enabled: true
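With the host firewall enabled, node traffic is selected using CiliumClusterwideNetworkPolicy objects that carry a nodeSelector instead of an endpointSelector. The sketch below is a hypothetical example that allows SSH to nodes with a made-up label from sources inside the cluster; host policies deserve extra care, because selecting a node switches its host traffic to default deny:
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: host-allow-ssh            # hypothetical policy name
spec:
  nodeSelector:
    matchLabels:
      node-access: ssh            # hypothetical node label
  ingress:
    - fromEntities:
        - cluster                 # allow only sources inside the cluster
      toPorts:
        - ports:
            - port: "22"
              protocol: TCP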
From an operational standpoint, Cilium benefits from being treated as a core platform component rather than a simple networking plugin. Network policies should be managed declaratively and versioned alongside application manifests. Kernel upgrades should be coordinated with Cilium upgrades to ensure compatibility with new eBPF features. While Cilium supports standard Kubernetes NetworkPolicy resources, mixing them with Cilium-specific policies should be done deliberately, with a clear understanding of evaluation order and interaction.
Adopting Cilium represents a fundamental shift in how networking, security, and observability are implemented in Kubernetes. By moving these concerns into the kernel and grounding them in Kubernetes identity rather than infrastructure-level addressing, Cilium enables a more scalable, expressive, and observable networking model. When implemented thoughtfully, it becomes more than a CNI—it becomes a foundational layer for secure and observable cloud-native platforms.