Cilium (computing)

Cilium is a cloud native technology for networking, observability, and security. It is based on the kernel technology eBPF, which it originally used for better networking performance and now leverages for a range of additional use cases. The core networking component has evolved from providing only a flat Layer 3 network for containers to including advanced networking features, such as BGP and service mesh, within a Kubernetes cluster, across multiple clusters, and to the world outside Kubernetes. Hubble was created as the network observability component, and Tetragon was later added for security observability and runtime enforcement. Cilium runs on Linux and is one of the first eBPF applications to be ported to Microsoft Windows through the eBPF for Windows project.

History
Evolution from Networking CNI (Container Network Interface)

Cilium began as a networking CNI for container workloads. It was originally IPv6 only and supported multiple container orchestrators, like Kubernetes. The original vision for Cilium was to build an intent and identity-based high-performance container networking platform. As the cloud native ecosystem expanded, Cilium added new projects and features to address new problems in the space.

The timeline below summarises some of the most significant milestones of this evolution:


 * December 2015 - Initial commit to the Cilium project
 * May 2016 - Network policy was added, expanding the scope beyond just networking
 * August 2016 - Cilium was initially announced during LinuxCon as a project providing fast IPv6 container networking with eBPF and XDP. Today, Cilium has been adopted by major cloud providers' Kubernetes offerings and is one of the most widely used CNIs.
 * August 2017 - ebpf-go was created as a library to read, modify, and load eBPF programs and attach them to various hooks.
 * April 2018 - Cilium 1.0, the first stable release
 * November 2019 - Hubble was launched to provide eBPF-based observability to network flows
 * August 2020 - Chosen by Google as the basis for their Kubernetes Dataplane v2
 * September 2021 - AWS picks Cilium for Networking & Security on EKS Anywhere
 * October 2021 - Pwru was launched for tracing network packets in the Linux kernel with advanced filtering capabilities
 * October 2021 - Accepted into CNCF as an incubation level project
 * December 2021 - Cilium Service Mesh launched to help manage traffic between services
 * May 2022 - Tetragon open sourced to cover security observability and runtime enforcement
 * October 2022 - Chosen as CNI for Azure
 * April 2023 - Cilium Mesh launched to connect workloads and machines across cloud, on-prem, and edge
 * April 2023 - First CiliumCon hosted as a part of KubeCon
 * October 2023 - Cilium becomes a CNCF Graduated project

CNCF
Cilium was accepted into the Cloud Native Computing Foundation on October 13, 2021 as an incubation-level project. It applied to become a graduated project on October 27, 2022, and became a Graduated project one year later. Cilium is one of the fastest-moving projects in the CNCF ecosystem.

Adoption
Cilium has been adopted by many large-scale production users, including more than 100 who have said so publicly, for example:


 * Datadog uses Cilium as their CNI and kube-proxy replacement
 * Ascend uses Cilium as their single CNI across multiple cloud providers
 * Bell Canada uses Cilium and eBPF for telco networking
 * Cosmonic uses Cilium for their Nomad-based PaaS
 * IKEA uses Cilium for their self-hosted bare-metal private cloud
 * S&P Global uses Cilium as its CNI
 * Sky uses Cilium as their CNI and for network security
 * The New York Times uses Cilium on EKS for multi-region multi-tenant shared clusters
 * Trip.com uses Cilium both on-premises and in AWS

Cilium is the CNI for many cloud providers including Alibaba, APPUiO, Azure, AWS, DigitalOcean, Exoscale, Google Cloud, Hetzner, and Tencent Cloud.

Cilium
Cilium began as a container networking project. With the growth of Kubernetes and container orchestration, Cilium became a CNI, providing core functionality such as configuring container network interfaces and Pod-to-Pod connectivity. From the beginning, Cilium based its networking on eBPF rather than iptables or IPVS, betting that eBPF would become the future of cloud native networking.

Cilium's eBPF-based dataplane provides a simple flat Layer 3 network that can span multiple clusters in either native routing or overlay mode using Cilium Cluster Mesh. It is Layer 7-protocol aware and can enforce network policies from Layer 3 to Layer 7, including FQDN-based rules, using an identity-based security model that is decoupled from network addressing.
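The idea of decoupling policy from addressing can be illustrated with a short sketch. This is pure illustrative Go: the types, label names, and numeric identities below are invented for the example and are not Cilium's actual API.

```go
package main

import "fmt"

// Identity is a numeric security identity derived from a workload's labels,
// independent of any IP address (illustrative only, not Cilium's real type).
type Identity uint32

// identities maps a label set to a stable numeric identity. In practice the
// identity survives Pod restarts and rescheduling even as IPs change.
var identities = map[string]Identity{
	"app=frontend": 1001,
	"app=backend":  1002,
}

// policy allows traffic between identities, not between addresses:
// key = source identity, value = set of allowed destination identities.
var policy = map[Identity]map[Identity]bool{
	1001: {1002: true}, // frontend may talk to backend
}

// allowed checks the identity pair; Pod IPs never enter the decision.
func allowed(src, dst Identity) bool {
	return policy[src][dst]
}

func main() {
	fmt.Println(allowed(identities["app=frontend"], identities["app=backend"])) // frontend -> backend
	fmt.Println(allowed(identities["app=backend"], identities["app=frontend"])) // backend -> frontend
}
```

Because the lookup is keyed on identity rather than IP, the same policy keeps working when a Pod is rescheduled and receives a new address.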

Cilium implements distributed load balancing for traffic between Pods and to external services, and is able to fully replace kube-proxy, using XDP, socket-based load balancing, and efficient hash tables in eBPF. It also supports advanced functionality like integrated ingress and egress gateways, bandwidth management, a stand-alone load balancer, and service mesh.

Cilium is the first CNI to support advanced kernel features such as BBR TCP congestion control and BIG TCP for Kubernetes Pods.

Hubble
Hubble is Cilium's observability component, providing a service map and UI, and ships with the CNI. It can be used to observe individual network packet flows, view network policy decisions to allow or block traffic, and build up service maps showing how Kubernetes services are communicating. Hubble can export this data to Prometheus, OpenTelemetry, Grafana, and Fluentd for further analysis of Layer 3/4 and Layer 7 metrics.
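The kind of flow-level view Hubble provides can be sketched as follows. This is illustrative Go only: the Flow type and its fields are invented for the example and are not Hubble's actual schema.

```go
package main

import "fmt"

// Flow is a simplified network flow record, loosely modeled on the kind of
// data Hubble exposes (illustrative fields, not Hubble's real schema).
type Flow struct {
	Source      string // workload identity, not a raw IP
	Destination string
	L7Protocol  string // e.g. "HTTP", "DNS"
	Verdict     string // "FORWARDED" or "DROPPED"
}

// dropped filters flows down to policy denials, the typical starting point
// when debugging a network policy that blocks traffic unexpectedly.
func dropped(flows []Flow) []Flow {
	var out []Flow
	for _, f := range flows {
		if f.Verdict == "DROPPED" {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	flows := []Flow{
		{"frontend", "backend", "HTTP", "FORWARDED"},
		{"frontend", "db", "HTTP", "DROPPED"},
	}
	for _, f := range dropped(flows) {
		fmt.Printf("%s -> %s (%s) dropped\n", f.Source, f.Destination, f.L7Protocol)
	}
}
```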

Tetragon
Tetragon is Cilium's security observability and runtime enforcement project: a flexible, Kubernetes-aware tool that applies policy and filtering directly with eBPF. It allows users to monitor and observe the complete lifecycle of every process execution on their machines, translate policies for file monitoring, network observability, container security, and more into eBPF programs, and perform synchronous monitoring, filtering, and enforcement entirely in the kernel.

Go eBPF Library
ebpf-go is a pure-Go library to interact with the eBPF subsystem in the Linux kernel. It has minimal external dependencies, emphasises reliability and compatibility, and is widely deployed in production.

Pwru
pwru ("Packet, where are you?") is an eBPF-based tool for tracing network packets in the Linux kernel with advanced filtering capabilities. It allows fine-grained introspection of kernel state to facilitate debugging network connectivity issues. Under the hood, pwru attaches eBPF debugging programs to all Linux kernel functions that are responsible for processing network packets.

This gives a user a finer-grained view into packet processing in the kernel than tcpdump, Wireshark, or more traditional tools provide. It can also show packet metadata such as the network namespace, processing timestamp, internal kernel packet representation fields, and more.

Networking
Cilium began as a networking project and has many features that allow it to provide a consistent connectivity experience from Kubernetes workloads to virtual machines and physical servers running in the cloud, on-premises, or at the edge. Some of these include:


 * Container Network Interface (CNI) - Provides networking for Kubernetes clusters
 * Layer 4 Load Balancer - Based on Maglev and XDP for handling north/south traffic
 * Cluster Mesh - Combines multiple Kubernetes clusters into one network
 * Bandwidth and Latency Optimization - Fair Queueing, TCP Optimization, and Rate Limiting
 * kube-proxy replacement - Replaces iptables with eBPF hash tables
 * BGP - Integrates into existing networks and provides load balancing in bare metal clusters
 * Egress Gateway - Provides a static IP for integration into external workloads
 * Service Mesh - Includes ingress, TLS termination, canary rollouts, rate limiting, and circuit breaking
 * Gateway API - Fully conformant implementation for managing ingress into Kubernetes clusters
 * SRv6 - Segment Routing over IPv6, which expresses packet processing in the network as a program
 * BBR support for Pods - Allows for better throughput and latency for Internet traffic
 * NAT 46/64 Gateway - Allows IPv4 services to talk with IPv6 ones and vice versa
 * BIG TCP for IPv4/IPv6 - Enables better performance by reducing the number of packets traversing the stack
 * Cilium Mesh - Connects workloads running outside Kubernetes to ones running inside it

Observability
Being in the kernel, eBPF has complete visibility of everything that is happening on a machine. Cilium leverages this with the following features:


 * Service Map - Provides a UI for network flows and policy
 * Network Flow Logs - Provides Layer 3/4 and DNS visibility connected to identity
 * Network Protocol Visibility - Including HTTP, gRPC, Kafka, UDP, and SCTP
 * Metrics & Tracing Export - Sends data to Prometheus, OpenTelemetry, or other storage systems

Security
eBPF can stop events in the kernel for security. Cilium projects leverage this through the following features:


 * Transparent Encryption - Utilizes either IPSec or WireGuard
 * Network Policy - Includes Layer 3 to Layer 7 and DNS-aware policies
 * Runtime Enforcement - Blocks process behaviour that falls outside defined policies, with default policies provided
 * File Integrity Monitoring - Tracks modifications to files on the system

Support windows
The chart below visualises the period for which each Cilium community maintained release is/was supported:

Community
Cilium's official website lists online forums, messaging platforms, and in-person meetups for the Cilium user and developer community.

Conferences
Past conferences dedicated to Cilium have included:


 * CiliumCon EU 2023, held in conjunction with KubeCon + CloudNativeCon EU 2023
 * CiliumCon NA 2023, held in conjunction with KubeCon + CloudNativeCon NA 2023
 * CiliumCon EU 2024, held in conjunction with KubeCon + CloudNativeCon EU 2024

Annual Report
The Cilium community releases an annual report to cover how the community developed over the course of the year:


 * Cilium Annual Report 2022: Year of the CNI
 * Cilium Annual Report 2023: Year of Graduation