eBPF in 2026: From Kernel Observability to Application Security
On eBPF, Linux, Observability, Security, Networking, DevOps
When eBPF was introduced as a Linux networking feature in the mid-2010s, the goal was simple: let administrators write small, sandboxed programs that could filter packets in the kernel without modifying kernel source code. A useful-but-niche capability.
In 2026, eBPF is the substrate for some of the most important infrastructure software in the cloud-native world. Cilium runs Kubernetes networking and security policy. Falco and Tetragon use eBPF for real-time security monitoring. Pyroscope and Parca do continuous profiling without instrumentation. Beyla provides automatic observability for any language runtime without modifying application code.
The promise of eBPF was kernel programmability. The reality turned out to be something broader: zero-touch observability and security for the entire software stack.
What eBPF Actually Is
eBPF (extended Berkeley Packet Filter) is a virtual machine inside the Linux kernel. You write a small program in C (or Rust, or Go with the right libraries), compile it to eBPF bytecode, and load it into the kernel. The kernel verifies the bytecode is safe (no unbounded loops, no kernel crashes possible), then JIT-compiles it to native machine code and attaches it to a hook point.
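One verifier rule can be illustrated with a toy model in Python. This is a deliberate simplification — the real verifier also tracks register types, memory bounds, and (on modern kernels) allows provably bounded loops — but it captures the core idea: static analysis of control flow before anything runs.

```python
# Toy model of one verifier rule: reject programs whose control flow
# could loop forever. The real eBPF verifier does far more (register
# typing, memory-bounds tracking, bounded-loop analysis), but the
# principle is the same: analyze the bytecode before it ever runs.

def verify(program):
    """program: list of (opcode, jump_offset) tuples.

    A non-positive jump offset means a backward (or self) jump,
    which could form an unbounded loop, so we reject it."""
    for pc, (opcode, offset) in enumerate(program):
        if opcode == "jmp" and offset <= 0:
            return False  # backward jump: possible unbounded loop
        if opcode == "jmp" and pc + offset >= len(program):
            return False  # jump past the end: invalid control flow
    return True

safe = [("load", 0), ("jmp", 2), ("add", 0), ("exit", 0)]
looping = [("load", 0), ("jmp", -1), ("exit", 0)]

print(verify(safe))     # True
print(verify(looping))  # False
```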
Hook points are everywhere:
- Network: packet ingress/egress, socket operations, TC (traffic control)
- System calls: before and after any syscall
- Kernel functions: kprobes attach to arbitrary kernel functions
- User space: uprobes attach to specific functions in user-space binaries
- Tracepoints: stable, documented hooks for common kernel events
- Performance monitoring: hardware PMU events (CPU cycles, cache misses)
The critical property: eBPF programs run in the kernel, so they observe everything — every syscall, every network packet, every function call — with very low overhead. No sampling of requests, no agent injected into processes, and no out-of-tree kernel modules (which can crash the system and complicate kernel upgrades).
eBPF for Observability: The No-Instrumentation Promise
The traditional approach to application observability requires code changes: add a logging library, instrument your HTTP handlers, configure a tracing SDK. This works but creates friction — new services need setup, polyglot organizations need multiple SDK versions, and vendors need to maintain SDKs for every language.
eBPF changes the model: observe at the syscall and network layer, reconstruct the application-level signals from what the kernel sees.
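The reconstruction step is easy to underestimate: an eBPF probe on socket reads and writes sees raw bytes, and the tooling parses application-level meaning back out of them. A minimal Python sketch of the idea, with illustrative function names (not any real tool's internals):

```python
# Sketch of signal reconstruction: given raw bytes observed at the
# socket layer (what an eBPF probe on read/write would capture),
# recover HTTP-level span attributes. Function names are illustrative.

def parse_request(buf: bytes):
    """Extract method and path from the first line of an HTTP/1.x request."""
    method, path, _ = buf.split(b"\r\n", 1)[0].split(b" ", 2)
    return method.decode(), path.decode()

def parse_status(buf: bytes):
    """Extract the status code from the first line of an HTTP/1.x response."""
    return int(buf.split(b"\r\n", 1)[0].split(b" ")[1])

req = b"GET /api/checkout HTTP/1.1\r\nHost: shop\r\n\r\n"
resp = b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"

method, path = parse_request(req)
status = parse_status(resp)
print({"http.method": method, "http.route": path, "http.status_code": status})
```

Pair these attributes with syscall timestamps and you have a latency span — no SDK in the application required.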
Automatic Distributed Tracing
Beyla (from Grafana Labs) demonstrates this most concretely. It uses eBPF kprobes and uprobes to hook into HTTP and gRPC handling in any language runtime — Go, Java, Python, Node.js, Rust — without any code changes.
# Deploy Beyla as a DaemonSet in Kubernetes
# It automatically generates traces for all services on each node
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: beyla
spec:
  selector:
    matchLabels: {app: beyla}
  template:
    metadata:
      labels: {app: beyla}
    spec:
      hostPID: true                  # Required for process inspection
      containers:
        - name: beyla
          image: grafana/beyla:latest
          env:
            - name: BEYLA_TRACE_PRINTER   # Optional: also print traces to stdout
              value: text
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: http://otel-collector:4318
          securityContext:
            privileged: true         # Required for eBPF program loading
          volumeMounts:
            - mountPath: /sys/kernel/debug
              name: kernel-debug
      volumes:
        - name: kernel-debug
          hostPath: {path: /sys/kernel/debug}
EOF
Every HTTP request to every service on every node is now traced, with spans that show HTTP method, route, status code, and latency — without touching a single line of application code.
The data quality is surprisingly good. eBPF can see the HTTP/2 frames, the TLS handshakes (some hooking required), and the syscall timing. For most observability needs, this matches what you’d get from manual instrumentation.
Continuous Profiling
eBPF enables continuous profiling — capturing CPU stack traces across all processes constantly — with overhead measured in low single-digit percentages. The tooling stack here is Parca (open source) and Pyroscope (also open source, acquired by Grafana).
The workflow change this enables: instead of profiling on-demand when you suspect a performance problem, you profile everything all the time, and correlate performance with deployments and incidents after the fact.
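Stripped of the eBPF machinery, a continuous profiler is an aggregator over sampled stacks: periodically capture the call stack of every running thread, fold identical stacks together, and count. A sketch with hand-written stacks (a real profiler reads them from the kernel via eBPF):

```python
# Sketch of what a continuous profiler aggregates: periodic stack
# samples folded into per-stack counts — the input format for a
# flamegraph. Stacks here are hand-written for illustration.
from collections import Counter

samples = [
    ("main", "handle_request", "json_encode"),
    ("main", "handle_request", "json_encode"),
    ("main", "handle_request", "db_query"),
    ("main", "gc"),
]

# Fold each stack into the semicolon-separated "folded stack" format
folded = Counter(";".join(stack) for stack in samples)
for stack, count in folded.most_common():
    print(stack, count)
```

Because samples are cheap to store, you keep them for every process all the time, then slice by time window and service label when an incident happens.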
# Query Parca for CPU time spent in the last hour,
# grouped by function, for the payment-service
parca query \
  --query 'process_cpu:cpu:nanoseconds:cpu:nanoseconds{service="payment-service"}' \
  --start -1h \
  --aggregate 'sum by (function)'
The flamegraph you get back shows exactly where CPU time went, across all instances of the service, for the entire hour — even if the performance issue already resolved.
eBPF for Security: Runtime Threat Detection
The security use case for eBPF might be the most impactful, and it’s still underutilized in most organizations.
Traditional runtime security tools (antivirus, HIDS) work from user space and can be evaded by an attacker who has compromised the OS. An eBPF security tool running in the kernel has a more privileged vantage point — it sees everything before user-space tools get a chance to see it.
Tetragon: Policy Enforcement at the Kernel Level
Tetragon (from Cilium project) goes beyond detection to enforcement. It can apply security policies at the kernel level, blocking malicious syscalls before they complete:
# TracingPolicy: block execve of common reverse shell tools
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: block-reverse-shells
spec:
  kprobes:
    - call: "security_bprm_check"
      syscall: false
      args:
        - index: 0
          type: "linux_binprm"
      selectors:
        - matchBinaries:
            - operator: "In"
              values:
                - "/bin/nc"
                - "/bin/netcat"
                - "/bin/ncat"
                - "/usr/bin/python3"   # when called from suspicious parent process
          matchActions:
            - action: Sigkill          # Kill the process before exec completes
This policy is enforced at the security_bprm_check kernel hook — it fires before execve completes. An attacker can’t bypass it by modifying user-space libraries or ptrace-injecting into running processes.
Real-Time Behavioral Analysis
The more powerful pattern is behavioral analysis: build a baseline of normal behavior for each container, alert when behavior deviates.
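The baseline-then-alert logic itself is simple; the hard part is the trustworthy event stream, which is what eBPF provides. A minimal sketch of the pattern, keyed on (binary, destination) pairs — real tools key on much richer context (namespaces, parent processes, arguments):

```python
# Sketch of baseline-then-alert behavioral analysis: record which
# (process, destination) pairs a container exhibits during a learning
# window, then flag anything outside that set. The event shape and
# field names here are illustrative.

def build_baseline(events):
    """Learn the set of (binary, destination) pairs seen during baselining."""
    return {(e["binary"], e["dst"]) for e in events}

def deviations(baseline, events):
    """Return live events that fall outside the learned baseline."""
    return [e for e in events if (e["binary"], e["dst"]) not in baseline]

learning = [
    {"binary": "/app/server", "dst": "db.internal:5432"},
    {"binary": "/app/server", "dst": "cache.internal:6379"},
]
live = [
    {"binary": "/app/server", "dst": "db.internal:5432"},
    {"binary": "/bin/sh", "dst": "203.0.113.9:4444"},  # suspicious: new binary + external dst
]

baseline = build_baseline(learning)
print(deviations(baseline, live))
```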
# Tetragon policy: alert on network connections from processes that shouldn't make them
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: detect-lateral-movement
spec:
  kprobes:
    - call: "tcp_connect"
      syscall: false
      args:
        - index: 0
          type: "sock"
      selectors:
        - matchNamespaces:
            - namespace: Mnt
              operator: In
              values: ["container_mnt_ns_id"]   # inject at deployment time
          matchArgs:
            - index: 0
              operator: "NotDAddr"    # destination address outside these CIDRs
              values:
                - "10.0.0.0/8"        # internal
                - "172.16.0.0/12"     # internal
          matchActions:
            - action: Post            # Log and alert; don't block (adjust after baselining)
This fires any time a container process makes a TCP connection to an external IP. For a container that’s supposed to only talk to internal services, this is an immediate signal of potential compromise or misconfiguration.
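The CIDR test at the heart of that policy is worth seeing in isolation. A Python sketch using the standard-library ipaddress module (the internal ranges mirror the policy above):

```python
# The CIDR logic behind the lateral-movement policy, sketched in
# Python: a destination is "external" if it falls outside every
# internal range listed in the policy.
import ipaddress

INTERNAL = [ipaddress.ip_network(c) for c in ("10.0.0.0/8", "172.16.0.0/12")]

def is_external(dst: str) -> bool:
    addr = ipaddress.ip_address(dst)
    return not any(addr in net for net in INTERNAL)

print(is_external("10.2.3.4"))     # False: internal, no alert
print(is_external("203.0.113.9"))  # True: would trigger the alert
```

In the kernel, Tetragon performs the equivalent prefix match inside the eBPF program itself, so no event even reaches user space for internal traffic.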
eBPF Networking: Cilium and the End of kube-proxy
Cilium, with its eBPF-based kube-proxy replacement, became the recommended CNI for high-performance Kubernetes clusters in 2024. In 2026, it's the default for most major managed Kubernetes offerings.
The performance argument is straightforward. kube-proxy uses iptables to implement Service load balancing. iptables is sequential — each packet traverses every rule. In a large cluster with thousands of Services and tens of thousands of Endpoints, iptables tables grow to hundreds of thousands of rules, and latency grows proportionally.
Cilium’s XDP (eXpress Data Path) and TC eBPF programs implement Service routing with O(1) hash lookups. Latency doesn’t grow with cluster size.
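The data-structure difference is the whole argument, and it fits in a few lines. A Python sketch comparing sequential rule matching against a hash-map lookup (rule counts and VIPs are made up for illustration):

```python
# Sketch of the data-structure difference: kube-proxy-style sequential
# rule matching vs. a hash-map lookup like the one Cilium's eBPF
# datapath uses. VIPs and rule counts are illustrative.

rules = [(f"10.96.0.{i}", f"backend-{i}") for i in range(1, 200)]  # ordered rule list
table = dict(rules)                                                # hash map (eBPF map analogue)

def iptables_style(vip):
    for r_vip, backend in rules:   # O(n): walk the rules until one matches
        if r_vip == vip:
            return backend
    return None

def ebpf_style(vip):
    return table.get(vip)          # O(1): a single hash lookup

print(iptables_style("10.96.0.199"))  # backend-199, after 199 comparisons
print(ebpf_style("10.96.0.199"))      # backend-199, one lookup
```

With 199 rules the difference is negligible; with hundreds of thousands, the linear walk is the latency story from the previous paragraph.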
The operational argument is equally strong. eBPF programs and maps can be updated atomically: Cilium can change routing rules across an entire cluster in milliseconds, with no packet loss during the transition. iptables updates are not atomic — rule flushes leave brief windows of inconsistent state.
# Check Cilium's eBPF program attachment
cilium bpf policy get --all
# Shows the eBPF policy maps used for network policy enforcement

# Visualize service load balancing
cilium service list
# Shows all Services and their backend Endpoints as eBPF maps

# Load-balancer datapath metrics
cilium metrics list | grep bpf_lb
# bpf_lb_service_lookups_total          (successful hash lookups)
# bpf_lb_service_lookups_failed_total   (rule misses; should stay at 0)
Getting Started Without Going Deep
You don’t need to write eBPF programs to benefit from eBPF. The ecosystem has matured to the point where you consume eBPF through high-level tools:
For observability: Deploy Beyla or Pixie for automatic distributed tracing. Deploy Parca or Pyroscope for continuous profiling. Neither requires application changes.
For security: Deploy Falco for anomaly detection. Deploy Tetragon if you want enforcement capabilities. Both have Helm charts and work on any Kubernetes cluster.
For networking: Migrate to Cilium. The migration from Flannel or Calico is documented and typically takes a few hours. The performance and operational improvements are immediate.
For custom observability: Start with BCC (Python-based eBPF scripting) or bpftrace (a high-level tracing language). Write a 10-line bpftrace script to trace a specific syscall, see the data. That’s usually enough to get hooked.
# bpftrace: trace all read() syscalls and their sizes, for the nginx process
bpftrace -e '
  tracepoint:syscalls:sys_enter_read /comm == "nginx"/ {
    @read_sizes = hist(args->count);
  }
  interval:s:5 { print(@read_sizes); clear(@read_sizes); }
'
This runs without modifying nginx, requires no restart, and prints a histogram of read sizes every 5 seconds. The data is ground truth — it's what the kernel actually sees, not what the application reports.
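bpftrace's hist() builds power-of-two histograms, which is why the output buckets double in width. The bucketing itself is trivial; a Python sketch with made-up read sizes:

```python
# What bpftrace's hist() does under the hood: power-of-two bucketing.
# Read sizes here are made up for illustration.

def log2_bucket(n: int) -> int:
    """Bucket k covers the range [2^k, 2^(k+1))."""
    return n.bit_length() - 1 if n > 0 else 0

sizes = [3, 5, 100, 130, 4096, 4100, 5000]
hist = {}
for s in sizes:
    k = log2_bucket(s)
    hist[k] = hist.get(k, 0) + 1

# Print in the bpftrace style: range, then a bar of @ marks
for k in sorted(hist):
    print(f"[{2**k}, {2**(k+1)}) {'@' * hist[k]}")
```

Power-of-two buckets are a deliberate trade-off: they can be updated in the kernel with a couple of bit operations per event, keeping probe overhead minimal.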
eBPF is probably the most important Linux technology of the last decade. The tools built on it are already better than what came before, and they’re still improving rapidly.
ebpf.io has the canonical documentation and a curated list of projects. Liz Rice's book Learning eBPF (available as a free PDF) is the best technical introduction available.
