eBPF: The Superpower Behind Modern Cloud-Native Security and Observability
Inside every Linux kernel running your containers, there’s a programmable hook point that can observe and intercept any system call, network packet, or kernel function call — in microseconds, with near-zero overhead, without modifying application code or restarting services.
That technology is eBPF (extended Berkeley Packet Filter), and in 2026 it powers the most important tools in the cloud-native stack: Cilium, Falco, Tetragon, Pixie, Parca, and more. If you’re running Kubernetes and you don’t understand eBPF, you’re operating in the dark.
What is eBPF?
eBPF is a virtual machine inside the Linux kernel that lets you run sandboxed programs in response to kernel events — without writing kernel modules, without risking system stability, without rebooting.
An eBPF program:
- Is written in a restricted subset of C (or Rust/Go with newer toolchains)
- Is compiled to eBPF bytecode
- Is verified by the kernel’s eBPF verifier (safety check — no infinite loops, no illegal memory access)
- Is JIT-compiled to native CPU instructions
- Is attached to a kernel hook point (system call, network event, function entry/exit)
The result: you can observe or modify kernel behavior in real-time with sub-microsecond overhead.
Application Code (Python, Go, Java...)
↓ syscall
Linux Kernel
├── kprobes (function entry/return)
├── tracepoints (predefined instrumentation points)
├── socket filters (network packets)
├── XDP (eXpress Data Path — packet processing before kernel networking)
└── LSM hooks (security decisions)
↑
eBPF Programs attach here — observe, filter, enforce
Why eBPF Changes Everything
Before eBPF
Observing what a process was doing required:
- Sidecar proxies: Inject Envoy into every pod, adds ~10ms latency per hop, doubles container count
- Application instrumentation: Add SDK code to every service, language-specific, invasive
- Kernel modules: Risky, tied to kernel version, crashes take the whole node down
- ptrace/strace: Massive overhead, not production-safe
For security:
- Audit logging: Noisy, post-hoc, can’t block threats in real-time
- Seccomp: Static, hard to update, requires restart to change
After eBPF
A single eBPF program loaded once can:
- Observe every network connection from every container on the node
- Trace every system call from every process
- Block a syscall pattern in microseconds
- Measure CPU profiles with <1% overhead
- Enforce network policies without iptables or sidecars
Zero code changes in your applications. No restarts. No sidecars.
eBPF in Practice: The Tool Landscape
Cilium: eBPF-Native Kubernetes Networking
Cilium replaces kube-proxy and provides L3-L7 network policies using eBPF — no iptables, no sidecars.
# Install Cilium (replaces kube-proxy)
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium \
--namespace kube-system \
--set kubeProxyReplacement=true \
--set hubble.enabled=true \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true
Cilium Network Policy (L7-aware):
# Block all traffic except authenticated API calls
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-policy
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: "GET"
                path: "/api/v1/.*"
              - method: "POST"
                path: "/api/v1/orders"
                headers:
                  - "Authorization: Bearer .*"
This policy enforces at L7: HTTP method, path, and headers — directly in the kernel via eBPF, with no Envoy sidecar.
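To make "L7-aware" concrete, here is the matching logic the two HTTP rules above express, sketched in plain Go. This models only the decision semantics (method, path, headers); it is not how Cilium implements enforcement in its datapath, and the type and function names are illustrative.

```go
package main

import (
	"fmt"
	"regexp"
)

// httpRule models one entry under rules.http in the policy above.
type httpRule struct {
	method  string
	path    *regexp.Regexp
	headers []*regexp.Regexp // each pattern must match at least one request header
}

var rules = []httpRule{
	{method: "GET", path: regexp.MustCompile(`^/api/v1/.*$`)},
	{
		method:  "POST",
		path:    regexp.MustCompile(`^/api/v1/orders$`),
		headers: []*regexp.Regexp{regexp.MustCompile(`^Authorization: Bearer .*$`)},
	},
}

// allows checks one rule: method and path must match, and every
// header pattern must be satisfied by some request header.
func (r httpRule) allows(method, path string, headers []string) bool {
	if method != r.method || !r.path.MatchString(path) {
		return false
	}
	for _, pat := range r.headers {
		matched := false
		for _, h := range headers {
			if pat.MatchString(h) {
				matched = true
				break
			}
		}
		if !matched {
			return false
		}
	}
	return true
}

// policyAllows admits a request if any rule matches; otherwise it is dropped.
func policyAllows(method, path string, headers []string) bool {
	for _, r := range rules {
		if r.allows(method, path, headers) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(policyAllows("GET", "/api/v1/users", nil))                                     // allowed
	fmt.Println(policyAllows("POST", "/api/v1/orders", []string{"Authorization: Bearer tok"})) // allowed
	fmt.Println(policyAllows("POST", "/api/v1/orders", nil))                                   // dropped: missing header
	fmt.Println(policyAllows("DELETE", "/api/v1/users", nil))                                  // dropped: method not listed
}
```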
Performance comparison:

| Metric | iptables + kube-proxy | Cilium (eBPF) |
|---|---|---|
| Policy lookup complexity | O(n) per packet | O(1) via BPF maps |
| Connection latency (p99) | 8ms | 2ms |
| CPU overhead (10K pods) | 15% | 3% |
| Node memory | 200MB | 80MB |
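The lookup-complexity row is the heart of the difference: iptables evaluates rules as an ordered chain, so each packet may touch every rule, while a BPF hash map resolves a key in constant time. A minimal sketch of the two access patterns (illustrative only, not real Cilium internals):

```go
package main

import "fmt"

// linearLookup walks an iptables-style chain and reports how many
// rules it compared before finding (or missing) the key.
func linearLookup(chain []string, key string) (bool, int) {
	for i, r := range chain {
		if r == key {
			return true, i + 1
		}
	}
	return false, len(chain)
}

// mapLookup models a BPF hash map: one bucket probe regardless of
// how many policy entries exist.
func mapLookup(policies map[string]bool, key string) bool {
	return policies[key]
}

func main() {
	chain := make([]string, 0, 10000)
	policies := make(map[string]bool, 10000)
	for i := 0; i < 10000; i++ {
		rule := fmt.Sprintf("pod-%d", i)
		chain = append(chain, rule)
		policies[rule] = true
	}

	ok, comparisons := linearLookup(chain, "pod-9999")
	fmt.Println(ok, comparisons)                 // matching the last rule costs 10000 comparisons
	fmt.Println(mapLookup(policies, "pod-9999")) // one hash lookup, same answer
}
```

At 10K pods the chain walk does 10,000 comparisons per packet in the worst case; the map does one, which is why the eBPF column stays flat as the cluster grows.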
Tetragon: eBPF-Powered Security Enforcement
Tetragon (from the Cilium project) takes eBPF security beyond observation: it can enforce security policies in real time, killing processes or blocking syscalls before they complete.
helm install tetragon cilium/tetragon -n kube-system
kubectl apply -f https://github.com/cilium/tetragon/releases/latest/download/tetragon-crds.yaml
TracingPolicy: Detect and kill privilege escalation:
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: block-privilege-escalation
spec:
  kprobes:
    - call: "security_task_setuid"
      syscall: false
      return: false
      args:
        - index: 0
          type: "int"
      selectors:
        - matchArgs:
            - index: 0
              operator: "Equal"
              values:
                - "0"   # setuid(0): becoming root
          matchActions:
            - action: Sigkill   # kill the process immediately
TracingPolicy: Alert on sensitive file access:
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: detect-secret-access
spec:
  kprobes:
    - call: "security_file_open"
      syscall: false
      args:
        - index: 0
          type: "file"
      selectors:
        - matchArgs:
            - index: 0
              operator: "Prefix"
              values:
                - "/etc/shadow"
                - "/var/run/secrets/kubernetes.io"
          matchActions:
            - action: Post   # send alert to the Tetragon gRPC API
These policies are enforced in the kernel — there’s no window between detection and enforcement for an attacker to exploit.
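The selector flow in these policies (match an argument with an operator, then apply an action) can be sketched in plain Go. This is a toy userspace model to show the semantics; Tetragon evaluates selectors in the kernel, and the type names below are illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// kprobeEvent is a simplified view of one hook invocation: the
// function name and its arguments rendered as strings.
type kprobeEvent struct {
	call string
	args []string
}

// argSelector models one matchArgs + matchActions pair.
type argSelector struct {
	index    int
	operator string // "Equal" or "Prefix", as in the policies above
	values   []string
	action   string // e.g. "Sigkill", "Post"
}

// evaluate returns the action of the first matching selector,
// or "Allow" when nothing matches.
func evaluate(e kprobeEvent, selectors []argSelector) string {
	for _, s := range selectors {
		if s.index >= len(e.args) {
			continue
		}
		arg := e.args[s.index]
		for _, v := range s.values {
			if (s.operator == "Equal" && arg == v) ||
				(s.operator == "Prefix" && strings.HasPrefix(arg, v)) {
				return s.action
			}
		}
	}
	return "Allow"
}

func main() {
	selectors := []argSelector{
		{index: 0, operator: "Equal", values: []string{"0"}, action: "Sigkill"},
	}
	// setuid(0) matches the selector and gets killed; setuid(1000) passes through.
	fmt.Println(evaluate(kprobeEvent{call: "security_task_setuid", args: []string{"0"}}, selectors))
	fmt.Println(evaluate(kprobeEvent{call: "security_task_setuid", args: []string{"1000"}}, selectors))
}
```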
Falco: Runtime Security Detection
Falco uses eBPF to monitor syscalls and detect suspicious behavior patterns:
helm install falco falcosecurity/falco \
--namespace falco \
--create-namespace \
--set driver.kind=ebpf
Custom Falco rules:
# fd.typechar = 4 matches IPv4 connections (the comment must sit outside
# the folded condition scalar, where "#" would be read as rule text)
- rule: Unexpected Network Connection from Container
  desc: Detect outbound network connections from containers that shouldn't have network access
  condition: >
    evt.type = connect
    and container.id != host
    and not container.name in (allowed_network_containers)
    and fd.typechar = 4
  output: >
    Unexpected outbound connection from container
    (user=%user.name container=%container.name image=%container.image.repository
    dest=%fd.rip:%fd.rport command=%proc.cmdline)
  priority: WARNING

- rule: Shell Spawned in Container
  desc: A shell was spawned inside a container (possible interactive attack or breakout)
  condition: >
    spawned_process
    and container.id != host
    and shell_procs
    and not container.name in (debug_containers)
  output: >
    Shell opened in container (user=%user.name container=%container.name
    shell=%proc.name parent=%proc.pname image=%container.image.repository)
  priority: CRITICAL
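The shell-spawn condition boils down to a few boolean predicates over the event, which a short Go sketch makes explicit. This is a toy model of the rule's logic (the macro contents and names are illustrative); eBPF delivers the syscall event, and Falco's rule engine evaluates it in userspace and emits an alert rather than blocking.

```go
package main

import "fmt"

// shellProcs stands in for Falco's shell_procs macro,
// debugContainers for the debug_containers list.
var shellProcs = map[string]bool{"bash": true, "sh": true, "zsh": true}
var debugContainers = map[string]bool{"debug-shell": true}

// suspiciousShell mirrors: spawned_process and container.id != host
// and shell_procs and not container.name in (debug_containers).
func suspiciousShell(procName, containerID, containerName string) bool {
	return containerID != "host" &&
		shellProcs[procName] &&
		!debugContainers[containerName]
}

func main() {
	fmt.Println(suspiciousShell("bash", "3f2a9c", "api-server"))  // alert: shell in a workload container
	fmt.Println(suspiciousShell("bash", "host", ""))              // ignored: shell on the host itself
	fmt.Println(suspiciousShell("bash", "3f2a9c", "debug-shell")) // ignored: allow-listed container
}
```

This detect-and-alert posture is the key contrast with Tetragon above, which can kill the process instead of just reporting it.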
Pixie: Zero-Instrumentation Observability
Pixie uses eBPF to automatically capture request/response data for all services — HTTP, gRPC, MySQL, PostgreSQL, Redis, Kafka — without any application changes.
# Install Pixie
px deploy
px run px/http_data # instantly see all HTTP traffic across the cluster
What Pixie captures automatically:
- Full request/response bodies (HTTP, gRPC)
- SQL queries with latency and row counts
- Redis commands
- Kafka produce/consume events
- CPU flamegraphs per process
- Network flow data
px run px/cluster # cluster overview
px run px/service_stats # per-service latency, error rate, throughput
px run px/mysql_stats # MySQL query analysis
px run px/cpu_flamegraph # continuous CPU profiling
No SDK. No OTEL collector. No sidecar. Just eBPF.
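How can a tracer see HTTP requests without an SDK? eBPF programs hooked on socket reads and writes capture the raw buffers, and the userspace agent parses protocol messages out of those bytes. A minimal request-line parser shows the idea (illustrative only; Pixie's protocol parsers are far more complete, and this function name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// parseRequestLine extracts method and path from a captured HTTP/1.x
// request buffer, the way a zero-instrumentation tracer reconstructs
// application traffic from socket data.
func parseRequestLine(raw []byte) (method, path string, ok bool) {
	// The request line ends at the first CRLF: "METHOD PATH HTTP/x.y".
	line, _, found := strings.Cut(string(raw), "\r\n")
	if !found {
		return "", "", false
	}
	parts := strings.Split(line, " ")
	if len(parts) != 3 || !strings.HasPrefix(parts[2], "HTTP/") {
		return "", "", false
	}
	return parts[0], parts[1], true
}

func main() {
	// A buffer as it might be captured from a socket write.
	buf := []byte("GET /api/v1/orders HTTP/1.1\r\nHost: api\r\n\r\n")
	m, p, ok := parseRequestLine(buf)
	fmt.Println(m, p, ok) // GET /api/v1/orders true
}
```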
Writing an eBPF Program (Using libbpf-go)
Here’s a minimal Go program that traces every execve syscall (process execution):
BPF Program (C)
// execve_tracer.bpf.c
// vmlinux.h provides kernel types such as trace_event_raw_sys_enter;
// generate it with: bpftool btf dump file /sys/kernel/btf/vmlinux format c > vmlinux.h
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct event {
    __u32 pid;
    __u32 uid;
    char comm[16];
    char filename[256];
};

struct {
    __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
    __uint(key_size, sizeof(__u32));
    __uint(value_size, sizeof(__u32));
} events SEC(".maps");

SEC("tracepoint/syscalls/sys_enter_execve")
int trace_execve(struct trace_event_raw_sys_enter *ctx) {
    struct event e = {};

    e.pid = bpf_get_current_pid_tgid() >> 32;
    e.uid = bpf_get_current_uid_gid() & 0xFFFFFFFF;
    bpf_get_current_comm(&e.comm, sizeof(e.comm));
    bpf_probe_read_user_str(&e.filename, sizeof(e.filename),
                            (void *)ctx->args[0]);

    bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, &e, sizeof(e));
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
Go Userspace (libbpf-go)
// main.go
package main

import (
    "bytes"
    "encoding/binary"
    "errors"
    "fmt"
    "os"
    "os/signal"

    "github.com/cilium/ebpf/link"
    "github.com/cilium/ebpf/perf"
    "github.com/cilium/ebpf/rlimit"
)

//go:generate go run github.com/cilium/ebpf/cmd/bpf2go tracer execve_tracer.bpf.c

func main() {
    // Remove the RLIMIT_MEMLOCK restriction (required for eBPF maps on older kernels)
    if err := rlimit.RemoveMemlock(); err != nil {
        panic(err)
    }

    // Load the compiled eBPF objects generated by bpf2go
    objs := tracerObjects{}
    if err := loadTracerObjects(&objs, nil); err != nil {
        panic(err)
    }
    defer objs.Close()

    // Attach the program to the execve tracepoint
    tp, err := link.Tracepoint("syscalls", "sys_enter_execve", objs.TraceExecve, nil)
    if err != nil {
        panic(err)
    }
    defer tp.Close()

    // Open a reader for the perf event array
    rd, err := perf.NewReader(objs.Events, os.Getpagesize())
    if err != nil {
        panic(err)
    }
    defer rd.Close()

    // rd.Read blocks, so close the reader on Ctrl+C to unblock it
    sig := make(chan os.Signal, 1)
    signal.Notify(sig, os.Interrupt)
    go func() {
        <-sig
        rd.Close()
    }()

    fmt.Println("Tracing execve syscalls... Ctrl+C to stop")
    for {
        record, err := rd.Read()
        if err != nil {
            if errors.Is(err, perf.ErrClosed) {
                return
            }
            continue
        }
        // Decode struct event at its fixed offsets: pid [0:4], uid [4:8],
        // comm [8:24], filename [24:280]; trim the NUL padding from strings
        pid := binary.LittleEndian.Uint32(record.RawSample[0:4])
        comm := string(bytes.TrimRight(record.RawSample[8:24], "\x00"))
        filename := string(bytes.TrimRight(record.RawSample[24:], "\x00"))
        fmt.Printf("PID: %d, COMM: %s, FILE: %s\n", pid, comm, filename)
    }
}
eBPF Security Model
eBPF programs require CAP_BPF (or CAP_SYS_ADMIN on older kernels). The kernel verifier enforces:
- No infinite loops (bounded loop iterations only)
- No uninitialized memory reads
- No out-of-bounds memory access
- Stack size limit (512 bytes)
- Program complexity limit (1M instructions)
Important: eBPF is a privileged API. Any process that can load eBPF programs effectively has deep kernel observability. In Kubernetes, eBPF-based tools run as DaemonSets with elevated privileges — protect them accordingly.
The eBPF Tool Map in 2026
| Category | Tool | Key Feature |
|---|---|---|
| Networking | Cilium | eBPF CNI, L7 policy, no iptables |
| Security | Tetragon | Runtime enforcement, process kill |
| Security Detection | Falco | Syscall-based anomaly detection |
| Observability | Pixie | Auto-instrumentation, no SDK needed |
| Profiling | Parca | Continuous CPU profiling |
| Tracing | Beyla | Auto-instrumentation for OTEL traces |
| Performance | bpftop | Real-time eBPF program monitor |
| Network Debug | retis | Packet flow tracing through kernel |
Conclusion
eBPF is not just a technology — it’s a paradigm shift. The ability to observe and enforce anything in the kernel, with no application changes, no sidecars, no restarts, and near-zero overhead is fundamentally changing how we build cloud-native infrastructure.
In 2026, you’re probably already using eBPF whether you know it or not: Cilium is the default CNI for many Kubernetes distributions, Falco ships with major cloud security platforms, and the biggest observability vendors are building on eBPF backends.
Understanding eBPF means understanding why Cilium is faster than kube-proxy, why Tetragon can block an attack in microseconds, and why Pixie can show you SQL queries without touching your application. That understanding makes you a better platform engineer — and a better operator.
The kernel has always been the source of truth. eBPF finally lets you read it.