Home Kubernetes Cluster
A 4-node Kubernetes cluster built on Orange Pi 5 single-board computers. It runs real workloads — an AI agent platform, home automation, graph databases, and security scanning — not as a learning exercise alone, but as infrastructure I depend on daily.
Why Build This
Three reasons converged:
Learning by doing. I wanted vanilla Kubernetes experience — not a managed service where the hard parts are abstracted away, and not a simplified distribution that hides the operational reality. Setting up kubeadm on ARM64 SBCs means confronting every layer: CNI networking, storage provisioning, certificate rotation, etcd health, kubelet configuration. The kind of understanding you can’t get from documentation alone.
A platform for AI agents. I’m building OpenClaw, an AI agent platform that needs always-on infrastructure with persistent storage, network policies, and the ability to run multiple interconnected services. A home cluster gives me full control over the stack without cloud costs that scale with experimentation.
Production-grade homelab. Home Assistant controls physical systems in my house. That demands reliability — not “hobby project” reliability, but actual operational discipline. Running it on Kubernetes forces me to think about high availability, rolling updates, and failure recovery for something that matters.
Architecture
```mermaid
graph TB
  subgraph tailscale["Tailscale Mesh Network"]
    direction TB
    subgraph cluster["Kubernetes Cluster — v1.28.2 (kubeadm)"]
      direction LR
      subgraph node1["Node 1 — Control Plane"]
        api["API Server / etcd"]
        cilium1["Cilium Agent"]
      end
      subgraph node2["Node 2"]
        openclaw["OpenClaw Platform"]
        mcp["MCP Gateway"]
        signal["Signal CLI"]
        cilium2["Cilium Agent"]
      end
      subgraph node3["Node 3"]
        ha["Home Assistant"]
        falkor["FalkorDB (Graphiti)"]
        tools["MCP Tool Servers"]
        cilium3["Cilium Agent"]
      end
      subgraph node4["Node 4"]
        workloads["Additional Workloads"]
        cilium4["Cilium Agent"]
      end
    end
    subgraph storage["Longhorn Distributed Storage"]
      vol1["Replicated Volumes (2x)"]
    end
  end
  cluster --> storage
  tailscale -.->|"Encrypted Overlay"| internet["Remote Access"]
```

Each node is an Orange Pi 5 — Rockchip RK3588S, 8 ARM Cortex cores, 16 GB RAM. Total cluster capacity: 32 cores, 64 GB RAM. Modest by cloud standards, substantial for a homelab.
Technology Choices
Kubernetes via kubeadm
Not K3s, not MicroK8s. Vanilla Kubernetes deployed with kubeadm. The tradeoff is more operational overhead in exchange for a cluster that behaves exactly like production Kubernetes everywhere else. When I troubleshoot an issue here, the knowledge transfers directly to any enterprise or cloud deployment.
Version: v1.28.2
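For a sense of what the bootstrap looks like, here is a minimal kubeadm config sketch; the file name, pod CIDR, and the kube-proxy skip are illustrative assumptions rather than this cluster's actual configuration.

```yaml
# kubeadm-config.yaml (illustrative sketch, not this cluster's real config)
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
skipPhases:
  - addon/kube-proxy            # Cilium replaces kube-proxy, so skip the addon
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.2
networking:
  podSubnet: 10.244.0.0/16      # hypothetical pod CIDR handed to the CNI
```

Bootstrapping the control plane is then `kubeadm init --config kubeadm-config.yaml`, with the CNI installed before any real workloads schedule.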
Cilium CNI
Cilium replaces kube-proxy and handles all networking via eBPF programs attached directly to the Linux kernel. Two reasons this matters on resource-constrained nodes:
- Performance. eBPF avoids the iptables chains that scale poorly with service count. On nodes with 16 GB RAM running multiple workloads, efficiency matters.
- Network policies. CiliumNetworkPolicy resources provide L3-L7 policy enforcement. Every namespace has explicit ingress/egress rules. Home Assistant, which controls physical devices, gets locked down to only the traffic it needs (see the sketch below).
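A minimal sketch of such a policy, assuming a home-assistant namespace, an `app: home-assistant` pod label, and ingress-nginx as the only allowed source (8123 is Home Assistant's default HTTP port):

```yaml
# Hypothetical policy: only the ingress controller may reach Home Assistant on 8123
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: home-assistant-ingress
  namespace: home-assistant          # assumed namespace
spec:
  endpointSelector:
    matchLabels:
      app: home-assistant            # assumed pod label
  ingress:
    - fromEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: ingress-nginx
      toPorts:
        - ports:
            - port: "8123"
              protocol: TCP
```

Because the policy selects endpoints by identity rather than by IP, it keeps holding as pods reschedule across nodes.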
Longhorn Storage
Distributed block storage across all four nodes with 2x replication. When a node goes down for maintenance, volumes remain available. Longhorn was chosen because it’s designed for commodity hardware — it doesn’t assume enterprise SSDs or dedicated storage networks. It runs on the same disks the OS uses, which is exactly the constraint SBCs impose.
Version: v1.10.1, 2x replication factor
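In practice the replication factor lives in the StorageClass. A sketch of what that looks like, where the class name and staleReplicaTimeout are assumptions and only numberOfReplicas reflects the 2x factor above:

```yaml
# Illustrative Longhorn StorageClass carrying the 2x replication factor
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-2x                 # assumed class name
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
parameters:
  numberOfReplicas: "2"             # two copies, each placed on a different node
  staleReplicaTimeout: "2880"       # minutes before a failed replica is discarded
```

With two replicas, any single node can be drained or lost and every volume still has a healthy copy to serve from.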
Tailscale
Every node joins a Tailscale mesh network. No ports exposed to the public internet. Remote access works through WireGuard tunnels with identity-based authentication. This is the only path into the cluster from outside the local network.
GitOps with Helm
All workloads are defined as Helm charts and deployed through a GitOps workflow. Infrastructure changes go through version control. This isn’t optional when you’re running a cluster you can’t physically access half the time — you need to know exactly what’s deployed and why.
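The GitOps deep dive below covers the Flux v2 setup; as a rough sketch of the shape of a workload declaration (the chart, repository, and interval here are placeholders, not the real manifests):

```yaml
# Hypothetical Flux HelmRelease; chart, repo, and interval are placeholders
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: home-assistant
  namespace: home-assistant
spec:
  interval: 10m                     # how often Flux reconciles desired vs. actual state
  chart:
    spec:
      chart: home-assistant
      version: "0.x"                # placeholder version range
      sourceRef:
        kind: HelmRepository
        name: my-charts             # assumed HelmRepository source
        namespace: flux-system
```

Flux watches the Git repository, renders the chart, and applies it; rolling back a bad change is a matter of reverting the commit.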
What Runs On It
| Workload | Purpose |
|---|---|
| OpenClaw | AI agent platform — long-running autonomous agents with tool access, memory, and inter-agent communication |
| Home Assistant | Home automation — thermostat, lighting, presence detection, physical device control |
| MCP Gateway | Model Context Protocol router — connects AI agents to tools and data sources |
| Signal CLI | Messaging integration — agents can send and receive Signal messages |
| FalkorDB | Graph database backing Graphiti — episodic and semantic memory for AI agents |
| MCP Tool Servers | Various tool servers — web search, file access, calendar, and custom integrations |
| Security Scanning | Trivy and kube-bench for vulnerability scanning and CIS benchmark compliance |
Deep Dives
Each major infrastructure domain has a dedicated page with architecture details, configuration specifics, and operational lessons:
- Networking — Cilium eBPF dataplane, CiliumNetworkPolicy patterns, MetalLB L2 load balancing, ingress-nginx routing
- Storage — Longhorn distributed block storage, 2x replication strategy, PVC patterns, backup roadmap
- GitOps — Flux v2 reconciliation, Helm chart management, deployment workflows, secrets with Vault and External Secrets Operator
Challenges and Lessons
ARM64 Compatibility
Not everything publishes ARM64 container images. Every new tool requires checking multi-arch support before adoption. Some projects publish linux/amd64 only, which means either finding alternatives, building from source, or contributing ARM64 support upstream. This is getting better year over year, but it’s still a real constraint.
Vendor Kernels
The Orange Pi 5 runs a Rockchip vendor kernel (6.1.115-vendor-rk35xx). This isn’t mainline Linux — it includes proprietary patches for hardware support. The practical impact: kernel features like BTF (BPF Type Format) may not be available, which affects tools that depend on CO-RE (Compile Once, Run Everywhere) eBPF. Falco, for example, needs to fall back to its kernel module driver instead of modern eBPF on these nodes.
Resource Constraints
16 GB of RAM per node sounds generous until you’re running a graph database, an AI agent platform, a home automation system, and Kubernetes system components on the same hardware. Resource requests and limits aren’t aspirational here — they’re load-bearing. Every workload gets explicit CPU and memory bounds, and I’ve learned exactly what happens when you get them wrong.
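A hypothetical pod spec showing the shape of those bounds; the workload name, image, and numbers are illustrative, not the cluster's real budgets:

```yaml
# Hypothetical workload; the numbers are illustrative, not the real budgets
apiVersion: v1
kind: Pod
metadata:
  name: example-agent
spec:
  containers:
    - name: app
      image: ghcr.io/example/app:latest   # placeholder image
      resources:
        requests:
          cpu: 500m          # scheduler guarantee: half a core reserved on the node
          memory: 1Gi        # counted against the node's 16 GB of allocatable RAM
        limits:
          cpu: "1"           # hard ceiling of one core
          memory: 2Gi        # exceed this and the container is OOM-killed
```

Requests determine where the scheduler will place a pod; limits determine what happens under contention, which is where the lessons get learned.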
Running Production on SBCs
Single-board computers aren’t designed for 24/7 server workloads. Thermal management matters — sustained CPU load on passively cooled boards will thermal-throttle. Storage I/O on eMMC or SD cards has different reliability characteristics than enterprise SSDs. Power supplies need to be reliable; an unstable 5V rail takes out a node. These are infrastructure problems, not Kubernetes problems, but they’re inseparable when your data center is a shelf in your office.
Operational Reality
This cluster has taught me more about Kubernetes operations than any managed service could. Certificate expiration, etcd compaction, node drain procedures, storage rebalancing after a node failure — these are things you read about in documentation but internalize only when they happen at 11 PM and Home Assistant stops working.
Infrastructure Specs
| Component | Detail |
|---|---|
| Nodes | 4× Orange Pi 5 (RK3588S) |
| CPU | 8 cores per node (32 total) — ARM Cortex-A76/A55 |
| RAM | 16 GB per node (64 GB total) |
| Kubernetes | v1.28.2 via kubeadm |
| CNI | Cilium with eBPF |
| Storage | Longhorn v1.10.1, 2× replication |
| Networking | Tailscale mesh (WireGuard) |
| Deployment | Helm charts, GitOps |
| Kernel | 6.1.115-vendor-rk35xx (Rockchip) |
| Architecture | ARM64 (aarch64) |