Controller-Runtime Deployment & Usage Guide
A comprehensive guide for building and deploying Kubernetes controllers using the controller-runtime library.
1. Prerequisites
Required Tools
- Go: Version 1.24+ (required for controller-runtime v0.22.x)
- Kubernetes Cluster: v1.28+ recommended (compatibility matrix below)
- kubectl: Configured to communicate with your cluster
- Docker: For containerizing controllers (if deploying to cluster)
Optional Tools
- Kubebuilder or Operator SDK: Recommended for scaffolding new projects
- kustomize: For managing Kubernetes manifests
Version Compatibility Matrix
| controller-runtime | k8s.io/*, client-go | Minimum Go |
|---|---|---|
| v0.22.x | v0.34 | 1.24 |
| v0.21.x | v0.33 | 1.24 |
| v0.20.x | v0.32 | 1.23 |
2. Installation
As a Library Dependency
Initialize your Go module and add controller-runtime:
go mod init example.com/my-controller
go get sigs.k8s.io/controller-runtime@latest
Using Kubebuilder (Recommended)
For new projects, use Kubebuilder to scaffold a complete controller:
# Install Kubebuilder
curl -L -o kubebuilder "https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)"
chmod +x kubebuilder && mv kubebuilder /usr/local/bin/
# Create project
mkdir my-controller && cd my-controller
kubebuilder init --domain example.com --repo example.com/my-controller
kubebuilder create api --group apps --version v1 --kind MyResource
3. Configuration
Manager Configuration
The Manager coordinates controllers, clients, and caches. Key configuration options from pkg/manager/manager.go:
import (
	"time"

	"k8s.io/utils/ptr"
	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/controller-runtime/pkg/manager"
	"sigs.k8s.io/controller-runtime/pkg/metrics/server"
)

func main() {
	mgr, err := manager.New(config.GetConfigOrDie(), manager.Options{
		// Leader Election (duration fields are *time.Duration)
		LeaderElection:   true,
		LeaderElectionID: "my-controller-leader-election",
		LeaseDuration:    ptr.To(15 * time.Second), // default: 15s
		RenewDeadline:    ptr.To(10 * time.Second), // default: 10s
		RetryPeriod:      ptr.To(2 * time.Second),  // default: 2s
		// Health Probes
		HealthProbeBindAddress: ":8081",
		ReadinessEndpointName:  "/readyz",  // default
		LivenessEndpointName:   "/healthz", // default
		// Metrics
		Metrics: server.Options{
			BindAddress: ":8080",
		},
		// Graceful Shutdown
		GracefulShutdownTimeout: ptr.To(30 * time.Second), // default
		// Cache Configuration
		Cache: cache.Options{
			SyncPeriod: ptr.To(10 * time.Hour), // default: 10h
		},
	})
	if err != nil {
		panic(err)
	}
	_ = mgr // ... register controllers, then mgr.Start(...) ...
}
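With HealthProbeBindAddress set, checks are typically registered on the manager so the probe endpoints exercise real logic. A minimal sketch using the built-in healthz.Ping checker, which always succeeds (register domain-specific checks in its place as needed):

```go
import "sigs.k8s.io/controller-runtime/pkg/healthz"

// Serve 200 on /healthz and /readyz via the built-in ping checker.
if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
	panic(err)
}
if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
	panic(err)
}
```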
Client Configuration
From pkg/client/client.go, configure the Kubernetes client:
import (
	"k8s.io/utils/ptr"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// Create client with options
c, err := client.New(cfg, client.Options{
	Scheme: scheme,
	Mapper: mapper,
	DryRun: ptr.To(true), // send all requests as server-side dry-run
	Cache: &client.CacheOptions{
		Reader: cacheReader, // serve reads from the cache (e.g. mgr.GetCache())
	},
})
if err != nil {
	panic(err)
}

// Wrap the client to set a server-side apply field manager on all writes
c = client.WithFieldOwner(c, "my-controller")
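Reads and writes then go through the same interface. A short sketch (the ConfigMap name and the ctx variable are placeholders, not part of the original):

```go
// Fetch a ConfigMap, mutate it, and write it back.
var cm corev1.ConfigMap
key := client.ObjectKey{Namespace: "default", Name: "my-config"}
if err := c.Get(ctx, key, &cm); err != nil {
	return err
}
if cm.Data == nil {
	cm.Data = map[string]string{}
}
cm.Data["managed"] = "true"
if err := c.Update(ctx, &cm); err != nil {
	return err
}
```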
Cache and Informer Options
From pkg/cache/cache.go:
import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

cacheOpts := cache.Options{
	// Only cache ConfigMaps from the kube-system namespace
	ByObject: map[client.Object]cache.ByObject{
		&corev1.ConfigMap{}: {
			Field: fields.SelectorFromSet(fields.Set{
				"metadata.namespace": "kube-system",
			}),
		},
	},
}
4. Build & Run
Local Development (Out-of-Cluster)
Use your local kubeconfig (automatically detected):
// main.go
import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
	// Automatically uses $KUBECONFIG or ~/.kube/config
	cfg, err := config.GetConfig()
	if err != nil {
		panic(err)
	}
	mgr, err := manager.New(cfg, manager.Options{})
	if err != nil {
		panic(err)
	}
	// ... setup controllers ...
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
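The elided "setup controllers" step can be sketched with the builder API. The reconciler below watches ConfigMaps purely for illustration; substitute your own Kind and reconcile logic:

```go
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ConfigMapReconciler is a minimal reconciler: it fetches the object
// named in the request and would drive actual state toward desired state.
type ConfigMapReconciler struct {
	client.Client
}

func (r *ConfigMapReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var cm corev1.ConfigMap
	if err := r.Get(ctx, req.NamespacedName, &cm); err != nil {
		// Ignore not-found: the object was deleted between enqueue and reconcile.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// ... reconcile desired state here ...
	return ctrl.Result{}, nil
}

// setupControllers wires the reconciler into the manager.
func setupControllers(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}).
		Complete(&ConfigMapReconciler{Client: mgr.GetClient()})
}
```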
Build and run:
# Build
go build -o bin/manager main.go
# Run locally (uses current kubectl context)
export KUBECONFIG=/path/to/config
./bin/manager
In-Cluster Development
When running inside a Kubernetes pod, the manager automatically uses the service account token:
// No additional configuration needed - automatic when running in-cluster
cfg, err := config.GetConfig() // Uses /var/run/secrets/kubernetes.io/serviceaccount
Makefile Targets
Standard Makefile for controller projects:
# Build binary
build:
go build -o bin/manager main.go
# Run against configured cluster
run: manifests generate fmt vet
go run ./main.go
# Install CRDs
install: manifests
kustomize build config/crd | kubectl apply -f -
# Uninstall CRDs
uninstall: manifests
kustomize build config/crd | kubectl delete -f -
# Deploy controller to cluster
deploy: manifests
cd config/manager && kustomize edit set image controller=${IMG}
kustomize build config/default | kubectl apply -f -
5. Deployment
Containerization
Multi-stage Dockerfile:
# Build stage
FROM golang:1.24 AS builder
WORKDIR /workspace
COPY go.mod go.mod
COPY go.sum go.sum
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -o manager main.go
# Runtime stage
FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /workspace/manager .
USER 65532:65532
ENTRYPOINT ["/manager"]
Build and push:
docker build -t ${IMG} .
docker push ${IMG}
Kubernetes Deployment
Required RBAC for leader election and resource access:
# config/rbac/leader_election_role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: leader-election-role
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# config/manager/manager.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
spec:
replicas: 1
selector:
matchLabels:
control-plane: controller-manager
template:
metadata:
labels:
control-plane: controller-manager
spec:
serviceAccountName: controller-manager
containers:
- command:
- /manager
image: controller:latest
name: manager
ports:
- containerPort: 8080
name: metrics
- containerPort: 8081
name: health
livenessProbe:
httpGet:
path: /healthz
port: 8081
initialDelaySeconds: 15
periodSeconds: 20
readinessProbe:
httpGet:
path: /readyz
port: 8081
initialDelaySeconds: 5
periodSeconds: 10
resources:
limits:
cpu: 500m
memory: 128Mi
requests:
cpu: 10m
memory: 64Mi
Webhook Deployment (Optional)
If using webhooks, deploy with cert-manager:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: webhook-cert
spec:
secretName: webhook-server-cert
dnsNames:
- webhook-service.namespace.svc
issuerRef:
name: selfsigned-issuer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
spec:
template:
spec:
containers:
- name: manager
ports:
- containerPort: 9443
name: webhook-server
protocol: TCP
volumeMounts:
- mountPath: /tmp/k8s-webhook-server/serving-certs
name: cert
readOnly: true
volumes:
- name: cert
secret:
secretName: webhook-server-cert
6. Troubleshooting
RBAC Permission Denied
Symptom: User "system:serviceaccount:default:default" cannot get resource "leases" in API group "coordination.k8s.io"
Solution: Ensure the ServiceAccount has proper RBAC for leader election:
kubectl create clusterrolebinding controller-manager \
  --clusterrole=manager-role \
  --serviceaccount=default:default
Note: the --serviceaccount flag takes the form namespace:name, not the full system:serviceaccount:... username.
Cache Not Synced
Symptom: Controller starts but doesn't react to resource changes.
Solution: Check cache sync status. From pkg/cache/cache.go, use WaitForCacheSync to block until the informers have synced:
// Wait for cache sync before starting controllers
mgr.GetCache().WaitForCacheSync(ctx)
Or shorten the resync period (default 10 hours):
Cache: cache.Options{
SyncPeriod: ptr.To(5 * time.Minute),
}
Webhook Certificate Errors
Symptom: x509: certificate signed by unknown authority
Solution:
- Ensure cert-manager is installed
- Check certificate secret exists:
kubectl get secret webhook-server-cert
- Verify the DNS names match the service name in the Certificate resource
Leader Election Stuck
Symptom: Controller pod stuck after restart, logs show leader election conflicts.
Solution:
- Check for stale leases:
kubectl get leases -n default
kubectl delete lease my-controller-leader-election -n default
- Adjust lease durations if network latency is high:
LeaseDuration: ptr.To(30 * time.Second),
RenewDeadline: ptr.To(20 * time.Second),
Client Dry-Run Issues
Symptom: Changes not persisting when using DryRun client option.
Solution: From pkg/client/client.go, a dry-run client validates requests server-side but never persists them. Remove the option for production:
// Development only
client.Options{DryRun: ptr.To(true)}
// Production
client.Options{}
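When only certain writes should be validated, a per-request dry run via the client.DryRunAll option avoids configuring a globally dry-run client. A sketch (obj and ctx are placeholders):

```go
// Validate a Create server-side without persisting it.
if err := c.Create(ctx, obj, client.DryRunAll); err != nil {
	return err
}
// The same call without the option persists the object.
if err := c.Create(ctx, obj); err != nil {
	return err
}
```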
Memory Leaks in Cache
Symptom: OOMKilled or high memory usage.
Solution: Limit cache to specific namespaces or objects:
Cache: cache.Options{
ByObject: map[client.Object]cache.ByObject{
&corev1.Secret{}: {
Field: fields.SelectorFromSet(fields.Set{
"type": "Opaque",
}),
},
},
DefaultNamespaces: map[string]cache.Config{
"production": {},
},
}
Getting Help
- Slack: #controller-runtime
- Issues: GitHub Issues
- Documentation: pkg.go.dev