# Kubernetes RBAC: Securing Your Cluster
Implement Role-Based Access Control in Kubernetes with practical examples for users, service accounts, and CI/CD pipelines.
A default Kubernetes cluster gives too much access to too many things. RBAC (Role-Based Access Control) lets you define who can do what, where. Get it wrong, and you have a security nightmare. Get it right, and you have defense in depth.
## RBAC Building Blocks
RBAC has four key objects:
| Object | Scope | Purpose |
|---|---|---|
| Role | Namespace | Defines permissions in a namespace |
| ClusterRole | Cluster-wide | Defines permissions across all namespaces |
| RoleBinding | Namespace | Grants Role/ClusterRole to users in a namespace |
| ClusterRoleBinding | Cluster-wide | Grants ClusterRole to users cluster-wide |
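For quick experiments, kubectl can create the namespaced pair imperatively; a sketch (the role name, user, and namespace are placeholders):

```shell
# Create a namespaced Role granting read access to pods,
# then bind it to a single user in the same namespace
kubectl create role pod-reader --verb=get,list,watch --resource=pods -n development
kubectl create rolebinding pod-reader-binding --role=pod-reader --user=jane -n development
```

Declarative YAML (shown throughout this article) is preferable for anything you want reviewed and version-controlled.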
## Roles and Permissions

### Namespace-Scoped Role
```yaml
# developer-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: development
rules:
  # Full access to pods
  - apiGroups: [""]
    resources: ["pods", "pods/log", "pods/exec"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  # Read-only for services and configmaps
  - apiGroups: [""]
    resources: ["services", "configmaps"]
    verbs: ["get", "list", "watch"]
  # Manage deployments
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # View secrets (no create/update)
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
```
### ClusterRole for Cluster-Wide Access
```yaml
# cluster-viewer-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-viewer
rules:
  # Read-only access to most resources
  - apiGroups: [""]
    resources: ["namespaces", "nodes", "persistentvolumes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "daemonsets", "statefulsets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["get", "list", "watch"]
  # No access to secrets cluster-wide
```
### Available Verbs
- `get`: read a single resource by name
- `list`: list resources of a type (returns the full objects, not just names)
- `watch`: subscribe to changes
- `create`: create new resources
- `update`: replace an existing resource in full
- `patch`: partially update a resource
- `delete`: delete a resource
- `deletecollection`: delete multiple resources at once
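As a rough guide to which verbs everyday kubectl commands need (my reading of default kubectl behavior, so treat it as an approximation rather than a specification):

```shell
kubectl get pods                # needs: list (a single named pod needs: get)
kubectl get pods --watch        # needs: list + watch
kubectl edit deployment my-app  # needs: get + patch
kubectl delete pod my-pod       # needs: delete
kubectl apply -f app.yaml       # needs: get + create or patch, depending on state
```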
### API Groups Reference
```yaml
# Core API group (v1) - the empty string
apiGroups: [""]
resources: ["pods", "services", "configmaps", "secrets", "namespaces", "nodes", "persistentvolumeclaims"]

# Apps
apiGroups: ["apps"]
resources: ["deployments", "replicasets", "statefulsets", "daemonsets"]

# Batch
apiGroups: ["batch"]
resources: ["jobs", "cronjobs"]

# Networking
apiGroups: ["networking.k8s.io"]
resources: ["ingresses", "networkpolicies"]

# Storage (note: persistentvolumeclaims live in the core group, not here)
apiGroups: ["storage.k8s.io"]
resources: ["storageclasses", "volumeattachments"]

# RBAC
apiGroups: ["rbac.authorization.k8s.io"]
resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
```
## Role Bindings

### Binding a Role to Users and Groups
```yaml
# developer-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: development
subjects:
  # Bind to a user
  - kind: User
    name: [email protected]
    apiGroup: rbac.authorization.k8s.io
  # Bind to a group
  - kind: Group
    name: developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
```
### ClusterRoleBinding
```yaml
# cluster-viewer-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-viewer-binding
subjects:
  - kind: Group
    name: sre-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-viewer
  apiGroup: rbac.authorization.k8s.io
```
### Binding a ClusterRole in a Namespace (Reusable Roles)
```yaml
# Use a ClusterRole but bind it in a specific namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: staging  # Grants access only in the staging namespace
subjects:
  - kind: User
    name: [email protected]
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole  # ClusterRole, not Role!
  name: developer    # Define permissions once, reuse in any namespace
  apiGroup: rbac.authorization.k8s.io
```
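Kubernetes ships with built-in ClusterRoles (`view`, `edit`, `admin`) designed for exactly this pattern, so you often don't need a custom role at all. A sketch granting the built-in `edit` role in one namespace (the binding name, group, and namespace are placeholders):

```yaml
# Grant the built-in "edit" ClusterRole inside a single namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-edit
  namespace: staging
subjects:
  - kind: Group
    name: developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit  # built-in: read/write most namespaced objects, no RBAC changes
  apiGroup: rbac.authorization.k8s.io
```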
## Service Accounts

### Creating Service Accounts
```yaml
# app-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-service-account
  namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
  namespace: production
rules:
  # App can read its own configmaps
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list"]
  # App can read the secrets it needs
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["app-secrets", "db-credentials"]  # Specific secrets only!
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-role-binding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: app-service-account
    namespace: production
roleRef:
  kind: Role
  name: app-role
  apiGroup: rbac.authorization.k8s.io
```
### Using a Service Account in Pods
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: app-service-account
      automountServiceAccountToken: true  # Mount the token for API access
      containers:
        - name: app
          image: my-app:latest
```
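With the token mounted, a container can call the API server directly. A minimal sketch using curl from inside the pod; the ConfigMap list succeeds only because `app-role` grants it:

```shell
# Run inside the pod: credentials live at a well-known mount path
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat "$SA_DIR/token")
NS=$(cat "$SA_DIR/namespace")

# List configmaps in the pod's own namespace via the in-cluster endpoint
curl -sS --cacert "$SA_DIR/ca.crt" \
  -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/namespaces/$NS/configmaps"
```

Official client libraries do the same thing under the hood when configured for in-cluster auth.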
### Disabling the Default Service Account Token
```yaml
# Disable token mounting for pods that don't need API access
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: production
automountServiceAccountToken: false
```
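The same field also exists on the pod spec, so you can keep the namespace default locked down and opt individual pods in or out (the pod name and image here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker
  namespace: production
spec:
  automountServiceAccountToken: false  # This pod never talks to the API
  containers:
    - name: worker
      image: batch-worker:latest
```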
## CI/CD Pipeline Access

### Deployment Service Account
```yaml
# ci-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-deployer
  namespace: kube-system  # Or a dedicated CI namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ci-deployer-role
rules:
  # Deploy to any namespace
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  # View pods for deployment status
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  # Manage ingress
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ci-deployer-binding
subjects:
  - kind: ServiceAccount
    name: ci-deployer
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: ci-deployer-role
  apiGroup: rbac.authorization.k8s.io
```
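If your pipeline only ever deploys to known namespaces, a tighter variant keeps the ClusterRole but grants it with per-namespace RoleBindings instead of a ClusterRoleBinding, shrinking the blast radius of a leaked CI token. A sketch for one namespace (repeat per target namespace):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer-binding
  namespace: staging  # The permissions apply only here
subjects:
  - kind: ServiceAccount
    name: ci-deployer
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: ci-deployer-role
  apiGroup: rbac.authorization.k8s.io
```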
### Generate a Kubeconfig for CI
```bash
#!/bin/bash
# generate-ci-kubeconfig.sh
set -euo pipefail

SA_NAME=ci-deployer
NAMESPACE=kube-system
CLUSTER_NAME=my-cluster

# K8s 1.24+ no longer auto-creates token Secrets for service accounts,
# so request a token explicitly. Prefer the shortest duration your
# pipeline can tolerate; the API server may cap long requests anyway.
TOKEN=$(kubectl create token "$SA_NAME" -n "$NAMESPACE" --duration=8760h)

# Get cluster info from the current kubeconfig
CLUSTER_SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
CLUSTER_CA=$(kubectl config view --minify --flatten -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')

# Create the kubeconfig
cat > ci-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
clusters:
- name: $CLUSTER_NAME
  cluster:
    server: $CLUSTER_SERVER
    certificate-authority-data: $CLUSTER_CA
users:
- name: $SA_NAME
  user:
    token: $TOKEN
contexts:
- name: $SA_NAME@$CLUSTER_NAME
  context:
    cluster: $CLUSTER_NAME
    user: $SA_NAME
current-context: $SA_NAME@$CLUSTER_NAME
EOF

echo "Kubeconfig saved to ci-kubeconfig.yaml"
```
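In the pipeline itself, point kubectl at the generated file and store it as a masked CI secret rather than committing it. The `k8s/` manifest directory below is a placeholder:

```shell
# Verify the credentials can do what the job needs, then deploy
kubectl --kubeconfig ci-kubeconfig.yaml auth can-i create deployments -n staging
kubectl --kubeconfig ci-kubeconfig.yaml apply -f k8s/ -n staging
```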
## Common RBAC Patterns

### Read-Only Viewer
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-viewer
rules:
  # RBAC is purely additive: there is no "deny" rule. A wildcard on the
  # core group would include secrets, so list core resources explicitly
  # and simply omit secrets.
  - apiGroups: [""]
    resources: ["pods", "pods/log", "services", "configmaps", "endpoints", "events"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps", "batch", "networking.k8s.io"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
```
### Namespace Admin
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-admin
rules:
  # Full access to namespace resources
  - apiGroups: ["", "apps", "batch", "networking.k8s.io", "autoscaling"]
    resources: ["*"]
    verbs: ["*"]
  # RBAC objects stay view-only (the wildcard above doesn't cover this group)
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
```
### Secret Reader (Specific Secrets Only)
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["db-password", "api-key"]  # Only these secrets
    verbs: ["get"]  # "get" respects resourceNames; "list" would not be name-filtered
```
### Pod Exec Access
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exec
  namespace: staging
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods/exec", "pods/attach"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get"]
```
## Aggregated ClusterRoles
Combine multiple roles automatically:
```yaml
# Base role carrying the aggregation label
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-view
  labels:
    rbac.example.com/aggregate-to-monitoring: "true"
rules:
  - apiGroups: ["monitoring.coreos.com"]
    resources: ["prometheuses", "alertmanagers"]
    verbs: ["get", "list", "watch"]
---
# Aggregating role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-admin
aggregationRule:
  clusterRoleSelectors:
    - matchLabels:
        rbac.example.com/aggregate-to-monitoring: "true"
rules: []  # Auto-populated by the controller from matching roles
```
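After applying both manifests, the controller fills in the aggregated permissions; you can confirm this by reading the aggregating role back:

```shell
# The rules field of monitoring-admin should now include the
# monitoring.coreos.com permissions contributed by monitoring-view
kubectl get clusterrole monitoring-admin -o yaml
```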
## Auditing RBAC

### Check User Permissions
```bash
# What can I do?
kubectl auth can-i --list

# What can I do in a specific namespace?
kubectl auth can-i --list -n production

# Can a specific user do something?
kubectl auth can-i create deployments --as [email protected] -n production

# Can a service account do something?
kubectl auth can-i get secrets --as system:serviceaccount:production:app-service-account
```
### Find Overly Permissive Bindings
```bash
# Find all cluster-admin bindings (dangerous!)
kubectl get clusterrolebindings -o json | \
  jq -r '.items[] | select(.roleRef.name=="cluster-admin") | .metadata.name'

# List all rolebindings in a namespace
kubectl get rolebindings -n production -o wide

# Describe a role to see its permissions
kubectl describe role developer -n development
```
### RBAC Audit Tools
```bash
# kubectl-who-can (krew plugin)
kubectl who-can create pods -n production
kubectl who-can delete secrets --all-namespaces

# rbac-tool (krew plugin)
kubectl rbac-tool lookup [email protected]
kubectl rbac-tool analysis
```
## Common Mistakes

### Mistake 1: Using cluster-admin for Everything
```yaml
# BAD: Giving cluster-admin to CI
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ci-admin
subjects:
  - kind: ServiceAccount
    name: ci-deployer
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin  # DO NOT DO THIS
  apiGroup: rbac.authorization.k8s.io
```
### Mistake 2: Wildcard Resources
```yaml
# BAD: Wildcard everything
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
```

```yaml
# GOOD: Explicit permissions
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "update"]
```
### Mistake 3: Not Using resourceNames
```yaml
# BAD: Can read ALL secrets
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
```

```yaml
# GOOD: Can only read specific secrets
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["app-config", "tls-cert"]
    verbs: ["get"]
```
## When NOT to Use Custom RBAC
- Single-user clusters: Default roles are fine
- Development clusters: Too much overhead for experimentation
- Managed K8s with IAM integration: Use cloud IAM (EKS IRSA, GKE Workload Identity)
## Key Takeaways
- Start with least privilege — grant minimum permissions needed
- Use Roles over ClusterRoles when possible — namespace-scoped is safer
- Use resourceNames to restrict access to specific resources
- Disable default service account tokens — pods don’t need API access by default
- Audit regularly — check for cluster-admin bindings and wildcards
- Use aggregated roles for complex permission sets
- Generate short-lived tokens for CI/CD instead of long-lived secrets
RBAC is your last line of defense. A compromised pod with minimal permissions causes minimal damage. A compromised pod with cluster-admin takes down everything.