Container Security: Scanning and Runtime Protection
Implement comprehensive container security from image scanning in CI/CD to runtime protection with Falco, seccomp, and AppArmor.
Container security isn't a single tool; it's a strategy that spans build time to runtime. A vulnerable base image, an overprivileged container, or a compromised workload can each lead to a breach. Here's how to secure every layer.
The Container Security Stack
| Layer | Phase | Tools |
|---|---|---|
| Image Scanning | Build | Trivy, Grype, Snyk |
| Image Signing | Build/Deploy | Cosign, Notary |
| Registry Scanning | Storage | Harbor, ECR scanning |
| Admission Control | Deploy | OPA Gatekeeper, Kyverno |
| Runtime Security | Run | Falco, Sysdig, Tetragon |
| Network Policies | Run | Calico, Cilium |
Image Scanning
Trivy: The Swiss Army Knife
# Scan local image
trivy image my-app:latest
# Scan with severity filter
trivy image --severity HIGH,CRITICAL my-app:latest
# Scan and fail on critical
trivy image --exit-code 1 --severity CRITICAL my-app:latest
# Output as JSON for CI/CD
trivy image --format json --output results.json my-app:latest
# Scan filesystem (Dockerfile context)
trivy fs --scanners vuln,misconfig .  # older Trivy releases use --security-checks vuln,config
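For custom CI gating beyond `--exit-code`, Trivy's JSON report can be post-processed. A minimal sketch (the report shape follows Trivy's `Results[].Vulnerabilities[]` JSON schema; the threshold logic is illustrative):

```python
def count_by_severity(report: dict) -> dict:
    """Tally vulnerabilities in a Trivy JSON report by severity."""
    counts = {}
    for result in report.get("Results", []):
        # "Vulnerabilities" may be absent or null for clean targets
        for vuln in result.get("Vulnerabilities") or []:
            sev = vuln.get("Severity", "UNKNOWN")
            counts[sev] = counts.get(sev, 0) + 1
    return counts

def should_fail(report: dict, blocked=("CRITICAL",)) -> bool:
    """Fail the build if any blocked-severity finding exists."""
    counts = count_by_severity(report)
    return any(counts.get(sev, 0) > 0 for sev in blocked)

if __name__ == "__main__":
    # Example shaped like `trivy image --format json` output
    report = {"Results": [{"Vulnerabilities": [
        {"VulnerabilityID": "CVE-2024-0001", "Severity": "CRITICAL"},
        {"VulnerabilityID": "CVE-2024-0002", "Severity": "HIGH"},
    ]}]}
    print(count_by_severity(report))  # {'CRITICAL': 1, 'HIGH': 1}
    print(should_fail(report))        # True
```

This is useful when you want different thresholds per environment (e.g. block CRITICAL everywhere, but HIGH only in production pipelines).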
CI/CD Integration
# .github/workflows/security.yml
name: Container Security
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t my-app:${{ github.sha }} .
      - name: Scan for vulnerabilities
        uses: aquasecurity/trivy-action@master  # pin to a tagged release in production
        with:
          image-ref: my-app:${{ github.sha }}
          format: 'sarif'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'
      - name: Upload scan results
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: 'trivy-results.sarif'
Dockerfile Security Scanning
# Scan Dockerfile for misconfigurations
trivy config Dockerfile
# Common issues detected:
# - Running as root
# - Using latest tag
# - Exposing unnecessary ports
# - Missing healthcheck
# BAD Dockerfile
FROM node:latest
COPY . /app
RUN npm install
EXPOSE 22 80 443
CMD ["node", "server.js"]
# GOOD Dockerfile
FROM node:20-alpine@sha256:abc123...
RUN addgroup -g 1001 appgroup && \
    adduser -u 1001 -G appgroup -D appuser
WORKDIR /app
COPY --chown=appuser:appgroup package*.json ./
RUN npm ci --omit=dev   # --only=production is deprecated in current npm
COPY --chown=appuser:appgroup . .
USER appuser
EXPOSE 8080
HEALTHCHECK --interval=30s CMD wget -q --spider http://localhost:8080/health || exit 1
CMD ["node", "server.js"]
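The misconfigurations listed above can also be caught with a few lines of scripting before Trivy ever runs. A rough sketch (the checks and messages are illustrative, not Trivy's actual rule set):

```python
def lint_dockerfile(text: str) -> list:
    """Flag a few common Dockerfile security misconfigurations."""
    findings = []
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    # A non-root USER instruction should appear somewhere
    if not any(ln.upper().startswith("USER ") and not ln.lower().endswith("root")
               for ln in lines):
        findings.append("no USER instruction; container runs as root")
    if not any(ln.upper().startswith("HEALTHCHECK") for ln in lines):
        findings.append("missing HEALTHCHECK")
    for ln in lines:
        upper = ln.upper()
        if upper.startswith("FROM "):
            image = ln.split()[1]
            if ":" not in image or upper.split()[1].endswith(":LATEST"):
                findings.append("base image is unpinned or uses :latest")
        elif upper.startswith("EXPOSE "):
            ports = [p.split("/")[0] for p in ln.split()[1:]]
            if "22" in ports:
                findings.append("SSH port 22 exposed")
    return findings
```

Running it over the BAD Dockerfile above reports all four issues; the GOOD Dockerfile passes clean.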
Image Signing with Cosign
Sign Images
# Generate keypair
cosign generate-key-pair
# Sign image
cosign sign --key cosign.key my-registry/my-app:v1.0.0
# Verify signature
cosign verify --key cosign.pub my-registry/my-app:v1.0.0
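Once images are signed, admission control can refuse anything unsigned. A sketch of a Kyverno `verifyImages` rule (the image pattern is a placeholder for your registry, and `publicKeys` takes the contents of the `cosign.pub` generated above):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "my-registry/my-app*"
          attestors:
            - entries:
                - keys:
                    publicKeys: <contents of cosign.pub>
```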
Keyless Signing with OIDC
# GitHub Actions - sign with GitHub OIDC
# Requires `permissions: id-token: write` on the job.
# Cosign 2.x enables keyless signing by default (1.x needed COSIGN_EXPERIMENTAL=1);
# --yes skips the interactive confirmation prompt in CI.
- name: Sign image
  run: |
    cosign sign --yes \
      --oidc-issuer https://token.actions.githubusercontent.com \
      ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
Kubernetes Security Context
Pod Security Context
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: my-app:v1.0.0
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop:
            - ALL
        privileged: false
      volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: cache
          mountPath: /app/.cache
  volumes:
    - name: tmp
      emptyDir: {}
    - name: cache
      emptyDir: {}
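The constraints an admission controller enforces can also be expressed as a simple pre-deploy check over the manifest. A minimal sketch (field names follow the Kubernetes pod spec; the rule list is a small illustrative subset of the restricted profile):

```python
def check_pod_security(pod: dict) -> list:
    """Return violations of a few restricted-profile rules for a pod dict."""
    violations = []
    spec = pod.get("spec", {})
    pod_ctx = spec.get("securityContext", {})
    if not pod_ctx.get("runAsNonRoot"):
        violations.append("pod: runAsNonRoot not set")
    if pod_ctx.get("seccompProfile", {}).get("type") not in ("RuntimeDefault", "Localhost"):
        violations.append("pod: seccompProfile missing")
    for c in spec.get("containers", []):
        ctx = c.get("securityContext", {})
        name = c.get("name", "?")
        if ctx.get("allowPrivilegeEscalation") is not False:
            violations.append(f"{name}: allowPrivilegeEscalation must be false")
        if "ALL" not in ctx.get("capabilities", {}).get("drop", []):
            violations.append(f"{name}: must drop ALL capabilities")
        if not ctx.get("readOnlyRootFilesystem"):
            violations.append(f"{name}: readOnlyRootFilesystem not set")
    return violations
```

Running this against the secure-pod manifest above returns an empty list; a bare pod spec fails every check.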
Pod Security Standards
# Enforce restricted policy on namespace
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
Admission Control
Kyverno Policies
# Require non-root containers
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-non-root
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-runAsNonRoot
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Running as root is not allowed"
        pattern:
          spec:
            securityContext:
              runAsNonRoot: true
            containers:
              - securityContext:
                  runAsNonRoot: true
                  allowPrivilegeEscalation: false
# Require image from allowed registries
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: allowed-registries
spec:
  validationFailureAction: Enforce
  rules:
    - name: validate-registries
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Images must be from allowed registries"
        pattern:
          spec:
            containers:
              - image: "gcr.io/my-project/* | docker.io/my-org/*"
# Require resource limits
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-limits
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "CPU and memory limits are required"
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    memory: "?*"
                    cpu: "?*"
OPA Gatekeeper
# Constraint Template
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object  # v1 templates require a structural schema
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("Missing required labels: %v", [missing])
        }
---
# Constraint
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-team-label
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    labels: ["team", "environment"]
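The Rego rule is just set arithmetic. The same check in plain code, using the label names from the constraint above:

```python
def missing_labels(obj_labels: dict, required: list) -> set:
    """Mirror the Rego rule: required labels minus those provided."""
    return set(required) - set(obj_labels)

# A pod labeled only with "team" violates the constraint
labels = {"team": "payments"}
print(missing_labels(labels, ["team", "environment"]))  # {'environment'}
```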
Runtime Security with Falco
Install Falco
# Helm installation
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm install falco falcosecurity/falco \
--namespace falco \
--create-namespace \
--set falcosidekick.enabled=true \
--set falcosidekick.webui.enabled=true
Custom Falco Rules
# falco-custom-rules.yaml
customRules:
  custom-rules.yaml: |
    # shell_procs is a stock Falco macro; user_shell_container_exclusions
    # and the crypto_miners list must be defined alongside these rules.
    - rule: Detect Shell in Container
      desc: Detect shell execution in a container
      condition: >
        spawned_process and
        container and
        shell_procs and
        not user_shell_container_exclusions
      output: >
        Shell spawned in container
        (user=%user.name container=%container.name shell=%proc.name
        parent=%proc.pname cmdline=%proc.cmdline)
      priority: WARNING
      tags: [container, shell, process]

    - rule: Detect Crypto Mining
      desc: Detect cryptocurrency mining processes
      condition: >
        spawned_process and
        (proc.name in (crypto_miners) or
        proc.cmdline contains "stratum+tcp" or
        proc.cmdline contains "mining" or
        proc.cmdline contains "xmrig")
      output: >
        Crypto mining detected
        (user=%user.name command=%proc.cmdline container=%container.name)
      priority: CRITICAL
      tags: [cryptomining, process]

    - rule: Detect Sensitive File Read
      desc: Detect access to sensitive files
      condition: >
        open_read and
        container and
        (fd.name startswith /etc/shadow or
        fd.name startswith /etc/passwd or
        fd.name contains id_rsa or
        fd.name contains .kube/config)
      output: >
        Sensitive file opened for reading
        (user=%user.name file=%fd.name container=%container.name)
      priority: WARNING
      tags: [filesystem, sensitive_files]
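Falco can emit events as JSON (`json_output: true` in falco.yaml), which makes them easy to route programmatically. A sketch of filtering events by priority before forwarding (the `priority` and `rule` fields follow Falco's JSON event format; the routing logic is illustrative):

```python
# Falco priorities, most to least severe
PRIORITIES = ["Emergency", "Alert", "Critical", "Error",
              "Warning", "Notice", "Informational", "Debug"]

def at_least(event: dict, minimum: str) -> bool:
    """True if the event's priority is at or above `minimum`."""
    return PRIORITIES.index(event["priority"]) <= PRIORITIES.index(minimum)

def route(events: list, minimum: str = "Warning") -> list:
    """Keep only events worth alerting on."""
    return [e["rule"] for e in events if at_least(e, minimum)]

events = [
    {"priority": "Critical", "rule": "Detect Crypto Mining", "output": "..."},
    {"priority": "Notice", "rule": "Unexpected connection", "output": "..."},
]
print(route(events))  # ['Detect Crypto Mining']
```

In practice falcosidekick (below) does this routing for you; this shows what its `minimumpriority` setting is computing.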
Falco Alerts to Slack
# falcosidekick config
falcosidekick:
  config:
    slack:
      webhookurl: "https://hooks.slack.com/services/xxx"
      minimumpriority: warning
      outputformat: all
    # Also send to SIEM
    elasticsearch:
      hostport: "https://elasticsearch:9200"
      index: "falco"
Seccomp Profiles
Default Profile
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault  # Docker/containerd default profile
  containers:
    - name: app
      image: my-app:v1.0.0
Custom Seccomp Profile
// /var/lib/kubelet/seccomp/profiles/strict.json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": [
        "accept4", "access", "arch_prctl", "bind", "brk",
        "close", "connect", "epoll_create1", "epoll_ctl",
        "epoll_pwait", "exit_group", "fcntl", "fstat",
        "futex", "getpeername", "getpid", "getsockname",
        "getsockopt", "listen", "mmap", "mprotect",
        "nanosleep", "openat", "read", "recvfrom",
        "rt_sigaction", "rt_sigprocmask", "sendto",
        "setsockopt", "socket", "write"
      ],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
apiVersion: v1
kind: Pod
metadata:
  name: strict-pod
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/strict.json
  containers:
    - name: app
      image: my-app:v1.0.0
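Before rolling out a strict profile, it helps to check it against the syscalls the app actually makes (collected with strace or a tracing tool). A minimal sketch (the profile structure follows the seccomp JSON format above; the `needed` set is whatever your tracing produced):

```python
def blocked_syscalls(profile: dict, needed: set) -> set:
    """Syscalls the app needs that the profile's allowlist would deny."""
    allowed = set()
    for rule in profile.get("syscalls", []):
        if rule.get("action") == "SCMP_ACT_ALLOW":
            allowed.update(rule.get("names", []))
    # Anything not explicitly allowed falls through to defaultAction (deny)
    return needed - allowed

profile = {"defaultAction": "SCMP_ACT_ERRNO",
           "syscalls": [{"names": ["read", "write", "openat"],
                         "action": "SCMP_ACT_ALLOW"}]}
print(blocked_syscalls(profile, {"read", "write", "clone"}))  # {'clone'}
```

A non-empty result means the pod would crash or misbehave under the profile, so widen the allowlist before enforcing it.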
Network Policies
Default Deny All
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}  # Applies to all pods
  policyTypes:
    - Ingress
    - Egress
Allow Specific Traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-network-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Allow from ingress controller
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
  egress:
    # Allow to database
    - to:
        - podSelector:
            matchLabels:
              app: postgresql
      ports:
        - protocol: TCP
          port: 5432
    # Allow DNS
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
Registry Security
Private Registry with Harbor
# harbor-values.yaml - scan images on push
trivy:
  enabled: true
notary:
  enabled: true  # image signing; deprecated in newer Harbor releases in favor of Cosign
Harbor's "prevent vulnerable images from running" deployment policy, with a severity threshold such as High, is configured per project through the Harbor UI or API rather than in the Helm chart values.
Image Pull Secrets
# Create secret
kubectl create secret docker-registry regcred \
--docker-server=registry.company.com \
--docker-username=user \
--docker-password=pass \
-n production
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  imagePullSecrets:
    - name: regcred
  containers:
    - name: app
      image: registry.company.com/my-app:v1.0.0
Security Checklist
Build Time
- Use minimal base images (distroless, Alpine)
- Pin image versions with digests
- Scan images for vulnerabilities
- Scan Dockerfiles for misconfigurations
- Sign images
- Don’t run as root
- Don’t include secrets in images
Deploy Time
- Use admission controllers (Kyverno/Gatekeeper)
- Enforce Pod Security Standards
- Require resource limits
- Require security contexts
- Verify image signatures
Runtime
- Enable audit logging
- Use runtime security (Falco)
- Implement network policies
- Use read-only root filesystems
- Drop all capabilities
- Use seccomp profiles
Key Takeaways
- Shift left — catch vulnerabilities in CI before they reach production
- Use admission control — prevent insecure workloads from deploying
- Apply least privilege — non-root, read-only filesystem, dropped capabilities
- Monitor runtime — Falco detects attacks that passed other defenses
- Layer defenses — no single tool catches everything
- Network policies — default deny, explicit allow
Container security is defense in depth. Each layer catches what the previous one missed. A vulnerability that makes it past image scanning might be blocked by admission control. An attack that bypasses admission control gets detected by Falco.