Container Security: Hardening Docker & Kubernetes
Containers have revolutionized how we deploy applications, but they have also opened new avenues for attackers. Here is how to secure your container infrastructure, based on real-world experience in production environments.
The Container Security Model
Container security is made up of several layers:
- Image security – base images, dependencies, and secrets
- Runtime security – process isolation, resource limits
- Network security – segmentation, encryption, network policies
- Orchestration security – RBAC, admission controllers, policy enforcement
Docker Security Fundamentals
Use Secure Base Images
# ❌ Not secure – large, outdated image
FROM ubuntu:18.04
# ✅ More secure – minimal & up-to-date
FROM alpine:3.19
# ✅ Production – use a distroless image
FROM gcr.io/distroless/nodejs:18
# Multi-stage build to minimize the attack surface
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
FROM gcr.io/distroless/nodejs:18
COPY --from=builder /app/node_modules ./node_modules
COPY . .
USER 1001
EXPOSE 3000
CMD ["server.js"]
Run as Non-Root
# Create a dedicated user
RUN addgroup -g 1001 -S nodejs && \
adduser -S nodejs -u 1001
# Set directory ownership
COPY --chown=nodejs:nodejs . /app
# Run as the non-root user
USER nodejs
# Verify the active user
RUN whoami # Output: nodejs
Vulnerability Scanning
# Scan images before deploying
docker scout cves my-app:latest
# Scan with Trivy
trivy image --severity HIGH,CRITICAL my-app:latest
# Scan with Snyk
snyk container test my-app:latest
Docker Daemon Configuration
Hardening /etc/docker/daemon.json disables inter-container communication, remaps container root to an unprivileged host user, ships logs to a central syslog server, and caps file descriptors:
{
  "icc": false,
  "userns-remap": "default",
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp://log-server:514"
  },
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  }
}
Kubernetes Security
Pod Security Standards
# PodSecurityPolicy is deprecated (removed in Kubernetes 1.25), but the concepts still apply
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  volumes:
    - "configMap"
    - "emptyDir"
    - "projected"
    - "secret"
    - "downwardAPI"
    - "persistentVolumeClaim"
  runAsUser:
    rule: "MustRunAsNonRoot"
  seLinux:
    rule: "RunAsAny"
  fsGroup:
    rule: "RunAsAny"
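On current clusters the built-in replacement is Pod Security Admission: you opt a namespace into one of the Pod Security Standards (privileged, baseline, restricted) with labels, and the API server enforces it at admission time. A minimal sketch (the namespace name is illustrative):
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    # Reject pods that violate the "restricted" profile
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Surface violations of the same profile as warnings and audit events
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted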
Security Context
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1001
    runAsGroup: 1001
    fsGroup: 1001
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: my-app:latest
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop:
            - ALL
          add:
            - NET_BIND_SERVICE
      resources:
        limits:
          memory: "512Mi"
          cpu: "500m"
        requests:
          memory: "256Mi"
          cpu: "250m"
      volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: var-run
          mountPath: /var/run
  volumes:
    - name: tmp
      emptyDir: {}
    - name: var-run
      emptyDir: {}
RBAC Configuration
# Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-service-account
  namespace: production
---
# Role with least-privilege access
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: app-role
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# Role Binding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-role-binding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: app-service-account
    namespace: production
roleRef:
  kind: Role
  name: app-role
  apiGroup: rbac.authorization.k8s.io
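The role only matters if the workload actually runs under this service account, so point the pod spec at it explicitly rather than relying on the namespace default. A minimal sketch of the relevant Deployment fields (the Deployment name is illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Run under the least-privilege account defined above
      serviceAccountName: app-service-account
      # For workloads that never call the Kubernetes API, skip the token entirely:
      # automountServiceAccountToken: false
      containers:
        - name: app
          image: my-app:latest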
Network Policies
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-traffic
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 3000
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: database
      ports:
        - protocol: TCP
          port: 5432
    # Allow DNS to any destination (an omitted "to" matches all peers)
    - ports:
        - protocol: UDP
          port: 53
Secrets Management
External Secrets Operator
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
  namespace: production
spec:
  provider:
    vault:
      server: "https://vault.example.com"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "app-role"
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
  namespace: production
spec:
  refreshInterval: "10m"
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: app-secret
    creationPolicy: Owner
  data:
    - secretKey: database-password
      remoteRef:
        key: database
        property: password
Sealed Secrets
# Install kubeseal
curl -OL https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.18.0/kubeseal-0.18.0-linux-amd64.tar.gz
tar -xvzf kubeseal-0.18.0-linux-amd64.tar.gz kubeseal
sudo install -m 755 kubeseal /usr/local/bin/kubeseal
# Create a sealed secret
echo -n mypassword | kubectl create secret generic mysecret --dry-run=client --from-file=password=/dev/stdin -o yaml | kubeseal -f - -w mysealedsecret.yaml
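The output is a SealedSecret manifest whose values are encrypted against the controller's public key, so it is safe to commit to Git; only the sealed-secrets controller running in the cluster can turn it back into a regular Secret. Roughly what it looks like (ciphertext shortened to a placeholder):
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: mysecret
  namespace: default
spec:
  encryptedData:
    password: AgB3...   # placeholder, the real ciphertext is much longer
  template:
    metadata:
      name: mysecret
      namespace: default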
Runtime Security
Falco for Runtime Detection
# Custom Falco rule
- rule: Shell spawned in container
  desc: A shell was spawned in a container
  condition: >
    spawned_process and
    container and
    shell_procs and
    proc.pname exists and
    not proc.pname in (shell_binaries)
  output: >
    Shell spawned in container
    (user=%user.name container_id=%container.id container_name=%container.name
    shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline)
  priority: WARNING
  tags: [container, shell, mitre_execution]
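To get this rule into a cluster, one common route is Falco's Helm chart, which accepts custom rule files through a customRules value; treat the exact values layout below as an assumption to check against the chart version you install:
# values.yaml for the falcosecurity/falco Helm chart (layout assumed, verify against your chart version)
customRules:
  custom-shell-rule.yaml: |-
    - rule: Shell spawned in container
      desc: A shell was spawned in a container
      condition: spawned_process and container and shell_procs and not proc.pname in (shell_binaries)
      output: Shell spawned in container (user=%user.name container=%container.name cmdline=%proc.cmdline)
      priority: WARNING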
OPA Gatekeeper Policies
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredsecuritycontext
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredSecurityContext
      validation:
        openAPIV3Schema:
          type: object
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredsecuritycontext

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not container.securityContext.runAsNonRoot
          msg := "Container must run as non-root user"
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          container.securityContext.allowPrivilegeEscalation != false
          msg := "Container must not allow privilege escalation"
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredSecurityContext
metadata:
  name: must-run-as-nonroot
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces: ["production"]
Image Supply Chain Security
Sigstore/Cosign Integration
# Sign container image
cosign sign --key cosign.key my-registry/my-app:v1.0.0
# Verify signature
cosign verify --key cosign.pub my-registry/my-app:v1.0.0
# Admission control to enforce signed images: a Kyverno ClusterPolicy applied directly to the cluster
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image
spec:
  validationFailureAction: enforce
  background: false
  rules:
    - name: verify-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - image: "my-registry/*"
          key: "cosign.pub"
Monitoring & Compliance
Security Metrics Dashboard
# Prometheus rules for security metrics
groups:
  - name: security
    rules:
      - alert: UnauthorizedAPIAccess
        expr: increase(apiserver_audit_total[5m]) > 10
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "High number of unauthorized API access attempts"
      - alert: PrivilegedContainerDetected
        expr: kube_pod_container_status_running{container!=""} and on(pod) kube_pod_spec_containers_security_context_privileged == 1
        for: 0m
        labels:
          severity: critical
        annotations:
          summary: "Privileged container detected in {{ $labels.namespace }}/{{ $labels.pod }}"
Security Scanning Pipeline
# GitHub Actions security pipeline
name: Container Security Scan
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t test-image .
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: "test-image"
          format: "sarif"
          output: "trivy-results.sarif"
      - name: Upload Trivy scan results
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: "trivy-results.sarif"
      - name: Hadolint Dockerfile
        uses: hadolint/hadolint-action@v2.0.0
        with:
          dockerfile: Dockerfile
          failure-threshold: error
Security Checklist
Docker
- ✅ Use minimal base images
- ✅ Run as a non-root user
- ✅ Enable Docker Content Trust
- ✅ Scan images for vulnerabilities
- ✅ Use multi-stage builds
- ✅ Limit container resources
Kubernetes
- ✅ Enable RBAC
- ✅ Use network policies
- ✅ Implement Pod Security Standards
- ✅ Secure etcd with encryption at rest (see the sketch after this list)
- ✅ Apply security updates regularly
- ✅ Enable audit logging
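For the etcd item, the kube-apiserver encrypts Secrets at rest once it is started with --encryption-provider-config pointing at an EncryptionConfiguration file; a minimal sketch, where the key is a placeholder you generate yourself (for example with head -c 32 /dev/urandom | base64):
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # New and updated Secrets are written encrypted with AES-CBC
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>   # placeholder
      # Allows reading data that has not been re-encrypted yet
      - identity: {}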
Runtime
- ✅ Monitor with Falco
- ✅ Use admission controllers
- ✅ Implement OPA policies
- ✅ Scan for vulnerabilities on a regular schedule
- ✅ Have an incident response plan
Container security is not a one-time effort; it demands continuous attention. These practices have helped me keep Kubernetes clusters that handle millions of requests per day secure.
Facing your own container security challenges? Let's discuss on LinkedIn.