Container Security Best Practices
Container security encompasses the technical controls, configuration standards, and operational practices that govern the integrity of containerized workloads from image build through runtime execution and decommission. This reference covers the structural mechanics of container security, the regulatory and standards frameworks that apply to containerized environments, classification boundaries between image-level and orchestration-level controls, and the documented tradeoffs that security and engineering teams encounter when securing container deployments at scale.
- Definition and scope
- Core mechanics or structure
- Causal relationships or drivers
- Classification boundaries
- Tradeoffs and tensions
- Common misconceptions
- Checklist or steps (non-advisory)
- Reference table or matrix
- References
Definition and scope
Container security refers to the set of practices, policies, and controls applied to protect software containers — discrete, portable execution units that package application code alongside runtime dependencies — across their full lifecycle. The scope spans four distinct layers: the container image itself, the container runtime engine (such as containerd or CRI-O), the orchestration platform (most commonly Kubernetes), and the underlying host infrastructure.
The National Institute of Standards and Technology (NIST) addresses container security directly in NIST SP 800-190, "Application Container Security Guide", which organizes the major risks into five areas: image risks, registry risks, orchestrator risks, container risks, and host OS risks. These five categories define the operational perimeter of the discipline.
Container security sits at the intersection of application security, cloud infrastructure security, and DevSecOps engineering. The Cloud Security Alliance (CSA) addresses container security within its Cloud Controls Matrix (CCM), specifically under the Application and Interface Security (AIS) and Infrastructure and Virtualization Security (IVS) domains. Federal environments subject to FedRAMP authorization requirements must address container controls as part of the broader system security plan, drawing on the control families defined in NIST SP 800-53 Rev. 5.
Core mechanics or structure
Container security operates across four structural layers, each requiring discrete controls that interact with the layers above and below.
Image layer controls address what is built into a container before it ever runs. Secure base images, minimal package footprints, and the absence of embedded credentials are foundational requirements. Reproducible builds and cryptographic signing — supported by tools operating under the Sigstore open-source framework, which is a Linux Foundation project — allow verification of image provenance throughout the supply chain.
Registry layer controls govern storage and distribution. Private registries with access controls, vulnerability scanning of stored images, and enforced image signing policies determine what images are eligible for deployment. The Open Container Initiative (OCI), a Linux Foundation project, defines the image specification and distribution specification that underpin registry interoperability and integrity guarantees.
Runtime layer controls operate at execution time. Seccomp profiles restrict the Linux kernel system calls a container may invoke. AppArmor and SELinux mandatory access control policies constrain file system and network access. Linux namespaces and cgroups provide process isolation and resource limits that are the foundational primitives of container security. Running containers as non-root users and enforcing read-only file systems at runtime reduce the blast radius of any exploitation.
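As a concrete illustration of the seccomp control described above, the following Python sketch assembles a default-deny seccomp profile in the JSON format accepted by Docker and containerd (`--security-opt seccomp=<file>`). The syscall allowlist shown is illustrative only; a real allowlist must be derived by tracing the actual workload.

```python
import json

def build_seccomp_profile(allowed_syscalls):
    """Build a default-deny seccomp profile in the JSON format
    accepted by Docker/containerd. Any syscall not explicitly
    listed fails with an errno instead of executing."""
    return {
        "defaultAction": "SCMP_ACT_ERRNO",
        "architectures": ["SCMP_ARCH_X86_64"],
        "syscalls": [
            {"names": sorted(allowed_syscalls), "action": "SCMP_ACT_ALLOW"}
        ],
    }

# Illustrative allowlist only; derive the real set by tracing the
# workload (e.g. an audit-mode run or strace).
profile = build_seccomp_profile({"read", "write", "exit_group", "futex"})
print(json.dumps(profile, indent=2))
```

The default-deny shape (deny everything, then allowlist) mirrors how the Kubernetes `RuntimeDefault` and custom profiles are structured.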
Orchestration layer controls apply to Kubernetes and equivalent platforms. Role-Based Access Control (RBAC) limits which identities can create, modify, or delete workloads. Pod Security Admission (PSA) — the successor to the deprecated Pod Security Policy (PSP) — enforces security profiles at namespace level across three policy tiers: Privileged, Baseline, and Restricted, as defined in the Kubernetes documentation. Network policies segment pod-to-pod communication. The Kubernetes API server's audit logging provides the forensic record necessary for incident reconstruction.
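The Restricted tier can be made concrete with a simplified check. The sketch below mirrors an illustrative subset of the Restricted profile rules against a container `securityContext`; the real enforcement is performed by the Pod Security Admission controller, not code like this.

```python
def violates_restricted(container_security_context):
    """Return findings for a subset of the Kubernetes Restricted
    pod security profile. Simplified sketch, not the full rule set."""
    sc = container_security_context or {}
    findings = []
    if sc.get("privileged"):
        findings.append("privileged containers are forbidden")
    if sc.get("allowPrivilegeEscalation", True):
        findings.append("allowPrivilegeEscalation must be false")
    if not sc.get("runAsNonRoot"):
        findings.append("runAsNonRoot must be true")
    caps = sc.get("capabilities", {})
    if caps.get("drop") != ["ALL"]:
        findings.append('capabilities.drop must be ["ALL"]')
    return findings

# A deliberately weak securityContext produces multiple findings.
bad = {"privileged": True, "capabilities": {"drop": []}}
for finding in violates_restricted(bad):
    print(finding)
```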
For organizations using managed Kubernetes services in federal or regulated commercial environments, the Center for Internet Security (CIS) Kubernetes Benchmark provides prescriptive configuration checks organized by component (control plane, etcd, kubelet, and workload policies).
Causal relationships or drivers
Container security failures trace to a discrete set of causal patterns rather than random operational events. Understanding these causal chains informs where controls have the highest leverage.
Supply chain provenance gaps are the primary driver of image-level compromise. When build pipelines pull base images from unverified sources or do not pin image digests, a compromised upstream image propagates automatically to production. The SLSA (Supply-chain Levels for Software Artifacts) framework, governed by the OpenSSF under the Linux Foundation, formalizes four provenance levels that directly address this causal chain.
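Digest pinning can be checked mechanically in CI. A minimal sketch, assuming Dockerfile text as input; the digest value shown is illustrative:

```python
import re

# A FROM line pinned by digest references immutable image content;
# a tag (or no tag) is mutable and can silently change upstream.
DIGEST_RE = re.compile(r"^FROM\s+\S+@sha256:[0-9a-f]{64}\b", re.IGNORECASE)

def unpinned_base_images(dockerfile_text):
    """Return FROM lines that are not pinned by SHA-256 digest."""
    return [
        line.strip()
        for line in dockerfile_text.splitlines()
        if line.strip().upper().startswith("FROM")
        and not DIGEST_RE.match(line.strip())
    ]

dockerfile = """\
FROM python:3.12-slim
FROM alpine@sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b
"""
print(unpinned_base_images(dockerfile))  # → ['FROM python:3.12-slim']
```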
Excessive privilege accumulation in orchestration configurations drives lateral movement after initial container compromise. A container running as root with hostPID, hostNetwork, or privileged flags enabled has near-equivalent access to the host. The MITRE ATT&CK for Containers matrix documents techniques that exploit these misconfigurations, including Escape to Host (T1611) under the Privilege Escalation tactic.
Secret sprawl — the embedding of API keys, database credentials, and certificates directly in images or environment variables — enables credential harvesting when image contents are exposed. The driver is the convenience trade-off in CI/CD pipelines where secrets management integration adds pipeline complexity. Dedicated secrets management systems (such as those conforming to the patterns described in NIST SP 800-57 for key management) are the structural countermeasure.
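A minimal sketch of the pattern-matching approach such detection takes; the regexes below are illustrative and far from exhaustive (production scanners combine curated rules with entropy analysis):

```python
import re

# Illustrative patterns only; real credential scanners cover far
# more token formats than these three.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_assignment": re.compile(
        r"(?i)(?:password|secret|api[_-]?key|token)\s*[=:]\s*\S{8,}"
    ),
}

def scan_for_secrets(text):
    """Return the names of secret patterns found in the given text,
    e.g. a Dockerfile, an ENV listing, or an extracted image layer."""
    return sorted(name for name, pat in SECRET_PATTERNS.items() if pat.search(text))

layer = "ENV DB_PASSWORD=hunter2pass123\nRUN echo hello"
print(scan_for_secrets(layer))  # → ['generic_assignment']
```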
Drift between build-time and runtime configurations arises when images are scanned at build time but not re-evaluated against newly published CVEs before deployment. The time between a CVE disclosure and active exploitation in the wild has shortened measurably; the CISA Known Exploited Vulnerabilities (KEV) Catalog tracks this operationally.
Classification boundaries
Container security controls divide along two primary axes: lifecycle phase and abstraction layer.
By lifecycle phase:
- Build-time controls — static analysis, dependency scanning, Dockerfile linting, image signing
- Registry controls — access policy, vulnerability scanning of stored layers, admission gating
- Deploy-time controls — admission controllers, policy enforcement (OPA/Gatekeeper, Kyverno), RBAC verification
- Runtime controls — behavioral monitoring, anomaly detection, syscall filtering, network segmentation
By abstraction layer:
- Host-level — kernel hardening, OS-level namespaces, node access controls
- Container-level — image content, runtime process isolation, seccomp/AppArmor
- Orchestration-level — RBAC, network policy, admission control, audit logging
- Application-level — mTLS between services, identity federation, secret injection patterns
These two axes produce a 16-cell matrix that organizations use to identify control gaps. CSA container security guidance maps this matrix to cloud service model responsibilities (IaaS, CaaS, PaaS), which matters wherever control ownership is split between a cloud provider and its customer.
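Crossing the four lifecycle phases with the four abstraction layers yields a coverage grid that can be computed directly from a control inventory. A sketch under stated assumptions: the inventory keys and control names below are hypothetical.

```python
from itertools import product

PHASES = ["build", "registry", "deploy", "runtime"]
LAYERS = ["host", "container", "orchestration", "application"]

def coverage_gaps(controls):
    """Given an inventory mapping (phase, layer) cells to deployed
    controls, return the cells of the phase-by-layer grid that have
    no coverage at all."""
    return [cell for cell in product(PHASES, LAYERS) if not controls.get(cell)]

# Hypothetical partial inventory for illustration.
inventory = {
    ("build", "container"): ["image scanning", "Dockerfile linting"],
    ("deploy", "orchestration"): ["Pod Security Admission", "RBAC review"],
    ("runtime", "container"): ["seccomp profile"],
}
gaps = coverage_gaps(inventory)
print(f"{len(gaps)} of 16 cells lack a control")  # → 13 of 16 cells lack a control
```

An empty cell is not automatically a defect (some combinations have little meaningful control surface), but each one should be an explicit decision rather than an oversight.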
Tradeoffs and tensions
Security versus image size minimization. Distroless and scratch-based images reduce attack surface by eliminating shell access and package managers, but they complicate debugging and incident response. Security teams favoring minimal images accept reduced operational visibility as a deliberate tradeoff.
Immutable infrastructure versus operational agility. The security model of treating containers as immutable — never patching in place, only replacing — conflicts with operational pressure to apply hotfixes rapidly. Mature container security programs address this through automated rebuild-and-redeploy pipelines, but these require CI/CD infrastructure investments that smaller teams may not have in place.
Admission controller strictness versus developer velocity. Enforcing the Kubernetes Restricted pod security profile blocks a meaningful percentage of workloads that rely on elevated permissions for legitimate reasons (monitoring agents, log collectors, service meshes). Organizations must define policy exception workflows that do not become permanent bypasses.
Vulnerability scanning thresholds versus deployment frequency. Setting a zero-critical-CVE gate on container images in high-frequency deployment pipelines can block deployments for vulnerabilities that have no applicable exploit path in the specific runtime context. The NIST National Vulnerability Database (NVD) CVSS scoring provides severity ratings, but exploitability context requires additional analysis that automated scanners do not always provide.
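The exception workflow described above can be sketched as a simple gate: findings are blocking unless explicitly waived for documented lack of an exploit path. Severity ordering follows the CVSS qualitative ratings; the CVE identifiers below are hypothetical.

```python
def deploy_gate(findings, waived_cves=frozenset(), max_severity="HIGH"):
    """Block deployment when any non-waived finding meets or exceeds
    the severity threshold. Waivers model the 'no applicable exploit
    path in this runtime context' exception."""
    order = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]
    threshold = order.index(max_severity)
    blocking = [
        f for f in findings
        if f["cve"] not in waived_cves
        and order.index(f["severity"]) >= threshold
    ]
    return (len(blocking) == 0, blocking)

findings = [
    {"cve": "CVE-2024-0001", "severity": "CRITICAL"},  # hypothetical IDs
    {"cve": "CVE-2024-0002", "severity": "MEDIUM"},
]
ok, blocking = deploy_gate(findings, waived_cves={"CVE-2024-0001"})
print(ok)  # → True: the critical finding is waived, the medium is below the gate
```

In practice, waivers need an expiry date and an owner; an unbounded waiver set recreates the permanent-bypass problem noted for admission policy exceptions.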
Shared kernel risk. Unlike virtual machines, containers share the host kernel. A kernel vulnerability is a container escape vulnerability. This architectural characteristic — not a misconfiguration — is a structural tension between the operational efficiency of containers and the stronger isolation boundary of hardware virtualization.
Common misconceptions
Misconception: Kubernetes namespaces provide security isolation.
Namespaces provide organizational and resource-quota isolation, not security boundaries. Network policies, RBAC, and pod security admission are the mechanisms that enforce security — namespaces alone do not restrict pod-to-pod network communication or limit privilege escalation. The Kubernetes documentation explicitly notes that namespaces are not a security mechanism.
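Because namespaces alone do not restrict traffic, a default-deny NetworkPolicy per namespace is the conventional baseline. The sketch below builds that manifest and emits it as JSON, which `kubectl apply -f` accepts alongside YAML, to stay within the standard library.

```python
import json

def default_deny_policy(namespace):
    """Build a NetworkPolicy manifest that selects every pod in the
    namespace and declares both policy types with no allow rules,
    denying all traffic not permitted by some other policy."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-all", "namespace": namespace},
        "spec": {
            "podSelector": {},  # empty selector matches all pods
            "policyTypes": ["Ingress", "Egress"],
        },
    }

print(json.dumps(default_deny_policy("production"), indent=2))
```

Note that NetworkPolicy objects are only enforced when the cluster's network plugin supports them; applying the manifest on a CNI without policy support changes nothing.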
Misconception: Scanning images once at build time is sufficient.
A container image that passes vulnerability scanning at build time may contain 0-day or newly disclosed CVEs by the time it is deployed or during its runtime lifespan. Continuous scanning of registry-resident images against updated vulnerability databases is a distinct requirement from build-time scanning.
Misconception: Running as non-root eliminates container escape risk.
Non-root execution raises the bar for exploitation but does not eliminate container escape vectors. Linux capabilities assigned to a non-root process, writable host path mounts, and vulnerable container runtimes can all enable privilege escalation from a non-root user. NIST SP 800-190 identifies runtime threats as a category independent of user privilege level.
Misconception: Container security is solely a DevOps responsibility.
Regulatory frameworks including the HIPAA Security Rule (45 CFR §164.312) and PCI DSS v4.0 impose technical safeguard requirements on organizations regardless of how their compute workloads are packaged. Container orchestration platforms used to process covered health information or cardholder data are within regulatory scope, so compliance, security, and engineering functions share responsibility for container controls.
Checklist or steps (non-advisory)
The following sequence describes the structural phases of a container security implementation program, drawn from the control areas identified in NIST SP 800-190 and the CIS Kubernetes Benchmark.
Phase 1: Image hardening
- [ ] Select a minimal, verified base image (distroless or official vendor image with active CVE maintenance)
- [ ] Pin base image references by digest (SHA-256), not by mutable tags
- [ ] Run static analysis (Dockerfile linting) against a defined ruleset during CI
- [ ] Execute image vulnerability scanning against a current CVE database before registry push
- [ ] Remove all secrets, tokens, and credentials from image layers and build arguments
- [ ] Sign images using a cryptographic signing mechanism compliant with OCI Image Spec
Phase 2: Registry controls
- [ ] Enforce registry access via role-based authentication
- [ ] Enable continuous vulnerability scanning of images stored in the registry
- [ ] Configure admission policies to reject unsigned or unscanned images at deploy time
- [ ] Define and enforce image retention and deletion policies
Phase 3: Orchestration hardening (Kubernetes)
- [ ] Apply CIS Kubernetes Benchmark configuration checks to API server, etcd, scheduler, and kubelet
- [ ] Enable Pod Security Admission at Restricted or Baseline tier for all production namespaces
- [ ] Configure RBAC with least-privilege principles; audit service account token permissions
- [ ] Implement network policies to restrict pod-to-pod and pod-to-external traffic by default
- [ ] Enable Kubernetes API server audit logging with defined retention policy
Phase 4: Runtime controls
- [ ] Apply seccomp profiles to restrict available syscalls to the minimum required set
- [ ] Enforce AppArmor or SELinux mandatory access control profiles on container processes
- [ ] Configure containers with read-only root file systems where the application permits
- [ ] Set CPU and memory resource limits on all pods to prevent resource exhaustion attacks
- [ ] Deploy runtime threat detection tooling that monitors syscall behavior and anomalous process activity
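The resource-limit item in Phase 4 can be verified mechanically against a pod spec before admission. A minimal sketch, using dictionaries shaped like the Kubernetes pod spec fields:

```python
def missing_resource_limits(pod_spec):
    """Return the names of containers in a pod spec that lack CPU or
    memory limits, which the checklist requires on all pods."""
    missing = []
    for c in pod_spec.get("containers", []):
        limits = c.get("resources", {}).get("limits", {})
        if "cpu" not in limits or "memory" not in limits:
            missing.append(c["name"])
    return missing

pod = {
    "containers": [
        {"name": "app", "resources": {"limits": {"cpu": "500m", "memory": "256Mi"}}},
        {"name": "sidecar", "resources": {}},
    ]
}
print(missing_resource_limits(pod))  # → ['sidecar']
```

In production this check typically runs as an admission policy (OPA/Gatekeeper or Kyverno) rather than ad hoc code, but the rule itself is this simple.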
Phase 5: Secrets and supply chain
- [ ] Integrate a dedicated secrets management system; eliminate environment-variable secret injection
- [ ] Implement SLSA provenance level 2 or higher for build pipeline artifacts
- [ ] Maintain a software bill of materials (SBOM) per image in a machine-readable format (CycloneDX or SPDX)
- [ ] Establish a container image update cadence tied to upstream CVE disclosure feeds
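The SBOM item above can be illustrated with the top-level shape of a CycloneDX JSON document. This is a sketch of the document structure only; real SBOMs are produced by tooling that inspects image layers, and the component names, versions, and ecosystems below are illustrative.

```python
import json

def minimal_cyclonedx_sbom(components):
    """Assemble a minimal CycloneDX-style JSON SBOM for an image.
    Top-level shape only; production tools record far richer
    metadata (hashes, licenses, dependency graph)."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": [
            {
                "type": "library",
                "name": name,
                "version": version,
                "purl": f"pkg:{ecosystem}/{name}@{version}",  # package URL identifier
            }
            for name, version, ecosystem in components
        ],
    }

sbom = minimal_cyclonedx_sbom([("openssl", "3.0.13", "apk"), ("flask", "3.0.2", "pypi")])
print(json.dumps(sbom, indent=2))
```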
Reference table or matrix
| Control Domain | Lifecycle Phase | Primary Standard Reference | Regulatory Relevance |
|---|---|---|---|
| Image vulnerability scanning | Build + Registry | NIST SP 800-190, §3.1 | FedRAMP, PCI DSS v4.0 Req. 6.3 |
| Image signing and provenance | Build + Registry | OCI Image Spec; SLSA Framework | NIST SP 800-53 Rev. 5 SR family (Supply Chain Risk Management) |
| Secrets management | Build + Runtime | NIST SP 800-57 | HIPAA §164.312; PCI DSS v4.0 Req. 3 |
| Pod Security Admission | Deploy-time | Kubernetes PSA (Restricted/Baseline) | CIS Kubernetes Benchmark §5 |
| RBAC configuration | Deploy-time | CIS Kubernetes Benchmark §5.1 | FedRAMP AC-2, AC-3; NIST SP 800-53 |
| Network policy segmentation | Deploy-time + Runtime | CSA CCM IVS-09 | HIPAA §164.312(e); PCI DSS v4.0 Req. 1 |
| Seccomp/AppArmor profiles | Runtime | NIST SP 800-190, §4.4 | FedRAMP SC-39 |
| Audit logging (API server) | Runtime | CIS Kubernetes Benchmark §3.2 | NIST SP 800-53 AU-2; FedRAMP |
| SBOM generation | Build | NTIA SBOM Minimum Elements (2021) | EO 14028 (May 2021) |
| Runtime behavioral monitoring | Runtime | MITRE ATT&CK for Containers | FedRAMP SI-4; NIST SP 800-53 |
References
- NIST SP 800-190, Application Container Security Guide
- NIST SP 800-53 Rev. 5, Security and Privacy Controls for Information Systems and Organizations
- Federal Risk and Authorization Management Program (FedRAMP)
- Cybersecurity and Infrastructure Security Agency (CISA), Known Exploited Vulnerabilities Catalog
- Center for Internet Security (CIS), Kubernetes Benchmark
- ISO/IEC 27001, Information Security Management