Docker Security Best Practices for Self-Hosters in 2026
Docker makes self-hosting feel effortless. Pull an image, write a compose file, run docker compose up -d, and you have a production service in minutes. That's exactly how I built the entire ByteGuard stack — Ghost, Nginx Proxy Manager, and Uptime Kuma on a single Hetzner VPS.
But here's what most "how to self-host X" guides never tell you: the defaults are not secure. Docker out of the box runs containers as root, puts every container on the same network, exposes ports to the entire internet, and gives containers more Linux capabilities than they need. If you hardened your VPS at the OS level but left Docker wide open, you locked the front door and left the windows open.
This post covers 10 Docker security practices I use on the same Hetzner box that runs this blog. Every snippet is real, every recommendation is tested.
Prerequisites
Before you start, you should have:
- A Linux VPS with Docker and Docker Compose v2 installed (here's how I set mine up)
- Basic familiarity with docker-compose.yml syntax
- SSH access to your server (hardened, ideally)
- A running Docker stack you want to secure (even a single container counts)
1. Never Run Containers as Root
This is the single highest-impact change you can make. By default, the process inside a Docker container runs as root — UID 0. If an attacker exploits a vulnerability in your application and escapes the container, they land on the host as root. Game over.
The fix is straightforward. In your docker-compose.yml, set the user field:
```yaml
services:
  ghost:
    image: ghost:5
    user: "1000:1000"
    volumes:
      - ghost_data:/var/lib/ghost/content
```
This tells Docker to run the Ghost process as UID 1000 instead of root. The container still starts, Ghost still works — but a compromised process now has unprivileged access.
A few things to watch for:
- File permissions on volumes. If your volume data was created by root, a non-root container can't write to it. Fix this with chown 1000:1000 on the host directory before switching.
- Some images expect root. Official Nginx, for example, needs root to bind to port 80. Inside a compose stack where a reverse proxy handles external traffic, your backend containers don't need to bind privileged ports at all.
- Rootless Docker mode goes further — the Docker daemon itself runs without root. This is a bigger architectural change and adds complexity around networking and storage drivers. For most self-hosters, running containers as non-root (the user: field) gives you 90% of the security benefit with 10% of the friction.
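Before flipping a service to a non-root user, it's worth checking the volume's ownership from the host so the container doesn't fail on its first write. A minimal sketch (the path and UID below are placeholders for your own setup, not part of any Docker API):

```python
import os

def volume_ready_for(path: str, uid: int, gid: int) -> bool:
    """Return True if the host directory is owned by the container's user.

    A container running as user: "1000:1000" can only write to volume
    data that UID/GID 1000 owns -- anything root-owned fails at startup.
    """
    st = os.stat(path)
    return st.st_uid == uid and st.st_gid == gid

# Hypothetical Ghost content directory, checked before the switch:
# if not volume_ready_for("/srv/ghost/content", 1000, 1000):
#     print("run: sudo chown -R 1000:1000 /srv/ghost/content")
```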
2. Use Read-Only Filesystems
A container with a writable filesystem lets an attacker drop binaries, modify config files, install tools, and persist across restarts. Remove that option entirely:
```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    read_only: true
    tmpfs:
      - /tmp
    volumes:
      - kuma_data:/app/data
```
With read_only: true, the container's root filesystem is mounted read-only. The container can only write to explicitly mounted volumes and tmpfs mounts. If an attacker gets code execution, they can't modify the application, can't install packages, can't drop a reverse shell binary into /usr/local/bin.
The tmpfs mount for /tmp gives the application a writable scratch space in memory — many apps need this for temporary files, PID files, or socket files. It disappears on container restart, so nothing persists.
Most self-hosted applications work fine with read-only filesystems once you identify which directories actually need writes. Ghost needs /var/lib/ghost/content. Uptime Kuma needs /app/data. NPM needs its data and letsencrypt directories. Everything else can be locked down.
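As a sketch, here's what a locked-down NPM service from this stack might look like. The writable paths are assumptions based on NPM's documented data directories; if the container logs "read-only file system" errors on startup, add the offending path as another tmpfs mount:

```yaml
services:
  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    read_only: true
    tmpfs:
      - /tmp
      - /run    # PID and socket files
    volumes:
      - npm_data:/data
      - npm_letsencrypt:/etc/letsencrypt
```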
3. Set Resource Limits
Without resource limits, a single misbehaving container can consume all available RAM and CPU, taking down every other service on your VPS. This isn't theoretical — a memory leak, a log file growing without bounds, or a fork bomb in a compromised container will OOM-kill your entire host.
```yaml
services:
  ghost:
    image: ghost:5
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "1.0"
    pids_limit: 100
```
Here's what each limit does:
- memory: 512M — the container gets killed if it tries to use more than 512 MB of RAM. Docker sends a SIGKILL, not a gentle shutdown.
- cpus: "1.0" — the container can use at most one CPU core. Prevents a single container from starving everything else.
- pids_limit: 100 — caps the number of processes inside the container. This is your fork bomb insurance.
For reference, here's what the ByteGuard stack actually uses on our Hetzner CPX22 (8 GB RAM):
| Service | Typical RAM | Suggested Limit |
|---|---|---|
| Ghost | ~350 MB | 512M |
| Nginx Proxy Manager | ~120 MB | 256M |
| Uptime Kuma | ~80 MB | 256M |
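One way to turn observed peaks into limits is a small helper that applies a headroom factor. A sketch (the headroom factor and rounding granularity are illustrative choices, and the table above rounds to conventional sizes instead):

```python
def suggest_limit_mb(peak_mb: float, headroom: float = 1.5) -> int:
    """Suggest a memory limit: observed peak plus headroom,
    rounded up to the next 64 MB boundary so limits stay tidy."""
    raw = peak_mb * headroom
    return int(-(-raw // 64) * 64)  # ceiling division to a 64 MB multiple

# Peaks observed via `docker stats` (illustrative numbers)
peaks = {"ghost": 350, "nginx-proxy-manager": 120, "uptime-kuma": 80}
for name, peak in peaks.items():
    print(f"{name}: limit {suggest_limit_mb(peak)}M")
```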
Note: Set limits based on observation, not guesswork. Run docker stats for a few days to see what your containers actually consume, then set the limit at roughly 1.5x the peak.

4. Manage Secrets Properly
Secrets in Docker stacks are one of those things everyone knows they should handle correctly and almost nobody does. Here's the hierarchy from worst to best:
Worst — hardcoded in docker-compose.yml:
```yaml
# DON'T DO THIS
environment:
  - database__connection__password=mysecretpassword123
```
This password is in your compose file, probably in a git repo, possibly public.
Better — .env file:
```yaml
# docker-compose.yml
environment:
  - database__connection__password=${GHOST_DB_PASSWORD}
```

```bash
# .env
GHOST_DB_PASSWORD=a-real-strong-password-here
```
The .env file keeps secrets out of the compose file. But it's still plaintext on disk. Lock it down:
```bash
chmod 600 .env
chown root:root .env
```
Add .env to your .gitignore and .dockerignore so it never ends up in a repo or inside an image.
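You can also verify the lockdown from a script. A small sketch that flags a group- or world-readable .env (the path is whatever your compose project uses; the permission-bit check is standard POSIX):

```python
import os
import stat

def env_file_is_private(path: str) -> bool:
    """True if the file grants no permissions to group or others (e.g. 600)."""
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# Usage sketch:
# if not env_file_is_private(".env"):
#     print("warning: .env readable by other users -- run chmod 600 .env")
```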
Note: Environment variables set this way are visible via docker inspect. Anyone with access to the Docker socket can read every environment variable in every running container.

Best for single-host self-hosting: The .env approach with proper file permissions is honestly fine for most self-hosters. Docker Secrets (the swarm-mode feature) adds encryption and mounts secrets as files inside containers, but it requires swarm mode — overhead most single-server setups don't need. A locked-down .env file is the pragmatic choice.
5. Scan Your Images
Every time you pull a Docker image, you're running code that someone else built. You trust that ghost:5 is safe because it's an official image — but official images contain operating system packages, and those packages have CVEs.
Docker Scout (built into Docker CLI):
```bash
docker scout cves ghost:5
```
This scans the image and lists known CVEs by severity.
Trivy (open source, more thorough):
```bash
# Install
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin

# Scan
trivy image ghost:5
```
Pin your image versions. This is as much a security practice as a reliability one:
```yaml
# Don't do this — you get whatever "latest" means today
image: ghost:latest

# Do this — you know exactly what you're running
image: ghost:5.118.0
```
Pinned versions mean you choose when to update. latest means Docker pulls whatever the maintainer pushed most recently — and if that image has a supply-chain compromise, you've auto-deployed it.
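A quick way to audit a whole stack for unpinned tags is a few lines of scripting. A sketch (it treats a missing tag or :latest as unpinned, and avoids mistaking a registry port for a tag):

```python
def is_pinned(image: str) -> bool:
    """True if an image reference carries an explicit, non-'latest' tag."""
    name, sep, tag = image.rpartition(":")
    if not sep or "/" in tag:
        # no ':' at all, or the ':' belonged to a registry port (host:5000/app)
        return False
    return tag != "latest"

# Feed it the image: lines from your compose files
for ref in ["ghost:5.118.0", "ghost:latest", "ghost", "registry.example.com:5000/app"]:
    print(ref, "->", "pinned" if is_pinned(ref) else "UNPINNED")
```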
6. Isolate Your Docker Networks
Docker's default bridge network puts every container on the same subnet. Any container can reach any other container by IP. If an attacker compromises one service, they pivot to everything else without touching the external network.
Create purpose-specific networks and only connect containers that need to talk to each other:
```yaml
services:
  ghost:
    image: ghost:5
    networks:
      - frontend

  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    networks:
      - frontend
    ports:
      - "80:80"
      - "443:443"

  uptime-kuma:
    image: louislam/uptime-kuma:1
    networks:
      - monitoring

networks:
  frontend:
    driver: bridge
  monitoring:
    driver: bridge
    internal: true
```
Key patterns:
Only the reverse proxy exposes ports. Ghost doesn't need port 2368 open to the internet — NPM proxies traffic to it internally.
Bind to localhost when you need host access:
```yaml
ports:
  - "127.0.0.1:3001:3001"  # Only accessible from the host
```
Warning: Docker manipulates iptables directly, which means UFW and firewalld rules don't apply to Docker-published ports. You can have a perfectly configured firewall and Docker will punch right through it. Binding to localhost is the reliable fix.
internal: true creates a network with no outbound internet access. Use this for backend services that have no reason to make outbound connections.
7. Keep Docker and Images Updated
Docker itself — the engine, containerd, runc — has had serious CVEs. CVE-2024-21626 (runc container escape) and CVE-2024-23651 (BuildKit race condition) are recent examples where an unpatched Docker installation was directly exploitable.
Update the Docker engine:
```bash
sudo apt update && sudo apt upgrade docker-ce docker-ce-cli containerd.io
```
Update your images manually:
```bash
# Pull new versions
docker compose pull

# Recreate containers with new images
docker compose up -d

# Clean up old images
docker image prune -f
```
Watchtower automates image updates:
```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_SCHEDULE=0 0 4 * * *
```
The convenience is real, but so is the risk: Watchtower will auto-deploy a broken update at 4 AM while you're asleep. For a personal blog, that's probably fine. For anything you can't afford downtime on, update manually. At minimum, pin major versions (ghost:5 not ghost:latest) so you get patch updates but not breaking major bumps.
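A middle ground is to scope Watchtower so it only touches containers you've explicitly opted in, using its label filter. A sketch based on Watchtower's WATCHTOWER_LABEL_ENABLE option (verify the option and label names against the version you run):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_LABEL_ENABLE=true  # only update labeled containers

  uptime-kuma:
    image: louislam/uptime-kuma:1
    labels:
      - com.centurylinklabs.watchtower.enable=true  # low-risk: opt in
```

Services without the label (your database, your blog) stay on manual updates.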
8. Limit Container Capabilities
Linux capabilities split root's power into pieces like NET_BIND_SERVICE (bind to ports below 1024), SYS_ADMIN (mount filesystems), and NET_RAW (use raw sockets). Docker grants about 14 capabilities by default. Most containers don't need most of them.
Drop everything and add back only what's required:
```yaml
services:
  ghost:
    image: ghost:5
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETUID
      - SETGID
    security_opt:
      - no-new-privileges:true
```
- cap_drop: ALL removes every capability. cap_add gives back only what the application needs. You find out which ones by dropping all and reading the error messages.
- no-new-privileges:true prevents any process inside the container from gaining additional privileges through setuid binaries. One of the highest-value single lines you can add.
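You can audit a running container for these settings by reading its docker inspect output, which exposes them under HostConfig. A sketch (field names match the docker inspect JSON; feed it one parsed element of docker inspect <container>):

```python
def audit(entry: dict) -> list[str]:
    """Flag missing hardening in one element of `docker inspect` output."""
    findings = []
    host = entry.get("HostConfig", {})
    if "ALL" not in (host.get("CapDrop") or []):
        findings.append("capabilities not dropped (no cap_drop: ALL)")
    if "no-new-privileges:true" not in (host.get("SecurityOpt") or []):
        findings.append("no-new-privileges not set")
    if not host.get("ReadonlyRootfs"):
        findings.append("root filesystem is writable")
    for mount in entry.get("Mounts", []):
        if mount.get("Source") == "/var/run/docker.sock":
            findings.append("docker socket mounted")
    return findings

# Example against a stripped-down inspect entry
hardened = {
    "HostConfig": {
        "CapDrop": ["ALL"],
        "SecurityOpt": ["no-new-privileges:true"],
        "ReadonlyRootfs": True,
    },
    "Mounts": [],
}
print(audit(hardened))  # prints []
```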
Warning: Never mount the Docker socket (/var/run/docker.sock) into a container unless you absolutely must. Access to the Docker socket is equivalent to root access on the host. If you must mount it (Watchtower requires it), treat that container as part of your trusted computing base.

9. Configure Logging and Monitoring
If a container gets compromised and you have no logs, you'll never know. Docker's default logging driver (json-file) writes to JSON files on the host — until those files grow unbounded and fill your disk.
Configure log rotation:
```yaml
services:
  ghost:
    image: ghost:5
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
```
This caps each container's log at 30 MB total. You can also set this globally in /etc/docker/daemon.json:
```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```
Add health checks so you monitor application health, not just port availability:
```yaml
services:
  ghost:
    image: ghost:5
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:2368/ghost/api/admin/site/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
```
This pings Ghost's API every 30 seconds. If it fails three times, Docker marks the container as unhealthy. Uptime Kuma can then alert you based on health status rather than just TCP connectivity.
Watch container events for anything unexpected:
```bash
docker events --filter type=container
```
On a self-hosted VPS where you're the only operator, any container event you didn't initiate is worth investigating.
10. The Compose Security Checklist
Here's everything above condensed into a checklist for every new service you deploy:
□ Container runs as non-root (user: field or USER in Dockerfile)
□ Filesystem is read-only (read_only: true + explicit volume mounts)
□ Memory and CPU limits set (deploy.resources.limits)
□ PID limit set (pids_limit)
□ Capabilities dropped and selectively added (cap_drop: ALL)
□ no-new-privileges enabled (security_opt)
□ Secrets in .env with 600 permissions, not in compose file
□ Image version pinned (tag, not :latest)
□ Image scanned for CVEs (docker scout or trivy)
□ Container on a purpose-specific network, not default bridge
□ Only necessary ports exposed, bound to 127.0.0.1 if host-only
□ Log rotation configured (max-size + max-file)
□ Health check defined
□ Docker socket NOT mounted (unless required and justified)
A fully hardened compose service looks like this:
```yaml
services:
  myapp:
    image: myapp:1.2.3
    user: "1000:1000"
    read_only: true
    tmpfs:
      - /tmp
    volumes:
      - app_data:/data
    environment:
      - SECRET_KEY=${APP_SECRET_KEY}
    networks:
      - backend
    deploy:
      resources:
        limits:
          memory: 256M
          cpus: "0.5"
    pids_limit: 50
    cap_drop:
      - ALL
    security_opt:
      - no-new-privileges:true
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 15s

volumes:
  app_data:

networks:
  backend:
    driver: bridge
    internal: true
```
Compare that to the typical self-hosting tutorial compose file — image, ports, volumes, done. The difference is about 15 lines of YAML and a dramatically smaller attack surface.
Troubleshooting
Container won't start after adding user: "1000:1000"
- Cause: The volume data is owned by root and the non-root user can't write to it.
- Fix: Run sudo chown -R 1000:1000 /path/to/volume on the host before restarting.

Container crashes with read_only: true
- Cause: The application tries to write to a directory that isn't mounted as a volume or tmpfs.
- Fix: Check the container logs (docker logs <container>) for "read-only file system" errors. Add the needed path as a tmpfs mount or a named volume.

cap_drop: ALL breaks the application
- Cause: The app needs specific Linux capabilities you haven't added back.
- Fix: Start with cap_drop: ALL, then add capabilities one at a time based on the error messages. Common ones: CHOWN, SETUID, SETGID, NET_BIND_SERVICE.

Docker bypasses UFW — port is open despite firewall rules
- Cause: Docker manipulates iptables directly, bypassing UFW/firewalld.
- Fix: Bind ports to localhost (127.0.0.1:3001:3001 instead of 3001:3001), or configure Docker to respect iptables by setting "iptables": false in /etc/docker/daemon.json (but this breaks Docker networking unless you add manual rules).

Health check keeps failing
- Cause: The health check command runs inside the container, which may not have curl installed.
- Fix: Use wget -q --spider instead of curl, or for minimal images use a language-native health endpoint check.
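As a sketch of the wget variant for curl-less images (the port and path are placeholders; point it at any URL your app actually serves):

```yaml
healthcheck:
  test: ["CMD", "wget", "-q", "--spider", "http://localhost:8080/health"]
  interval: 30s
  timeout: 10s
  retries: 3
```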
Conclusion
Docker security isn't a separate project — it's a layer in the same stack you're already building. If you hardened your Linux VPS with SSH keys, firewalls, and automatic updates, these ten practices are the natural next step: lock down the containers that actually run your services.
Start with the three highest-impact changes:
- Run containers as non-root — eliminates the most dangerous default.
- Isolate your networks — stops lateral movement between services.
- Scan your images — catches known vulnerabilities before they're running in production.
The stack powering this blog — the same one from Post #1 — runs with these practices in place. It's not paranoia. It's the difference between self-hosting and self-pwning.
If you're setting up a new Docker stack and need a VPS, I run all my projects on Hetzner — here's how the three major providers compare.