Docker revolutionized how we develop and deploy software, but it comes with a hidden cost: disk space. If you've been using Docker for a while, you might be surprised to learn how much storage it's consuming.
Let's explore where that space goes and how to reclaim it safely.
## The Docker Space Problem

Check your current Docker disk usage:

```bash
docker system df
```
You'll see something like:
```
TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          45        12        18.5GB    12.3GB (66%)
Containers      23        3         2.1GB     1.9GB (90%)
Local Volumes   12        5         5.2GB     3.1GB (59%)
Build Cache     -         -         8.4GB     8.4GB
```
In this example, over 25GB is reclaimable, and this is a modest Docker installation.
## Understanding Docker Storage

### Images

Docker images are layered. When you pull `node:18`, you get:
- Base OS layer
- Node.js runtime layer
- npm layer
- Any additional tools
Each image version you've ever pulled stays on disk until you remove it.
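You can see exactly which layers an image carries, and how large each one is, with `docker history`:

```bash
# List each layer of the node:18 image with its size
docker history node:18
```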
### Containers
When you create a container, Docker creates a writable layer on top of the image. Even stopped containers retain this layer.
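To see how much each writable layer holds, add `--size` when listing containers; the SIZE column shows the writable layer, and the value in parentheses includes the underlying image:

```bash
# Show disk usage of every container's writable layer
docker ps -a --size
```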
### Volumes
Named volumes persist data between container runs. They're often forgotten after the project is done.
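`docker system df -v` gives a per-volume breakdown, including a LINKS column showing how many containers reference each volume, which makes forgotten ones easy to spot:

```bash
# Verbose breakdown: per-image, per-container, and per-volume usage
docker system df -v
```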
### Build Cache
When you build images with docker build, Docker caches each layer. Over time, this cache grows substantially.
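If you're on a recent Docker with BuildKit, you can inspect individual cache entries too:

```bash
# Per-entry build cache usage (requires buildx/BuildKit)
docker buildx du
```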
## Safe Cleanup Commands

### Remove Dangling Images

Dangling images are layers with no tag—leftovers from builds:

```bash
docker image prune
```
This is always safe and often recovers several GB.
### Remove Unused Images

Images not used by any container:

```bash
docker image prune -a
```

**Caution:** This removes ALL unused images. You'll need to re-pull them later.
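For a middle ground, `docker image prune` accepts an `until` filter, so you can drop only images created before a cutoff:

```bash
# Remove unused images created more than 7 days (168h) ago
docker image prune -a --filter "until=168h"
```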
### Remove Stopped Containers

```bash
docker container prune
```
Removes all containers not currently running.
### Remove Unused Volumes

```bash
docker volume prune
```

**Warning:** This deletes data! Make sure you don't need the data in these volumes.
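To preview which volumes would be affected, list the ones no container currently references:

```bash
# Volumes not referenced by any container
docker volume ls --filter "dangling=true"
```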
### Remove Build Cache

```bash
docker builder prune
```
Clears the build cache. Your next docker build will be slower.
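To keep recent cache for fast rebuilds while dropping stale entries, builder prune also takes an age filter:

```bash
# Drop build cache entries older than 3 days
docker builder prune --filter "until=72h"
```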
### Nuclear Option

Remove everything not currently in use:

```bash
docker system prune -a --volumes
```
This frees maximum space but requires re-downloading images and rebuilding caches.
## What's Actually Safe to Delete?
| Resource | Safe? | Notes |
|---|---|---|
| Dangling images | ✅ Always | No downsides |
| Unused images | ⚠️ Mostly | Will need to re-pull |
| Stopped containers | ✅ Usually | Check logs first if needed |
| Unused volumes | ⚠️ Careful | Contains data! |
| Build cache | ✅ Yes | Rebuilds slower temporarily |
## Automating Docker Cleanup

### Cron Job

```bash
# Weekly cleanup of dangling resources
0 0 * * 0 docker system prune -f >> /var/log/docker-prune.log
```
### Daemon Configuration

There's no daemon.json setting that auto-prunes images, but you can at least cap container log growth:
```json
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```
This keeps runaway container logs in check, but images and build cache still need explicit pruning.
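Since the daemon won't prune images on its own, a scheduled prune with an age filter is a common workaround, along the lines of the cron job above:

```bash
# Weekly: remove unused images older than 30 days (720h)
0 0 * * 0 docker image prune -a -f --filter "until=720h" >> /var/log/docker-prune.log
```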
## Best Practices for Less Bloat

### 1. Use Specific Tags

```dockerfile
# Bad - accumulates versions
FROM node:latest

# Good - explicit version
FROM node:18.19-slim
```
### 2. Use Slim/Alpine Images

```dockerfile
# Full image: ~900MB
FROM python:3.11

# Slim image: ~150MB
FROM python:3.11-slim

# Alpine image: ~50MB
FROM python:3.11-alpine
```
### 3. Multi-stage Builds

```dockerfile
# Build stage
FROM node:18 AS builder
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# Production stage - much smaller
FROM node:18-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]
```
### 4. Clean Up in Dockerfiles

Each RUN instruction creates an immutable layer, so files deleted in a later RUN still take up space in the image; install, build, and clean up in a single RUN instead:

```dockerfile
RUN apt-get update && apt-get install -y \
        build-essential \
    && npm ci \
    && npm run build \
    && apt-get purge -y build-essential \
    && apt-get autoremove -y \
    && rm -rf /var/lib/apt/lists/*
```
### 5. Use .dockerignore

```
node_modules
.git
*.log
.env
coverage
dist
```
This prevents large files from being included in build context.
## Monitoring Docker Storage

### Set Up Alerts

Monitor Docker's disk usage and alert when it crosses a threshold:
```bash
#!/bin/bash
# Alert when the filesystem holding /var/lib/docker (Docker's default data dir) is over 80% full
THRESHOLD=80
USAGE=$(df --output=pcent /var/lib/docker | tail -1 | tr -dc '0-9')
# Add alerting logic (email, Slack webhook, etc.) in place of echo
[ "$USAGE" -gt "$THRESHOLD" ] && echo "Docker disk usage at ${USAGE}%"
```
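Run it from cron for periodic checks; the schedule and script path here are just examples:

```bash
# Every 6 hours, check Docker disk usage (example path)
0 */6 * * * /usr/local/bin/docker-disk-check.sh
```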
### Regular Audits

Schedule monthly reviews of:

- Unused images (`docker images --filter "dangling=false"`)
- Stopped containers (`docker ps -a --filter "status=exited"`)
- Volumes (`docker volume ls`)
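A minimal audit script that bundles these checks into one report, using only the commands above:

```bash
#!/bin/bash
# Monthly Docker audit: images, exited containers, unattached volumes
echo "=== Tagged images (review for ones you no longer use) ==="
docker images --filter "dangling=false"
echo "=== Exited containers ==="
docker ps -a --filter "status=exited"
echo "=== Volumes with no container attached ==="
docker volume ls --filter "dangling=true"
```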
## How Cluttered Helps
Cluttered makes Docker cleanup visual and safe:
- See all Docker resources: Images, containers, volumes at a glance
- Identify unused resources: Automatic detection of what's safe to clean
- Preview before delete: Know exactly what will be removed
- Project awareness: Understand which images belong to which projects
Unlike `docker system prune -a`, Cluttered lets you selectively clean while preserving images you want to keep.
## Space Recovery Expectations
| Docker Usage | Typical Waste | Recoverable |
|---|---|---|
| Light (5-10 images) | 5-10GB | 3-5GB |
| Medium (20-30 images) | 15-30GB | 10-20GB |
| Heavy (50+ images) | 40-80GB | 25-50GB |
Most developers can recover 10-30GB from Docker alone.
## Common Mistakes

### 1. Deleting Active Volumes

Always check volume contents before pruning:

```bash
docker volume inspect my-volume
```
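`docker volume inspect` only shows metadata such as the driver and mountpoint; to peek at the actual contents, mount the volume into a throwaway container (`my-volume` is a placeholder name):

```bash
# List the files stored in a volume before deciding to prune it
docker run --rm -v my-volume:/data alpine ls -la /data
```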
### 2. Forgetting About Registries

Private registry caches can grow large:

```bash
# Check registry usage if self-hosting
du -sh /var/lib/registry
```
### 3. Not Cleaning Build Context

Large build contexts slow down builds and waste space:

```bash
# Check what Docker is sending
docker build --no-cache -t test . 2>&1 | head -5
```
## Conclusion

Docker storage management is an ongoing task, not a one-time fix. Between dangling images, stopped containers, orphaned volumes, and build cache, Docker can easily consume 20-50GB of disk space.
Regular cleanup—whether through Docker's built-in commands, scheduled scripts, or tools like Cluttered—should be part of every developer's routine.
The key is finding the balance between keeping useful caches (for fast rebuilds) and removing accumulated waste (for disk space). Start with `docker system prune` for safe cleanup, and escalate to more aggressive pruning when needed.
Your SSD will thank you.