# Updating Containers
When a new image version is available, you need to replace the running container with the updated one. Unlike traditional servers, where you update software in place, Docker replaces the entire container. This lesson covers four strategies, from simple to zero-downtime.
## Strategy Comparison
| Strategy | Downtime | Complexity | Best For |
|---|---|---|---|
| Manual stop/start | Seconds to minutes | Low | Quick fixes, single containers |
| Compose recreate | Milliseconds | Low | Most production Compose stacks |
| Blue/Green deploy | None | High | Critical zero-downtime services |
| Auto-update (Watchtower) | Brief | Low | Home labs, non-critical tools |
## Strategy 1: Manual (Downtime)
Stop, remove, pull, and restart. Simple but causes downtime:
```shell
docker stop my-app     # 1. Stop the running container
docker rm my-app       # 2. Remove it
docker pull my-app:v2  # 3. Pull the new image

# 4. Start the new version
docker run -d \
  --name my-app \
  -p 80:80 \
  my-app:v2
```
The service is offline from the moment the old container stops until the new one is running. Fine for non-critical services or maintenance windows.
## Strategy 2: Compose Recreate (Recommended)
Compose handles the stop/remove/start cycle automatically:
```shell
# 1. Update the image tag in compose.yaml
#    image: my-app:v2

# 2. Pull the new image
docker compose pull

# 3. Recreate only the changed services
docker compose up -d
```
```mermaid
flowchart LR
    A["Pull new image"] --> B["Stop old container"]
    B --> C["Start new container"]
    C --> D["Verify health"]
    style A fill:#e3f2fd,stroke:#1565c0
    style B fill:#fff3e0,stroke:#ef6c00
    style C fill:#e8f5e9,stroke:#2e7d32
```
Compose only recreates containers whose configuration has changed. Downtime is limited to the gap between stopping the old container and the new one becoming ready, typically well under a second for fast-starting services.
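The "verify health" step can be automated by defining a health check on the service itself. A minimal sketch, assuming the app exposes a `/health` endpoint and has `curl` available in the image (the service name and endpoint are illustrative):

```yaml
services:
  my-app:
    image: my-app:v2
    ports:
      - "80:80"
    healthcheck:
      # Mark the container unhealthy if the endpoint stops responding
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 10s
      timeout: 3s
      retries: 3
```

With a health check defined, `docker compose up -d --wait` blocks until the recreated service actually reports healthy, rather than returning as soon as the container starts.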
### After Update
```shell
# Verify all services are healthy
docker compose ps

# Check logs for errors
docker compose logs --tail=50

# Clean up old images
docker image prune -f
```
## Strategy 3: Blue/Green (Zero Downtime)
For services that cannot tolerate any downtime, use a reverse proxy with two container instances:
```mermaid
flowchart TD
    User -->|"Traffic"| Proxy["Reverse Proxy<br/>(Nginx/Traefik)"]
    Proxy -->|"Active"| Blue["Blue (v1)<br/>Running"]
    Proxy -.->|"Standby"| Green["Green (v2)<br/>Starting"]
    Green -->|"Health check passes"| Switch["Switch traffic to Green"]
    Switch --> StopBlue["Stop Blue"]
    style Blue fill:#e3f2fd,stroke:#1565c0
    style Green fill:#e8f5e9,stroke:#2e7d32
    style Switch fill:#fff3e0,stroke:#ef6c00
```
1. Blue (v1) is running and receiving all traffic.
2. Start Green (v2) alongside it and wait for its health check to pass.
3. Update the reverse proxy to route traffic to Green.
4. Stop and remove Blue.
This requires a reverse proxy (Nginx, Traefik, Caddy) and is more complex to set up, but provides true zero downtime.
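In practice, the "switch traffic" step is often a one-line proxy change. A minimal Nginx sketch, assuming the two containers are reachable as `my-app-blue` and `my-app-green` on a shared Docker network (all names are illustrative):

```nginx
upstream my-app {
    # Point at the color that should receive traffic, then run
    # `nginx -s reload` to apply without dropping in-flight connections.
    server my-app-green:80;   # was: server my-app-blue:80;
}

server {
    listen 80;
    location / {
        proxy_pass http://my-app;
    }
}
```

Because `nginx -s reload` starts new workers before retiring old ones, existing connections finish against the old upstream while new requests go to Green.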
## Strategy 4: Automatic Updates (Watchtower)
For home labs or non-critical services, Watchtower automatically monitors registries and updates containers when new images are pushed:
```shell
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
```
With Watchtower you give up control over when updates happen: a breaking change could deploy while you are asleep. Use it only for tools and services where unattended updates are acceptable.
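If you do run Watchtower, you can limit the blast radius by updating only containers that explicitly opt in. A sketch using Watchtower's label filter in Compose (`my-tool` is an illustrative service name; check the Watchtower documentation for your version):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    command: --label-enable            # only update containers that opt in
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  my-tool:
    image: my-tool:latest              # illustrative image name
    labels:
      # Opt this container in to automatic updates
      - com.centurylinklabs.watchtower.enable=true
```

Containers without the label are left alone, so critical services stay on manual updates while low-risk tools update themselves.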
## Keeping Data Safe During Updates
Named volumes are not affected by container recreation:
```shell
docker compose down   # Containers removed, volumes preserved
docker compose up -d  # New containers, same data
```
As long as your data is on a named volume (not in the container filesystem), updates are safe.
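As a sketch of the safe layout, assuming a Postgres service (the service and volume names are illustrative):

```yaml
services:
  db:
    image: postgres:16
    volumes:
      # Data lives in the named volume, not the container filesystem
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:    # survives `docker compose down` and container recreation
```

Note that `docker compose down -v` is the exception: the `-v` flag deletes named volumes, so omit it unless you intend to wipe the data.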
## Key Takeaways
- Docker updates replace the entire container, not just the software inside it. Your data must be on a volume.
- For most stacks, `docker compose pull && docker compose up -d` is the right approach -- minimal downtime, simple process.
- Always verify after updating: check `docker compose ps`, review logs, test endpoints.
- Use Blue/Green deployments with a reverse proxy for zero-downtime requirements.
- Watchtower is convenient for home labs but trades control for automation -- avoid it in production.
## What's Next
- Return to the Operations and Maintenance module overview.