# Multi-Stage Builds
A single-stage Dockerfile includes everything needed to build your application -- compilers, package managers, build caches, and test tools. All of that ends up in the final image, even though none of it is needed at runtime. Multi-stage builds solve this by letting you use multiple `FROM` instructions, each creating a separate stage.
## The Problem with Single-Stage Builds

```mermaid
flowchart LR
    subgraph single["Single-Stage Image"]
        direction TB
        A["Base OS + Runtime"]
        B["Compiler / Build Tools"]
        C["Dev Dependencies"]
        D["Build Cache"]
        E["Application Binary"]
    end
    subgraph multi["Multi-Stage: Runtime Image"]
        direction TB
        F["Minimal Base OS"]
        G["Application Binary"]
    end
    single -.->|"~800 MB"| X[" "]
    multi -.->|"~50 MB"| Y[" "]
    style single fill:#ffebee,stroke:#c62828
    style multi fill:#e8f5e9,stroke:#2e7d32
    style X fill:none,stroke:none
    style Y fill:none,stroke:none
```
Everything in the single-stage image (compilers, build tools, dev dependencies) increases the image size and the attack surface in production.
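As a concrete sketch of the problem, a single-stage Dockerfile for a Go service might look like this (illustrative only; the Go example later on this page is the multi-stage counterpart):

```dockerfile
# Single-stage sketch: build and runtime share one image.
# The Go toolchain, module cache, and source code all remain
# in the shipped image, even though only the binary is needed.
FROM golang:1.23-alpine
WORKDIR /src
COPY . .
RUN go build -o /usr/local/bin/app .
ENTRYPOINT ["/usr/local/bin/app"]
```

Here the `golang:1.23-alpine` base (roughly 300 MB) travels all the way to production alongside the few-megabyte binary.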
## How Multi-Stage Works

A multi-stage Dockerfile contains multiple `FROM` instructions, each of which starts a new stage. You can copy files from one stage into another with `COPY --from=<stage>`:
```mermaid
flowchart LR
    subgraph build["Build Stage"]
        direction TB
        A1["Full SDK / Compiler"]
        A2["Source Code"]
        A3["Compile / Package"]
        A4["Built Artifact"]
    end
    subgraph runtime["Runtime Stage"]
        direction TB
        B1["Minimal Base Image"]
        B2["Copied Artifact"]
        B3["Run Application"]
    end
    A4 -->|"COPY --from=build"| B2
    style build fill:#e3f2fd,stroke:#1565c0
    style runtime fill:#e8f5e9,stroke:#2e7d32
```
The final image only contains what is in the last stage. Everything from earlier stages is discarded.
## Complete Examples

### Go (Compiled Language)
Go compiles to a static binary, so the runtime image can be extremely minimal:
```dockerfile
# Build stage: full Go SDK
FROM golang:1.23-alpine AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o app .

# Runtime stage: just the binary
FROM alpine:3.20
RUN adduser -D -u 10001 appuser
COPY --from=build /src/app /usr/local/bin/app
USER appuser
ENTRYPOINT ["/usr/local/bin/app"]
```
| Stage | Base Image | Contains | Size |
|---|---|---|---|
| Build | golang:1.23-alpine (~300 MB) | Go compiler, source, dependencies | Not shipped |
| Runtime | alpine:3.20 (~7 MB) | Only the compiled binary | ~12 MB final |
### Node.js (Interpreted Language)
Node.js needs the runtime in the final image, but you can still skip dev dependencies and build tools:
```dockerfile
# Dependencies stage
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

# Build stage
FROM node:20-alpine AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

# Runtime stage: only production dependencies + built output
FROM node:20-alpine AS runtime
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
USER node
CMD ["node", "dist/server.js"]
```
### Python
```dockerfile
# Build stage: install dependencies into a separate prefix
FROM python:3.12-slim AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install --no-cache-dir -r requirements.txt

# Runtime stage
FROM python:3.12-slim AS runtime
WORKDIR /app
COPY --from=build /install /usr/local
COPY . .
USER 10001
CMD ["python", "app.py"]
```
### Java
```dockerfile
# Build stage: full Maven SDK
FROM maven:3.9-eclipse-temurin-21 AS build
WORKDIR /src
COPY pom.xml .
RUN mvn -q dependency:go-offline
COPY . .
RUN mvn -q -DskipTests package

# Runtime stage: only the JRE + compiled JAR
FROM eclipse-temurin:21-jre
WORKDIR /app
# Assumes the pom.xml sets <finalName>app</finalName>
COPY --from=build /src/target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```
## Choosing a Runtime Base Image

| Base Image | Size | Shell | Package Manager | Best For |
|---|---|---|---|---|
| `scratch` | 0 MB | No | No | Statically compiled Go/Rust binaries |
| Distroless | ~2-20 MB | No | No | Maximum security, minimal attack surface |
| Alpine | ~7 MB | Yes (sh) | Yes (apk) | Most workloads, good debugging access |
| Slim variants | ~50-150 MB | Yes (bash) | Yes (apt) | When Alpine's musl libc causes issues |
If you use `scratch` or Distroless and need to debug, you can build the build stage with `docker build --target build` and run a shell in that image, or use `docker debug` (available in Docker Desktop) to attach a debug shell to a running container.
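As a sketch of the Distroless option, the Go example's runtime stage could swap Alpine for Google's distroless static base (image tag and paths here are assumptions, following the Go example above):

```dockerfile
FROM golang:1.23-alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o app .

# Distroless runtime: no shell, no package manager.
# The :nonroot tag runs the binary as an unprivileged user.
FROM gcr.io/distroless/static:nonroot
COPY --from=build /src/app /app
ENTRYPOINT ["/app"]
```

Because the final image contains no shell, `docker exec -it <container> sh` will fail; that trade-off is exactly why the debugging techniques above matter.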
## Building Specific Stages

You can build a specific stage using `--target`. This is useful for debugging or running tests in CI:
```bash
# Build only the build stage (includes compilers, source, etc.)
docker build --target build -t app:build-stage .

# Inspect the build output
docker run --rm -it app:build-stage sh

# Build the full image (default: last stage)
docker build -t app:1.0.0 .
```
In a CI pipeline, you can use this to run tests in a dedicated test stage before building the production image:
```bash
# Run tests first
docker build --target test -t app:test .

# If tests pass, build the production image
docker build -t app:1.0.0 .
```
## Key Takeaways

- Multi-stage builds let you use full build tools during compilation while shipping only the minimal runtime in the final image.
- Use `COPY --from=<stage>` to copy only the artifacts you need into the runtime stage.
- Name your stages with `AS` for readability (`AS build`, `AS runtime`).
- The final image only contains the last stage -- everything from earlier stages is discarded.
- Use `--target` to build specific stages for debugging or testing.
- Choose the smallest runtime base image that supports your application (Alpine for most cases, `scratch` for static binaries).
## What's Next
- Continue to Tagging, Versioning, and Registries to learn how to name and publish your images.