Multi-Stage Builds + Distroless: The Dockerfile Pattern I Use for Every Rust Service
The Day My VPS Ran Out of Disk Space
I still remember the moment clearly. I had just deployed a new Axum microservice to my self-hosted VPS, ran `docker images`, and stared at the output in disbelief.

```text
REPOSITORY    TAG       IMAGE ID       CREATED         SIZE
my-api        latest    a3f9d2c1b4e7   2 minutes ago   1.73GB
```

1.73 gigabytes. For a REST API that accepted a JSON body and returned another JSON body.
I was pulling images over a metered VPS connection, running multiple services side-by-side, and watching my disk fill up one deployment at a time. Something had to change.
That’s what sent me down the rabbit hole of multi-stage builds and distroless images — and I haven’t looked back since. This post is a full walkthrough of the Dockerfile pattern I now use for every Rust service I deploy, including the exact setup I use for my Axum APIs.
Why Rust Docker Images Balloon Out of Control
If you’ve ever run cargo build inside a container, you know the pain. The official rust base image ships with the full compiler toolchain, standard library sources, rustup, and a pile of build dependencies — all of which you need to compile your binary, but absolutely none of which you need to run it.
A compiled Rust binary bundles all of its Rust dependencies into a single self-contained executable (by default it links dynamically only against the system C library). The runtime image has no reason to know that Rust even exists.
The culprit is usually one of two naive Dockerfile patterns:
Pattern A — The everything-in-one Dockerfile:
```dockerfile
FROM rust:latest
WORKDIR /app
COPY . .
RUN cargo build --release
CMD ["./target/release/my-api"]
```

This ships the entire Rust toolchain to production. Size: 1.5–2GB.
Pattern B — The slim-but-still-bloated Dockerfile:
```dockerfile
FROM rust:slim
WORKDIR /app
COPY . .
RUN cargo build --release
CMD ["./target/release/my-api"]
```

Better, but still ships the compiler. Size: 700MB–1GB.
Both are wrong for production. Let’s fix this properly.
The Solution: Multi-Stage Builds + Distroless
The idea is simple but powerful: use two separate Docker stages.
- Builder stage — a full Rust environment that compiles your binary
- Runtime stage — a minimal image that only contains what’s needed to run the binary
For the runtime stage, instead of `debian:slim` or `alpine`, we use Google's distroless images — container images that contain only your application and its runtime dependencies. No shell. No package manager. No apt, no bash, nothing that isn't strictly needed.
Here’s what our final image sizes look like with this approach:
| Approach | Image Size |
|---|---|
| `rust:latest` (naive) | ~1.73 GB |
| `rust:slim` | ~700 MB |
| Multi-stage → `debian:slim` | ~90 MB |
| Multi-stage → `alpine` | ~15 MB |
| Multi-stage → distroless | ~8 MB |
Eight megabytes. That’s the kind of number that makes a difference when you’re self-hosting and pulling images over a real network connection.
The Dockerfile, Line by Line
Here’s the full pattern I use for my Axum services. I’ll walk through every decision below.
```dockerfile
# ── Stage 1: Builder ─────────────────────────────────────────────────────────
FROM rust:1.77-slim-bookworm AS builder
WORKDIR /app

# Install only what's needed for compilation
RUN apt-get update && apt-get install -y \
    pkg-config \
    libssl-dev \
    && rm -rf /var/lib/apt/lists/*

# Cache dependencies separately from source code
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo "fn main() {}" > src/main.rs
RUN cargo build --release
RUN rm -f target/release/deps/my_api*

# Now copy real source and build
COPY src ./src
RUN cargo build --release

# ── Stage 2: Runtime ─────────────────────────────────────────────────────────
FROM gcr.io/distroless/cc-debian12
WORKDIR /app

# Copy only the compiled binary from the builder
COPY --from=builder /app/target/release/my-api .

# Distroless images run as non-root by default (UID 65532)
USER nonroot:nonroot

EXPOSE 3000
CMD ["/app/my-api"]
```

Why rust:1.77-slim-bookworm and not rust:latest?
Pin your Rust version. latest will silently update on your next CI run and potentially break a build due to a compiler warning becoming an error, a dependency change, or an MSRV issue. slim-bookworm gives us Debian 12 without the extra documentation and locale packages.
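To keep local builds and CI on the same compiler the image uses, you can pin the toolchain in the repository as well. A minimal sketch of a `rust-toolchain.toml` at the project root (the channel here mirrors the `1.77` tag in the Dockerfile; adjust it to whatever you pin):

```toml
# rust-toolchain.toml: rustup picks this up automatically,
# so local builds match the version in the Dockerfile
[toolchain]
channel = "1.77"
components = ["rustfmt", "clippy"]
```

Now there is a single number to bump when you upgrade Rust, instead of one per environment.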
The dependency caching trick
This is the most important build performance optimization in the whole file:
```dockerfile
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo "fn main() {}" > src/main.rs
RUN cargo build --release
RUN rm -f target/release/deps/my_api*
```

Docker builds layer by layer and caches each layer. If we just `COPY . .` and then `cargo build`, Docker invalidates the entire build cache every time any source file changes — even a comment in main.rs — and recompiles all 200+ dependencies from scratch. (Note that the `rm` targets `my_api` with an underscore: Cargo replaces hyphens in the crate name with underscores in build artifacts.)
By copying only Cargo.toml and Cargo.lock first, creating a dummy main.rs, and building that, we compile all dependencies in a cacheable layer. The next time we build, if Cargo.lock hasn’t changed, Docker skips straight to compiling our actual source code. This turns a 4-minute build into a 20-second one.
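If the dummy-main trick feels fragile (it depends on removing the right `deps` artifact, and gets awkward with workspaces), cargo-chef automates the same layering. A hedged sketch of what the builder side might look like, assuming the published `lukemathwalker/cargo-chef` image and a binary named `my-api`:

```dockerfile
FROM lukemathwalker/cargo-chef:latest-rust-1.77 AS chef
WORKDIR /app

# Capture the dependency graph as a recipe.json
FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

# Build only the dependencies; this layer stays cached until Cargo.lock changes
FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json

# Then build the actual application on top of the cached dependency layer
COPY . .
RUN cargo build --release --bin my-api
```

The runtime stage stays exactly the same; only the builder changes.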
Why gcr.io/distroless/cc-debian12 and not gcr.io/distroless/static?
There are two common distroless variants for Rust:
- `distroless/static` — truly static, no C library. Works perfectly if you compile with musl or use `--target x86_64-unknown-linux-musl`.
- `distroless/cc` — includes glibc and libgcc. Works with standard Rust compilation against glibc (the default).
Since I’m compiling with the standard Rust toolchain (which links against glibc by default), I use distroless/cc. If I were cross-compiling for musl, I’d switch to distroless/static.
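For reference, a musl build paired with `distroless/static` might look like this. This is a sketch, assuming the `x86_64-unknown-linux-musl` target and a pure-Rust dependency tree with no C bindings:

```dockerfile
FROM rust:1.77-slim-bookworm AS builder
WORKDIR /app
# Add the musl target so the binary is fully static
RUN rustup target add x86_64-unknown-linux-musl
COPY . .
RUN cargo build --release --target x86_64-unknown-linux-musl

FROM gcr.io/distroless/static-debian12
COPY --from=builder /app/target/x86_64-unknown-linux-musl/release/my-api /app/my-api
USER nonroot:nonroot
CMD ["/app/my-api"]
```

Crates that wrap C libraries (openssl being the classic case) complicate musl builds; switching to rustls-based alternatives is usually the path of least resistance.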
Note for SSL users: If your Axum service makes outbound HTTPS calls, you need CA certificates in the runtime image. Use `gcr.io/distroless/cc-debian12`, which includes them, or explicitly copy `/etc/ssl/certs/ca-certificates.crt` from the builder stage.
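If you do need the explicit copy (for example with `distroless/static`), it is one line, assuming the builder stage is Debian-based so the bundle lives at the standard path:

```dockerfile
# ca-certificates must be installed in the builder stage for this file to exist
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
```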
Non-root by default
Distroless images come with a built-in nonroot user (UID 65532). Unlike a full Debian image where you have to manually create a user and chown directories, distroless handles this for you. Setting USER nonroot:nonroot means your production process never runs as root — a key security principle that’s easy to forget when you’re rushing to ship.
Axum-Specific Configuration
Since Axum binds to a socket address, there’s one thing to make sure: your service doesn’t hardcode 127.0.0.1 as the bind address. Inside a container, you need to bind to 0.0.0.0 to accept traffic from outside the container.
```rust
use axum::{
    routing::{get, post},
    Router,
};

// Stub handlers so the snippet compiles; replace with your own
async fn health_handler() -> &'static str { "ok" }
async fn data_handler() -> &'static str { "ok" }

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/health", get(health_handler))
        .route("/api/v1/data", post(data_handler));

    // Read from environment, default to 0.0.0.0 for container compatibility
    let host = std::env::var("HOST").unwrap_or_else(|_| "0.0.0.0".to_string());
    let port = std::env::var("PORT").unwrap_or_else(|_| "3000".to_string());
    let addr = format!("{}:{}", host, port);

    let listener = tokio::net::TcpListener::bind(&addr).await.unwrap();
    println!("Listening on {}", addr);
    axum::serve(listener, app).await.unwrap();
}
```

Configuring via environment variables also plays nicely with Docker Compose and any secrets management setup.
The Full docker-compose.yml for Self-Hosting
Since I deploy everything to a self-hosted VPS, here’s the docker-compose.yml I pair with this Dockerfile:
```yaml
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    image: my-axum-api:latest
    container_name: my-api
    restart: unless-stopped
    ports:
      - "127.0.0.1:3000:3000" # Only expose to localhost, reverse proxy handles TLS
    environment:
      - HOST=0.0.0.0
      - PORT=3000
      - DATABASE_URL=${DATABASE_URL}
      - RUST_LOG=info
    healthcheck:
      # Must be exec form ("CMD", ...): distroless has no shell or curl,
      # so the binary needs to implement its own --health-check mode
      test: ["CMD", "/app/my-api", "--health-check"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - internal

networks:
  internal:
    driver: bridge
```

A few things worth noting:

- `127.0.0.1:3000:3000` — I only bind the port to localhost on the host machine. My Nginx reverse proxy handles TLS termination and forwards traffic to this port. Exposing `0.0.0.0:3000` directly would make the API publicly reachable without TLS.
- `restart: unless-stopped` — Ensures the container comes back up after a VPS reboot.
- Environment variables via `.env` — Secrets like `DATABASE_URL` are loaded from a `.env` file that is never committed to git.
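For completeness, the reverse proxy side can be quite small. A minimal Nginx sketch, assuming certificates are managed elsewhere (e.g. certbot) and using the hypothetical domain `api.example.com`:

```nginx
server {
    listen 443 ssl;
    server_name api.example.com;

    ssl_certificate     /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;

    location / {
        # Forward to the container port bound on localhost
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

TLS terminates here; everything behind it is plain HTTP on the loopback interface.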
Build It and See the Difference
```shell
# Build the image
docker build -t my-axum-api:latest .

# Check the size
docker images my-axum-api
# REPOSITORY    TAG       IMAGE ID       CREATED          SIZE
# my-axum-api   latest    b7e2a1d9c3f4   12 seconds ago   8.2MB
```

8.2MB. For a production Axum API.
You can verify what’s actually inside the image:
```shell
# List files in the distroless image (no shell, so use dive or docker export)
docker export $(docker create my-axum-api:latest) | tar -t | head -40
```

You’ll find your binary, the glibc libraries, CA certificates, and almost nothing else. There’s no /bin/bash, no /usr/bin, no package manager. An attacker who finds a remote code execution vulnerability in your application has virtually no tools available to them inside the container.
Security Scanning Before and After
One of the side benefits of this approach — beyond size — shows up when you run a vulnerability scanner like Trivy:
```shell
# Scan the distroless image
trivy image my-axum-api:latest
```

With a full Debian base, Trivy routinely flags dozens of CVEs across the system packages. With distroless, most of those packages simply don’t exist. Fewer packages = smaller attack surface = fewer CVEs = happier security teams.
This matters even for solo developers self-hosting on a VPS. Smaller images pull faster, use less disk, and give you a better security posture for essentially zero extra effort once the pattern is in place.
When Distroless Doesn’t Work
To be fair, distroless isn’t always the right choice. Here are a few situations where you might need a different approach:
You need a shell for debugging. Distroless has no shell, which means no `docker exec -it my-container bash`. For debugging production issues, you’ll want to rely on structured logging and tracing (I use my own otel-rs for this). Alternatively, keep a separate debug image built on `debian:slim` for local use.
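One middle ground worth knowing about: you can attach a throwaway container to the running service's namespaces when you really need a shell, without shipping one in the image. A sketch using plain Docker flags, with the container name `my-api` from the compose file:

```shell
# busybox shares the service's PID and network namespaces, so its shell
# can see the service's processes, ports, and /proc/<pid>/root filesystem
docker run --rm -it \
  --pid=container:my-api \
  --network=container:my-api \
  busybox sh
```

The distroless image itself stays untouched; the tooling only exists for the lifetime of the debug session.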
You have complex runtime dependencies. If your binary dynamically loads shared libraries beyond glibc (like native database drivers or image processing libraries), you’ll need to explicitly COPY those into the runtime stage. It’s doable, but adds complexity.
You’re targeting ARM/IoT edge devices. Distroless images are available for linux/arm64 but have limited linux/arm/v7 support. For 32-bit ARM targets, alpine is often the better choice.
The Template, Ready to Copy
Here’s the clean, copy-pasteable final version without the educational comments:
```dockerfile
FROM rust:1.77-slim-bookworm AS builder
WORKDIR /app

RUN apt-get update && apt-get install -y \
    pkg-config \
    libssl-dev \
    && rm -rf /var/lib/apt/lists/*

COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo "fn main() {}" > src/main.rs
RUN cargo build --release
RUN rm -f target/release/deps/my_api*

COPY src ./src
RUN cargo build --release

# ---
FROM gcr.io/distroless/cc-debian12
WORKDIR /app
COPY --from=builder /app/target/release/my-api .
USER nonroot:nonroot
EXPOSE 3000
CMD ["/app/my-api"]
```

Replace `my-api` with your binary name (matching the `name` field in your `Cargo.toml`), and you’re done.
Wrapping Up
The shift from a naive Rust Dockerfile to a multi-stage distroless build is one of the highest-value, lowest-effort improvements you can make to your deployment pipeline. Here’s the summary:
- Start with `rust:slim` in the builder stage, not `rust:latest`
- Cache dependencies by copying `Cargo.toml`/`Cargo.lock` first and building a dummy binary
- Use `gcr.io/distroless/cc-debian12` as your runtime base (or `distroless/static` for musl builds)
- Run as `nonroot` — it’s free and it matters
- Bind to `0.0.0.0` in Axum and let your reverse proxy handle TLS
The result: an 8MB production image, a minimal attack surface, and deployments that pull in seconds over your VPS connection instead of minutes.
I’ve been using this pattern across all my Rust services, and it’s saved me from a lot of unnecessary disk management headaches. If you’re running self-hosted infrastructure, the cumulative effect across multiple services is very noticeable.
Enjoyed this post? You might also like From PyTorch to Burn: Why I’m Training Models in Rust Now.
If you have questions or want to share how you’ve adapted this for your own stack, feel free to reach out — qcynaut@gmail.com or find me on GitHub at @qcynaut.