Setting Up Docker CI for Rust Projects

Learn how to configure a complete Docker CI pipeline for Rust projects with multi-architecture support and automated releases.

Wayne Lau

  ·  3 min read

Rust CI #


There are a few stages to getting a Docker CI pipeline working for a Rust project.

dist #

This is usually the starting point; it's quite easy to get going.

The main guide is here.
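If you are starting from scratch, the setup is roughly this (command names as in the cargo-dist docs; older versions use `cargo dist init` instead of `dist init`):

```shell
# Install the dist CLI (formerly cargo-dist)
cargo install cargo-dist

# Initialize dist in the workspace: writes dist-workspace.toml
# and generates .github/workflows/release.yml
dist init
```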

release.yml #

# the rest comes from dist generate
  custom-docker-publish:
    needs:
      - plan
      - announce
    uses: ./.github/workflows/docker-publish.yml
    with:
      plan: ${{ needs.plan.outputs.val }}
      # I added this so that it's easier to automate and apply to other repos
      binary_name: mock-openai
    secrets: inherit
    permissions:
      "contents": "read"
      "packages": "write"

dist-workspace.toml #

post-announce-jobs does what it says: after the release is announced, it triggers the Docker workflow.

github-custom-job-permissions was added because the Docker workflow initially failed with insufficient permissions.

post-announce-jobs = ["./docker-publish"]
# Permissions for docker-publish workflow
github-custom-job-permissions = { "docker-publish" = { packages = "write", contents = "read" } }
# needed because release.yml was hand-edited to pass the binary name
allow-dirty = ["ci"]
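For the Docker workflow below to find musl artifacts for both architectures, the dist config also has to build those targets. A sketch of what I'd expect in dist-workspace.toml (treat the exact keys as an assumption; check the dist docs for your version):

```toml
[dist]
# Both targets must be built so the Docker job can download
# artifacts-build-local-{x86_64,aarch64}-unknown-linux-musl
targets = ["x86_64-unknown-linux-musl", "aarch64-unknown-linux-musl"]
```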

docker workflow #

The pattern for this is quite generic; the important part is using the announcement tag from the dist plan to drive the image versioning.

name: Docker Publish

on:
  workflow_call:
    inputs:
      plan:
        required: true
        type: string
        description: "The dist plan JSON"
      binary_name:
        required: true
        type: string
        description: "The name of the binary produced by cargo-dist"
      target_triple_suffix:
        required: false
        type: string
        default: "unknown-linux-musl"
        description: "The target triple suffix used in artifact names (e.g. unknown-linux-musl or unknown-linux-gnu)"

jobs:
  docker-publish:
    runs-on: ubuntu-22.04
    # Only run for actual releases
    if: ${{ fromJson(inputs.plan).announcement_tag != '' }}
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false

      # these steps are quite generic
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      # need the version as ARG for the Docker build
      - name: Extract version from plan
        id: version
        run: |
          TAG='${{ fromJson(inputs.plan).announcement_tag }}'
          echo "version=${TAG}" >> "$GITHUB_OUTPUT"
          echo "tag=${TAG}" >> "$GITHUB_OUTPUT"

      # semver tags and latest tag
      - name: Docker metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ghcr.io/${{ github.repository }}
          tags: |
            type=semver,pattern={{version}},value=${{ steps.version.outputs.tag }}
            type=semver,pattern={{major}}.{{minor}},value=${{ steps.version.outputs.tag }}
            type=semver,pattern={{major}},value=${{ steps.version.outputs.tag }},enable=${{ !startsWith(steps.version.outputs.tag, 'v0.') && !startsWith(steps.version.outputs.tag, '0.') }}
            type=raw,value=latest,enable=${{ !fromJson(inputs.plan).announcement_is_prerelease }}

      - uses: actions/download-artifact@v4
        with:
          name: artifacts-build-local-x86_64-${{ inputs.target_triple_suffix }}
          path: artifacts/amd64

      - uses: actions/download-artifact@v4
        with:
          name: artifacts-build-local-aarch64-${{ inputs.target_triple_suffix }}
          path: artifacts/arm64

      - name: Extract and Normalize Artifacts
        run: |
          # Extract
          tar -xJf artifacts/amd64/*.tar.xz -C artifacts/amd64/
          tar -xJf artifacts/arm64/*.tar.xz -C artifacts/arm64/

          # Move (each tarball unpacks into a subdirectory;
          # note ** without globstar is just a single-level *)
          mv artifacts/amd64/*/${{ inputs.binary_name }} artifacts/amd64/${{ inputs.binary_name }}
          mv artifacts/arm64/*/${{ inputs.binary_name }} artifacts/arm64/${{ inputs.binary_name }}

          # Insurance policy: Mark as executable on the runner host
          # This 'mode' is preserved by the Docker COPY command
          chmod +x artifacts/amd64/${{ inputs.binary_name }}
          chmod +x artifacts/arm64/${{ inputs.binary_name }}

      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          build-args: BINARY_NAME=${{ inputs.binary_name }}
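As a local sanity check of what those metadata-action patterns produce, here is a plain-shell sketch (not part of the workflow) of how a stable tag fans out into image tags:

```shell
#!/bin/sh
# Given an announcement tag, derive the image tags the metadata-action
# config above would generate for a stable >= 1.0 release.
TAG="v1.2.3"
VERSION="${TAG#v}"           # {{version}}         -> 1.2.3
MAJOR_MINOR="${VERSION%.*}"  # {{major}}.{{minor}} -> 1.2
MAJOR="${MAJOR_MINOR%.*}"    # {{major}}           -> 1
echo "$VERSION $MAJOR_MINOR $MAJOR latest"
```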

Dockerfile #

The right base image depends entirely on what your binary needs at runtime.

The list of images is here: GoogleContainerTools/distroless

Rather than trial-and-error with base images, a good way to decide is to run ldd against the binary.

For example, one binary's output looks like this:

linux-vdso.so.1 (0x00007ffdfb764000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x000077a805e1a000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x000077a805119000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x000077a804e00000)
/lib64/ld-linux-x86-64.so.2 (0x000077a805e50000)

Those dependencies (libgcc, glibc) mean that binary needs a container with cc, i.e. gcr.io/distroless/cc. A fully static musl build, which is the default target above, has no dynamic dependencies at all and can use distroless/static instead, as the Dockerfile below does.

FROM gcr.io/distroless/static-debian13:nonroot

ARG TARGETARCH
ARG BINARY_NAME

# Copy to a fixed path: exec-form ENTRYPOINT does not expand variables,
# and distroless has no shell to expand them at runtime
COPY artifacts/${TARGETARCH}/${BINARY_NAME} /usr/local/bin/app

EXPOSE 8000

USER nonroot:nonroot

ENTRYPOINT ["/usr/local/bin/app"]
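After a release, the multi-arch manifest can be checked with buildx imagetools (OWNER/REPO are placeholders for your repository):

```shell
# Should list both linux/amd64 and linux/arm64 entries
docker buildx imagetools inspect ghcr.io/OWNER/REPO:latest
```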