Build and use a custom Docker image

The pre-built GPU Docker Stacks images cover most PyTorch and TensorFlow workflows. Build your own image when you need:

  • a Python package not included in the pre-built images (e.g. torch-geometric, faiss, open3d, detectron2),

  • a system-level apt dependency, or

  • pinned package versions for strict reproducibility.

A custom image works in both JupyterHub and batch jobs — build it once, use it everywhere.

Prerequisites

  • Docker installed on your local machine.

  • An account on a container registry where you can push images. Any of these work:

    • GitHub — provides the free GitHub Container Registry (ghcr.io) for both public and private images.

    • GitLab.com — every project has a built-in container registry at registry.gitlab.com.

    • Docker Hub — the default public registry; free for public images.

    Note

    IDLab users

    If you have access to the IDLab GitLab instance, you can use its container registry at gitlab.ilabt.imec.be:4567. It is co-located with the Slices AI infrastructure, which makes image pulls slightly faster. The steps below are identical; just substitute gitlab.ilabt.imec.be:4567/<namespace>/<project>/<image>:<tag> wherever a registry path appears.

Step 1 — Choose a base image

The right base image depends on how you intend to use the custom image:

For JupyterHub — You must extend an image compatible with Jupyter Docker Stacks. For example, when starting from the PyTorch image, your Dockerfile could use the following base image:

jupyter/pytorch-notebook:cuda12-latest

These images are publicly readable from any machine — no login required to use them as a FROM base.

For batch jobs only — Any CUDA-capable image works; it does not need JupyterLab. Official NVIDIA runtime images are a lightweight option:

nvidia/cuda:12.8.1-cudnn-runtime-ubuntu24.04
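As a sketch, a batch-only Dockerfile extending this base could look like the following. The package names are illustrative; note that the NVIDIA runtime images ship without Python, and Ubuntu 24.04 marks its system Python as "externally managed" (PEP 668), hence the --break-system-packages flag:

```dockerfile
# Minimal CUDA runtime base — no JupyterLab, no Python preinstalled
FROM nvidia/cuda:12.8.1-cudnn-runtime-ubuntu24.04

# Install Python; clean the apt caches in the same layer to keep the image small
RUN apt-get update && \
    apt-get install -y python3 python3-pip && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Package name is illustrative; --break-system-packages is required because
# Ubuntu 24.04's system Python refuses system-wide pip installs by default
RUN pip3 install --no-cache-dir --break-system-packages torch
```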

See Choosing a Docker image for a full comparison of all available image sources.

Step 2 — Write the Dockerfile

A Dockerfile is a recipe that extends the base image with your additional dependencies.

Example — PyTorch image extended with graph neural network packages:

# Start from the GPU-enabled PyTorch base image
FROM gitlab.ilabt.imec.be:4567/ilabt/gpu-docker-stacks/pytorch-notebook:cuda12.6-latest

# System-level packages require root
USER root
RUN apt-get update && \
    apt-get install -y libgraphviz-dev && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Install Python packages
RUN pip install --no-cache-dir torch-geometric pygraphviz

# Always restore the notebook user when extending a JupyterHub image
USER jovyan

Tip

The example above applies a few best practices for keeping the image small: the apt and pip caches are cleaned up in the same layer that creates them. Smaller images pull faster and use less registry storage.

For more patterns (conda packages, custom Jupyter extensions, multi-stage builds), see the Jupyter Docker Stacks — extending images documentation.
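For conda-managed packages, the Jupyter Docker Stacks images ship with mamba preinstalled, and installs into the default environment work as the regular notebook user, so no USER switch is needed. A minimal sketch (the rdkit package is illustrative):

```dockerfile
FROM jupyter/pytorch-notebook:cuda12-latest

# mamba installs run fine as the default notebook user (jovyan);
# cleaning the package cache afterwards keeps the image small
RUN mamba install --yes rdkit && \
    mamba clean --all -f -y
```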

Step 3 — Build the image

From the directory containing your Dockerfile:

❯ docker build -t <registry>/<your-image>:v1 .

Replace <registry>/<your-image> with the full image path for your registry (examples in Step 4 below). Use a meaningful version tag (:v1, :v2, …) so you can roll back.
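The full image path is simply the concatenation of registry host, namespace, image name, and tag. A quick sketch with hypothetical names:

```shell
# Compose a full image path from its parts (all names are hypothetical)
REGISTRY="ghcr.io"
NAMESPACE="alice"
IMAGE="gnn-notebook"
TAG="v1"
FULL_PATH="${REGISTRY}/${NAMESPACE}/${IMAGE}:${TAG}"
echo "${FULL_PATH}"   # ghcr.io/alice/gnn-notebook:v1
```

The same composed path is used for docker build, docker push, and later in the JupyterHub spawn page or job definition.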

Apple Silicon / ARM users — cross-compile for x86_64

All Slices AI clusters run on x86_64 (linux/amd64) hardware. On an Apple M-series Mac (or any other ARM machine), docker build produces an ARM image by default, which will not run on the Slices AI infrastructure.

Use docker buildx to cross-compile. One-time setup:

❯ docker buildx create --name slices-builder --use

Then replace docker build with:

❯ docker buildx build --platform linux/amd64 -t <registry>/<your-image>:v1 --push .

buildx builds and pushes in one step; the --push flag is required because cross-compiled images cannot be loaded into the local Docker daemon. Skip Step 4 — the image is already in the registry.

Step 4 — Push to the registry

Log in to your registry and push the image.

GitHub Container Registry (ghcr.io)

  1. Create a Personal Access Token (classic) with the write:packages scope.

  2. Log in and push:

    ❯ echo "<your-PAT>" | docker login ghcr.io -u <github-username> --password-stdin
    ❯ docker push ghcr.io/<github-username>/<image-name>:v1
    

GitLab.com

  1. Create a Personal Access Token with the read_registry and write_registry scopes.

  2. Log in and push:

    ❯ docker login registry.gitlab.com
    ❯ docker push registry.gitlab.com/<namespace>/<project>/<image-name>:v1
    

Docker Hub

Warning

Docker Hub has very strict rate limits. We strongly recommend against using Docker Hub to host images.

  1. Create an account at https://hub.docker.com.

  2. Log in and push:

    ❯ docker login
    ❯ docker push <dockerhub-username>/<image-name>:v1
    

Step 5 — Use the image

In JupyterHub

On the JupyterHub spawn page, paste the full image path into the Docker image field. For public images, no further configuration is needed:

  • GitHub Container Registry: ghcr.io/<github-username>/<image-name>:v1

  • GitLab.com: registry.gitlab.com/<namespace>/<project>/<image-name>:v1

  • Docker Hub: <dockerhub-username>/<image-name>:v1

In a batch job — public image

Set the image field in your job definition to the full image path:

{
  "request": {
    "docker": {
      "image": "ghcr.io/<github-username>/<image-name>:v1"
    }
  }
}

In a batch job — private image

For private images, embed credentials directly in the image path using the format <username>:<token>@<registry>/<image>:<tag>:

  • GitHub Container Registry: <github-username>:<PAT>@ghcr.io/<github-username>/<image-name>:v1

  • GitLab.com (deploy token): <token-username>:<token>@registry.gitlab.com/<namespace>/<project>/<image-name>:v1

  • Docker Hub: <dockerhub-username>:<access-token>@index.docker.io/<dockerhub-username>/<image-name>:v1
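Putting it together, a job definition that pulls a private image from GitHub Container Registry would look like this (placeholders as above; the PAT is a read-only token, not a password):

```json
{
  "request": {
    "docker": {
      "image": "<github-username>:<PAT>@ghcr.io/<github-username>/<image-name>:v1"
    }
  }
}
```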

Warning

Never use your personal account password. Use a scoped access token with the minimum required permission (registry read only):

  • GitHub: a PAT with read:packages scope (or a fine-grained token scoped to the specific repository with Packages: read permission).

  • GitLab.com: a Deploy Token (Settings → Repository → Deploy tokens) with read_registry scope.

  • Docker Hub: an Access Token (Account Settings → Personal access tokens) with Public Repo Read-only or Repository Read permission.

See image in the Job Definition reference for the full image field syntax.