The Slices AI infrastructure Command Line Interface

Installation

The Slices AI infrastructure CLI requires Python 3.10 or higher. To check which Python version you have, run python3 --version.

The Slices AI infrastructure CLI is available as the slices-cli-ai package and can be installed using pip. To install, run:

pip install slices-cli-ai --extra-index-url=https://doc.slices-ri.eu/pypi/
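If you are unsure about your interpreter, a quick local check before installing (a sketch; the virtual-environment path in the comments is an example, not a requirement):

```shell
# Check whether the interpreter meets the 3.10 requirement before installing.
python3 -c 'import sys; print("OK" if sys.version_info >= (3, 10) else "too old")'

# Optional: keep the CLI in its own virtual environment (path is an example):
# python3 -m venv ~/.venvs/slices
# ~/.venvs/slices/bin/pip install slices-cli-ai --extra-index-url=https://doc.slices-ri.eu/pypi/
```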

Basic CLI usage

After installation, the slices ai command is available:

 ❯ slices ai

 Usage: slices ai [OPTIONS] COMMAND [ARGS]...

 AI Infrastructure Service Commands.

╭─ Options ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --site,--site-id          TEXT  AI Infrastructure Service site ID. [env var: SLICES_AI_SITE, SLICES_AI_SITE_ID] [default: be-gent1]          │
│ --help            -h            Show this message and exit.                                                                                  │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Auxiliary commands ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ wait      Wait until the requested status has been reached (or can never be reached).                                                        │
│ modify    Change maxDuration, minDuration or notAfter of a job.                                                                              │
│ debug     Show Internal Job debug logs.                                                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Lifecycle commands ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ submit    Submit a job described in a file.                                                                                                  │
│ cancel    Cancel a job.                                                                                                                      │
│ rm        Delete a job.                                                                                                                      │
│ halt      Halt a job. Halted jobs may be automatically re-QUEUED later.                                                                      │
│ hold      Hold a QUEUED job, preventing it from running.                                                                                     │
│ requeue   Requeue/Release a held job, QUEUEing it, so it waits to run.                                                                       │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Information commands ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ clusters  Show available clusters and their resources.                                                                                       │
│ list      List your jobs.                                                                                                                    │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Job interaction commands ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ show      Show Job Details.                                                                                                                  │
│ output    Show Job output.                                                                                                                   │
│ ssh       Connect to a job using SSH.                                                                                                        │
│ scp       Transfer files to/from a job using SCP.                                                                                            │
│ sftp      Transfer files to/from a job using SFTP.                                                                                           │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

To get a list of your jobs, use the list subcommand:

 ❯ slices ai list
                             Jobs in Project myproject (3/3)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ID                                   ┃ Name                  ┃ Status    ┃ Created At            ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━┩
│ 1b615fdd-e6d1-470e-8b75-778e2c8aa3c7 │ JupyterHub-singleuser │ FINISHED  │ 2025-12-24 08:24 CEST │
│ 720559d6-b11c-4c92-bc7a-a68c33bad727 │ JupyterHub-singleuser │ FAILED    │ 2025-08-05 13:04 CEST │
│ 372a31ea-7753-4c2a-8221-6e187c4f9b53 │ NVIDIA SMI            │ FINISHED  │ 2025-03-30 11:20 CEST │
└──────────────────────────────────────┴───────────────────────┴───────────┴───────────────────────┘

Submitting a Job

A job on the Slices AI infrastructure is defined by a JSON job definition, which looks as follows:

my-first-jobRequest.json
{
    "name": "nvidia-smi",
    "description": "Print output of nvidia-smi command",
    "request": {
        "resources": {
            "cpus": 2,
            "gpus": 1,
            "cpuMemoryGb": 2,
            "clusterId": 4
        },
        "docker": {
            "image": "nvidia/cuda:12.8.1-cudnn-runtime-ubuntu24.04",
            "command": "nvidia-smi"
        }
    }
}
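Within this schema you can adapt the image and command to your workload. For example, a minimal job that prints the Python version of a stock image (a sketch: the resource numbers are illustrative, and the clusterId must match a cluster listed by slices ai clusters):

```json
{
    "name": "python-version",
    "description": "Print the Python version of a stock image",
    "request": {
        "resources": {
            "cpus": 1,
            "gpus": 1,
            "cpuMemoryGb": 2,
            "clusterId": 4
        },
        "docker": {
            "image": "python:3.12-slim",
            "command": "python3 --version"
        }
    }
}
```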

To submit the job, use the submit subcommand:

❯ slices ai submit my-first-jobRequest.json
✨ Created Job 4e92940d-cc87-4133-8cc1-f2ff33a7db9a

A UUID identifying the job is returned; you can use it (or a unique prefix of it) to refer to the job in later commands.

Getting information on a job

Status

You can query the status of this job with the show subcommand, using the job ID or any unique prefix of it:

❯ slices ai show 4e92940d
      Job ID: 4e92940d-cc87-4133-8cc1-f2ff33a7db9a
        Name: nvidia-smi
 Description: Print output of nvidia-smi command
     Project: proj_account.ilabt.imec.be_59z3qackp19veshw8yr8kws0yz
     User ID: user_account.ilabt.imec.be_0ma4rks06s9kxahxrgjhg6y41b
Docker image: nvidia/cuda:12.8.1-cudnn-runtime-ubuntu24.04
     Command: nvidia-smi
      Status: STARTING
  Cluster ID: -
   Worker ID: -
  Machine ID: -

               Timing:
      Created: 2026-04-07T15:22:26+02:00 (38 seconds ago)
       Queued: 2026-04-07T15:22:26+02:00 (less than 1 second after job creation)
     Assigned: 2026-04-07T15:22:26+02:00 (less than 1 second after QUEUED)
     Starting: 2026-04-07T15:22:31+02:00 (5 seconds after ASSIGNED)
      Running: -
        Ended: -
     Duration: -
State Updated: 2026-04-07T15:22:31+02:00 (33 seconds ago)
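The fixed "Label: value" layout is convenient to script against. A minimal parsing sketch (the Status line format is taken from the sample above; the piped line stands in for live show output, and the built-in wait command is the supported way to block until a status is reached):

```shell
# job_status extracts the value of the "Status:" line from `show` output.
job_status() { awk '/^ *Status:/ {print $2}'; }

# Live usage would be:  slices ai show 4e92940d | job_status
# Here we feed it a captured sample line instead:
printf '      Status: STARTING\n' | job_status
# prints: STARTING
```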

Output logs

You can view the command-line output of the job using the output subcommand:

❯ slices ai output 4e92940d
==========
== CUDA ==
==========

CUDA Version 12.8.1

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

Tue Apr  7 13:23:37 2026
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.82.07              Driver Version: 580.82.07      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1080 Ti     On  |   00000000:01:00.0 Off |                  N/A |
|  0%   33C    P8             10W /  280W |       3MiB /  11264MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
+-----------------------------------------------------------------------------------------+

Interactive access to a job

Important

You should only use these interactive functions for debugging and developing your jobs. Our fair use policy requires that jobs can run without manual intervention.

To achieve this, you can set up the environment for your job by creating a custom Docker container and/or by running a startup script.

Console access via SSH

The Slices AI infrastructure injects an SSH server into the Docker container running your job.

This gives you SSH access to the container, which is useful for debugging and development. You can use the ssh subcommand of the CLI to connect.

Example:

❯ slices ai ssh c873c137
ssh -p 5003 root@4a.gpulab.ilabt.imec.be
The authenticity of host '[4a.gpulab.ilabt.imec.be]:5003 (<no hostip for proxy command>)' can't be established.
ED25519 key fingerprint is SHA256:vTH1IxlybBfLLjMDCVX/2zrnTjiyZPJMFgrvoqtwlkE.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[4a.gpulab.ilabt.imec.be]:5003' (ED25519) to the list of known hosts.
root@4c4b2eebf31c:~#

If you want to know the underlying SSH command, you can use the --show command option of slices ai ssh. This is useful for connecting to the job with a different tool, for example the “Remote SSH” extension of Visual Studio Code, or for using rsync to transfer files to/from the job.

❯ slices ai ssh --no-exec --show command c873c137
ssh -p 5003 -J ffftwlocal@bastion.ilabt.imec.be root@4a.gpulab.ilabt.imec.be
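To drive rsync with this command, you can split it into SSH options and target and pass the options via rsync's -e flag. A sketch, using the sample command above as a hard-coded string (in practice you would capture it with $(slices ai ssh --no-exec --show command c873c137)); the final rsync invocation is echoed rather than executed, since it needs the live job, and the /root destination is the home directory seen in the SFTP session below:

```shell
# Sample output of `slices ai ssh --no-exec --show command` (hard-coded here):
SSH_CMD='ssh -p 5003 -J ffftwlocal@bastion.ilabt.imec.be root@4a.gpulab.ilabt.imec.be'

TARGET="${SSH_CMD##* }"   # last word: the user@host target
OPTS="${SSH_CMD% *}"      # everything before it: ssh plus its options

# rsync takes the SSH options as its remote shell; echoed for illustration:
printf 'rsync -av -e "%s" ./results/ %s:/root/results/\n' "$OPTS" "$TARGET"
```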

Note

This functionality is also available on the Slices AI website: in the job detail view, the Console tab gives you a terminal directly inside the running container.

Transferring files using SFTP

You can read and write files in the job’s container using SFTP. This includes files in the root filesystem and in the mounted project directories. The Slices AI infrastructure CLI offers built-in support for several SFTP GUIs as well as the sftp command-line client, and handles the SSH proxy for you when needed.

Example:

❯ slices ai sftp --proxy off c873c137
sftp -P 5003 root@4a.gpulab.ilabt.imec.be
Pseudo-terminal will not be allocated because stdin is not a terminal.
Connected to 4a.gpulab.ilabt.imec.be.
sftp> pwd
Remote working directory: /root
sftp> put example-file.txt
Uploading example-file.txt to /root/example-file.txt
example-file.txt                                                                                              100%    0     0.0KB/s   00:00
sftp>
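For unattended transfers, the standard sftp client also accepts a batch file of commands via its -b flag. A sketch (the port and host come from the sample session above and will differ for your job; the sftp invocation is echoed rather than run, since it needs the live job):

```shell
# Write a batch file of SFTP commands (standard `sftp -b` feature).
cat > upload.batch <<'EOF'
put example-file.txt /root/example-file.txt
bye
EOF

# For a live job you would run (values from the sample session above):
echo sftp -P 5003 -b upload.batch root@4a.gpulab.ilabt.imec.be
```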

Manipulating a job

Stopping

If you want to stop a job that is still running (or queued), you can use the cancel subcommand of the CLI:

❯ slices ai cancel c873c137
♻ Cancelled Job c873c137-73c0-4eb4-9dec-3c728096aac7

Extending the lifetime of a job

The default maximum duration of a job is 8 hours. If you want to extend it, you can use the modify subcommand of the CLI. Note that the extension is rejected if the new maximum duration conflicts with the expiry time of the underlying experiment:

❯ slices ai modify 643ff32d  --max-duration "12 hours"
🕙 Extended experiment.
❌ Could not modify job: ConflictException (for PUT
https://ai.slices-ri.eu/apis/ai.slices-ri.eu/v1/jobs/643ff32d-2313-4798-9555-3805c356b8fa/request/scheduling/maxDuration): maxDuration '12
hours' conflicts experiment expires_at: new maxDurati

Diagnostics

Inspecting the Slices AI infrastructure event log

You can view the internal event log of the Slices AI infrastructure. This is mostly useful for debugging purposes. It can contain error messages from the Slices AI infrastructure code that help you find the error in your job request, or that you can include in a bug report.

❯ slices ai debug 4e92940d
2026-04-07 13:22:26+00:00: Status to ASSIGNED
2026-04-07 13:22:26+00:00: Status to QUEUED
2026-04-07 13:22:26+00:00: DEBUG: Job requested_cluster_id_list=[4] cluster_id=4
2026-04-07 13:22:26+00:00: DEBUG: Can assign job 4e92940d-cc87-4133-8cc1-f2ff33a7db9a to any of ['gpulab4B/jobsd', 'gpulab4C/jobsd',
'gpulab4A/jobsd']
2026-04-07 13:22:26+00:00: DEBUG: Job assigned to gpulab4B/jobsd
2026-04-07 13:22:26+00:00: DEBUG: Updated scheduler_info to
{"assignedClusterId":4,"assignedInstanceId":"jobsd","assignedSlaveName":"gpulab4B","queuedExplanations":[],"tallyIncrement":null,"haltEvents":[]
,"schedulerSeenBase":"2026-04-07T13:22:26Z","schedulerSeenScoreHalt":null,"schedulerSeenPrioHalt":null,"ignoringReservationIds":[]}
2026-04-07 13:22:31+00:00:  INFO: A job was assigned to us by the master
2026-04-07 13:22:31+00:00: Status to STARTING
2026-04-07 13:22:31+00:00:  INFO: SSH pubkey access step 1 done: auto added port forwarding for 22
2026-04-07 13:22:31+00:00:  INFO: Claimed resources for job
2026-04-07 13:22:31+00:00:  INFO: Fetching latest version of image 'nvidia/cuda:12.8.1-cudnn-runtime-ubuntu24.04' (no auth)
2026-04-07 13:23:28+00:00:  INFO: Fetched Docker image 'nvidia/cuda:12.8.1-cudnn-runtime-ubuntu24.04' with hash
'sha256:2189eb90b6f7a93003344a5e9d45aeed7cd6158bffb41d9fbe8b1b1a624533af'.
2026-04-07 13:23:28+00:00:  INFO: GPUs used by the job: 0
2026-04-07 13:23:28+00:00:  INFO: CPUs used by the job: 9,25
2026-04-07 13:23:28+00:00:  INFO: Network ports used by this job: 22 -> 5000
2026-04-07 13:23:28+00:00:  INFO: 'nofile' soft limit=1024, hard limit=31775
2026-04-07 13:23:37+00:00:  INFO: Created container 2625986d5f7688e5657291e26056c232c12c5f729849c3dd9b13d588e9846a36
2026-04-07 13:23:37+00:00:  INFO: Started container 2625986d5f7688e5657291e26056c232c12c5f729849c3dd9b13d588e9846a36
2026-04-07 13:23:37+00:00: Status to RUNNING
2026-04-07 13:23:38+00:00: ERROR: SSH Step 2 failed. Continuing without SSH support.
2026-04-07 13:23:38+00:00:  INFO: Job container exited successfully with exit code '0'
2026-04-07 13:23:38+00:00:  INFO: Reporting job state resources
2026-04-07 13:23:38+00:00: Status to FINISHED
2026-04-07 13:23:38+00:00: DEBUG: Job.state.resources has been updated
2026-04-07 13:23:38+00:00:  INFO: Fetched GPU details for job
2026-04-07 13:23:38+00:00: DEBUG: Job.state.resources.gpu_details has been updated