AI Infrastructure service¶
The Slices AI infrastructure is a distributed system for running jobs in GPU-enabled Docker containers. It consists of a set of heterogeneous clusters, each with its own characteristics (GPU model, CPU speed, memory, bus speed, …), allowing you to select the most appropriate hardware for each job. Each job runs isolated in a Docker container with dedicated CPUs, GPUs, and memory for maximum performance.
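As a rough illustration of that isolation model, the sketch below shows the kind of resource flags plain Docker uses to dedicate a GPU, CPU cores, and memory to a container. This is not the Slices/GPULab job API; the image name and resource values are made-up examples, and the command is only printed, so no GPU host is needed to follow along.

```shell
# Hypothetical sketch, not the Slices/GPULab job API: the plain Docker
# flags that pin dedicated resources to a container. Image name and
# numbers are invented examples; the command is printed, not executed.
CMD="docker run --rm \
  --gpus device=0 \
  --cpus 4 \
  --memory 16g \
  nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi"
echo "$CMD"
```

On the actual clusters you do not invoke Docker yourself; the infrastructure applies equivalent constraints based on the resources your job requests.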
This documentation explains what the Slices AI infrastructure is and how to use it.
Hint
Looking for a quick introduction? Have a look at our 'JupyterHub introduction for the Slices AI infrastructure' slide deck.
For bug reports, questions, and feedback:
E-mail us at gpulab@ilabt.imec.be
Table of Contents