[Solved] Baobab gpu021 problem

Hi there,

FYI, the split GPUs on gpu021.baobab are going to be merged soon (cf. Nvidia A100 Ampere architecture with MIG - #5 by Yann.Sagon ).

Yes, by requesting all GPU nodes except gpu021.baobab at submission time, via the --nodelist option (cf. Slurm Workload Manager - sbatch ):

capello@login2:~$ scontrol show hostlist \
    "$(sinfo -h -N -p shared-gpu | \
        awk '! /gpu021/ {print $1}' | \
        tr '\n' ' ')"
gpu[002,004-020]
capello@login2:~$ 
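For anyone reading this without cluster access, the filtering step can be sketched on sample data (the node names below are made up for illustration, and on the real cluster the first pipeline stage would be `sinfo -h -N -p shared-gpu` instead of `printf`): the awk pattern drops any line mentioning gpu021, and tr joins the remaining names into the single space-separated list that `scontrol show hostlist` then compresses.

```shell
# Fake first-column output of `sinfo -h -N -p shared-gpu` (hypothetical node names)
printf 'gpu002\ngpu004\ngpu021\ngpu005\n' |
    awk '! /gpu021/ {print $1}' |   # keep every node except gpu021
    tr '\n' ' '                     # join into one space-separated list
# prints: gpu002 gpu004 gpu005 
```

The resulting list is what gets passed to `scontrol show hostlist` above, which collapses it into the compact `gpu[...]` range form accepted by --nodelist.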

Thx, bye,
Luca