Dear HPC users,
During the last maintenance, we introduced a simplified naming scheme for Slurm partitions.
We hope the new naming makes partitions easier to understand, especially for beginners. It follows the same logic we use on the new cluster Yggdrasil (opening very soon!).
Please update your scripts and sbatch commands as soon as possible to avoid nasty surprises: we will remove the old partitions during the next maintenance (February 2021).
You can find updated information in the new documentation, including a table to help you choose the new partition name if you feel lost: please see partitions
Cheers,
HPC team
Dear all,
For those of you still using the former partition names, please update your sbatch scripts from the old names to the new ones as soon as possible.
Today, we are starting to drain those old partitions (anything containing “-EL7”), so you won’t be able to submit new jobs to them. Please note that already-submitted jobs will still run until the next maintenance at the end of February, even though the partitions will no longer appear in the list (sinfo).
The following tables should help you understand how to edit your scripts.
Public partitions:

|Old name|New name|
|---|---|
|debug-EL7|debug-cpu|
|mono-EL7|public-cpu|
|parallel-EL7|public-cpu|
|bigmem-EL7|public-bigmem|
|mono-shared-EL7|shared-cpu|
|shared-EL7|shared-cpu|
|shared-bigmem-EL7|shared-bigmem|
|shared-gpu-EL7|shared-gpu|
Private partitions:

|Old name|New name|
|---|---|
|askja-EL7|private-askja-cpu|
|biosc-EL7|private-biosc-cpu|
|cisa-EL7|private-cisa-cpu|
|cui-EL7|private-cui-cpu|
|cui-gpu-EL7|private-cui-gpu|
|dpnc-EL7|private-dpnc-cpu|
|dpnc-gpu-EL7|private-dpnc-gpu|
|dpt-bigmem-EL7|private-dpt-bigmem|
|dpt-EL7|private-dpt-cpu|
|dpt-gpu-EL7|private-dpt-gpu|
|fpse-EL7|private-fpse-cpu|
|gap-EL7|private-gap-cpu|
|gervasio-gpu-EL7|private-gervasio-gpu|
|giacobia-bigmem-EL7|private-giacobia-bigmem|
|giacobia-EL7|private-giacobia-cpu|
|gonzalez-gaitan-EL7|private-gonzalez-gaitan-cpu|
|gsem-EL7|private-gsem-cpu|
|hepia-EL7|private-hepia-cpu|
|hepia-gpu-EL7|private-hepia-gpu|
|kalousis-gpu-EL7|private-kalousis-gpu|
|kruse-EL7|private-kruse-cpu|
|kruse-gpu-EL7|private-kruse-gpu|
|lehmann-EL7|private-lehmann-cpu|
|pawlowski-EL7|private-pawlowski-cpu|
|schaer-gpu-EL7|private-schaer-gpu|
|simed-EL7|private-simed-cpu|
|stoll-EL7|private-stoll-cpu|
|wesolowski-EL7|private-wesolowski-cpu|
Also, please check the documentation first if you have any questions:
https://doc.eresearch.unige.ch/hpc/slurm#partitions
All the best,
Massimo
We also take this opportunity to remind you of the two new partitions we introduced during the last maintenance:
|Partition|Time limit|
|---|---|
|public-interactive-cpu|8 hours|
|public-longrun-cpu|14 days|
The public-longrun-cpu partition is for CPU jobs that don’t need many resources but do need a longer runtime. It allows you to use up to 2 cores for up to 14 days.
The public-interactive-cpu partition is for interactive CPU jobs. It allows you to use up to 6 cores for up to 8 hours.
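For illustration, requests for the two new partitions might look like the sketch below. The core counts and time limits come from the description above; the job names and everything else are assumptions, not a prescribed setup:

```shell
# Hypothetical sbatch header for a low-resource, long-running job
# on public-longrun-cpu (up to 2 cores, up to 14 days):
#SBATCH --partition=public-longrun-cpu
#SBATCH --cpus-per-task=2
#SBATCH --time=14-00:00:00

# For interactive work on public-interactive-cpu (up to 6 cores,
# up to 8 hours), an allocation request could look like:
#   salloc --partition=public-interactive-cpu --cpus-per-task=6 --time=08:00:00
```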
N.B.: The waiting time should be shorter than for a job on a normal partition, which matters for interactive jobs, as you don’t want your job to start in the middle of the night.
More information here:
https://doc.eresearch.unige.ch/hpc/slurm#partitions