Baobab scheduled maintenance: 26-27 August 2020

Dear users,

As just announced on the baobab-announce@ mailing list, we will perform software and hardware maintenance of the Baobab HPC cluster on Wednesday 26 August 2020 and Thursday 27 August 2020.

The maintenance will start at 08:00 +0100 and you will receive an email when it is over.

The cluster will be totally unavailable during this period, with no access at all (not even to retrieve files).

If you submit a job in the meantime, make sure its requested wall time (duration) does not overlap with the start of the maintenance; otherwise your job will only be scheduled after the maintenance, as shown in the example below.
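
For example, here is a minimal sketch of how to keep a job clear of the maintenance window (the job script name and wall time below are hypothetical):

    # Submitted on 25 August around 20:00, this job requests 11 hours, so it
    # can finish before the maintenance starts on 26 August at 08:00
    # (my_job.sh is a hypothetical job script):
    sbatch --time=11:00:00 my_job.sh

    # A job whose requested wall time would overlap the maintenance stays
    # PENDING (typically with a reason such as "ReqNodeNotAvail,
    # Reserved for maintenance") until the maintenance is over:
    squeue -u $USER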

What will be done during this maintenance:

  1. hardware maintenance (electrical power and network)
  2. software upgrades (OS, Slurm plugins, etc.)

Thanks for your understanding.

Best regards,
the HPC team


Hi there,

As just announced on the baobab-announce@ mailing list, I forgot to mention in the first announcement a fundamental change in the Slurm configuration which will cause the loss of all PENDING jobs.

Slurm provides the --gpus option to request more GPUs than a single node has, and this option requires the select/cons_tres plugin (cf. Slurm Workload Manager - Generic Resource (GRES) Scheduling and Slurm Workload Manager - slurm.conf).
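
As an illustration (the script names and resource numbers below are hypothetical), with select/cons_tres a single option can request GPUs spanning several nodes:

    # Request 4 GPUs in total for the whole allocation; with select/cons_tres
    # these may be spread over the 2 requested nodes:
    sbatch --gpus=4 --nodes=2 gpu_job.sh

    # With the older select/cons_res plugin, GPUs are requested per node via
    # the GRES syntax instead, e.g. 2 GPUs on each of the 2 nodes:
    sbatch --gres=gpu:2 --nodes=2 gpu_job.sh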

However, Baobab is currently using the select/cons_res plugin, so we must change the current configuration.
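
For reference, a sketch of the kind of slurm.conf change involved (the exact Baobab settings may differ):

    # slurm.conf, before the maintenance:
    #   SelectType=select/cons_res
    # After the maintenance:
    SelectType=select/cons_tres

This change is also why the loss of PENDING jobs was announced: as the Slurm documentation warns, restarting slurmctld with a different SelectType generally results in the loss of the saved job state.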

Sorry for the inconvenience.

Best regards,
the HPC team


Dear all,

The maintenance is now over and you can use Baobab again!

What is new on Baobab and what kept us busy during this maintenance:

Please also note that no pending jobs have been lost during this maintenance, despite what was announced:

I forgot in the first announcement that there will be a fundamental
change in the Slurm configuration which will cause the loss of all
PENDING jobs.

We will now keep working on the installation of Yggdrasil and we will keep you posted when it is open for tests (we hope in the coming weeks)!

We wish you all the best,

Massimo, for the HPC team