How to change compiler (gcc)

Hello,

I am trying to follow the installation instructions here (Installation — detectron2 0.6 documentation) for PyTorch's detectron2. The compilation fails at the following command:

python -m pip install -e detectron2

because the gcc version is too old.
gcc: error: unrecognized command line option ‘-std=c++17’
error: command ‘/usr/bin/gcc’ failed with exit code 1

I have tried to load a newer version using:

module load GCC/11.2.0

but this doesn’t seem to take effect.

which gcc

still results in:

/usr/bin/gcc
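
For context, what I expected to be able to do is roughly the following (my own sketch of the workflow, not something taken from the detectron2 docs), with the build picking up the newer compiler via CC/CXX:

# Sketch: load a newer GCC and point the build at it explicitly
module load GCC/11.2.0
export CC=$(which gcc)    # should resolve to the module's gcc, not /usr/bin/gcc
export CXX=$(which g++)
python -m pip install -e detectron2

Since which gcc still resolves to /usr/bin/gcc, this cannot work yet.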

I hope there is a simple solution here that will get this working soon!

Thanks for any suggestions,
~ Erica

Hi,

it seems to work when I try it from your account:

(baobab)-[root@login2 ~]$ su - lastufka
Last login: Thu Mar 16 09:14:19 CET 2023 from 10.70.128.50 on pts/642
(base) (baobab)-[lastufka@login2 ~]$ which gcc
/opt/ebsofts/GCCcore/11.2.0/bin/gcc

(base) (baobab)-[lastufka@login2 ~]$ module load GCC/11.2.0
(base) (baobab)-[lastufka@login2 ~]$ which gcc
/opt/ebsofts/GCCcore/11.2.0/bin/gcc

Please show us the actual output of your commands.

Best

(py38) (yggdrasil)-[lastufka@cpu007 ~]$ which gcc
/usr/bin/gcc
(py38) (yggdrasil)-[lastufka@cpu007 ~]$ module load GCC/11.2.0
(py38) (yggdrasil)-[lastufka@cpu007 ~]$ which gcc
/usr/bin/gcc

Any update on this now that server maintenance is over? This should also work on Yggdrasil (I would prefer not to have to move everything to Baobab over such a small issue).

Thanks & best regards

Hi @Erica.Lastufka

The difference between your case and @Yann.Sagon’s example is where the commands were executed.

# Working with srun (load the module first, then run the job)
(baobab)-[alberta@login2 ~]$ ml GCC/12.2.0
(baobab)-[alberta@login2 ~]$ srun which gcc
srun: job 610387 queued and waiting for resources
srun: job 610387 has been allocated resources
/opt/ebsofts/GCCcore/12.2.0/bin/gcc

# Not working with salloc and running gcc directly
(baobab)-[alberta@login2 ~]$ salloc
salloc: Pending job allocation 610400
salloc: job 610400 queued and waiting for resources
salloc: job 610400 has been allocated resources
salloc: Granted job allocation 610400
salloc: Waiting for resource configuration
salloc: Nodes cpu001 are ready for job
(baobab)-[alberta@cpu001 ~]$ which gcc
/usr/bin/gcc
(baobab)-[alberta@cpu001 ~]$ gcc --version
gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-16)
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

(baobab)-[alberta@cpu001 ~]$ ml GCC/12.2.0
(baobab)-[alberta@cpu001 ~]$ gcc --version
gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-16)
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

# Working using the full path
(baobab)-[alberta@cpu001 ~]$ /opt/ebsofts/GCCcore/12.2.0/bin/gcc --version
gcc (GCC) 12.2.0
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

I don’t know why it isn’t working with salloc, but you should be able to compile your software using srun instead of salloc.
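
For example, something along these lines should do it for the detectron2 build (just a sketch; the module version and working directory are assumptions on my part, and srun exports your environment, so the module's gcc ends up first in the PATH inside the job):

# Sketch: load the module on the login node, then run the build inside a job via srun
# (add the usual resource options such as partition, time and memory as needed)
(baobab)-[alberta@login2 ~]$ ml GCC/11.2.0
(baobab)-[alberta@login2 ~]$ srun python -m pip install -e detectron2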

Or sbatch, but be careful about the current issue on Baobab:

PS: I had some free time for an exceptional answer; due to May 1st, the HPC service is otherwise not available :wink:

@Erica.Lastufka

Update:

It appears that the PATH environment variable is not exported when using salloc. Therefore, loading a module before running salloc will not work. To ensure the desired module is properly loaded, first run salloc and then load the module (or run salloc, purge any existing modules, and then load the desired one).

(baobab)-[alberta@login2 ~]$ ml purge
(baobab)-[alberta@login2 ~]$ salloc
salloc: Pending job allocation 621668
salloc: job 621668 queued and waiting for resources
salloc: job 621668 has been allocated resources
salloc: Granted job allocation 621668
salloc: Nodes cpu001 are ready for job
(baobab)-[alberta@cpu001 ~]$  ml GCC/12.2.0
(baobab)-[alberta@cpu001 ~]$ gcc --version
gcc (GCC) 12.2.0
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

We have opened a ticket with SchedMD for this case.

Hi again,

we found the bug and it is now corrected on Baobab and Yggdrasil. In fact, this has probably never worked before, so it is not technically a bug fix but a new feature. Enjoy :)

(baobab)-[sagon@login2 ~]$ ml GCC
(baobab)-[sagon@login2 ~]$ salloc
salloc: Pending job allocation 633768
salloc: job 633768 queued and waiting for resources
salloc: job 633768 has been allocated resources
salloc: Granted job allocation 633768
salloc: Nodes cpu001 are ready for job
(baobab)-[sagon@cpu001 ~]$ which gcc
/opt/ebsofts/GCCcore/12.2.0/bin/gcc
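
With this in place, the original detectron2 build should now also work from inside the interactive allocation, for example (untested sketch):

(baobab)-[sagon@cpu001 ~]$ gcc --version                        # now reports the module's GCC 12.2.0
(baobab)-[sagon@cpu001 ~]$ python -m pip install -e detectron2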