Cannot run the alias setupATLAS

Hello,

Since the hostname was changed to baobab.hpc, I can’t run setupATLAS.

Previously I used:

$ export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
$ alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'
$ setupATLAS

What could be the problem?

Thanks in advance.

Hi all,

It looks like this is a general cvmfs issue and not limited to ATLAS. Even just listing the paths available under /cvmfs hangs for me (though I can send and receive packets from grid03, which is where I believe the squid server is located).
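For anyone wanting to reproduce this, the symptom looks like the following (the cvmfs_config check is a suggestion on my part and assumes the cvmfs client tools are installed on the node):

$ ls /cvmfs/atlas.cern.ch   # hangs indefinitely on the login node
$ cvmfs_config probe        # built-in client check of all configured repositories
$ ping -c 3 grid03          # the squid host itself replies fine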

Cheers,
Johnny

Hi there,

As explained by @Massimo.Brero, baobab2.unige.ch was deprecated more than a year ago (cf. Ssh: Could not resolve hostname baobab2.unige.ch - #2 by Massimo.Brero); please update any scripts still relying on it.

NB: baobab2.hpc.unige.ch works flawlessly.

Indeed (cf. Cannot access /dpnc/ directory - #9 by Luca.Capello).

Thx, bye,
Luca

Hi Luca,

Dalila has not had any problem with baobab2.hpc.unige.ch, and no script relies on it. The reported problem is solely with cvmfs.

Also, I thought the squid server (set up by Yann S and Gianfranco S from UniBe) was set up in response to our request to have cvmfs mounted directly on baobab, rather than through a link to the dpnc cluster.

If this is not the case, I will reopen the issue of a direct mount of cvmfs on baobab so that this doesn’t happen again in the future.
Furthermore, the dpnc link seems to be working again, so the problem is likely with the squid server (which was set up by HPC, not Yann M).

Thanks in advance for your help on this issue.
Johnny

Hi John,
Just to confirm: yes, that’s right, the dpnc link works fine now, as updated in the “Cannot access /dpnc/ directory” thread. It is, as you said, a problem with mounting cvmfs on baobab, and we’ve been facing it for ~5 days now.
Cheers,
AR

Hi Arshia,

I’ve had a follow-up from HPC, and this only affects the login node (where cvmfs hadn’t been changed to point to the new squid server and still went through grid03).
However, there are no issues with using cvmfs on the worker nodes.
Hopefully a fix will be found for the login node, but with the worker nodes unaffected this is workable.
For the login node, the problem extends back at least to February (which was the last time cvmfs on the grid03 machine in the dpnc cluster was updated). The squid server was set up in April.
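(For reference, one can check which proxy a node’s cvmfs client is actually talking to; both commands below are standard cvmfs client tools, though I’m assuming they’re installed on the node in question:)

$ attr -g proxy /cvmfs/atlas.cern.ch                              # proxy currently in use by the mounted repository
$ cvmfs_config showconfig atlas.cern.ch | grep CVMFS_HTTP_PROXY   # configured proxy list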

Johnny

Hi Johnny,

If I understood you correctly, you are saying that if one submits jobs to Baobab2, everything should be fine, and the problem appears only if one tries to use CVMFS on Baobab2 interactively, right?

Hi,

Yes. Any job you submit with sbatch to a worker node, or any interactive session you request on a worker node with salloc, will work with cvmfs.
Only if you try to do something with cvmfs directly on the login node (where you land after ssh; hostname login2) will you encounter errors.
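For example, something like this should work (the salloc options are placeholders; request whatever resources you normally would):

$ salloc --ntasks=1 --time=01:00:00 srun --pty bash   # interactive shell on a worker node
$ export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
$ alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'
$ setupATLAS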

Johnny

Hi there,

/cvmfs/ is now accessible again on login2 as well, exactly as was already the case for the compute nodes.

To make a long story short: when we migrated from grid03.unige.ch to the CVMFS squid server in preparation for ARC-CE, for whatever reason login2 was not updated and instead kept mounting /cvmfs/ from grid03.unige.ch.
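For the curious, the fix essentially amounts to pointing the client configuration on login2 at the squid server and reloading, along these lines (the proxy URL below is a placeholder, not our actual value):

# in /etc/cvmfs/default.local:
CVMFS_HTTP_PROXY="http://<squid-host>:3128"

$ cvmfs_config reload   # apply the new configuration without remounting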

Sorry again for the inconvenience.

Thx, bye,
Luca

Hi Luca,

Yes, I can now confirm that CVMFS is working.

Thank you!

Best regards,
Max