Singularity version 3.2.1

Hi,
While investigating the bugs from Update on Baobab Maintenance Aug 15-16? + Persistent Post Maintenance GPU Issue - #3 by Pablo.Strasser, I discovered that Singularity is now available without module load.

[strassp6@login2 ~]$ which singularity
/usr/bin/singularity
[strassp6@login2 ~]$ singularity --version
singularity version 3.2.1-1.1.el7
[strassp6@login2 ~]$

However, this version is not available via module load.

[strassp6@login2 ~]$ module spider singularity
Versions:
Singularity/2.4.2
Singularity/2.4.5

Would it be possible to add this Singularity version as a module, with the conflicting environment variable removed?

Secondly, a dependency of Singularity 3.2.1 is missing:

[strassp6@login2 ~]$ singularity build pytorch2.simg docker://nvcr.io/nvidia/pytorch:19.07-py3
INFO: Starting build…
Getting image source signatures
Skipping fetch of repeat blob sha256:5b7339215d1d5f8e68622d584a224f60339f5bef41dbd74330d081e912f0cddd

Copying config sha256:48e4d6a0579c770d51448436223fafcbb1a2ab96c0c8a12db13af47def8c888f
27.95 KiB / 27.95 KiB [====================================================] 0s
Writing manifest to image destination
Storing signatures
INFO: Creating SIF file…
FATAL: While performing build: While searching for mksquashfs: exec: "mksquashfs": executable file not found in $PATH

A Google search shows a package called squashfs-tools ( https://centos.pkgs.org/7/centos-x86_64/squashfs-tools-4.3-0.21.gitaae0aff4.el7.x86_64.rpm.html ).
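Before starting a build, one can check whether the dependency is actually reachable. A minimal sketch (the /usr/sbin and /sbin locations are assumptions based on where squashfs-tools typically installs the binary on CentOS 7):

```shell
#!/bin/sh
# Check whether mksquashfs is visible on PATH; if not, look in the
# sbin directories where squashfs-tools usually installs it.
if command -v mksquashfs >/dev/null 2>&1; then
    echo "mksquashfs found: $(command -v mksquashfs)"
else
    for dir in /usr/sbin /sbin; do
        # -x tests that the file exists and is executable
        if [ -x "$dir/mksquashfs" ]; then
            echo "mksquashfs present in $dir but not on PATH"
        fi
    done
fi
```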

Once the other problems with Singularity are fixed, it would be nice to have version 3.2.1 fully installed.

Thanks in advance.

Pablo

squashfs-tools is installed on login2, as described in the post Problem on login node 2 for building singularity images.

However, the needed binary is in /usr/sbin. After adding /usr/sbin to my PATH, I was able to use Singularity 3.2.1 fully, i.e. both for building images and for running code.
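The workaround above can go in a shell startup file such as ~/.bashrc so it survives new sessions. A minimal sketch, guarding against appending the directory more than once:

```shell
# Append /usr/sbin (where mksquashfs lives) to PATH, but only if it
# is not already present, to avoid growing PATH on every new shell.
case ":$PATH:" in
    *:/usr/sbin:*) ;;                      # already on PATH, nothing to do
    *) export PATH="$PATH:/usr/sbin" ;;
esac
```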

It would be nice to have a version loadable via module load for people not familiar with PATH manipulation who may want to upgrade.

I will do more tests on the new version later.

Thanks.

Hi there,

Indeed, rpmpkg:singularity has been there since 2019-03-13, IIRC simply because module:Singularity/2.4.2 was not compatible with CentOS 7:

  1. you offered us to test CentOS 7 with Singularity (cf. WIP: Migrate Baobab to CentOS7 - #2 by Pablo.Strasser )
  2. indeed, module:Singularity/2.4.5 was added to Baobab only afterwards, on 2019-04-19

For sure, as a reminder, the idea is to have as much software as possible via module, and not rpmpkg.

rpmpkg:squashfs-tools has been installed since you reported the first issue back on 2019-05-08 (cf. Problem on login node 2 for building singularity images - #2 by Luca.Capello ); the problem is that the mksquashfs binary is installed in a privileged PATH:

[root@login2 ~]# which mksquashfs 
/usr/sbin/mksquashfs
[root@login2 ~]# 

It is the duty of Singularity (and not of the package manager nor the system administrator) to allow using a tool that lives in a privileged location. Indeed, module:Singularity/2.4.5 allows building images as an unprivileged user:

  1. your docker://pytorch:latest (cf. Issue with GPU on CentOS7 - #3 by Pablo.Strasser ):
capello@login2:~$ module load GCC/5.4.0-2.26
capello@login2:~$ module load Singularity/2.4.5
capello@login2:~$ singularity build pytorch_pablostrasser.simg docker://pablostrasser/pytorch:latest
[...]
WARNING: Building container as an unprivileged user. If you run this container as root
WARNING: it may be missing some functionality.
Building Singularity image...
Singularity container built: pytorch_pablostrasser.simg
Cleaning up...
capello@login2:~$ 
  2. one from the official Singularity documentation (cf. http://singularity.lbl.gov/docs-build-container#downloading-a-existing-container-from-singularity-hub ):
capello@login2:~$ singularity build lolcow.simg shub://GodloveD/lolcow
Cache folder set to /home/users/c/capello/.singularity/shub
Progress |===================================| 100.0% 
Building from local image: /home/users/c/capello/.singularity/shub/GodloveD-lolcow-master-latest.simg
WARNING: Building container as an unprivileged user. If you run this container as root
WARNING: it may be missing some functionality.
Building Singularity image...
Singularity container built: lolcow.simg
Cleaning up...
capello@login2:~$ 

Please stick to module:Singularity/2.4.5; I will remove rpmpkg:singularity soon and start checking how to build a more recent version with module, despite upstream supporting only up to version 2.4.2 (cf. Redirecting... ).

Thx, bye,
Luca

I just tested Singularity 3.2.1 on a GPU node and it works.

Here is the output of the test script:

cat slurm-19747453.out
I: full hostname: gpu006.cluster
I: CUDA_VISIBLE_DEVICES: 0
=====
singularity version 3.2.1-1.1.el7
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], device='cuda:0')
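For reference, output along these lines could come from a Slurm batch script of roughly this shape. This is a hedged sketch, not the actual script: the partition name and image file are hypothetical, while --nv is Singularity's standard flag for binding the host NVIDIA driver into the container:

```shell
#!/bin/sh
#SBATCH --partition=shared-gpu      # hypothetical partition name
#SBATCH --gres=gpu:1                # request one GPU

echo "I: full hostname: $(hostname -f)"
echo "I: CUDA_VISIBLE_DEVICES: $CUDA_VISIBLE_DEVICES"
echo "====="
singularity --version
# --nv makes the host NVIDIA libraries visible inside the container
# so that PyTorch in the image can see the GPU.
singularity exec --nv pytorch2.simg \
    python -c "import torch; print(torch.zeros(10, device='cuda:0'))"
```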

In conclusion, all versions of Singularity currently seem to work on the cluster: 2.4.2 and 2.4.5 via module load, and 3.2.1 without module load.

To provide a module load version of Singularity 3.2.1, you may want to contact the cluster admins at CSCS, as they have a module load version of Singularity 3.2.1 on their cluster.

Thanks for your help.

Pablo

Dear Pablo,

I have installed Singularity 3.4.0 on Baobab; you can use it through module.

Example:

module load GCCcore/8.2.0 Singularity/3.4.0-Go-1.12
PATH=$PATH:/sbin/ singularity build lolcow.simg docker://godlovedc/lolcow
[...]
INFO:    Creating SIF file...
INFO:    Build complete: lolcow.simg

[sagon@login2 ~] $ singularity run lolcow.simg
 _________________________________________
/ Perfect day for scrubbing the floor and \
\ other exciting things.                  /
 -----------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
[sagon@login2 ~] $

Best

Hi there,

Thank you. As announced (cf. Singularity version 3.2.1 - #3 by Luca.Capello ), rpmpkg:singularity has been removed.

Thx, bye,
Luca

Thanks for that. I haven't tried it yet, but I will give it a spin when I have the time.