Hi Yann,
we would be interested in testing these out in conjunction with a GPU node.
Would this be possible?
Our use case is reducing a data-loading bottleneck in the training of our networks on the latest GPUs.
Cheers,
Johnny
Hi Johnny,
sure: I’ve created a directory for you:
[root@gpu002.cluster ~]# df -h /srv/flashblade/users/raine
Filesystem Size Used Avail Use% Mounted on
10.40.44.44:/baobab/users 55T 0 55T 0% /srv/flashblade/users
Thanks in advance for your feedback.
Dear Yann,
I would also be interested in trying out this fast storage.
Could you create a directory for me as well?
Thanks and cheers,
Manuel
Hi, it’s done: /srv/flashblade/users/guthma.
I was playing around a bit with reading and copying HDF5 files with Python from /srv/flashblade/users/guthma,
and writing there is slower by a large factor than writing to e.g. /srv/beegfs/scratch/groups/dpnc/atlas/.
Manipulating 200 files takes a few minutes on /srv/beegfs/scratch/groups/dpnc/atlas/, while on /srv/flashblade/users/guthma it took hours.
I used CPU node277 for this test.
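For reference, a minimal sketch of this kind of timing test (the file set and target directories below are placeholders, not the exact files I used): it fully reads each HDF5 file and then copies it to each of the two filesystems, printing the elapsed time per target.
import glob
import os
import shutil
import time
import h5py

SOURCE_GLOB = "/srv/beegfs/scratch/groups/dpnc/atlas/sample/*.h5"  # placeholder file set
TARGETS = [
    "/srv/beegfs/scratch/groups/dpnc/atlas/copy_test",  # BeeGFS scratch (placeholder)
    "/srv/flashblade/users/guthma/copy_test",            # FlashBlade share (placeholder)
]

def read_all(name, obj):
    # Force a full read of every dataset; return None so visititems keeps walking.
    if isinstance(obj, h5py.Dataset):
        obj[()]

files = sorted(glob.glob(SOURCE_GLOB))
for target in TARGETS:
    os.makedirs(target, exist_ok=True)
    start = time.perf_counter()
    for path in files:
        with h5py.File(path, "r") as f:
            f.visititems(read_all)   # read the whole file
        shutil.copy(path, target)    # then copy it to the target filesystem
    print(f"{target}: {len(files)} files in {time.perf_counter() - start:.1f} s")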
Hi, you are right, it seems we have a network issue.
We’ll investigate next week and update this post.
edit: this should now be solved. I forgot to update the post at the end of the week.
FYI,
In general, I also experience slow HDF5 reading on Baobab GPU nodes.
On a laptop, reading the same file is maybe 10x faster than on the GPU node.
(The HDF5 file is in my scratch/ folder.)
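If it helps, this is roughly how I measure it; the path is a placeholder for the actual file in scratch/, and running the same snippet on the laptop and on the GPU node shows the difference.
import os
import time
import h5py

PATH = os.path.expanduser("~/scratch/sample.h5")  # placeholder for the actual file
sizes = []

def read_all(name, obj):
    # Read each dataset fully and record how many bytes came back.
    if isinstance(obj, h5py.Dataset):
        sizes.append(obj[()].nbytes)

start = time.perf_counter()
with h5py.File(PATH, "r") as f:
    f.visititems(read_all)
elapsed = time.perf_counter() - start
total_mb = sum(sizes) / 1e6
print(f"read {total_mb:.1f} MB in {elapsed:.2f} s ({total_mb / elapsed:.1f} MB/s)")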
Best,
Julien
Hi Julien,
Accessing the file from a CPU or GPU node should be the same. Since you are talking about your scratch folder, we are talking about network-shared disks. On your laptop, you are probably using a local SSD, which is clearly faster. The storage on the cluster may be slower or faster depending on user load.
On the cluster, you may use local storage as well: hpc:storage_on_hpc [eResearch Doc]
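As a rough illustration of that suggestion (not an official recipe): copy the file from network scratch to node-local storage once, then read it locally for the rest of the job. The local path below assumes a per-job directory exported in $TMPDIR, which depends on the cluster setup described in the storage documentation; the network path is a placeholder.
import os
import shutil
import h5py

network_path = os.path.expanduser("~/scratch/sample.h5")  # placeholder network-scratch file
local_dir = os.environ.get("TMPDIR", "/tmp")              # assumed per-job local scratch
local_path = os.path.join(local_dir, os.path.basename(network_path))

shutil.copy(network_path, local_path)  # single copy over the network
with h5py.File(local_path, "r") as f:
    print(list(f.keys()))              # subsequent reads hit the local disk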
Hi, we’ll meet the storage vendor soon. Do you have any feedback for us?