Use of shared memory /dev/shm

Hello, I’m using eggNOG-mapper, and the documentation says we should put the database in /dev/shm.
This folder exists and I have write access to it.
From what I’ve read online, /dev/shm is nothing more than shared memory. Is there any advantage to putting the database (temporarily) in /dev/shm rather than in my scratch directory?

Do I have access to the same /dev/shm from different cluster nodes?

Dear Silas,

Indeed, /dev/shm is (most probably) stored in memory and is thus faster than any other local or remote storage.
As it’s stored in memory, the space is limited.
Temporary storage, sorted by speed (a staging sketch follows the list):

  • /dev/shm
  • /scratch (local storage, erased when the job terminates)
  • $USER/scratch (network storage, not erased when the job terminates)
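
For illustration, staging data into /dev/shm from within a job could look roughly like this (a minimal sketch; all paths and sizes are placeholders, not Baobab-specific):

```bash
#!/bin/sh
#SBATCH --job-name=eggnog-shm
#SBATCH --mem=64G        # must cover the job itself plus the data staged in /dev/shm

# Hypothetical paths -- adapt to your own layout.
DB_SRC="$HOME/scratch/eggnog_db"
DB_SHM="/dev/shm/$USER/eggnog_db"

mkdir -p "$DB_SHM"
cp -r "$DB_SRC/." "$DB_SHM/"      # stage the database into memory-backed storage

# ... run your tool against $DB_SHM here ...

rm -rf "$DB_SHM"                  # free the memory when the job is done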

No. Is this needed? Does eggNOG support running on multiple nodes?

As a side note: since /dev/shm is memory backed, you must request enough memory in your sbatch job to cover both the job itself and the data you store there. If you don’t, you’ll end up with an out-of-memory error.
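
A rough sizing sketch (the numbers are illustrative, not a recommendation):

```bash
#!/bin/sh
# Illustrative sizing: ~45G of data staged in /dev/shm plus ~10G of
# working memory for the tool itself, so request enough to cover both.
#SBATCH --mem=60G

df -h /dev/shm     # check how much memory-backed space the node offers
```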

So it’s RAM mounted as a disk.
I split my FASTA file into subsets. I would like to run 10 eggNOG jobs accessing the database (45G), so would I need to copy it 10 times into /dev/shm and request enough memory for each?

Hello,
in this case it’s probably better to leave your database in the scratch space, as staging it would consume almost half the RAM of each node you’d be using. I just saw that you aren’t using the new scratch space. Please check here to enable it:

Please let us know if the performance in the scratch space is good enough for this software.
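
If you go the scratch route, a job array along these lines would let all 10 subsets read one shared copy of the database. The emapper.py options shown are quoted from memory, so verify them with emapper.py --help, and every path here is a placeholder:

```bash
#!/bin/sh
#SBATCH --job-name=eggnog-array
#SBATCH --array=1-10       # one task per FASTA subset
#SBATCH --mem=16G          # no database in /dev/shm, so a modest allocation

# Hypothetical layout: subsets named subset_1.fasta ... subset_10.fasta
DB_DIR="$HOME/scratch/eggnog_db"    # single shared copy of the 45G database
IN="subset_${SLURM_ARRAY_TASK_ID}.fasta"

# --data_dir points eggnog-mapper at the database directory (check the
# exact flag for your version with: emapper.py --help)
emapper.py -i "$IN" --output "out_${SLURM_ARRAY_TASK_ID}" --data_dir "$DB_DIR"
```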

By the way, if needed we can install eggnog-mapper-1.0.3-intel-2018a-Python-2.7.14 on Baobab.
