Sbatch job too slow

Dear community,

I am trying to run the same code on two different files. I would like to run it in batch mode using a job array. When I run it locally on my MacBook, it usually takes about 10 minutes per file.

When I run it following the instructions here (hpc:slurm [eResearch Doc]), it takes more than one hour per file. I was wondering what I can do to improve the speed of the code within reasonable usage of Baobab?

After I create my file array, I use the following line in the terminal:
sbatch --array=1-2%100 my_sbatch_mt.sh

Thank you very much for any suggestions!
Regards,
Robert

This is my bash script (my_sbatch_mt.sh):

#!/bin/sh
#SBATCH --job-name unpack-all           # this is a parameter to help you sort your job when listing it
#SBATCH --error Log/jobname-error.e%j     # optional. By default a file slurm-{jobid}.out will be created
#SBATCH --output Log/jobname-out.o%j      # optional. By default the error and output files are merged
#SBATCH --ntasks 1                    # number of tasks in your job. One by default
#SBATCH --cpus-per-task 1             # number of cpus in your job. One by default
#SBATCH --partition public-cpu         # the partition to use. By default debug-cpu
#SBATCH --time 0-01:00:00                  # maximum run time.

# We want to have at least 8 GB of RAM on this node
#SBATCH --mem=8000 

ml GCC/11.2.0  OpenMPI/4.1.1 ROOT/6.24.06 CMake/3.22.1

srun unpack_all_array.sh "${SLURM_ARRAY_TASK_ID}" | tee Log/merge_num-stdout.txt                      # run your software
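
(For reference: the post does not show unpack_all_array.sh. A job-array worker script typically maps the task ID to one input file along the lines of the minimal sketch below; the file-list name files.txt and the paths are assumptions, not the actual script.)

#!/bin/sh
# Hypothetical worker sketch: pick the input file whose line number in a
# plain-text file list matches the Slurm array task ID passed as $1.
TASK_ID="$1"
INPUT_FILE=$(sed -n "${TASK_ID}p" files.txt)   # files.txt: one input path per line

echo "Task ${TASK_ID} processing ${INPUT_FILE}"
# ... the actual unpacking of "${INPUT_FILE}" would go here ...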


-----------------------------------------------------

Hi,

I don’t know the software unpack_all_array.sh. Do you know whether it does a lot of I/O, or whether it can make use of more than one CPU? Since you are talking about a 1 h runtime, you should use another partition: shared-cpu, which is limited to 12 h but bigger (in terms of number of nodes) than public-cpu.
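
If unpack_all_array.sh is multithreaded, the relevant #SBATCH lines in my_sbatch_mt.sh could be adjusted along these lines (a sketch only; the cpus-per-task value of 4 and the 2 h time limit are assumptions to be matched to what the software actually uses and to the observed per-file runtime):

#SBATCH --partition shared-cpu        # limited to 12h, but more nodes than public-cpu
#SBATCH --time 0-02:00:00             # leave margin over the observed ~1h per file
#SBATCH --cpus-per-task 4             # only useful if the software can run multithreaded
#SBATCH --mem=8000

# Many multithreaded tools respect this variable; otherwise pass the thread
# count to the software explicitly.
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK}"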