Hello all,
I am writing because I cannot run my script on the baobab2 cluster.
I tried an old script that used to work, and I always get the same message:
srun: job 5815111 queued and waiting for resources
srun: job 5815111 has been allocated resources
srun: error: cpu001: task 0: Exited with exit code 13
slurmstepd: error: execve(): test.sh: Permission denied
Here is the .sh file I am trying to run:
#!/bin/sh
#SBATCH --partition=public-cpu
#SBATCH --cpus-per-task=8
#SBATCH --mem-per-cpu=6000
#SBATCH --time=10:00:00
module load GCC/10.3.0 OpenMPI/4.1.1 R/4.1.0 nodejs
srun R CMD BATCH test_smartvote_GE_candidates.R
Am I doing something wrong here? Has anyone else had this issue? Any idea how I could solve it?
Thanks a lot for all the help you can provide!!
Hi @Maxime.Walder
Next time, please create your post in the category “HPC support > HPC Issue”.
Do you have a file named test.sh (maybe your sbatch file)?
Could you give the output of:
ls -ls test.sh
It seems you have a permission issue on this file.
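This diagnosis can be reproduced locally without the cluster (a minimal sketch with a throwaway script name, `demo.sh`): `execve(): Permission denied` is what happens whenever a file is read-only, i.e. lacks the execute bit, and something tries to exec it directly.

```shell
# Reproduce execve(): Permission denied with a throwaway script.
cd "$(mktemp -d)"
cat > demo.sh <<'EOF'
#!/bin/sh
echo "hello from demo.sh"
EOF

chmod 644 demo.sh        # -rw-r--r--: readable but not executable
./demo.sh 2>/dev/null    # execve() fails: Permission denied
echo "exec without +x exited with: $?"   # 126 = found but not executable

chmod +x demo.sh         # add the execute bit
./demo.sh                # now the kernel can exec it
```

Exit code 126 from the shell corresponds to "found but not executable"; Slurm surfaces the same failure as the `slurmstepd: execve()` error above.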
Hi @Adrien,
Thanks a lot for your reply, and sorry for the misclassification, I’ll be more careful next time!
So the command you indicated gave the following:
1 -rw-r--r-- 1 walderm hpc_users 224 May 22 11:16 test.sh
What I don’t get is that it was working before. I was trying to run another piece of code and it was not working, so I reran something that had worked before, and it gives the message indicated above!
Many thanks!
Are you running your jobs like this?
srun test.sh
Since test.sh is an sbatch file, you should use sbatch test.sh instead.
Let me know if that works.
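The difference can be sketched without a cluster (a local analogy, not Slurm itself): `sbatch` only *reads* the batch script, so mode 644 is enough, while `srun test.sh` has to `execve()` the file, which requires the execute bit, much like the difference between `sh file` and `./file`.

```shell
# Local analogy for sbatch vs srun (no Slurm needed):
cd "$(mktemp -d)"
cat > job.sh <<'EOF'
#!/bin/sh
echo "job ran"
EOF
chmod 644 job.sh          # same -rw-r--r-- mode as in the ls output

sh job.sh                 # works: the file is only read (like sbatch)
./job.sh 2>/dev/null \
  || echo "direct exec refused (like srun on a non-executable file)"

chmod +x job.sh           # with the execute bit, direct exec works too
./job.sh
```

So either switch to `sbatch test.sh`, or keep using `srun` after a `chmod +x test.sh`; the former is the usual way to submit a script with `#SBATCH` headers.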
Hi Adrien,
Thank you so much for helping me fix this! I had not used this in a while and confused the two commands. Using sbatch fixed it, thank you!