If you are asking for help, try to provide information that can help us solve your issue, such as:
what did you try: I made a test bash script to run a Python file
what didn’t work: I get the error “sbatch: error: Batch job submission failed: Invalid account or account/partition combination specified”
what was the expected result: I don’t know; this is the first time I’m trying to use Yggdrasil
what was the error message: “sbatch: error: Batch job submission failed: Invalid account or account/partition combination specified”
path to the relevant files (logs, sbatch script, etc.): everything is in my $HOME/Yggdrasil folder
This is only the first step, and I expect to run into other issues with the Python script itself: it was made to run in parallel with pathos, but I don’t understand whether that is the correct way to proceed, or whether the code should run a single iteration and I should specify the number of iterations in Slurm (see the sketch below).
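To illustrate the second option, here is a sketch of what I have in mind, assuming a Slurm job array is the right tool for this (the script name and the --iteration flag are placeholders for my actual code):

#!/bin/sh
#SBATCH --array=0-99                 # 100 independent iterations

# Each array task runs a single iteration, identified by the task ID.
python3 my_script.py --iteration $SLURM_ARRAY_TASK_ID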
Hi,
Do you still have the issue? It may have been a transient error while we were creating your account.
I did a quick test from your account and it is working:
[sibony@login1.yggdrasil Yggdrasil]$ srun hostname
srun: job 7720559 queued and waiting for resources
srun: job 7720559 has been allocated resources
cpu001.yggdrasil
While I was at it, I had a quick look at your sbatch script:
I tried using --cpus-per-task originally, but I was not allowed more than 32 CPUs. I was told that since my script is parallelised using pathos, which is based on MPI, it was okay for me to use --ntasks instead and that it would do the same job. Is that not the case?
The thing is that your sbatch script doesn’t load any MPI backend; maybe you load it outside of the sbatch script?
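For reference, a minimal sketch of what loading an MPI backend inside the sbatch script could look like; the module name is an assumption, pick whatever "module avail" lists on the cluster:

#!/bin/sh
#SBATCH --ntasks=4
#SBATCH --time=00:15:00

# Load an MPI toolchain before launching the tasks ("foss/2021b" is
# only an example; it must match an installed module).
module load foss/2021b

# srun starts one copy of the program per task under the MPI stack.
srun python3 my_script.py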
I have no clue about pathos. It seems to be yet another new way of parallelizing jobs, like the many that have already been tried. It is at version 0.2.8; let’s see if it ever reaches version 1.0.
In the meantime, please do not ask for such a high number of CPUs without being sure of what it does. First try using the debug nodes.
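For example, a quick sanity check on the debug nodes could look like this (the partition name is an assumption; check the documentation for the exact one):

srun --partition=debug-cpu --ntasks=1 --time=00:10:00 hostname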
--cpus-per-task is limited to the physical number of CPUs per server. You can check the characteristics of our compute nodes here: hpc:hpc_clusters [eResearch Doc]
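To make the difference concrete, here is a sketch of the two request styles (the numbers are only illustrative):

# One task that may use 16 CPUs on a single node: this is what
# pathos/multiprocessing-style parallelism needs.
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16

# versus: 16 independent tasks, possibly spread over several nodes,
# which only helps if the program actually uses MPI.
#SBATCH --ntasks=16
#SBATCH --cpus-per-task=1

The first form can never exceed the CPUs of a single server, which is presumably why you hit the 32-CPU limit.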