Invalid account or account/partition combination specified

If you are asking for help, try to provide information that can help us solve your issue, such as:

what did you try: I made a test bash script to run a python file

what didn’t work: I get the error “sbatch: error: Batch job submission failed: Invalid account or account/partition combination specified”

what was the expected result: I don’t know, this is the first time I’m trying to use Yggdrasil

what was the error message: “sbatch: error: Batch job submission failed: Invalid account or account/partition combination specified”

path to the relevant files (logs, sbatch script, etc): everything is in my $HOME/Yggdrasil folder.

This is only the first step, and I expect to run into other issues with the Python script itself (it was written to run in parallel with pathos, but I don’t understand whether that is the correct way to proceed, or whether the code should run a single iteration and I should specify the number of iterations in Slurm).

Thank you in advance for your help.

Hi,
do you still have the issue? It may have been a transient error while we were creating your account.

I did a quick test from your account and it is working:

[sibony@login1.yggdrasil Yggdrasil]$ srun hostname
srun: job 7720559 queued and waiting for resources
srun: job 7720559 has been allocated resources
cpu001.yggdrasil

While I was at it, I had a quick look at your sbatch script:

#! /bin/bash
#SBATCH --job-name=Mh_z_highSFE
#SBATCH --output=Mh_z_highSFE.out
#SBATCH --error=Mh_z_highSFE.err
#SBATCH --ntasks=256
#SBATCH --partition=public-cpu
#SBATCH --mem-per-cpu=1000
#SBATCH --time=1-00:00:00
#SBATCH --hint=nomultithread
export NUM_THREADSPROCESSES=${SLURM_NTASKS}
python -u /home/users/s/sibony/Yggdrasil/CHE_Yggdrasil.py --SFE 0.002 --save highSFE

You are asking for 256 tasks. This means that you are launching python 256 times. This is probably not what you want.

Check here for more information about job types.
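If the goal is a single Python process that spawns worker processes on one node, a hedged sketch along these lines would request one task with several CPUs instead of 256 tasks (the CPU count is an assumption; adjust it to what the script actually uses and what one node offers):

#! /bin/bash
#SBATCH --job-name=Mh_z_highSFE
#SBATCH --output=Mh_z_highSFE.out
#SBATCH --error=Mh_z_highSFE.err
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16        # one task, several CPUs on a single node (assumed value)
#SBATCH --partition=public-cpu
#SBATCH --mem-per-cpu=1000
#SBATCH --time=1-00:00:00
#SBATCH --hint=nomultithread
# The script can read SLURM_CPUS_PER_TASK to size its worker pool.
python -u /home/users/s/sibony/Yggdrasil/CHE_Yggdrasil.py --SFE 0.002 --save highSFE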

Best

Yann

Hello,

No, indeed the issue was resolved.

I tried using --cpus-per-task originally, but I was not allowed more than 32 CPUs. I was told that since my script is parallelised using pathos, which is based on MPI, it was okay for me to use --ntasks instead and that it would do the same job. Is that not the case?

Best,
Yves

The thing is that in your sbatch script you aren’t loading any MPI backend, so maybe you load it outside of the sbatch script?

I have no clue about pathos. This seems to be yet another new way of parallelizing jobs, like many that have been tried before. It is at version 0.2.8; let’s see if it ever reaches version 1.0.

In the meantime, please do not ask for such a high number of CPUs without being sure of what it does. First try using the debug nodes.

--cpus-per-task is limited to the physical number of CPUs per server. You can check the characteristics of our compute nodes here: hpc:hpc_clusters [eResearch Doc]
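If pathos’s ProcessingPool is what the script uses (it behaves like multiprocessing rather than MPI, as far as I can tell), a minimal sketch of tying the pool size to the Slurm allocation could look like the following; the function and variable names are assumptions for illustration, not taken from CHE_Yggdrasil.py:

import os
from pathos.multiprocessing import ProcessingPool

# Size the worker pool from the Slurm allocation instead of hard-coding it.
n_workers = int(os.environ.get("SLURM_CPUS_PER_TASK", "1"))

def work(x):
    # Placeholder for one independent iteration of the real computation.
    return x * x

pool = ProcessingPool(nodes=n_workers)
results = pool.map(work, range(100))
print(len(results), "iterations done with", n_workers, "workers")

This way the --cpus-per-task request and the number of workers stay in sync, and the job never asks for more cores than one node physically has.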

Best
