My jobs get killed and I don't know why

Hello, I'm trying to run my job on the cluster, and it always gets killed for reasons I don't understand.

My Snakemake script here should submit other jobs to the cluster, which it seems to do, but then it gets killed and I don't understand why.

jobid: 52120906

CLUSTER: 2021-11-16 15:01:11 submit command: sbatch --parsable --output=cluster_log/%j.out --error=cluster_log/%j.out --job-name=08f4171c-7b2c-4808-a17f-70ac33a67041 --cpus-per-task=5 -n1 --time=10 --mem=250009m --partition=shared-bigmem /srv/beegfs/scratch/users/k/kiesers/MetaHit/.snakemake/tmp.kylx51jg/snakejob.08f4171c-7b2c-4808-a17f-70ac33a67041.a02996c9-1676-57d4-95af-d27dcd458a43.sh
Submitted group job a02996c9-1676-57d4-95af-d27dcd458a43 with external jobid '52120960'.
srun: First task exited 30s ago
srun: StepId=52120906.0 tasks 0-3,5-11: running
srun: StepId=52120906.0 task 4: exited abnormally
srun: launch/slurm: _step_signal: Terminating StepId=52120906.0
srun: Job step aborted: Waiting up to 92 seconds for job step to finish.
slurmstepd: error: *** STEP 52120906.0 ON node207 CANCELLED AT 2021-11-16T15:01:26 ***
srun: error: node207: tasks 0-2,5-11: Killed
srun: error: node207: task 3: Killed

Hi,

Hard to say without knowing how this is supposed to work.

What may be strange: according to the first line, the sbatch command submits a one-task job with 5 CPUs (-n1 --cpus-per-task=5), but in the log below there seem to be 12 tasks (0-11), with task 4 exiting abnormally?
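If accounting is enabled on your cluster, it may also help to query sacct for that job and its step: the per-task State, ExitCode and MaxRSS can narrow down whether a task hit a limit or the application itself failed. A minimal sketch (the job id is copied from your log; the --format fields are standard sacct options, and seff is only available if your site installs it):

# show state, exit code and peak memory for the job and its steps
sacct -j 52120906 --format=JobID,JobName,State,ExitCode,MaxRSS,ReqMem,Elapsed

# if available, summarize CPU and memory efficiency for the finished job
seff 52120906

scontrol show job 52120906 (while the record is still available) would also confirm how many tasks and CPUs were actually allocated, which you could compare against the 12 tasks appearing in the step.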