Batch job results in Time limit error

I am getting a time limit error:

Sat Oct  5 19:19:11 CEST 2019
Python 3.6.6
slurmstepd: error: *** JOB 20770298 ON gpu006 CANCELLED AT 2019-10-05T19:20:26 DUE TO TIME LIMIT ***

The job is cut off only about one minute after it starts…

My script:

  # SBATCH --time=0-:02:00
  #SBATCH --partition=shared-gpu-EL7
  #SBATCH --output=slurm-%x-%A_%a.out
  ##SBATCH --mem=20000
   ##SBATCH --gres=gpu:titan:1

I get cut off after 1 minute, so maybe it's a configuration issue.
Is there some way to know what limits apply to me? I also noticed I can run at most two jobs at a time.

I tried both of these variations:

SBATCH --time=0-:02:00:00

SBATCH --time=100

Any ideas?

Hi, I normally use the format HH:MM:SS for time. sbatch is very picky about the time argument; if it is written incorrectly, it will do strange things.


You have a space right after the “#”, so that line is treated as an ordinary comment and the --time directive is silently ignored.
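For reference, a corrected header might look like this, a sketch reusing the partition and output names from the original script, with the 2-minute limit written in sbatch's days-hours:minutes:seconds form:

```shell
#!/bin/sh
# No space between "#" and "SBATCH", and a valid time specification:
#SBATCH --time=0-00:02:00            # days-hours:minutes:seconds (2 minutes)
#SBATCH --partition=shared-gpu-EL7
#SBATCH --output=slurm-%x-%A_%a.out

# Equivalent forms sbatch accepts for the same limit:
#   --time=2         (plain minutes)
#   --time=00:02:00  (hours:minutes:seconds)
```

Note that `--time=0-:02:00` is not one of sbatch's accepted forms (minutes, minutes:seconds, hours:minutes:seconds, days-hours, days-hours:minutes, or days-hours:minutes:seconds), which is likely why the scheduler fell back to a much shorter limit.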

Maybe you are too demanding! According to the man page:

Time may be of the form HH:MM:SS to run a job at a specific time of day (seconds are optional). If that time is already past, the next day is assumed. You may also specify midnight, noon, fika (3 PM) or teatime (4 PM), and you can have a time-of-day suffixed with AM or PM for running in the morning or the evening. You can also say what day the job will be run, by specifying a date of the form MMDDYY, MM/DD/YY or YYYY-MM-DD. Combine date and time using the format YYYY-MM-DD[THH:MM[:SS]]. You can also give times like now + count time-units, where the time-units can be seconds (default), minutes, hours, days, or weeks, and you can tell Slurm to run the job today with the keyword today and to run the job tomorrow with the keyword tomorrow.
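On the earlier question of how to check what limits apply to you: a sketch, assuming a standard Slurm installation with accounting enabled (the partition name is taken from the original post):

```shell
# Show the partition's configured limits (MaxTime, MaxNodes, ...):
scontrol show partition shared-gpu-EL7

# Show per-user association limits, e.g. the maximum number of jobs
# you may run or submit at once:
sacctmgr show assoc where user=$USER \
    format=User,Account,Partition,MaxJobs,MaxSubmit

# Show the time limit (%l), elapsed time (%M) and state (%t) of your
# queued and running jobs:
squeue -u $USER -o "%.10i %.12P %.10l %.10M %.2t"
```

If `sacctmgr` reports MaxJobs=2 for your association, that would explain why only two of your jobs run at a time.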