CPU/GPU job failing

Hello all, I am trying to get a CPU node to attach my VS Code session to, for which I use this Slurm script:

#!/bin/bash
#SBATCH --job-name=bgcpu
#SBATCH --cpus-per-task=1
#SBATCH --time=00-12:00:00
#SBATCH --partition=private-dpnc-cpu,shared-cpu
#SBATCH --output=/home/users/s/senguptd/logs/slurm-%A.out
#SBATCH --mem=25GB

srun sleep 12h

Once I get a node, I log in to that node and attach my VS Code session for the day.
However, I am getting this error when I try to launch the job.

Any idea what RaiseSignal:53 refers to, and what the solution to this is?

It looks like none of my scripts are able to launch jobs - all of them fail with the same error: RaiseSignal:53 (they don’t seem to create any log file either).
Any help is appreciated as my entire workflow is currently disrupted.

Okay, I did some digging and it looks like it was because I had hit my disk quota. I cleared my tmp (all I could) and things are running for now.
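For anyone who hits the same thing: a quick way to see where the space went is to size up your directories with standard GNU tools. This is a generic sketch, not cluster-specific advice; the paths and the exact quota command vary per cluster, so treat those as assumptions:

```shell
# Total size of the home directory (quotas are usually enforced here):
du -sh "$HOME"

# Largest first-level subdirectories, to see what is worth cleaning:
du -h --max-depth=1 "$HOME" 2>/dev/null | sort -rh | head -n 10

# Many clusters also expose a quota report; the exact command is
# cluster-specific, e.g. one of:
#   quota -s
#   lfs quota -h -u "$USER" /home    # Lustre filesystems
```

The `sort -rh` step orders the `du` output by human-readable size, so the biggest offenders appear first.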

Dear @Debajyoti.Sengupta, my 2 cents: you can check your quota usage with the command listed [here](hpc:storage_on_hpc [eResearch Doc]).