Hi,
When working on scratch, I am getting “Disk quota exceeded” simply by moving files around. It doesn’t look like I have exceeded the limit of 10,000,000 chunk files.
Hope you can help.
Cheers,
Malte
Works again now
Cheers,
Malte
I am getting the same error
scratch seems to be 99% filled. Could this be the reason?
(baobab)-[coppinp@login2 scratch]$ df -hT
Filesystem     Type    Size  Used Avail Use% Mounted on
beegfs_scratch beegfs  1.5P  1.5P   20T  99% /srv/beegfs/scratch
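As a side note, “Disk quota exceeded” can be triggered either by the space quota or by the file-count (inode/chunk) quota, so it is worth checking both. A minimal check with standard `df`, run from a directory on the filesystem in question (`.` stands in for a scratch path):

```shell
# Space usage and filesystem type for the current directory's filesystem
df -hT .

# Inode (file-count) usage: this can be exhausted even while
# plenty of bytes remain free, which also raises EDQUOT
df -i .
```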
Update: it seems it was indeed because scratch filled up.
I created some space and am currently moving ~20TB of communal data used by our entire research group (DAMPE) to another storage system. Hopefully this will help.
Update: since this afternoon there seems to be 78T (5%) of free disk space.
Yet I still had jobs that failed at 21:05:08 this evening with the error:
IOError: [Errno 122] Disk quota exceeded
@HPC Any idea?
Hi @Malte.Algren, @Paul.Coppin
Currently the scratch filesystem seems to have enough capacity, both in space and in inodes:
2024-05-15T07:04:00Z
(baobab)-[root@login2 ~]$ df -t beegfs -h
Filesystem Size Used Avail Use% Mounted on
beegfs_home 138T 116T 22T 85% /home
beegfs_scratch 1.5P 1.4P 83T 95% /srv/beegfs/scratch
However, we’re not immune to the possibility that the aggregate of current jobs may temporarily fill the scratch.
Are you sure your error appeared after the space was freed up?
Well, I think that is exactly the problem when running long jobs that write files on scratch: it might fill up temporarily, and your job will crash.
But no, at the moment I am not getting the error.
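If the scratch only fills transiently, a job can protect its own writes by retrying on EDQUOT (errno 122; in Python 3 the old `IOError` from the traceback above is an alias of `OSError`). A minimal sketch, where the helper name, attempt count, and delay are all illustrative choices, not anything from the cluster's tooling:

```python
import errno
import time

def write_with_retry(path, data, attempts=5, delay=30.0):
    """Write data to path, retrying when the filesystem reports
    EDQUOT (errno 122, "Disk quota exceeded"), which can be transient
    on a shared scratch while other jobs free space or files."""
    for attempt in range(1, attempts + 1):
        try:
            with open(path, "wb") as fh:
                fh.write(data)
            return
        except OSError as exc:
            # Re-raise anything that is not a quota error, and the
            # quota error itself once we are out of attempts.
            if exc.errno != errno.EDQUOT or attempt == attempts:
                raise
            time.sleep(delay)  # give other jobs time to free space
```

This only helps with short-lived fills; if the quota stays exhausted, the last attempt still raises and the job fails as before.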
Still seeing the issue where scripts run for some time, but eventually I get “IOError: [Errno 122] Disk quota exceeded”.
Any way to add additional storage to the scratch?
Same here: almost all my jobs suddenly failed again yesterday evening (2024-05-15) because of the disk quota error.
Hello,
@Yann.Sagon found the problem! For the share “private_dpnc”, the quota on the number of files was reached:
(baobab)-[algren@admin1 ~]$ beegfs-get-quota-home-scratch.sh -g private_dpnc
      user/group          ||            size           ||     chunk files
storage |     name     | id   ||    used     |    hard     ||   used   |   hard
--------|--------------|------||-------------|-------------||----------|----------
   home | private_dpnc | 1014 ||   12.76 TiB |  unlimited  || 24719933 | unlimited
scratch | private_dpnc | 1014 ||  489.86 TiB |  875.00 TiB || 69942301 |  70000000
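The scratch line above also shows why jobs failed while the space quota still had headroom: the chunk-file count was within a fraction of a percent of the hard limit. A quick sanity check on the figures:

```python
# Figures taken from the quota output above
used_chunks = 69_942_301
hard_limit = 70_000_000

headroom = hard_limit - used_chunks
print(f"chunk files used: {used_chunks / hard_limit:.2%}")  # 99.92%
print(f"files until quota: {headroom}")                     # 57699
```

So any burst of file creation larger than ~58k files would hit the quota, even with hundreds of TiB of space free.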
As DPNC has dedicated storage, I will update this value to 100000000.
Best regards,
@Yann.Sagon @Gael.Rossignol Thanks a lot for digging out the issue and updating the quota!