ZFS file system creation is much quicker (it basically combines the LVM and mkfs steps above), but I don't know of any clusters using ZFS to manage local file systems on their compute nodes :-)
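For comparison, here is a minimal sketch of the ZFS equivalent, assuming a hypothetical pool named "tank"; since the quota and mount point are just dataset properties, setup and teardown are one command each:

# create the per-job dataset, mounted and limited in a single step (hypothetical pool "tank")
zfs create -o mountpoint=/tmp-alloc/slurm-2147483647 -o quota=1g tank/slurm-2147483647
# cleanup at end of job
zfs destroy tank/slurm-2147483647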
[root@r00n00 /]# mkdir /tmp-alloc/slurm-2147483647
[root@r00n00 /]# xfs_quota -x -c 'project -s -p /tmp-alloc/slurm-2147483647 2147483647' /tmp-alloc
Setting up project 2147483647 (path /tmp-alloc/slurm-2147483647)...
Processed 1 (/etc/projects and cmdline) paths for project 2147483647 with recursion depth infinite (-1).
[root@r00n00 /]# xfs_quota -x -c 'limit -p bhard=1g 2147483647' /tmp-alloc
[root@r00n00 /]# cd /tmp-alloc/slurm-2147483647
[root@r00n00 slurm-2147483647]# dd if=/dev/zero of=zeroes bs=5M count=1000
dd: error writing ‘zeroes’: No space left on device
1073741824 bytes (1.1 GB) copied, 2.92232 s, 367 MB/s
:
[root@r00n00 /]# rm -rf /tmp-alloc/slurm-2147483647
[root@r00n00 /]# xfs_quota -x -c 'limit -p bhard=0 2147483647' /tmp-alloc
Since Slurm jobids max out at 0x03FFFFFF (and 2147483647 = 0x7FFFFFFF), we have an easy on-demand project id to use on the file system: the jobid itself. Slurm tmpfs plugins already have to do a mkdir to create the per-job directory, so adding two xfs_quota commands (which run in more or less O(1) time) won't extend the prolog by much. Likewise, Slurm tmpfs plugins have to scrub the directory at job cleanup, so adding another xfs_quota command will not change their epilog execution times much. The main question is "where does the tmpfs plugin find the quota limit for the job?"
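Whatever the answer to that question, the shape of the changes is small. Here is a rough sketch (not any plugin's actual code), using the jobid itself as the project id and ${TMPFS_QUOTA} as a stand-in for wherever the per-job limit ultimately comes from:

# prolog: the mkdir already happens today; the two xfs_quota calls are the additions
JOBDIR="/tmp-alloc/slurm-${SLURM_JOB_ID}"
mkdir "${JOBDIR}"
xfs_quota -x -c "project -s -p ${JOBDIR} ${SLURM_JOB_ID}" /tmp-alloc
xfs_quota -x -c "limit -p bhard=${TMPFS_QUOTA} ${SLURM_JOB_ID}" /tmp-alloc

# epilog: the rm -rf already happens today; the final xfs_quota retires the project
rm -rf "${JOBDIR}"
xfs_quota -x -c "limit -p bhard=0 ${SLURM_JOB_ID}" /tmp-alloc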