[slurm-users] How can jobs request a minimum available (free) TmpFS disk space?

Sam Gallop (NBI) sam.gallop at nbi.ac.uk
Wed Sep 4 16:40:52 UTC 2019


Just to add to the conversation. We also wrote our own GRES plugin for this. Similarly, the GRES lets the user request the number of GB they require. The plugin invokes LVM to create a logical volume of the requested size on an SSD device, and the volume is then made available to the job. At the end of the job the volume is torn down by destroying it. This means a single deletion regardless of the number of files, which is good for us because some of our applications create many thousands of small files that could take time to delete individually.
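For illustration only, a prolog/epilog pair along those lines might look roughly like this; the volume group name vg_ssd, the mount point under /local and the REQUESTED_TMP_GB variable are assumptions, not our actual plugin:

    # prolog (sketch): create and mount a per-job logical volume of the requested size
    SIZE_GB="${REQUESTED_TMP_GB:-100}"          # hypothetical: size taken from the GRES request
    LV_NAME="job_${SLURM_JOB_ID}"
    lvcreate -L "${SIZE_GB}G" -n "${LV_NAME}" vg_ssd
    mkfs.xfs -q "/dev/vg_ssd/${LV_NAME}"
    mkdir -p "/local/${SLURM_JOB_ID}"
    mount "/dev/vg_ssd/${LV_NAME}" "/local/${SLURM_JOB_ID}"

    # epilog (sketch): one teardown, however many files the job wrote
    umount "/local/${SLURM_JOB_ID}"
    lvremove -f "vg_ssd/${LV_NAME}"
    rmdir "/local/${SLURM_JOB_ID}"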

I did play around with XFS quotas on our large systems (SGI UV300, HPE MC990-X and Superdome Flex) but I couldn't get it working the way I wanted (or the way I thought it should work). I'll revisit it knowing that other people have got XFS quotas working.
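For reference, the usual static recipe for XFS project (directory) quotas goes roughly as follows; the mount point /scratch, the project name and the 500 GB cap are made-up examples, and the filesystem has to be mounted with the prjquota option:

    # /etc/fstab: enable project quotas on the filesystem
    #   /dev/sdb1  /scratch  xfs  defaults,prjquota  0 0

    # declare the project (ID 42, hypothetical) and the directory it covers
    echo "42:/scratch/myproj" >> /etc/projects
    echo "myproj:42"          >> /etc/projid

    # initialise the project and cap it at 500 GB
    xfs_quota -x -c 'project -s myproj' /scratch
    xfs_quota -x -c 'limit -p bhard=500g myproj' /scratch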

---
Samuel Gallop
Computing infrastructure for Science
CiS Support & Development

-----Original Message-----
From: slurm-users <slurm-users-bounces at lists.schedmd.com> On Behalf Of Chris Samuel
Sent: 04 September 2019 07:50
To: slurm-users at lists.schedmd.com
Subject: Re: [slurm-users] How can jobs request a minimum available (free) TmpFS disk space?

On Monday, 2 September 2019 11:02:57 AM PDT Ole Holm Nielsen wrote:

> We have some users requesting that a certain minimum size of the
> *Available* (i.e., free) TmpFS disk space should be present on nodes 
> before a job should be considered by the scheduler for a set of nodes.

At Swinburne I did this by defining a GRES called "tmp" for nodes and then translating any --tmp request into a GRES request in the submit filter.
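As a rough illustration of the GRES side only (node names, the count and the one-unit-equals-one-GB convention are assumptions, not Swinburne's actual config; the --tmp translation itself lives in the submit filter as described above):

    # slurm.conf (sketch)
    GresTypes=tmp
    NodeName=node[001-100] Gres=tmp:800 ...

    # gres.conf on each node (sketch)
    Name=tmp Count=800

    # a job would then request, e.g.:
    #   sbatch --gres=tmp:200 job.sh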

The prolog for the job would then create a directory for the job in the node's local NVMe XFS /tmp filesystem, set a project quota on that directory to the amount requested (so it couldn't be exceeded) and then use the private tmp SPANK plugin to map that into what the job saw as /tmp, /var/tmp and /dev/shm.
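A minimal sketch of that prolog step, assuming the job ID doubles as the XFS project ID, the local filesystem is mounted at /tmp with prjquota, and TMP_GB (hypothetical) holds the requested size:

    # prolog (sketch): per-job directory capped by an XFS project quota
    JOBDIR="/tmp/slurm_${SLURM_JOB_ID}"
    mkdir -p "${JOBDIR}"
    # tag the directory with a project ID (here simply the job ID) and limit its size
    xfs_quota -x -c "project -s -p ${JOBDIR} ${SLURM_JOB_ID}" /tmp
    xfs_quota -x -c "limit -p bhard=${TMP_GB}g ${SLURM_JOB_ID}" /tmp
    # the private tmp SPANK plugin then presents ${JOBDIR} to the job as /tmp, /var/tmp and /dev/shm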

The epilog then cleaned up after the job.
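And a correspondingly small epilog sketch, with the same assumed paths:

    # epilog (sketch): drop the quota and remove the job's directory
    xfs_quota -x -c "limit -p bhard=0 ${SLURM_JOB_ID}" /tmp
    rm -rf "/tmp/slurm_${SLURM_JOB_ID}"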

Worked nicely!

All the best,
Chris
--
  Chris Samuel  :  http://www.csamuel.org/  :  Berkeley, CA, USA
