<div dir="ltr"><div>Will, there are some excellent responses here.</div><div>I agree that moving data to local fast storage on a node is a great idea.</div><div><br></div><div>Regarding the NFS storage, I would look at implementing BeeGFS if you can get some new hardware or free up existing hardware.</div><div>BeeGFS is a skoosh case to set up.<br></div><div><br></div><div>(*) Scottish slang. Skoosh case - very easy</div><div><br></div><div><br></div><div><br></div><div><br></div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div class="gmail_attr" dir="ltr">On Sat, 23 Feb 2019 at 04:56, Raymond Wan <<a href="mailto:rwan.work@gmail.com">rwan.work@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid"><br>

Hi Will,

On 23/2/2019 1:50 AM, Will Dennis wrote:
> For one of my groups, on the GPU servers in their cluster, I have provided a RAID-0 md array of multi-TB SSDs (for I/O speed) mounted on a given path ("/mnt/local" for historical reasons) that they can use for local scratch space. Their other servers in the cluster have a single multi-TB spinning disk mounted at that same path. We do not manage the data at all on this path; it's currently up to the researchers to put needed data there, and remove the data when it is no longer needed. (They wanted us to auto-manage the removal, but we aren't in a position to know what data they still need or not, and "delete data if atime/mtime is older than [...]" via cron is a bit too simplistic.) They can use that local-disk path in any way they want, with the caveat that it's not to be used as "permanent storage", there's no backups, and if we suffer a disk failure, etc., we just replace with new and the old data is gone.
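
(For concreteness, that sort of cron sweep would boil down to something like the sketch below. The path is the one from the post, but the 30-day cutoff is just a placeholder, and the script has no idea what is actually still needed, which is exactly the problem:)

#!/usr/bin/env python3
# Naive age-based scratch sweep, for illustration only.
import os
import time

SCRATCH = "/mnt/local"       # example scratch mount from the post
MAX_AGE_DAYS = 30            # arbitrary placeholder cutoff
cutoff = time.time() - MAX_AGE_DAYS * 86400

for root, dirs, files in os.walk(SCRATCH):
    for name in files:
        path = os.path.join(root, name)
        try:
            st = os.stat(path)
        except FileNotFoundError:
            continue
        # Keep anything read or written recently; remove everything else.
        if max(st.st_atime, st.st_mtime) < cutoff:
            os.remove(path)  # "still needed" never enters into it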

IMHO, auto-managing the data removal is a slippery slope. If the disk space is the research group's, perhaps just let them manage it. Whatever expiry date you put on files, someone will come along and ask you to change it.

I suppose one thing you could do, if you do need to auto-manage it, is ask them to write scripts that "touch" the files they've used (even if they only read them). I guess it's up to you how involved you want to be.
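
For example, a job script could end by refreshing the timestamps of whatever it read, with something as small as this (the helper name and the example path are made up, not anything from your setup):

#!/usr/bin/env python3
# keep_fresh.py (hypothetical helper): bump atime/mtime on the listed files
# without changing their contents -- the same effect as running touch on them.
import os
import sys

for path in sys.argv[1:]:
    os.utime(path, None)    # None = set both timestamps to the current time

e.g. "python3 keep_fresh.py /mnt/local/mygroup/dataset/*" as the last line of the job script, so an age-based sweep would leave those files alone.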

> The other group has (at this moment) no local disk at all on their worker nodes. They actually work with even bigger data sets than the first group, and they are the ones that really need a solution. I figured that if I solve the one group's problem, I also can implement on the other (and perhaps even on future Slurm clusters we spin up.)

Sounds like the problem is really how willing this second group is to purchase additional local disk space. (Which, to be effective, should be the same space at the same path across all nodes. And that's assuming you have the space on each node... The servers that I use have one local disk per node; there wouldn't be enough drive bays for every research group to add a drive -- we have more than two research groups.)

> A few other questions I have:
> - is it possible in Slurm to define more than one filesystem path (i.e., other than "/tmp") as "TmpDisk"?
> - any way to allocate storage on a node via GRES or another method?

It seems there have been more useful replies since. But about your first question, I think I can answer for the systems I use. I don't believe "/tmp" has been specifically set as the "TmpDisk" in the SLURM configuration. We have Unix-level read/write access to it. We can also "cd" over to our NFS-mounted home directories when we run our programs (at the top of the SLURM-submitted script).

In that sense, our system administrators gave us the freedom to choose. But on the downside, they never did any profiling or offered suggestions such as running programs on local disk.

Anyway, they just allocated some space on the local disk as /tmp. I didn't mean that it was specifically configured as TmpDisk, as far as I know.
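
If it helps, my understanding (worth double-checking against the slurm.conf and gres.conf man pages, since I haven't tested this) is that the scratch path is a single cluster-wide setting, TmpFS, so I don't think you can point TmpDisk at more than one path; a node's scratch capacity can also be exposed as a plain counted GRES. Roughly, with node names and sizes as placeholders:

# slurm.conf (sketch, untested; node names and sizes are made up)
TmpFS=/mnt/local                          # the filesystem a node's TmpDisk figure refers to (default /tmp)
NodeName=node[01-04] TmpDisk=3800000 ...  # MB of scratch; jobs can then request it with sbatch --tmp
# Or count scratch as a generic resource (Slurm only counts it, it does not
# enforce how much a job actually writes), with a matching entry in gres.conf:
GresTypes=localtmp
NodeName=node[01-04] Gres=localtmp:3800 ...

Jobs would then ask for it with something like --gres=localtmp:500.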

Good luck!

Ray