[slurm-users] Slurm version 20.11.5 is now available

Brian Andrus toomuchit at gmail.com
Fri Mar 19 21:23:12 UTC 2021

The method I use for jobs is to make /scratch a symlink to wherever it 
may be best suited. Then all users just use /scratch

e.g.: /scratch -> /dev/shm for a ramdisk, or /scratch -> /mnt/ssd for 
local ssd, etc.
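A minimal sketch of that approach (the target paths are examples; on a real node you would run the `ln -sfn` as root against the actual filesystem root, picked per node type):

```shell
# Demonstrated inside a temp dir so it can run unprivileged; "$root"
# stands in for / on the compute node.
root=$(mktemp -d)
mkdir -p "$root/dev/shm"       # example ramdisk target

# Ramdisk-backed node: /scratch -> /dev/shm
ln -sfn /dev/shm "$root/scratch"
readlink "$root/scratch"       # -> /dev/shm

# Node with a local SSD mounted at /mnt/ssd, instead:
# ln -sfn /mnt/ssd "$root/scratch"
```

Since `ln -sfn` replaces an existing symlink atomically-enough for this purpose, the same command can be re-run from node provisioning without cleanup.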

Brian Andrus

On 3/19/2021 6:25 AM, Paul Edmon wrote:
> I was about to ask this as well, as we use /scratch as our tmp space, 
> not /tmp.  I haven't kicked the tires on this yet to know how it 
> works, but after I take a look at it I will probably file a feature 
> request to make the name of the tmp dir flexible.
> -Paul Edmon-
> On 3/19/2021 7:19 AM, Tina Friedrich wrote:
>> That's excellent; I've been using the 'auto_tmpdir' plugin for this; 
>> having that functionality within SLURM will be good.
>> Have a question though - we have a need to also create a per-job 
>> /scratch/ (on a shared fast file system) in much the same way.
>> I don't see a way that the current tmpfs plugin can be used to do 
>> that, as it would seem that it's hard-coded to mount things into 
>> /tmp/ (i.e. where to mount a file system cannot be changed). Or am I 
>> misreading this?
>> Tina
>> On 16/03/2021 22:26, Tim Wickberg wrote:
>>> One errant backspace snuck into that announcement: the 
>>> job_container.conf man page (with an 'r') serves as the initial 
>>> documentation for this new job_container/tmpfs plugin. The link to 
>>> the HTML version of the man page has been corrected in the text below:
>>> On 3/16/21 4:16 PM, Tim Wickberg wrote:
>>>> We are pleased to announce the availability of Slurm version 20.11.5.
>>>> This includes a number of moderate severity bug fixes, alongside a 
>>>> new job_container/tmpfs plugin developed by NERSC that can be used 
>>>> to create per-job filesystem namespaces.
>>>> Initial documentation for this plugin is available at:
>>>> https://slurm.schedmd.com/job_container.conf.html
>>>> Slurm can be downloaded from https://www.schedmd.com/downloads.php .
>>>> - Tim
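For anyone wanting to try the new plugin, a minimal configuration sketch based on the job_container.conf man page linked above (the BasePath value is an example, not from the announcement):

```
# slurm.conf additions -- the plugin requires PrologFlags=Contain:
JobContainerType=job_container/tmpfs
PrologFlags=Contain

# job_container.conf -- BasePath is where per-job directories are
# created before being bind-mounted over /tmp in the job's namespace:
AutoBasePath=true
BasePath=/var/tmp/slurm-containers
```

As the thread notes, the mount point itself is fixed at /tmp in this release; only the backing BasePath is configurable.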
