[slurm-users] Staging data on the nodes one will be processing on via sbatch
wdennis at nec-labs.com
Sat Apr 3 19:42:03 UTC 2021
We have various NFS servers that contain the data our researchers want to process. These are mounted on our Slurm clusters at well-known paths. The nodes also have fast local scratch disk at another well-known path. We do not have any distributed file systems in use. (Our Slurm clusters are basically just collections of heterogeneous nodes of differing types, not a traditional HPC setup by any means.)
In most cases, the researchers can process the data directly off the NFS mounts without any issues, but in some cases this slows down the computation unacceptably. They could manually copy the data to the local drive using an allocation and srun commands, but I am wondering whether there is a way to do this with sbatch.
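For reference, the manual approach looks roughly like this: grab an allocation, then copy, compute, and clean up inside it, so every step runs on the same node. (Paths and the process command here are placeholders, not our actual setup.)

```shell
# Sketch of the manual allocation + srun workflow (hypothetical paths/commands).
salloc --partition=gpu --nodes=1 bash -c '
  srun cp -r /nfs/datasets/mydata /local_scratch/   # stage onto local disk
  srun ./process --input /local_scratch/mydata      # compute against local copy
  srun rm -rf /local_scratch/mydata                 # remove the staged copy
'
```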
I tried this method:
wdennis at submit01 ~> sbatch transfer.sbatch
Submitted batch job 329572
wdennis at submit01 ~> sbatch --dependency=afterok:329572 test_job.sbatch
Submitted batch job 329573
wdennis at submit01 ~> sbatch --dependency=afterok:329573 rm_data.sbatch
Submitted batch job 329574
wdennis at submit01 ~>
wdennis at submit01 ~> squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
329573 gpu wdennis_ wdennis PD 0:00 1 (Dependency)
329574 gpu wdennis_ wdennis PD 0:00 1 (Dependency)
329572 gpu wdennis_ wdennis R 0:23 1 compute-gpu02
But the jobs submitted with --dependency do not seem to be guaranteed the same node that was allocated to the first job.
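As I understand it, afterok dependencies only order the jobs; they do not co-locate them. One workaround I considered (untested sketch; the wait loop is naive) is to find out which node the staging job landed on once it starts, and pin the follow-up jobs there with -w:

```shell
# Sketch: pin dependent jobs to the staging job's node (node is only
# known once the staging job is actually running, hence the wait loop).
jobid=$(sbatch --parsable transfer.sbatch)
node=""
while [ -z "$node" ]; do
  sleep 5
  node=$(squeue -j "$jobid" -h -t R -o %N)   # empty until the job runs
done
sbatch --dependency=afterok:"$jobid" -w "$node" test_job.sbatch
```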
What is the best way to do something like "stage the data to a local path / run the computation against the local copy / remove the locally staged data when complete"?
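One option I have been considering is collapsing all three steps into a single batch script, so they necessarily share a node, with a trap to remove the staged copy even if the compute step fails. A minimal sketch (all paths and the process command are hypothetical):

```shell
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --nodes=1

# Stage under a job-unique directory on local scratch (hypothetical path).
STAGE_DIR="/local_scratch/${SLURM_JOB_ID}"
mkdir -p "$STAGE_DIR"

# Clean up the staged data when the script exits, success or failure.
trap 'rm -rf "$STAGE_DIR"' EXIT

cp -r /nfs/datasets/mydata "$STAGE_DIR/"        # stage
srun ./process --input "$STAGE_DIR/mydata"      # compute against local copy
```

Since copy, compute, and cleanup all run inside one allocation, there is no need for node-pinning dependencies at all. Is this the recommended pattern, or is there a better mechanism?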