[slurm-users] [External] Another question about partition and node allocation
Michael Robbert
mrobbert at mines.edu
Wed Apr 15 13:42:49 UTC 2020
A more flexible way to do this is with QoS (PreemptType=preempt/qos). You'll need to have accounting enabled, and you'll probably want qos listed in AccountingStorageEnforce. Once that is in place, create a "shared" QoS for the scavenger jobs and a QoS for each group that buys into resources. Assign the correct number of resources to each group's QoS as GrpTRES limits, mark the "shared" QoS as preemptable by each group's QoS (via its Preempt setting), and give each group's QoS a higher priority.
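A minimal sketch of that setup, assuming one partition spanning all 40 nodes and QoS names matching your partitions (the priorities and the cpu=384 figure for exp1 are just illustrative, taken from the numbers in your question):

# slurm.conf
PreemptType=preempt/qos
PreemptMode=CANCEL
AccountingStorageEnforce=limits,qos

# QoS setup with sacctmgr
sacctmgr add qos shared
sacctmgr modify qos shared set Priority=10
sacctmgr add qos exp1
sacctmgr modify qos exp1 set Priority=50 GrpTRES=cpu=384 Preempt=shared
# repeat for exp2, exp3 with each group's purchased core count
sacctmgr modify account name=exp1 set qos=exp1 defaultqos=exp1

Scavenger jobs would then run under the "shared" QoS, and each group's jobs under its own QoS, which can preempt shared jobs anywhere in the cluster up to that group's GrpTRES limit rather than only on specific hosts.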
Mike Robbert
On 4/10/20, 15:34, "slurm-users on behalf of Renata Maria Dart" <slurm-users-bounces at lists.schedmd.com on behalf of renata at slac.stanford.edu> wrote:
Hi, we have 40 nodes (all the same, AMD nodes with 128 cores) which
have all been purchased by different groups at our lab, and each group
would of course like to have immediate access to what they have paid
for. The stakeholder groups are also fine with allowing the general
public to use their hosts/cores, provided they can preempt the general
public's jobs. One way I can see to do that is to assign specific
nodes to each stakeholder group, with each group defined as a partition, something like this:
PartitionName=shared Default=yes Priority=10 MaxTime=5-00:00:00 DefaultTime=30 PreemptMode=CANCEL State=UP Nodes=amd[0001-0040]
PartitionName=exp1 Default=no Priority=50 MaxTime=5-00:00:00 DefaultTime=1-00:00:00 PreemptMode=OFF State=UP Nodes=amd[0001-0003]
PartitionName=exp2 Default=no Priority=50 MaxTime=5-00:00:00 DefaultTime=1-00:00:00 PreemptMode=OFF State=UP Nodes=amd[0004-0019]
PartitionName=exp3 Default=no Priority=50 MaxTime=5-00:00:00 DefaultTime=1-00:00:00 PreemptMode=OFF State=UP Nodes=amd[0020-0040]
Is this the most efficient and best use of resources? In the above
scenario, if scavenger jobs are running on a given experiment's hosts
and the experiment needs to run jobs, then the scavenger jobs get
preempted, even if there are idle hosts in the other stakeholder
partitions. Is there a way to guarantee, say, exp1 priority on 384
cores without necessarily tying them to 3 specific hosts?
Thanks,
Renata