[slurm-users] job priority keeping resources from being used?
Brian Andrus
toomuchit at gmail.com
Fri Nov 1 17:57:46 UTC 2019
I ask whether you are specifying it because, if you don't, Slurm assumes
the job will use all of the memory available on the node.
So without a memory request, your big job gets allocated 100% of the
node's memory and nothing else can be scheduled there. The same applies
if you don't specify memory for the little jobs: each one asks for 100%,
and if anything is already running on the node, 100% is not available as
far as Slurm is concerned.
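If you aren't setting it, a per-job request along these lines should let
the small jobs backfill next to the big one. This is just a sketch; the
sizes, job name, and command below are placeholders, not your actual
values:

    #!/bin/bash
    #SBATCH --job-name=small-job
    #SBATCH --cpus-per-task=1
    #SBATCH --mem=2G          # request only what the job actually needs
    srun ./my_small_job       # placeholder for your real command

or equivalently on the command line, e.g. "sbatch --mem=2G ..." (use
--mem-per-cpu instead if that fits your setup better).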
Brian
On 11/1/2019 10:52 AM, c b wrote:
> yes, there is enough memory for each of these jobs, and there is
> enough memory to run the high resource and low resource jobs at the
> same time.
>
> On Fri, Nov 1, 2019 at 1:37 PM Brian Andrus <toomuchit at gmail.com
> <mailto:toomuchit at gmail.com>> wrote:
>
> Are you specifying memory for each of the jobs?
>
> Can't run a small job if there isn't enough memory available for it.
>
> Brian Andrus
>
> On 11/1/2019 7:42 AM, c b wrote:
>> I have:
>> SelectType=select/cons_res
>> SelectTypeParameters=CR_CPU_Memory
>>
>> On Fri, Nov 1, 2019 at 10:39 AM Mark Hahn <hahn at mcmaster.ca
>> <mailto:hahn at mcmaster.ca>> wrote:
>>
>> > In theory, these small jobs could slip in and run alongside
>> the large jobs,
>>
>> what are your SelectType and SelectTypeParameters settings?
>> ExclusiveUser=YES on partitions?
>>
>> regards, mark hahn.
>>