[slurm-users] Job dispatching policy
John Hearns
hearnsj at googlemail.com
Wed Apr 24 07:59:46 UTC 2019
I would suggest that if those applications really cannot be run under Slurm,
then reserve a set of nodes for interactive use, disable the Slurm daemon on
them, and direct users to those nodes.
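As a rough sketch of that approach (compute-0-1 is just the example node from
your scenario; pick whichever nodes you want to set aside):

    # On each interactive node, stop Slurm from starting jobs there:
    systemctl disable --now slurmd

    # Or, if you would rather keep slurmd running, drain the node instead:
    scontrol update NodeName=compute-0-1 State=DRAIN Reason="interactive X11 use only"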
More constructively: maybe the list can help you get the X11 applications
to run under Slurm. Could you give some details, please?
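For what it's worth, Slurm 17.11 and later (built with X11 support) have
native X11 forwarding, which covers many interactive GUI applications. A
minimal sketch, assuming you can add the prolog flag to slurm.conf:

    # slurm.conf: add X11 to whatever PrologFlags you already have
    PrologFlags=X11

    # Then, from a login node with a working X session:
    srun --pty --x11 xterm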
On Wed, 24 Apr 2019 at 07:17, Mahmood Naderan <mahmood.nt at gmail.com> wrote:
> Thanks for the info.
> The thing is that I don't want to mark the node as completely unhealthy.
> Consider the following scenarios:
>
> compute-0-0 running slurm jobs and system load is 15 (32 cores)
> compute-0-1 running non-slurm jobs and system load is 25 (32 cores)
> Then a new slurm job should be dispatched to compute-0-0
>
>
> compute-0-0 running slurm jobs and system load is 25 (32 cores)
> compute-0-1 running non-slurm jobs and system load is 10 (32 cores)
> Then a new slurm job should be run on compute-0-1 (assuming that it needs
> about 10 cores, not 30).
>
>
> I know that running non-slurm jobs sounds ugly, but there are some X11
> applications that are not slurm-friendly.
> The number of non-slurm nodes is small, though.
>
>
>
> On Tue, Apr 23, 2019, 18:45 Prentice Bisbal <pbisbal at pppl.gov> wrote:
>
>>
>> On 4/23/19 2:47 AM, Mahmood Naderan wrote:
>>
>> Hi,
>> How can I change the job distribution policy? Some nodes are also running
>> non-slurm jobs, so it seems the dispatcher isn't aware of their actual
>> system load and assumes those nodes are free.
>>
>> I want the dispatch policy to take the system load into account.
>>
>> Regards,
>> Mahmood
>>
>>
>> This is not good practice. Allowing users to run jobs outside of Slurm on
>> nodes that Slurm controls largely defeats the purpose of using Slurm in
>> the first place.
>>
>> --
>> Prentice
>>
>
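Coming back to the load question quoted above: as far as I know, Slurm's node
selection only considers the resources it has allocated itself, not the OS
load average, so non-Slurm work is invisible to it. The closest built-in
knobs are node Weight (nodes with the lowest weight are allocated first) and
CoreSpecCount (cores Slurm will never hand out to jobs). A slurm.conf sketch,
with the weights and the core count purely illustrative:

    # Dedicated node: lowest weight, so Slurm prefers it.
    NodeName=compute-0-0 CPUs=32 Weight=1
    # Shared node: chosen last, with 10 cores held back for the non-slurm
    # X11 work (enforcing that carve-out usually needs the cgroup task plugin).
    NodeName=compute-0-1 CPUs=32 Weight=10 CoreSpecCount=10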