[slurm-users] Distribute a single node's resources across multiple partitions
Jason Simms
jsimms1 at swarthmore.edu
Thu Jul 6 12:58:38 UTC 2023
Hello Purvesh,
I'm not an expert in this, but I expect a common question would be: why do
you want to do this? More information would be helpful. On the surface, it
seems you could simply allocate two full nodes to each partition. You must
have a reason why that is unacceptable, however.
My first inclination, without more information, is to say, "don't do that."
If you must, one way I can think to (sort of) accomplish what you want is
to configure the partitions with the MaxCPUsPerNode option:
PartitionName=ppart Nodes=node[01-04] MaxCPUsPerNode=8
PartitionName=cpart Nodes=node[01-04] MaxCPUsPerNode=8
I don't think this guarantees which specific CPUs are assigned to each
partition, though I do believe there may be a way to do that. In any case,
this might work for your needs.
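For context, a rough sketch of how that might sit in slurm.conf, assuming a
CPUs=16 node definition to match your description (the node names and the
job script name are just placeholders):

NodeName=node[01-04] CPUs=16 State=UNKNOWN
PartitionName=ppart Nodes=node[01-04] MaxCPUsPerNode=8
PartitionName=cpart Nodes=node[01-04] MaxCPUsPerNode=8

After an "scontrol reconfigure", users would target one side or the other
with the usual -p flag, e.g. "sbatch -p ppart -N 1 -n 8 job.sh", and Slurm
would not allocate more than 8 CPUs on any given node to jobs in either
partition.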
Warmest regards,
Jason
On Thu, Jul 6, 2023 at 8:24 AM Purvesh Parmar <purveshp0507 at gmail.com>
wrote:
> Hi,
>
> Do I need to run separate slurmctld and slurmd daemons for this? I am
> struggling with this. Any pointers?
>
> --
> Purvesh
>
>
> On Mon, 26 Jun 2023 at 12:15, Purvesh Parmar <purveshp0507 at gmail.com>
> wrote:
>
>> Hi,
>>
>> I have Slurm 20.11 on a cluster of 4 nodes, with each node having 16
>> CPUs. I want to create two partitions (ppart and cpart) such that 8
>> cores from each of the 4 nodes are part of ppart and the
>> remaining 8 cores are part of cpart; in other words, I want to distribute
>> each node's resources across multiple partitions exclusively. How do I go
>> about this?
>>
>>
>> --
>> Purvesh
>>
>
--
*Jason L. Simms, Ph.D., M.P.H.*
Manager of Research Computing
Swarthmore College
Information Technology Services
(610) 328-8102
Schedule a meeting: https://calendly.com/jlsimms