I'm not sure I understand why your app must decide the placement, rather than tell Slurm about the requirements (this sounds suspiciously like Not Invented Here syndrome), but Slurm does have the '-w' flag to salloc, sbatch and srun.
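For illustration, a minimal sketch of what that looks like (the node names and batch script are placeholders):

    # request two specific nodes for an interactive allocation
    salloc -w node[01-02] -N2 --ntasks-per-node=16
    # the long form works the same way with sbatch and srun
    sbatch --nodelist=node01,node02 --exclusive my_job.sh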
I also don't understand how, if you don't have an entire cluster to yourselves, you could do a, not to mention b or c. Any change to Slurm's select mechanism is always site-wide.
I might be going out on a limb here, but I think Slurm would probably make better placement choices than your self-developed app, if you can communicate the requirements well enough.
How does your app choose placement and cores? Why can't it communicate those requirements to Slurm instead of making the decision itself?
I can guess at some reasons, and there can be many, including but not limited to: topology, heterogeneous HW and different parts of the app having different HW requirements, some results placed on certain nodes requiring follow-up jobs to run on the same nodes, NUMA considerations for acceleration cards (including custom, mostly FPGA, cards), etc.
If you describe the placement algorithm (in broad strokes), perhaps we can find a Slurm solution that doesn't require breaking existing sites. If such a solution exists, how much would it cost to 'degrade' your app to communicate those requirements to Slurm instead of making the placement decisions itself?
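To give a flavor of what that could look like, here is a rough sketch of expressing requirements like the ones above (node features, accelerator cards, topology locality, follow-up jobs on the same nodes) at submission time instead of by explicit placement. The 'fpga' feature and GRES names are made up, and --switches only means something if the site runs a topology plugin:

    # hypothetical 'fpga' feature and GRES, purely illustrative
    sbatch --constraint=fpga --gres=fpga:1 \
           --nodes=4 --ntasks-per-node=8 --ntasks-per-socket=4 \
           --switches=1 my_job.sh
    # a follow-up job that must land on the same nodes can be chained with
    # --dependency=afterok:<jobid> plus -w with the first job's node list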
It's possible that you would be better off investing in developing a monitoring solution to cover the 'update it regularly about any change in the current state of available resources' part.
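As a rough sketch, even plain CLI polling would get you most of the way (the format fields below are standard sinfo/squeue options; the polling interval and how you feed the output to your backend are up to you):

    # per-node view: name, CPUs as allocated/idle/other/total, free memory,
    # features and node state
    sinfo -N -h -o "%N %C %e %f %t"
    # pending and running jobs, to see what is about to consume those resources
    squeue -h -o "%i %T %D %C %R"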
Again, even with such monitoring, direct placement is ruled out if you use a site without total ownership - no site will allow you to place jobs without first allocating you the resources, no matter the scheduling solution, which brings us back to using `salloc -w`.
That said, --nodelist has the downside of requesting specific nodes that might not be available, causing your jobs to starve even while other resources are available.
Imagine the following scenario:
1. Your app gets resource availability from Slurm.
2. Your app starts calculating the placement.
3. Meanwhile, Slurm allocates those resources to other jobs.
4. The plugin communicates the need to recalculate placement.
5. Your app restarts its calculation.
6. Meanwhile, Slurm allocates the resources your app was now going to use, since it was never told to reserve anything for you.
...
On highly active clusters, with pending queues in the millions, such a starvation scenario is not that far-fetched.
Best,
--Dani_L.
On 09/07/2024 11:15:51, Bhaskar Chakraborty via slurm-users wrote:
Hello,
We wish to have a scheduling integration with Slurm. Our own application has a backend system which will decide the placement of jobs across hosts & CPU cores. The backend takes its own time to come back with a placement (which may take a few seconds) & we expect Slurm to update it regularly about any change in the current state of available resources.
For this we believe we have 3 options broadly:
- We use the cons_tres Select plugin & modify it to let it query our backend system for job placements.
- We write our own Select plugin, without relying on any existing Select plugin.
- We use an existing Select plugin & also register our own plugin. The idea is that our plugin would cater to 'our' jobs (a specific partition, say) while all other jobs would be handled by the default plugin.
The problem with a> is that it leads to modifying existing plugin code & calling (our) library code from inside the Select plugin lib.
With b> the issue is that unless we have the full Slurm cluster to ourselves this isn't viable. Any insight into how to proceed with this? Where should our Select plugin, assuming we need to make one, fit in the Slurm integration?
We are not sure whether c> is allowed in Slurm.
We went through the existing Select plugins linear & cons_tres. However, we are not able to figure out how to use them or write something along similar lines to suit our purpose. Any help in this regard is appreciated.
Apologies if this question (or a very similar one) has already been answered; in that case please point to the relevant thread.
Thanks in advance for any pointers.
Regards,
Bhaskar.