[slurm-users] How to queue jobs based on non-existent features

Raj Sahae rsahae at tesla.com
Fri Jul 10 18:12:22 UTC 2020


Hi Paddy,

Yes, this is a CI/CD pipeline. We currently use Jenkins pipelines, but Jenkins has some significant drawbacks that Slurm solves out of the box, which makes Slurm an attractive alternative.
You noted some of them already: good real-time queue management, preemption, node weighting, and high-resolution priority queueing.
Jenkins also doesn't scale as well with respect to node management; it's quite resource-heavy.

My original email was a bit wordy, but I should emphasize: if all we want is for Slurm to do exactly what our current Jenkins pipeline does, we can already do that, and it works reasonably well.
Now I'm trying to move beyond feature parity, and that's where I'm having trouble.

Thanks,

Raj Sahae | m. +1 (408) 230-8531

From: slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of Paddy Doyle <paddy at tchpc.tcd.ie>
Reply-To: Slurm User Community List <slurm-users at lists.schedmd.com>
Date: Friday, July 10, 2020 at 10:31 AM
To: Slurm User Community List <slurm-users at lists.schedmd.com>
Subject: Re: [slurm-users] How to queue jobs based on non-existent features

Hi Raj,

It sounds like you might be coming from a CI/CD pipeline setup, but just in
case you're not, would you consider something like Jenkins or Gitlab CI
instead of Slurm?

The users could create multi-stage pipelines, with the 'build' stage
installing the required software version, and then multiple 'test' stages
to run the tests.
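
For illustration, something along these lines in .gitlab-ci.yml (an
untested sketch; the script names and the variable are placeholders for
whatever your install and test tooling looks like):

    # Untested sketch: install once, then fan out into test jobs.
    stages:
      - build
      - test

    install_software:
      stage: build
      script:
        - ./install-version.sh "$TARGET_VERSION"   # placeholder

    smoke_tests:
      stage: test
      script:
        - ./run-tests.sh smoke                     # placeholder

    regression_tests:
      stage: test
      script:
        - ./run-tests.sh regression                # placeholder

Both test jobs only start once the build stage has succeeded.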

It's not the same idea as queuing up multiple jobs, and you don't get
the queue priorities, weighting, and all of that good stuff from Slurm
that you are looking for.

Within Slurm, yeah, writing custom JobSubmitPlugins and NodeFeaturesPlugins
might be required.
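
For example, a Lua job_submit plugin could stash a constraint that
doesn't exist yet and hold the job, instead of letting the submission
be rejected. A rough and completely untested sketch of the idea:

    -- job_submit.lua -- untested sketch of the idea only.
    -- Stash the requested constraint and hold the job so submission
    -- succeeds; something external would re-apply the constraint and
    -- release the job once a node actually advertises the feature.
    function slurm_job_submit(job_desc, part_list, submit_uid)
       if job_desc.features ~= nil then
          job_desc.comment = "wanted_feature=" .. job_desc.features
          job_desc.features = nil
          job_desc.priority = 0  -- priority 0 holds the job
       end
       return slurm.SUCCESS
    end

    function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
       return slurm.SUCCESS
    end

You'd lose constraint-based scheduling while the job is held, so it's a
starting point rather than a solution.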

Paddy

On Thu, Jul 09, 2020 at 11:15:57PM +0000, Raj Sahae wrote:

> Hi all,
>
> My apologies if this is sent twice. The first time, I sent it before my subscription to the list was complete.
>
> I am attempting to use Slurm as a test automation system for its fairly advanced queueing and job control abilities, and also because it scales very well.
> However, since our use case is a bit outside the standard usage of Slurm, we are hitting some issues that don’t appear to have obvious solutions.
>
> In our current setup, the Slurm nodes are hosts attached to a test system. Our pipeline (greatly simplified) would be to install some software on the test system and then run sets of tests against it.
> In our old pipeline, this was done in a single job. With Slurm, however, I was hoping to decouple these two actions, since that makes the entire pipeline more robust to update failures and gives us finer-grained control over the actual test run.
>
> I would like to allow users to queue jobs with constraints indicating which software version they need. Then, separately, some automated job would scan the queue, see jobs that are not being allocated due to missing resources, and queue software installs appropriately.
>
> We attempted to do this using the Active/Available Features configuration. We use HealthCheck and Epilog scripts to scrape the test system for software properties (version, commit, etc.) and assign them to the nodes as Features. Once an install is complete and the Features are updated, queued jobs would start to be allocated on those nodes.
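>
> For reference, the feature-update step amounts to something like this (a simplified sketch; query-sw-version is a stand-in for our real tooling):
>
>     #!/bin/bash
>     # Simplified sketch of the Epilog/HealthCheck feature update.
>     # query-sw-version is a placeholder for however the host asks
>     # the attached test system what is currently installed.
>     version="$(query-sw-version --short)"    # e.g. "Version-B"
>     scontrol update NodeName="$(hostname -s)" \
>         AvailableFeatures="$version" ActiveFeatures="$version"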
>
> Herein lies the conundrum. If a user submits a job constrained to run on Version A, but all nodes in the cluster are currently configured with Features=Version-B, Slurm rejects the job at submission time with an invalid feature specification error (a rough transcript follows the list below). I completely understand why Features are implemented this way, so my question is: is there some workaround or other Slurm capability that I could use to achieve this behavior? Otherwise my options seem to be:
>
> 1. Go back to how we did it before. The pipeline would have the same level of robustness as before, but at least we would still be able to leverage Slurm's other queueing capabilities.
> 2. Write our own Feature or Job Submit plugin that customizes this behavior just for us. This seems possible, but it adds lead time and complexity.
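>
> To make the failure concrete, here is roughly what it looks like (approximate transcript; node and version names made up):
>
>     $ sinfo -h -o "%n %f"
>     node01 Version-B
>     $ sbatch --constraint=Version-A --wrap="run-tests smoke"
>     sbatch: error: Batch job submission failed: Invalid feature specification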
>
> It's not feasible to list every branch/version/commit as an AvailableFeature in the config: our branch ecosystem is quite large, and maintaining that approach would not scale.
>
> Thanks,
>
> Raj Sahae | Manager, Software QA
> 3500 Deer Creek Rd, Palo Alto, CA 94304
> m. +1 (408) 230-8531 | rsahae at tesla.com
>
> http://www.tesla.com
>



--
Paddy Doyle
Research IT / Trinity Centre for High Performance Computing,
Lloyd Building, Trinity College Dublin, Dublin 2, Ireland.
Phone: +353-1-896-3725
https://www.tchpc.tcd.ie/
