[slurm-users] sbatch - accept jobs above limits

Stephen Cousins steve.cousins at maine.edu
Wed Feb 9 06:29:03 UTC 2022


I can duplicate this error word for word by submitting a job asking for
150 GB of memory when the nodes in that partition have a maximum of 128 GB.

Take a look at the memory values in your node specifications and in your
job script or command line. Maybe there is a typo.
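
To make that concrete, here is a minimal sketch of the mismatch. The node
names, sizes and job script name below are made up for illustration, not
taken from this thread:

    # slurm.conf (hypothetical): every node in the partition has 128 GB;
    # RealMemory is given in megabytes
    NodeName=node[01-04] RealMemory=128000 Sockets=2 CoresPerSocket=16 State=UNKNOWN
    PartitionName=compute Nodes=node[01-04] Default=YES MaxTime=INFINITE State=UP

    # A request larger than any configured node is rejected at submit time
    $ sbatch --mem=150G job.sh
    sbatch: error: Memory specification can not be satisfied
    sbatch: error: Batch job submission failed: Requested node configuration is not available

    # Check what the nodes actually advertise
    $ scontrol show node node01 | grep -i RealMemory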

On Tue, Feb 8, 2022, 8:40 PM Stephen Cousins <steve.cousins at maine.edu>
wrote:

> What I'm saying is that the job might not be able to run in that
> partition. Ever. The job might be asking for more resources than the
> partition can provide. Maybe I'm wrong, but it would help to know what the
> partition definition is, along with what resources the nodes in that
> partition have specified (both of these in slurm.conf) and then what the
> job is asking for.
>
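(For anyone hitting the same message: the details Stephen is asking for can
be pulled from the live cluster as well as from slurm.conf. The partition,
node and script names below are placeholders.)

    # Partition definition as the controller sees it (Nodes, MaxMemPerNode, limits)
    $ scontrol show partition compute

    # Resources the nodes in that partition advertise (RealMemory, CPUs, Gres)
    $ scontrol show node node01

    # What the job is asking for: the #SBATCH lines in the script ...
    $ grep -E '^#SBATCH' job.sh

    # ... or a dry-run submission that validates the request without queuing it
    $ sbatch --test-only job.sh
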
> On Tue, Feb 8, 2022, 7:36 PM <z148x at arcor.de> wrote:
>
>> Yes, the partition does not meet the requirements now.
>>
>> The job should still be accepted and wait until the requested resources
>> are available.
>>
>>
>> On 09.02.22 00:11, Stephen Cousins wrote:
>> > I think this message comes up when no nodes in that partition have the
>> > resources to meet the requirements. Can you show what the partition
>> > definition is in slurm.conf along with what the job is asking for?
>> >
>> > On Tue, Feb 8, 2022, 5:25 PM <z148x at arcor.de> wrote:
>> >
>> >>
>> >> Dear all,
>> >>
>> >> sbatch jobs are immediately rejected if no suitable node is available
>> >> in the configuration.
>> >>
>> >> sbatch: error: Memory specification can not be satisfied
>> >> sbatch: error: Batch job submission failed: Requested node configuration is not available
>> >>
>> >> These jobs should be accepted if a suitable node will become active soon.
>> >> For example, these jobs could wait in the queue with the pending reason
>> >> PartitionConfig.
>> >>
>> >> Is that configurable?
>> >>
>> >>
>> >> Many thanks,
>> >>
>> >> Mike
>> >>
>> >>
>> >
>>
>>
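
On Mike's question of whether this is configurable: as far as I can tell,
sbatch validates the request against the nodes defined in slurm.conf, not
against the nodes that happen to be up at that moment, so the usual way to
get "accept now, run when the hardware arrives" is to define the larger node
ahead of time. The sketch below uses invented names and sizes; it illustrates
that idea only and is not a switch for accepting requests that no configured
node could ever satisfy.

    # slurm.conf sketch (hypothetical): a larger-memory node that is defined
    # here counts for the submit-time check even while it is down or drained
    NodeName=bignode01 RealMemory=262144 Sockets=2 CoresPerSocket=16 State=UNKNOWN
    PartitionName=compute Nodes=node[01-04],bignode01 Default=YES State=UP

    # With such a node configured, the same request should be accepted and
    # pend (typically with a reason such as ReqNodeNotAvail or Resources)
    # instead of failing with "Requested node configuration is not available"
    $ sbatch --mem=150G job.sh
    $ squeue --me --format="%i %T %r"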