[slurm-users] ReqNodeNotAvail, but none of nodes in partition are listed.
pbisbal at pppl.gov
Mon May 7 15:55:47 MDT 2018
> Fewer. ;)
True. What was I thinking?
> sometimes even the person who set the reservation doesn’t figure it out.
Like me/us? ;)
On 05/07/2018 05:42 PM, Ryan Novosielski wrote:
> Fewer. ;)
> I think rumor had it that there were plans for some improvement in this area (you might check the bugs or this mailing list — I can’t remember where I saw it, but it was a while back now), because ReqNodeNotAvail almost never means something useful, and reservations don’t generate any message whatsoever that would indicate that they are there. Almost 100% of the time we see questions about this at our site, it’s a reservation doing it, and sometimes even the person who set the reservation doesn’t figure it out.
>> On May 7, 2018, at 5:32 PM, Prentice Bisbal <pbisbal at pppl.gov> wrote:
>> Dang it. That's it. I recently changed the default time limit on some of my partitions, to only 48 hours. I have a reservation that starts on Friday at 5 PM. These jobs are all assigned to partitions that still have longer time limits. I forgot that not all partitions have the new 48-hour limit.
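>> For the archives, the blocking condition here can be sketched as a simple interval check: a job is held if its time limit would run past the start of an upcoming reservation on the nodes it needs. This is a hypothetical helper to illustrate the logic, not Slurm's actual code:
>>
>> ```python
>> from datetime import datetime, timedelta
>>
>> def job_fits_before_reservation(now, time_limit, res_start):
>>     """Return True if a job started now, with the given time limit,
>>     would finish at or before the reservation's start time."""
>>     return now + time_limit <= res_start
>>
>> # Example: reservation starts Friday at 17:00; it is now Monday 16:00.
>> now = datetime(2018, 5, 7, 16, 0)
>> res_start = datetime(2018, 5, 11, 17, 0)
>>
>> # A 48-hour job finishes Wednesday -- schedulable.
>> print(job_fits_before_reservation(now, timedelta(hours=48), res_start))  # True
>>
>> # A job with a 7-day partition time limit would overlap the
>> # reservation, so the scheduler holds it (shown as ReqNodeNotAvail).
>> print(job_fits_before_reservation(now, timedelta(days=7), res_start))    # False
>> ```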
>> Still, Slurm should provide a better error message for that situation, since I'm sure it's not that uncommon for this to happen. It would certainly result in a lot less tickets being sent to me.
>> Prentice Bisbal
>> Lead Software Engineer
>> Princeton Plasma Physics Laboratory
>> On 05/07/2018 05:11 PM, Ryan Novosielski wrote:
>>> In my experience, it may say that even if it has nothing to do with the reason the job isn’t running, if there are nodes on the system that aren’t available.
>>> I assume you’ve checked for reservations?
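>>> For anyone else hitting this, a quick way to check (standard Slurm commands; 12345 below is a placeholder job ID):
>>>
>>> ```shell
>>> # List all reservations, with their start times and node lists
>>> scontrol show reservations
>>>
>>> # Show the scheduler's stated reason for a specific pending job
>>> squeue -j 12345 -o "%.10i %.9P %.8T %r"
>>>
>>> # Compare each partition's maximum time limit against the
>>> # reservation start time printed above
>>> sinfo -o "%P %l"
>>> ```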
>>>> On May 7, 2018, at 5:06 PM, Prentice Bisbal <pbisbal at pppl.gov> wrote:
>>>> Dear Slurm Users,
>>>> On my cluster, I have several partitions, each with their own QOS, time limits, etc.
>>>> Several times today, I've received complaints from users that they submitted jobs to a partition with available nodes, but their jobs are stuck in the PD state. I have spent the majority of my day investigating this, but haven't turned up anything meaningful. Both jobs show the "ReqNodeNotAvail" reason, but none of the nodes listed as not available are even in the partition these jobs were submitted to. Neither job has requested a specific node, either.
>>>> I have checked slurmctld.log on the server, and have not been able to find any clues. Anywhere else I should look? Any ideas what could be causing this?
>>> || \\UTGERS, |---------------------------*O*---------------------------
>>> ||_// the State | Ryan Novosielski - novosirj at rutgers.edu
>>> || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus
>>> || \\ of NJ | Office of Advanced Research Computing - MSB C630, Newark