[slurm-users] Errors after removing partition
toomuchit at gmail.com
Sat Jul 27 15:56:48 UTC 2019
The jobs themselves no longer exist. They had completed before I deleted
the partition, which is odd to me.
I may have done 'scontrol reconfigure' before restarting slurmctld; it was
a while ago, so I don't recall.
On 7/26/2019 8:10 PM, Chris Samuel wrote:
> On 26/7/19 8:28 am, Jeffrey Frey wrote:
>> If you check the source code (src/slurmctld/job_mgr.c) this error is
>> indeed thrown when slurmctl unpacks job state files. Tracing through
>> read_slurm_conf() -> load_all_job_state() -> _load_job_state():
> I don't think that's the actual error that Brian is seeing, as that's
> just a "verbose()" message (as are 3 more of the 5 instances of this).
> The only one that's actually an error is this one:
> in this function:
> * reset_job_bitmaps - reestablish bitmaps for existing jobs.
> * this should be called after rebuilding node information,
> * but before using any job entries.
> It looks like it should mark these jobs as failed. Is that the case,
> Brian: when you removed the partition, did you restart slurmctld or
> just do an scontrol reconfigure?
> BTW that check was introduced in 2003 by Moe :-)
> All the best,