[slurm-users] sacct issue: jobs staying in "RUNNING" state
Will Dennis
wdennis at nec-labs.com
Wed Jul 17 01:04:42 UTC 2019
A few more things to note:
- (Should have mentioned this earlier) Running Slurm 17.11.7 (via https://launchpad.net/~jonathonf/+archive/ubuntu/slurm)
- Restarted slurmctld and slurmdbd, but still getting the slurmdbd errors as before in slurmctld.log
- Ran "mysqlcheck --databases slurm_acct_db --auto-repair", output was:
slurm_acct_db.acct_coord_table OK
slurm_acct_db.acct_table OK
slurm_acct_db.clus_res_table OK
slurm_acct_db.cluster_table OK
slurm_acct_db.convert_version_table OK
slurm_acct_db.federation_table OK
slurm_acct_db.macluster_assoc_table OK
slurm_acct_db.macluster_assoc_usage_day_table OK
slurm_acct_db.macluster_assoc_usage_hour_table OK
slurm_acct_db.macluster_assoc_usage_month_table OK
slurm_acct_db.macluster_event_table OK
slurm_acct_db.macluster_job_table OK
slurm_acct_db.macluster_last_ran_table OK
slurm_acct_db.macluster_resv_table OK
slurm_acct_db.macluster_step_table OK
slurm_acct_db.macluster_suspend_table OK
slurm_acct_db.macluster_usage_day_table OK
slurm_acct_db.macluster_usage_hour_table OK
slurm_acct_db.macluster_usage_month_table OK
slurm_acct_db.macluster_wckey_table OK
slurm_acct_db.macluster_wckey_usage_day_table OK
slurm_acct_db.macluster_wckey_usage_hour_table OK
slurm_acct_db.macluster_wckey_usage_month_table OK
slurm_acct_db.qos_table OK
slurm_acct_db.res_table OK
slurm_acct_db.table_defs_table OK
slurm_acct_db.tres_table OK
slurm_acct_db.txn_table OK
slurm_acct_db.user_table OK
- Nothing in /var/log/mysql/error.log for as far back as logs go
- Ran "sacctmgr show runaway", there were a LOT of runaway jobs; chose "Y" to fix, then output of "sacctmgr show runaway" was nil. A few minutes later however, "sacctmgr show runaway" had entries again.
If someone knows what else I might try to isolate/resolve this issue, please kindly assist...
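In the meantime, one more sanity check worth running: the mysqlcheck above talks to MySQL directly, bypassing slurmdbd, so it doesn't prove slurmdbd itself is answering queries. A minimal end-to-end check, assuming sacctmgr is configured on this host (-n just suppresses the header):

# a query that has to round-trip through slurmdbd to MySQL and back
sacctmgr -n list cluster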
From: Will Dennis
Sent: Tuesday, July 16, 2019 2:43 PM
To: slurm-users at lists.schedmd.com
Subject: sacct issue: jobs staying in "RUNNING" state
Hi all,
Was looking at the running jobs on one group's cluster, and saw an insane number of "running" jobs when I did sacct -X -s R; I then looked at the output of squeue, and found a much more reasonable number...
root@slurm-controller1:/ # sacct -X -p -s R | wc -l
8895
root@slurm-controller1:/ # squeue | wc -l
43
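To confirm the extras are stale accounting records rather than jobs squeue somehow misses, a comparison along these lines should work (a sketch; -n and -h suppress headers, %i prints bare job IDs, and array job IDs may need extra handling):

# job IDs that sacct reports as running but squeue doesn't know about
comm -23 <(sacct -X -n -s R -o jobid | tr -d ' ' | sort) \
         <(squeue -h -o %i | sort)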
In looking for the cause, I see a large number of entries like the following in slurmctld.log:
[2019-07-16T09:36:51.464] error: slurmdbd: agent queue is full (20140), discarding DBD_STEP_START:1442 request
[2019-07-16T09:40:27.515] error: slurmdbd: agent queue filling (20140), RESTART SLURMDBD NOW
[2019-07-16T09:40:27.515] error: slurmdbd: agent queue is full (20140), discarding DBD_JOB_COMPLETE:1424 request
[2019-07-16T09:40:27.515] error: slurmdbd: agent queue is full (20140), discarding DBD_STEP_COMPLETE:1441 request
[2019-07-16T09:42:40.766] error: slurmdbd: agent queue filling (20140), RESTART SLURMDBD NOW
[2019-07-16T09:42:40.766] error: slurmdbd: agent queue is full (20140), discarding DBD_STEP_START:1442 request
[2019-07-16T09:46:05.905] error: slurmdbd: agent queue filling (20140), RESTART SLURMDBD NOW
[2019-07-16T09:46:05.905] error: slurmdbd: agent queue is full (20140), discarding DBD_STEP_COMPLETE:1441 request
[2019-07-16T09:46:05.905] error: slurmdbd: agent queue is full (20140), discarding DBD_JOB_COMPLETE:1424 request
[2019-07-16T09:48:42.616] error: slurmdbd: agent queue filling (20140), RESTART SLURMDBD NOW
[2019-07-16T09:48:42.616] error: slurmdbd: agent queue is full (20140), discarding DBD_JOB_COMPLETE:1424 request
[2019-07-16T09:48:42.616] error: slurmdbd: agent queue is full (20140), discarding DBD_STEP_COMPLETE:1441 request
[2019-07-16T09:53:00.188] error: slurmdbd: agent queue filling (20140), RESTART SLURMDBD NOW
[2019-07-16T09:53:00.188] error: slurmdbd: agent queue is full (20140), discarding DBD_JOB_COMPLETE:1424 request
[2019-07-16T09:53:00.189] error: slurmdbd: agent queue is full (20140), discarding DBD_STEP_COMPLETE:1441 request
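For what it's worth, sdiag on the controller reports the slurmctld-to-slurmdbd agent queue depth, so polling it shows whether the queue ever drains (a sketch; the exact label may differ between Slurm versions):

# watch the DBD agent queue depth every 30 seconds
watch -n 30 'sdiag | grep -i "agent queue size"'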
What might be the cause of this issue? And is there any way now to correct the accounting records in the DB?
Thanks,
Will