[slurm-users] enabling job script archival
Paul Edmon
pedmon at cfa.harvard.edu
Thu Sep 28 17:48:58 UTC 2023
Slurm should take care of it when you add it.
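One thing to note: AccountingStoreFlags in slurm.conf is a single
comma-separated list, so you want one line carrying all the options
rather than the parameter repeated. A minimal sketch, assuming you keep
job_comment:

    AccountingStoreFlags=job_comment,job_script,job_env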
As far as horror stories go, under previous versions our database
ballooned to be so massive that it actually prevented us from upgrading,
and we had to drop the columns containing the job_script and job_env.
This was back before Slurm started hashing the scripts so that it would
only store one copy of each duplicate script. After that change we found
that the job_script table stayed at a fairly reasonable size, as most
users run functionally the same script each time. However, the job_env
table continued to grow like crazy, since there are variables in our
environment that change depending on where the user is. Thus the
job_envs ended up being too massive to keep around, and we had to drop
them. Frankly, we never really used them for debugging. The job_scripts,
though, are super useful and not much overhead.
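For reference, once the flags are on you can pull a stored script or
environment back out through sacct, e.g. (the job ID here is just a
placeholder):

    sacct -j 12345 --batch-script
    sacct -j 12345 --env-vars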
In summary, my recommendation is to store only job_scripts. job_envs add
too much storage for too little gain, unless your job_envs are basically
the same for each user in each location.
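If you want to keep an eye on the growth yourself, a query along these
lines against the accounting database shows the per-cluster script and
env table sizes. This is just a sketch: it assumes the default database
name (slurm_acct_db) and the stock table naming, so adjust for your
site:

    SELECT table_name,
           ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb
    FROM information_schema.TABLES
    WHERE table_schema = 'slurm_acct_db'
      AND (table_name LIKE '%job_script_table'
           OR table_name LIKE '%job_env_table');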
It should also be noted that there is currently no way to prune out
job_scripts or job_envs. So the only way to get rid of them if they get
large is to zero out the column in the table. You can ask SchedMD for
the MySQL command to do this, as we had to do here for our job_envs.
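For the curious, the statement is conceptually just an UPDATE that
blanks the column, something like the sketch below. The table and column
names are my assumptions about the stock slurmdbd schema ("clustername"
is a placeholder for your cluster's name), so get the exact command from
SchedMD before running anything against a production database:

    -- Hypothetical sketch: blank the stored environments for one cluster.
    -- Table/column names assume the default slurmdbd schema; verify first.
    UPDATE clustername_job_env_table SET env_vars = '';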
-Paul Edmon-
On 9/28/2023 1:40 PM, Davide DelVento wrote:
> In my current Slurm installation (recently upgraded to Slurm
> v23.02.3), I only have
>
> AccountingStoreFlags=job_comment
>
> I now intend to add both
>
> AccountingStoreFlags=job_script
> AccountingStoreFlags=job_env
>
> leaving the default 4MB value for max_script_size
>
> Do I need to do anything on the DB myself, or will Slurm take care of
> the additional tables if needed?
>
> Any comments/suggestions/gotchas/pitfalls/horror stories to share? I
> know about the additional disk space and the potential extra load, and
> with our resources and typical workload I should be okay with that.
>
> Thanks!