[slurm-users] Extreme long db upgrade 16.05.6 -> 17.11.3

Lech Nieroda lech.nieroda at uni-koeln.de
Wed Apr 3 10:30:36 UTC 2019


Hello Chris,

I’ve submitted the bug report together with a patch.
We don’t have a support contract, but I suppose they’ll at least read it ;)
The code is identical in 18.08.x and 19.05.x; it’s just at a different offset.
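
For anyone who just wants the gist without reading the patch: one way to extract such a sub-query is to materialize its result first and then join against it, instead of letting the optimizer re-evaluate it per row. Roughly sketched, with made-up table and column names rather than the actual conversion SQL:

    -- Slow form: correlated sub-query, re-evaluated for every row.
    UPDATE job_table j
       SET j.time_eligible =
           (SELECT MIN(s.time_start)
              FROM step_table s
             WHERE s.job_db_inx = j.job_db_inx)
     WHERE j.time_eligible = 0;

    -- Extracted form: materialize the sub-query once, then do a plain join.
    CREATE TEMPORARY TABLE step_min AS
      SELECT job_db_inx, MIN(time_start) AS min_start
        FROM step_table
       GROUP BY job_db_inx;

    UPDATE job_table j
      JOIN step_min m ON m.job_db_inx = j.job_db_inx
       SET j.time_eligible = m.min_start
     WHERE j.time_eligible = 0;

    DROP TEMPORARY TABLE step_min;

The second form no longer depends on the optimizer recognizing that the sub-query can be turned into a join, which is exactly where the old 5.1 planner went wrong for us.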

Kind regards,
Lech

> On 02.04.2019 at 15:18, Ole Holm Nielsen <Ole.H.Nielsen at fysik.dtu.dk> wrote:
> 
> Hi Lech,
> 
> IMHO, the Slurm user community would benefit the most from your interesting work on MySQL/MariaDB performance, if your patch could be made against the current 18.08 and the coming 19.05 releases.  This would ensure that your work is carried forward.
> 
> Would you be able to make patches against 18.08 and 19.05?  If you submit the patches to SchedMD, my guess is that they'd be very interested.  A site with a SchedMD support contract (such as our site) could also submit a bug report including your patch.
> 
> /Ole
> 
> On 4/2/19 2:56 PM, Lech Nieroda wrote:
>> That’s probably it.
>> Sub-queries are known to cause performance problems, so one wonders why the devs didn’t extract it accordingly and make the code more robust, or at least compatible with RHEL/CentOS 6, rather than including that remark in the release notes.
>>> On 02.04.2019 at 07:20, Chris Samuel <chris at csamuel.org> wrote:
>>> 
>>> On Monday, 1 April 2019 7:55:09 AM PDT Lech Nieroda wrote:
>>> 
>>>> Further analysis of the query has shown that the MySQL optimizer has chosen
>>>> the wrong execution plan. This may depend on the MySQL version; ours was
>>>> 5.1.69.
>>> 
>>> I suspect this is the issue documented in the release notes for 17.11:
>>> 
>>> https://github.com/SchedMD/slurm/blob/slurm-17.11/RELEASE_NOTES
>>> 
>>> NOTE FOR THOSE UPGRADING SLURMDBD: The database conversion process from
>>>      SlurmDBD 16.05 or 17.02 may not work properly with MySQL 5.1 (as was the
>>>      default version for RHEL 6).  Upgrading to a newer version of MariaDB or
>>>      MySQL is strongly encouraged to prevent this problem.
> 
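
P.S. For anyone on RHEL/CentOS 6 wondering whether they are affected: note that mysql --version only reports the client version; to see which server version the conversion would actually run against, ask the server itself:

    SELECT VERSION();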



