[slurm-users] Power9 AC922
Sergi More
sergi.more at bsc.es
Tue Apr 16 14:37:30 UTC 2019
Hi,
We have a Power9 cluster (AC922) working without problems. It is
currently running 18.08, and ran 17.11 before that. We found no extra
steps or problems during installation related to Power9.
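For reference, Slurm treats ppc64le nodes like any other Linux node; the
main thing to get right is the node topology (Power9 runs SMT4, so
ThreadsPerCore=4) and the GPU GRES entries. A hypothetical slurm.conf
sketch for an AC922-class node follows -- the hostnames, core counts,
memory, and GPU counts are illustrative assumptions, not figures from
this thread, so adjust them to the actual hardware:

```
# slurm.conf (illustrative fragment, hypothetical values)
GresTypes=gpu
# AC922: 2 sockets, SMT4; RealMemory and GPU count vary by configuration
NodeName=ac922-[01-04] Sockets=2 CoresPerSocket=20 ThreadsPerCore=4 \
    RealMemory=580000 Gres=gpu:v100:4 State=UNKNOWN
PartitionName=power9 Nodes=ac922-[01-04] Default=YES MaxTime=INFINITE State=UP

# gres.conf (matching illustrative fragment)
NodeName=ac922-[01-04] Name=gpu Type=v100 File=/dev/nvidia[0-3]
```

With SMT4 enabled, Slurm sees 160 logical CPUs per node here; sites that
want to schedule by physical core can instead set CPUs=40 and let
ThreadsPerCore handle the binding.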
Thank you,
Sergi.
On 16/04/2019 16:05, Bill Wichser wrote:
> Does anyone on this list run Slurm on the Sierra-like machines from
> IBM? I believe they are the AC922 nodes. We are looking to purchase
> a small cluster of these nodes but have concerns about the scheduler.
>
> Just looking for a nod that, yes it works fine, as well as any issues
> seen during deployment. Danny says he has heard of no problems but
> that doesn't mean the folks in the trenches haven't seen issues!
>
> Thanks,
> Bill
>
--
------------------------------------------------------------------------
Sergi More Codina
Operations - HPC System administration
Barcelona Supercomputing Center
Centro Nacional de Supercomputacion
WWW: http://www.bsc.es Tel: +34-93-405 42 27
e-mail: sergi.more at bsc.es Fax: +34-93-413 77 21
------------------------------------------------------------------------