[slurm-users] Heterogeneous HPC

Renfro, Michael Renfro at tntech.edu
Thu Sep 19 19:50:41 UTC 2019


Never used Rocks, but as far as Slurm or anything else is concerned, Singularity is just another program. It will need to be accessible on any compute node you want to use it on; whether it comes from OS-installed packages, a shared NFS area, or somewhere else shouldn't matter.

So your user will still just use srun, salloc, sbatch, or whatever to invoke Singularity. My local docs for this are at [1], and they use the normal Singularity RPM from EPEL.

[1] https://its.tntech.edu/display/MON/Using+Containers+in+Your+HPC+Account
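
For illustration, here's a minimal sketch along those lines; the image path, name, and command are assumptions, so adjust them for your site:

    #!/bin/bash
    #SBATCH --job-name=container-test
    #SBATCH --ntasks=1
    #SBATCH --time=00:10:00

    # Run a command inside an image kept on shared storage so every
    # compute node can see it.
    singularity exec /shared/containers/openfoam-7.sif simpleFoam -help

Interactive use is the same idea: "srun --pty bash" to get a shell on a compute node, then "singularity shell /shared/containers/openfoam-7.sif" from there.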

> On Sep 19, 2019, at 2:33 PM, Mahmood Naderan <mahmood.nt at gmail.com> wrote:
> 
> Thanks for the replies. Matlab was just an example; I would also like to create two containers for OpenFOAM with different versions, so a user can choose whichever one they actually want.
> 
> I would also like to know whether the technologies you mentioned can be deployed on multi-node clusters. Currently, we use Rocks 7. Should I install Singularity (or the others) on all nodes or just the frontend?
> And can users then use "srun" or "salloc" to log in to a node interactively and run the container?
> 
> Regards,
> Mahmood
> 
> 
> 
> 
> On Thu, Sep 19, 2019 at 8:03 PM Michael Jennings <mej at lanl.gov> wrote:
> 
> Docker is the wrong choice for HPC, at least today.  But Podman, from
> Red Hat's CRI-O project, is a drop-in replacement for Docker that
> doesn't use Docker's client-server model and therefore avoids many of
> the challenges of running Docker for HPC user jobs.
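> 
> As a rough illustration (the image name is just a placeholder), the
> CLI is the same, so most Docker invocations carry over one-for-one:
> 
>     # Pull and run an image as an ordinary user; no daemon, no root.
>     podman pull docker.io/library/ubuntu:18.04
>     podman run --rm docker.io/library/ubuntu:18.04 cat /etc/os-release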
> 
> There's also LANL's Charliecloud, which is a highly optimized
> container runtime that (unlike the other options in this space, save
> Podman) DOES NOT require any root privileges whatsoever, not even at
> install time.  For (hopefully obvious) security reasons, you are far
> safer using one of the unprivileged options.
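> 
> A minimal sketch of that workflow (paths and image name are
> hypothetical; the tarball would be built and exported on another
> machine):
> 
>     # Unpack an image tarball into a directory, then run a command
>     # inside it -- entirely as an unprivileged user.
>     ch-tar2dir ./openfoam.tar.gz /var/tmp
>     ch-run /var/tmp/openfoam -- simpleFoam -help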
> 
> Here at Los Alamos, we use both Charliecloud and Podman/Buildah along
> with the Skopeo and umoci tools.  While we do not permit Singularity
> on our systems for security reasons and don't run Shifter because it
> requires privilege, we have had Charliecloud deployed and actively
> used on both our Classified and Open Science systems for well over a
> year now, and we are in the process of getting Podman/Buildah and
> friends into the Secure systems as we speak.
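> 
> For example (registry and image names are placeholders), skopeo moves
> images around without a daemon, and umoci unpacks them into runtime
> bundles:
> 
>     # Copy an image from a registry into a local OCI layout, then
>     # unpack it into a filesystem bundle; no daemon, no root.
>     skopeo copy docker://docker.io/library/alpine:latest oci:alpine:latest
>     umoci unpack --rootless --image alpine:latest alpine-bundle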
> 
> (Note that all of the above require RHEL7 or higher; if you need RHEL6
> support, you'll want to check out Shifter.)
> 
> Here are some videos of talks that might help you get up to speed on
> this subject:
> 
> "LISA18 - Containers and Security on Planet X"
> (https://youtu.be/F3qCvZMzUtE) - Why containers matter for HPC, what
> makes HPC so different from the typical Docker/AppC use cases, and how
> to choose the right solution for your site.
> 
> "Charliecloud - Unprivileged Containers for HPC"
> (https://youtu.be/ESsZgcaP-ZQ) - What containers actually are under
> the hood, how they work, what they are good for, and how to get up and
> running with Charliecloud in under 5 minutes.
> 
> "Container Mythbusters" (https://youtu.be/FFyXdgWXD3A) - Dispelling
> common misconceptions and debunking propaganda around containers,
> container runtime security, and when/how you should (and should NOT)
> use containers.
> 
> Hope those help!
> Michael
> 
> -- 
> Michael E. Jennings <mej at lanl.gov>
> HPC Systems Team, Los Alamos National Laboratory
> Bldg. 03-2327, Rm. 2341     W: +1 (505) 606-0605
> 


