[slurm-users] Configuration issue on Ubuntu

Umut Arus umuta at sabanciuniv.edu
Tue Aug 28 07:43:54 MDT 2018


>
> Hi,
>
> I'm trying to install and configure the slurm-wlm 17.11.2 package. First I
> want to set it up as a single host. munge, slurmd and slurmctld were
> installed. munge and slurmd start and run properly, but I couldn't get
> slurmctld to run.
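>
> (Everything came from the standard Ubuntu packages; roughly, something like
> the following, in case the exact steps matter:
>
> # hypothetical reconstruction of the install/start steps, not a verified recipe
> apt install slurm-wlm munge
> systemctl enable --now munge slurmd slurmctld
> )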
>
> It seems the main problem is: slurmctld: fatal: No front end nodes defined
>
>
> Some output is below:
> ------------
> slurmctld: debug3: Success.
> slurmctld: debug3: not enforcing associations and no list was given so we
> are giving a blank list
> slurmctld: debug2: No Assoc usage file
> (/var/lib/slurm-llnl/slurmctld/assoc_usage) to recover
> slurmctld: debug:  Reading slurm.conf file: /etc/slurm-llnl/slurm.conf
> slurmctld: debug3: layouts: layouts_init()...
> slurmctld: layouts: no layout to initialize
> slurmctld: debug3: Trying to load plugin
> /usr/lib/x86_64-linux-gnu/slurm-wlm/topology_none.so
> slurmctld: topology NONE plugin loaded
> slurmctld: debug3: Success.
> slurmctld: debug:  No DownNodes
> *slurmctld: fatal: No front end nodes defined*
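>
> (Those lines are from starting the controller by hand in the foreground with
> extra verbosity, roughly:
>
> slurmctld -D -vvvv
>
> It fails the same way when started through systemd, as shown further below.)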
>
> root@umuta:/etc/slurm-llnl# srun -N1 /bin/hostname
> srun: error: Unable to allocate resources: Unable to contact slurm
> controller (connect failure)
> root@umuta:/etc/slurm-llnl# slurmd -C
> NodeName=umuta CPUs=4 Boards=1 SocketsPerBoard=1 CoresPerSocket=2
> ThreadsPerCore=2 RealMemory=7880
> UpTime=69-02:45:50
> root@umuta:/etc/slurm-llnl#
> root@umuta:/etc/slurm-llnl# systemctl restart slurmctld
> root@umuta:/etc/slurm-llnl# systemctl status slurmctld
> ● slurmctld.service - Slurm controller daemon
>    Loaded: loaded (/lib/systemd/system/slurmctld.service; enabled; vendor
> preset: enabled)
>    Active: failed (Result: exit-code) since Tue 2018-08-28 16:22:01 +03;
> 5s ago
>      Docs: man:slurmctld(8)
>   Process: 30779 ExecStart=/usr/sbin/slurmctld $SLURMCTLD_OPTIONS
> (code=exited, status=0/SUCCESS)
>  Main PID: 30793 (code=exited, status=1/FAILURE)
>
> Aug 28 16:22:01 umuta systemd[1]: Starting Slurm controller daemon...
> Aug 28 16:22:01 umuta systemd[1]: slurmctld.service: New main PID 28172
> does not exist or is a zombie.
> Aug 28 16:22:01 umuta systemd[1]: Started Slurm controller daemon.
> Aug 28 16:22:01 umuta systemd[1]: slurmctld.service: Main process exited,
> code=exited, status=1/FAILURE
> Aug 28 16:22:01 umuta systemd[1]: slurmctld.service: Failed with result
> 'exit-code'.
> root@umuta:/etc/slurm-llnl#
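>
> (I can also send the end of the controller log from the path in slurm.conf
> if that would help, e.g.:
>
> tail -n 50 /var/log/slurm-llnl/slurmctld.log
> )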
>
> The configuration was generated with the configurator:
> root@umuta:/etc/slurm-llnl# cat slurm.conf
> # slurm.conf file generated by configurator easy.html.
> # Put this file on all nodes of your cluster.
> # See the slurm.conf man page for more information.
> #
> ControlMachine=umuta
> #ControlAddr=
> #
> #MailProg=/bin/mail
> MpiDefault=none
> #MpiParams=ports=#-#
> ProctrackType=proctrack/pgid
> ReturnToService=1
> SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
> #SlurmctldPort=6817
> SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
> #SlurmdPort=6818
> SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
> SlurmUser=slurm
> #SlurmdUser=root
> StateSaveLocation=/var/lib/slurm-llnl/slurmctld
> SwitchType=switch/none
> TaskPlugin=task/none
> #
> #
> # TIMERS
> #KillWait=30
> #MinJobAge=300
> #SlurmctldTimeout=120
> #SlurmdTimeout=300
> #
> #
> # SCHEDULING
> FastSchedule=1
> SchedulerType=sched/backfill
> #SchedulerPort=7321
> SelectType=select/linear
> #
> #
> # LOGGING AND ACCOUNTING
> AccountingStorageType=accounting_storage/none
> ClusterName=cluster
> #JobAcctGatherFrequency=30
> JobAcctGatherType=jobacct_gather/none
> #SlurmctldDebug=3
> SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
> #SlurmdDebug=3
> SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
> #
> #
> # COMPUTE NODES
> NodeName=umuta CPUs=1 State=UNKNOWN
> PartitionName=debug Nodes=umuta Default=YES MaxTime=INFINITE State=UP
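>
> (One thing I noticed: the configurator put CPUs=1 on the NodeName line, while
> slurmd -C above reports 4 CPUs. I guess the line matching the real hardware
> would be roughly:
>
> NodeName=umuta CPUs=4 Boards=1 SocketsPerBoard=1 CoresPerSocket=2 ThreadsPerCore=2 RealMemory=7880 State=UNKNOWN
>
> but I have left it as generated for now.)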
>
> thanks.
>
>
> --
> *Umut A.*
>