<html><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div class="">When in doubt, check the source:</div><div class=""><br class=""></div><div class=""><br class=""></div><blockquote style="margin: 0 0 0 40px; border: none; padding: 0px;" class=""><pre style="font-size: small;" class="">extern int select_g_select_nodeinfo_unpack(dynamic_plugin_data_t **nodeinfo,
                                           Buf buffer,
                                           uint16_t protocol_version)
{
        dynamic_plugin_data_t *nodeinfo_ptr = NULL;
        if (slurm_select_init(0) &lt; 0)
                return SLURM_ERROR;
        nodeinfo_ptr = xmalloc(sizeof(dynamic_plugin_data_t));
        *nodeinfo = nodeinfo_ptr;
        if (protocol_version &gt;= SLURM_MIN_PROTOCOL_VERSION) {
                int i;
                uint32_t plugin_id;
                safe_unpack32(&amp;plugin_id, buffer);
                for (i=0; i&lt;select_context_cnt; i++)
                        if (*(ops[i].plugin_id) == plugin_id) {
                                nodeinfo_ptr-&gt;plugin_id = i;
                                break;
                        }
                if (i &gt;= select_context_cnt) {
                        error("we don't have select plugin type %u",plugin_id);
                        goto unpack_error;
                }
        }</pre></blockquote><div class=""><br class=""></div><div class=""><br class=""></div><div class="">Your slurmds probably haven't been reconfigured yet and are expecting the linear plugin when they connect to the newly restarted slurmctld. 
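<br class=""><br class="">To see concretely why the lookup fails, here is a stripped-down toy model of the loop above (not Slurm code; the table contents are an assumption for this sketch, though the ids follow Slurm's registrations: 101 for select/cons_res, 102 for select/linear):<blockquote style="margin: 0 0 0 40px; border: none; padding: 0px;" class=""><pre style="font-size: small;" class="">/* Toy model (illustrative, not Slurm source) of the plugin_id lookup in
 * select_g_select_nodeinfo_unpack().  Only the plugin named by SelectType
 * is loaded into the table, so an id packed by a peer running a different
 * SelectType is simply not found. */
static const unsigned int loaded_plugin_ids[] = { 101 };  /* cons_res only */
static const int select_context_cnt = 1;

/* Returns the table index for plugin_id, or -1: the case that logs
 * "we don't have select plugin type %u" in the real code. */
int lookup_select_plugin(unsigned int plugin_id)
{
        int i;
        for (i = 0; i < select_context_cnt; i++)
                if (loaded_plugin_ids[i] == plugin_id)
                        return i;
        return -1;
}

int main(void)
{
        /* A slurmd still running select/linear packs id 102; a slurmctld
         * that now loads only cons_res (101) cannot resolve it. */
        if (lookup_select_plugin(102) != -1)
                return 1;
        if (lookup_select_plugin(101) != 0)
                return 1;
        return 0;
}</pre></blockquote>Until every daemon agrees on SelectType, one side keeps packing an id the other side never loaded.<br class=""><br class="">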
They could probably do with a restart, assuming you've pushed out slurm.conf changes to them.</div><div class=""><br class=""></div><div class=""><br class=""></div><div class=""><br class=""></div><div class=""><br class=""></div><div class=""><br class=""><blockquote type="cite" class="">On Dec 13, 2018, at 10:10 AM, Julius, Chad &lt;<a href="mailto:Chad.Julius@sdstate.edu" class="">Chad.Julius@sdstate.edu</a>&gt; wrote:<br class=""><br class="">As an addendum,<br class=""> <br class="">I did try the suggestion mentioned here as well:<br class=""> <br class=""><a href="http://kb.brightcomputing.com/faq/index.php?action=artikel&cat=14&id=410&artlang=en&highlight=slurm" class="">http://kb.brightcomputing.com/faq/index.php?action=artikel&cat=14&id=410&artlang=en&highlight=slurm</a><br class=""> <br class="">Chad<br class=""> <br class="">From: slurm-users &lt;slurm-users-bounces@lists.schedmd.com&gt; On Behalf Of Julius, Chad<br class="">Sent: Thursday, December 13, 2018 8:54 AM<br class="">To: slurm-users@lists.schedmd.com<br class="">Subject: [slurm-users] Help with Con_Res Plugin Error<br class=""> <br class="">Slurm Users, <br class=""> <br class="">I am hoping that you all can help me with the problem below.<br class=""> <br class="">We just spun up a new cluster using Bright and have been trying to change the default behavior of Slurm from linear to cons_res. Should be simple enough, but I am plagued by the following error:<br class=""> <br class="">error: we don't have select plugin type 102<br class=""> <br class="">Both the select_linear.so and select_cons_res.so are located in /cm/shared_tmp/apps/slurm/17.11.8/lib64/slurm/<br class=""> <br class="">I have been testing with just the compute nodes and not the GPU nodes etc... 
I added the following to my slurm.conf file:<br class=""> <br class=""># Scheduler<br class="">SchedulerType=sched/backfill<br class="">SelectType=select/cons_res<br class="">SelectTypeParameters=CR_Core<br class=""> <br class=""># Nodes<br class=""># NodeName=big-mem[001-005],node[001-056] # Entry from default install<br class=""># NodeName=gpu[001-004] Gres=gpu:2 # Entry from default install<br class="">NodeName=node[001-056] CPUs=2 RealMemory=196000 Sockets=2 CoresPerSocket=20 ThreadsPerCore=1 State=UNKNOWN<br class=""> <br class=""> <br class=""># Partitions<br class="">PartitionName=defq Default=YES MinNodes=1 AllowGroups=ALL PriorityJobFactor=1 PriorityTier=1 DisableRootJobs=NO RootOnly=NO Hidden=NO Shared=YES GraceTime=0 PreemptMode=OFF ReqResv=NO AllowAccounts=ALL AllowQos=ALL LLN=NO ExclusiveUser=NO OverSubscribe=NO OverTimeLimit=0 State=UP Nodes=gpu[001-004],big-mem[001-005],node[001-056]<br class="">PartitionName=test Default=NO MinNodes=1 AllowGroups=ALL PriorityJobFactor=1 PriorityTier=1 DisableRootJobs=NO RootOnly=NO Hidden=NO Shared=YES GraceTime=0 PreemptMode=OFF ReqResv=NO AllowAccounts=ALL AllowQos=ALL LLN=NO ExclusiveUser=NO OverSubscribe=NO OverTimeLimit=0 State=UP Nodes=node[001-056]<br class=""> <br class="">When I issue the scontrol reconfigure I get the following:<br class=""> <br class="">[root@thunder ~]# scontrol reconfigure<br class="">slurm_reconfigure error: Unable to contact slurm controller (connect failure)<br class="">[root@thunder ~]# systemctl status slurmctld.service<br class="">● slurmctld.service - Slurm controller daemon<br class=""> Loaded: loaded (/usr/lib/systemd/system/slurmctld.service; disabled; vendor preset: disabled)<br class=""> Active: failed (Result: exit-code) since Thu 2018-12-13 08:46:18 CST; 5s ago<br class=""> Process: 31416 ExecStart=/cm/shared/apps/slurm/17.11.8/sbin/slurmctld $SLURMCTLD_OPTIONS (code=exited, status=0/SUCCESS)<br class="">Main PID: 31418 (code=exited, status=1/FAILURE)<br class=""> <br 
class="">When I revert the changes, it goes back to an active working state.<br class=""> <br class="">The /var/log/slurmctld log shows this error message:<br class=""> <br class="">error: we don't have select plugin type 102<br class=""> <br class="">Has anyone else run into this problem? If so, can you recommend a fix?<br class=""> <br class="">Thanks, <br class=""> <br class="">Chad<br class=""></blockquote><br class=""><div class=""><br class="">::::::::::::::::::::::::::::::::::::::::::::::::::::::<br class="">Jeffrey T. Frey, Ph.D.<br class="">Systems Programmer V / HPC Management<br class="">Network & Systems Services / College of Engineering<br class="">University of Delaware, Newark DE 19716<br class="">Office: (302) 831-6034 Mobile: (302) 419-4976<br class="">::::::::::::::::::::::::::::::::::::::::::::::::::::::<br class=""><br class=""><br class=""><br class=""></div><br class=""></div></body></html>