[slurm-users] Submit job using srun fails but sbatch works

Alexander Åhman alexander at ydesign.se
Wed May 29 14:46:29 UTC 2019


I have tried to find a network error but can't see anything. Every node 
I've tested has the same (and correct) view of things.

_On node cn7:_ (the problematic one)
em1: link/ether 50:9a:4c:79:31:4d inet 10.28.3.137/24

_On login machine:_
[alex at li1 ~]$ host cn7
cn7.ydesign.se has address 10.28.3.137
[alex at li1 ~]$ arp cn7
Address                  HWtype  HWaddress           Flags Mask            Iface
cn7.ydesign.se           ether   50:9a:4c:79:31:4d C                     em1

_On slurmctld machine:_
[alex at cmgr1 ~]$ host cn7
cn7.ydesign.se has address 10.28.3.137
[alex at cmgr1 ~]$ arp cn7
Address                  HWtype  HWaddress           Flags Mask            Iface
cn7.ydesign.se           ether   50:9a:4c:79:31:4d C                     em1


Yes, I have seen your pages and must say they have been pure gold on 
many occasions, thanks a lot Ole! But our cluster is still tiny, and 
the whole cluster sits in its own network segment. The number of ARP 
entries is far below 512 (more like ~30, actually).

I just don't understand why sbatch works but not srun. (If I understand 
the architecture right, sbatch only has to reach slurmctld, while srun 
on the login machine must resolve and contact each compute node directly 
to launch the step, so maybe that is where the difference comes from?)
Could this be some error in the state files perhaps? Something that 
got corrupted when the node (cn7) unexpectedly died?
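
One thing I will try next, in case slurmctld has registered a stale 
address for the node (scontrol's show/update subcommands are standard 
Slurm; the IP below is just cn7's address from above):

[alex at li1 ~]$ scontrol show node cn7 | grep -E 'NodeAddr|NodeHostName'
[alex at li1 ~]$ scontrol update NodeName=cn7 NodeAddr=10.28.3.137

If NodeAddr looks wrong in the first command, that would at least 
explain why only srun is affected.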

Regards,
Alexander



On 2019-05-29 at 15:12, Ole Holm Nielsen wrote:
> Hi Alexander,
>
> The error "can't find address for host cn7" would indicate a DNS 
> problem.  What is the output of "host cn7" from the srun host li1?
>
> How many network devices are in your subnet?  It may be that the Linux 
> kernel is doing "ARP cache thrashing" if the number of devices 
> approaches 512.  What is the result of "arp cn7"?
>
> To fix ARP cache thrashing, see my Slurm Wiki page:
> https://wiki.fysik.dtu.dk/niflheim/Slurm_configuration#configure-arp-cache-for-large-networks 
>
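> For reference, the fix is raising the kernel's neighbour table limits 
> via sysctl, along these lines (illustrative values only -- take the 
> actual numbers from the wiki page above):
>
> # /etc/sysctl.conf
> net.ipv4.neigh.default.gc_thresh1 = 8192
> net.ipv4.neigh.default.gc_thresh2 = 16384
> net.ipv4.neigh.default.gc_thresh3 = 32768
>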
>
> Best regards,
> Ole
>
> On 5/29/19 3:00 PM, Alexander Åhman wrote:
>> Hi,
>> Have a very strange problem. The cluster had been working just fine 
>> until one node died, and now I can't submit jobs to 2 of the nodes 
>> using srun from the login machine. Using sbatch works just fine, and 
>> so does srun when I run it from the same host as slurmctld.
>> All the other nodes work just fine as they always have; only 2 nodes 
>> are experiencing this problem. Very strange...
>>
>> I have checked network connectivity and DNS, and they are OK. I can 
>> ping and ssh to all nodes just fine. All nodes are identical and run 
>> Slurm 18.08.
>> I also tried rebooting the 2 nodes and slurmctld, but the problem 
>> remains.
>>
>> [alex at li1 ~]$ srun -w cn7 hostname
>> srun: error: fwd_tree_thread: can't find address for host cn7, check 
>> slurm.conf
>> srun: error: Task launch for 1088816.0 failed on node cn7: Can't find 
>> an address, check slurm.conf
>> srun: error: Application launch failed: Can't find an address, check 
>> slurm.conf
>> srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
>> srun: error: Timed out waiting for job step to complete
>>
>> [alex at li1 ~]$ srun -w cn6 hostname
>> cn6.ydesign.se
>>
>> What is this error "can't find address for host" about? I have 
>> searched the web but can't find any good information about what the 
>> problem is or how to resolve it.
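>>
>> (Since the message says "check slurm.conf": one more thing I can try 
>> is verifying that slurm.conf on li1 is byte-identical to the one on 
>> the slurmctld machine, e.g. with md5sum -- path assumed, ours may 
>> differ:
>>
>> [alex at li1 ~]$ md5sum /etc/slurm/slurm.conf
>> [alex at cmgr1 ~]$ md5sum /etc/slurm/slurm.conf )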
>>
>> Any kind soul out there who knows what to do next?
>
