<div dir="ltr"><div><div>Hi Chris,<br><br></div>Thanks a lot, that setting did the trick. I didn't know about that feature.<br><br></div>Regards.<br></div><br><div class="gmail_quote"><div dir="ltr">El mié., 31 ene. 2018 a las 10:55, Christopher Samuel (<<a href="mailto:chris@csamuel.org">chris@csamuel.org</a>>) escribió:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 31/01/18 20:28, Miguel Gutiérrez Páez wrote:<br>
<br>
> If I unlimit memory resources (by commenting the last line in<br>
> custom.conf file), the same sbatch works properly. A scontrol show<br>
> job of a failed job shows that the job was launched in a compute<br>
> node, where there is no any restriction about memory (or other)<br>
> resource. So, the login node is the only node I limit resources. Why<br>
> is failing the sbatch if the compute nodes have no any restriction<br>
> but the login one?<br>

By default Slurm propagates resource limits from the node you are
submitting from. Check the PropagateResourceLimits section of the
slurm.conf manual page.

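If you want to see the effect for yourself, a quick check (just a
sketch, assuming the default slurm-<jobid>.out output file naming) is
to compare the limits on the login node with what a batch job
actually inherits:

ulimit -a                  # on the login node
sbatch --wrap 'ulimit -a'  # a trivial job that just prints its limits
                           # then compare slurm-<jobid>.out with the above

With the default of propagating everything the two should roughly
match; with NONE the job instead gets the limits slurmd sets up on
the compute node.
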
Short version, add this to slurm.conf:

PropagateResourceLimits=NONE

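(If you'd rather not touch slurm.conf, I believe the same can be done
per job with sbatch's --propagate option, e.g.

sbatch --propagate=NONE job.sh

where job.sh stands in for your batch script, but the slurm.conf
setting is the cluster-wide fix.)
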
Hope that helps!

All the best,
Chris
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC