[slurm-users] cpu limit issue
John Hearns
hearnsj at googlemail.com
Wed Jul 11 02:32:35 MDT 2018
Another thought - are we getting mixed up between hyperthreaded and
physical cores here?
I don't see how 12 hyperthreaded cores would translate to 8 though - at two
threads per core that would be 6!
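
A quick way to rule that out on the compute node itself (just a sketch,
assuming stock Linux tooling):

    lscpu | egrep 'Thread|Core|Socket|^CPU\(s\)'   # physical vs logical counts
    slurmd -C                                      # the topology Slurm itself detects

If ThreadsPerCore=2 and one side is counting threads while the other counts
cores, that is where a mismatch like this usually comes from.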
On 11 July 2018 at 10:30, John Hearns <hearnsj at googlemail.com> wrote:
> Mahmood,
> I am sure you have checked this. Try running ps -eaf --forest
> while a job is running.
> I often find the --forest option helps to understand how batch jobs are
> being run.
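>
> For example (just a sketch - "noor" is the user from the listing further
> down the thread), to get the process tree plus the CPU and thread count of
> each process:
>
>     ps --forest -o pid,psr,pcpu,nlwp,args -u noor
>
> nlwp is the thread count, which is the interesting number for an SMP code
> like Gaussian.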
>
> On 11 July 2018 at 09:12, Mahmood Naderan <mahmood.nt at gmail.com> wrote:
>
>> > Check the Gaussian log file for mention of its using just 8 CPUs - just
>> > because there are 12 CPUs available doesn't mean the program uses all of
>> > them. It will scale back if 12 isn't a good match to the problem, as I
>> > recall.
>>
>>
>>
>> Well, in the log file, it says
>>
>> ******************************************
>> %nprocshared=12
>> Will use up to 12 processors via shared memory.
>> %mem=18GB
>> %chk=trimer.chk
>>
>> Maybe it scales itself down to a good match, but I haven't seen that
>> before. That was why I asked the question.
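>>
>> A way to compare what Slurm actually granted with what the job reports
>> (a sketch - <jobid> is a placeholder, and sstat only works on a running
>> job):
>>
>>     scontrol show job <jobid> | grep -iE 'NumCPUs|TRES'
>>     sstat -j <jobid> --format=JobID,AveCPU,NTasks,MaxRSS
>>
>> If NumCPUs comes back as 8 rather than 12, the cap is on the Slurm side
>> rather than Gaussian scaling itself back.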
>>
>>
>> One more question. Does it matter whether or not the user specifies
>> --account in the sbatch script?
>>
>> [root at rocks7 ~]# sacctmgr list association format=partition,account,user,grptres,maxwall
>>  Partition    Account       User       GrpTRES     MaxWall
>> ---------- ---------- ---------- ------------- -----------
>>    emerald         z3       noor cpu=12,mem=1+ 30-00:00:00
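>>
>> A way to check which account the job lands on when --account is left out
>> (a sketch; sbatch falls back to the user's default account):
>>
>>     sacctmgr show user noor format=User,DefaultAccount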
>>
>>
>>
>> [noor at rocks7 ~]$ grep nprocshared trimer.gjf
>> %nprocshared=12
>> [noor at rocks7 ~]$ cat trimer.sh
>> #!/bin/bash
>> #SBATCH --output=trimer.out
>> #SBATCH --job-name=trimer
>> #SBATCH --ntasks=12
>> #SBATCH --mem=18GB
>> #SBATCH --partition=EMERALD
>> g09 trimer.gjf
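>>
>> For comparison, a variant that keeps %nprocshared tied to whatever Slurm
>> actually grants (just a sketch, not the script above - since g09 runs via
>> shared memory it asks for one task with 12 CPUs, and it rewrites the input
>> file in place):
>>
>>     #!/bin/bash
>>     #SBATCH --output=trimer.out
>>     #SBATCH --job-name=trimer
>>     #SBATCH --ntasks=1
>>     #SBATCH --cpus-per-task=12
>>     #SBATCH --mem=18GB
>>     #SBATCH --partition=EMERALD
>>     # Make the Gaussian thread count follow the Slurm allocation
>>     sed -i "s/^%nprocshared=.*/%nprocshared=${SLURM_CPUS_PER_TASK}/" trimer.gjf
>>     g09 trimer.gjf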
>>
>> Regards,
>> Mahmood
>>
>>
>>
>