On the relationship between the number of parallel workers (matlab) and the number of assigned CPUs in a baobab node

Dear HPC community,

I’ve requested a node on Baobab with 10 CPUs. To set the number of parallel workers in MATLAB, I used the “parcluster” function:

```
ans =

 Local Cluster

    Properties:

                   Profile: local
                  Modified: false
                      Host: node056
                NumWorkers: 16
                NumThreads: 1
        JobStorageLocation: /home/gavirial/.matlab/local_cluster_jobs/R2019b
   RequiresOnlineLicensing: false

    Associated Jobs:

            Number Pending: 0
             Number Queued: 0
            Number Running: 27
           Number Finished: 0
```

According to this output, it is not clear to me whether the right number of workers in my case would be 10 (the number of CPUs assigned to me on the node) or 16 (the number of workers reported by the installed MATLAB version). Apparently, the number of workers and the number of CPUs should match:

https://ch.mathworks.com/matlabcentral/answers/219754-open-all-cores-in-parpool

Thanks in advance for any comments in this regard.

Dear @Julian.GaviriaLopez, please check a parfor example here.

In your case, NumWorkers is the number of CPUs MATLAB detected on the node, which may differ from what you requested. NumWorkers should be set to the number of CPUs you requested from Slurm.
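As a minimal sketch (assuming the 10 CPUs from the question and the default “local” profile), the pool can be capped at the allocated CPUs like this:

```matlab
% Cap the local profile at the 10 CPUs requested from Slurm,
% rather than the 16 physical cores MATLAB detected on the node.
c = parcluster('local');
c.NumWorkers = 10;   % number of CPUs in the Slurm allocation
parpool(c, 10);      % start a pool with exactly that many workers
```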

The example will not work as-is in your case, because you requested a Slurm session with salloc and then connected to the node through ssh. In that case the SLURM_CPUS_PER_TASK variable is not visible in the ssh session. It’s up to you to set the number of workers to the number of CPUs you requested from Slurm.
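When SLURM_CPUS_PER_TASK is visible to MATLAB (e.g. in a shell started with srun inside the allocation, rather than a plain ssh session), one way to avoid hard-coding the worker count is to read it from the environment. A sketch, where the fallback value of 10 is an assumption matching the question:

```matlab
% Read the CPU count granted by Slurm; fall back if the variable
% is not set (e.g. when connected to the node via plain ssh).
n = str2double(getenv('SLURM_CPUS_PER_TASK'));
if isnan(n)
    n = 10;   % assumed fallback: the CPU count requested via salloc
end
c = parcluster('local');
c.NumWorkers = n;
parpool(c, n);
```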

Best