Parallel MATLAB (R2016)
This documentation is for MATLAB R2016. If you are using R2016, we strongly recommend that you update to MATLAB R2018. If you decide to do so, be aware that /usr/usc/matlab/R2018 uses different Slurm integration scripts. Please see our R2018 documentation, Parallel MATLAB (R2018).
MathWorks’ MATLAB Parallel Computing Toolbox (PCT) and MATLAB Distributed Computing Server (MDCS) are installed on the HPC cluster for MATLAB R2016.
To access the software, run (“source”) the setup script for this version:
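For example, assuming the setup script follows the usual layout under the version directory named above (the exact filename may differ on your system):

```shell
# Load the MATLAB R2016 environment for the current shell session
source /usr/usc/matlab/R2016/setup.sh
```

You will need to source this script in every new shell session (or in your job script) before running MATLAB.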
You might want to include the following line in your program to suppress time zone warnings.
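One common way to avoid these warnings is to set the TZ environment variable explicitly at the top of your script; the time zone value below is only an assumption, so substitute your own:

```matlab
% Set an explicit time zone so MATLAB does not warn that the system
% time zone could not be determined (adjust to your location).
setenv('TZ', 'America/Los_Angeles');
```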
To use a parallel MATLAB environment, copy the files located in /usr/usc/matlab/R2016/parallel_scripts to your ~/matlab directory. You may have to create the directory if it does not already exist. After you have copied the files, follow the steps for either a single-node or multi-node job.
1. Modify lines 4 and 6 in get_LOCAL_cluster.m so that they read:
evalc('system(''mkdir -p <job_storage_path>'')'); c.JobStorageLocation='<job_storage_path>';
Where <job_storage_path> is a path to the directory you created for MATLAB to use for temporary storage.
2. To create a cluster object, add the following line to your program:
3. Finally, include the command parpool(<N>) in your program to start a pool of N workers.
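Putting the three single-node steps together, a program might look like the sketch below. The function name get_LOCAL_cluster comes from the script you copied; the worker count and the parfor body are only illustrations:

```matlab
c = get_LOCAL_cluster;        % step 2: create the cluster object
parpool(c, 4);                % step 3: start a pool of 4 workers

results = zeros(1, 100);
parfor i = 1:100              % illustrative work spread across the pool
    results(i) = i^2;
end

delete(gcp('nocreate'));      % release the workers when finished
```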
1. Modify lines 5 and 6 in get_SLURM_cluster.m so that they read:
evalc('system(''mkdir -p <job_storage_path>'')'); cluster = parallel.cluster.Generic('JobStorageLocation', '<job_storage_path>');
Where <job_storage_path> is a path to the directory you created for MATLAB to use for temporary storage.
2. To create a cluster object, add the following line to your MATLAB program:
Where HH:MM:SS is the walltime (hours, minutes, and seconds), that is, the amount of time you want to allocate resources for.
3. Finally, include the command parpool(<N>) in your MATLAB program to start a pool of N workers.
Note: You don’t have to add requests for --ntasks because get_SLURM_cluster.m will do that for you based on your parpool arguments.
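For a multi-node job, the equivalent sketch is below. The call signature of get_SLURM_cluster (a single walltime string) is inferred from step 2, and the 24-worker pool size is only an example:

```matlab
cluster = get_SLURM_cluster('01:30:00');  % step 2: request 1.5 hours of walltime
parpool(cluster, 24);                     % step 3: pool of 24 workers

total = 0;
parfor i = 1:1000                         % illustrative reduction across workers
    total = total + sqrt(i);
end

delete(gcp('nocreate'));                  % release the workers when finished
```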
You may wish to experiment with other options to accomplish your specific goals. If you need to submit jobs to a specific partition or have certain memory requirements, you can specify additional Slurm parameters in the get_SLURM_cluster function.
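As a hypothetical illustration (the variable name below is an assumption; the exact place to add options depends on how your copy of get_SLURM_cluster.m builds its sbatch command), extra Slurm parameters would typically be appended to the submit-argument string inside that file:

```matlab
% Inside get_SLURM_cluster.m -- hypothetical example of requesting
% a specific partition and a per-CPU memory limit via sbatch options.
additionalSubmitArgs = '--partition=main --mem-per-cpu=4GB';
```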