New Job Scheduler/Resource Management Utility

HPC has implemented the Simple Linux Utility for Resource Management (Slurm) for job scheduling and resource management, replacing our current resource management/job scheduler utility, Adaptive Computing's PBS Torque/Moab. Slurm is an open-source utility that is widely used at national research centers, higher education research centers, government institutions, and other research institutions across the globe. This change was made to address load issues, increase optimization of existing resources through resource sharing, and provide additional functionality and flexibility that is not available through PBS.

NOTE: Students currently taking HPC classes should check with their professors regarding any potential changes to HPC-based assignments related to this transition.

Preparing for Slurm

The transition to Slurm requires you to learn and use new commands for job submission and monitoring, and to update your scripts, workflows, and some applications to use these new Slurm commands. Any PBS commands remaining in your scripts, workflows, and applications will not work after the transition.
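As an illustration of the kind of update a typical batch script needs, the sketch below shows PBS directives (left in place as plain comments) alongside their Slurm `#SBATCH` equivalents. The job name, resource amounts, and program name are placeholders, not values from HPC's configuration; consult HPC's Slurm documentation for the options your jobs should use.

```shell
#!/bin/bash
##PBS -N myjob                  # old PBS directive: job name
##PBS -l nodes=1:ppn=8          # old PBS directive: 1 node, 8 processors
##PBS -l walltime=01:00:00      # old PBS directive: 1 hour limit

#SBATCH --job-name=myjob        # Slurm equivalent of -N
#SBATCH --nodes=1               # Slurm equivalent of nodes=1
#SBATCH --ntasks-per-node=8     # Slurm equivalent of ppn=8
#SBATCH --time=01:00:00         # Slurm equivalent of walltime

./my_program                    # placeholder for your actual workload
```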

We recommend that all HPC researchers make use of the following resources to aid in a smooth transition.

Read Up on the Changes and Learn the New Slurm Commands

HPC’s Slurm documentation provides an overview of Slurm, a list of popular commands, and a comparison of these commands to some of the PBS commands. These pages also show how to modify configuration files on applications such as MATLAB to use Slurm instead of PBS, and introduce the Slurm test environment and how to access it.

For a quick guide to converting PBS commands to Slurm, see our documentation on converting from PBS to Slurm.
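For orientation, the widely used equivalents between the two schedulers are sketched below, with the PBS form commented out above each Slurm command. Filenames and job IDs are placeholders; see the conversion documentation for the full list and for option-by-option mappings.

```shell
# Submit a batch job
# qsub job.sh                  # PBS
sbatch job.sh                  # Slurm

# Check the queue (your jobs only)
# qstat -u $USER               # PBS
squeue -u $USER                # Slurm

# Cancel a job (12345 is a placeholder job ID)
# qdel 12345                   # PBS
scancel 12345                  # Slurm

# Show node/partition status
# pbsnodes -a                  # PBS
sinfo                          # Slurm
```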

Test Your Scripts and Workflows

A small number of compute nodes are available for trying out the Slurm commands and testing your updated scripts and workflows. Please be aware that the test environment cannot run large compute jobs, so that production resources are not affected. For information on using this test environment, see the Slurm documentation.
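A quick smoke test in the test environment might look like the following: submit a short, single-core job and watch it move through the queue. The partition name "test" and the time/task limits here are assumptions for illustration only; use the partition name and limits given in HPC's Slurm documentation.

```shell
# Submit a 5-minute, single-task job that just prints the node's hostname
# ("test" partition name is a placeholder)
sbatch --partition=test --ntasks=1 --time=00:05:00 --wrap="hostname"

# Confirm the job is queued or running
squeue -u $USER

# After the job finishes, check its final state (replace <jobid>
# with the ID printed by sbatch)
sacct -j <jobid> --format=JobID,State,Elapsed
```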

NOTE: HSDA members will need to contact HPC for additional configuration.

Take a Class

For those who would benefit from a more hands-on approach to the changes and to updating your scripts, we will be holding workshops. You will receive emails with more details on these classes shortly.

Provide Feedback

We encourage feedback and questions about this transition as we work to make it as low-impact as possible. All feedback should be sent to

If you have any questions or concerns about Slurm and the changes to HPC job scheduling and processing, please contact us at