This page details the policies that govern the use of HPC resources. In addition to these specific HPC policies, you must also agree to abide by all USC computing policies while using HPC resources. These policies can be found at cio.usc.edu/policies/.
Head Node Process Limit Policy
You are restricted to one active process at a time on the head nodes, hpc-login2.usc.edu and hpc-login3.usc.edu, and each process is limited to 30 minutes of CPU time. Any process that exceeds 30 minutes of CPU time on a head node may be terminated without warning.
Linux Resource Limit Policy
You may submit jobs to the standard default queue requesting up to 24 hours of processing time and a maximum of 99 nodes. If you request more than these maximums, which were set by the HPC faculty advisory committee, you will receive an error message stating that your job was “rejected by all possible destinations.”
If your project requires more computing resources than the defined Linux resource limit, please send an email to email@example.com.
Linux cluster nodes can be accessed by submitting jobs to the following partitions:
- Main Partition: The main partition is the default queue for submitted jobs. Jobs submitted to this partition are limited to 24 hours of wall-clock time and 256 nodes, and each user is capped at ten simultaneously running jobs. Any additional jobs are held in the queue until the running job(s) complete.
- Quick Partition: All jobs submitted to the default partition with a requested wall-clock time of less than one hour and a node count of four or fewer are transferred automatically to the quick queue. There is a limit of 20 quick-queue jobs per user, and a total of 100 jobs can be in the quick queue at any given time.
- Large Partition: All jobs submitted to the default partition that require more than 100 nodes are automatically transferred to the large queue. The large queue is limited to one running job per user at a time, with a requested wall-clock time limit of 24 hours.
- Long Partition: All jobs submitted to the default partition with a requested wall-clock time greater than 24 hours and a node count of one are automatically transferred to the long queue. The long queue is limited to one running job per user at a time, with a requested wall-clock time limit of up to 336 hours.
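Because routing between these partitions is automatic, the resources you request determine where your job lands. As an illustration, a minimal Slurm batch script (assuming the cluster uses Slurm's `sbatch`; the script and program names here are hypothetical) requesting under one hour on four or fewer nodes would be routed to the quick queue:

```shell
#!/bin/bash
# Hypothetical batch script: only the time and node requests below
# determine which partition the scheduler routes the job to.
#SBATCH --time=00:45:00   # under 1 hour...
#SBATCH --nodes=2         # ...on 4 or fewer nodes: routed to the quick queue
#SBATCH --job-name=example

srun ./my_program         # placeholder for your actual executable
```

Requesting, say, `--time=48:00:00` with `--nodes=1` would instead route the job to the long queue.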
Computing Resource Usage Policy
The Linux cluster is a shared computing resource. Jobs that sit idle in long waits or sleep loops are not allowed on the cluster, as they waste computing time that other researchers could use. Any job containing a long wait or a sleep loop may be terminated without advance notice. Additionally, any process that creates performance or load issues on the head node or interferes with other users’ jobs may be terminated. This includes compute jobs running on the compute nodes.
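If one job must wait for another to finish, the scheduler can handle the sequencing for you, so no polling or sleep loop is needed. A sketch using Slurm job dependencies (assuming Slurm's `sbatch`; the script names are hypothetical):

```shell
# Submit the first step and capture its job ID.
# --parsable makes sbatch print just the job ID.
jobid=$(sbatch --parsable preprocess.sbatch)

# Submit the second step so the scheduler starts it only after the
# first completes successfully -- no sleep loop occupies a node.
sbatch --dependency=afterok:"$jobid" analyze.sbatch
```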
There are limits on the number of jobs that can be submitted to and executed in each queue. These limits are detailed in the queue section of the Node Allocation page.
Temporary Disk Usage Policy
Use temporary disk space, rather than your home directory or project space, to run your programs. Depending on the size of your data set, this can significantly improve performance. Any job that places a heavy load on our NFS server will be terminated without advance notice.
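A typical pattern is to stage input onto the temporary disk, compute there, and copy results back in a single pass, keeping NFS traffic light. A minimal sketch (assuming the scheduler exports a `TMPDIR` variable pointing at node-local scratch; the file names and the placeholder computation are illustrative):

```shell
# Assumed to point at node-local scratch space; /tmp as a fallback.
workdir="${TMPDIR:-/tmp}/${USER}_scratch"
mkdir -p "$workdir"

# Stage input once, compute locally, write results locally.
# (A placeholder input file and computation stand in for real work.)
echo "sample data" > "$workdir/input.dat"
tr 'a-z' 'A-Z' < "$workdir/input.dat" > "$workdir/results.out"

# Copy results back to permanent storage in one pass, then clean up.
cat "$workdir/results.out"
rm -rf "$workdir"
```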
The temporary disk space policies for the Linux cluster may be found on the Temporary Disk Space page.
Acknowledgment of HPC Resources
Any form of publication, including webpages, resulting from work done on HPC machines should include the following citation:
Computation for the work described in this paper was supported by the University of Southern California’s Center for High-Performance Computing (https://hpcc.usc.edu).
Copies of published papers acknowledging HPC should be submitted for inclusion on the HPC project website, under the publications page (https://hpc-web.usc.edu/projects). Be sure to include complete publication information (e.g., a URL, PDF, or PS file of the actual publication) and indicate whether there are any restrictions on publication. Compliance with this policy will be a factor in the evaluation of account renewals.
Grants and Funding
If your project is supported by grants or other funding, this information should be included on the HPC project website under the grant page (https://hpc-web.usc.edu/projects). This will be used internally to provide a better idea of how USC researchers are making use of external funding.
Mailing Lists
The email account provided by the account’s principal investigator (PI) will automatically be subscribed to the hpc-announce-l mailing list for important system announcements. It is the responsibility of the PI to ensure that a valid email address is provided. You are also encouraged to subscribe to the hpcc-discuss-l mailing list, a forum for users to ask questions and share knowledge.
All emails regarding HPC questions, concerns, and maintenance requests should be sent directly to firstname.lastname@example.org. HPC staff members are not responsible for slow responses to emails addressed to individual staff members.
Policy Violations
If it has been determined that you have violated any of the HPC resource policies, or any other USC computing policies, your account(s) will be deactivated immediately. Your account will not be reactivated until HPC management receives a formal request from the principal investigator of your project.
If you have any questions or concerns regarding any of these policies, please send an email to email@example.com.