Frequently Asked Questions
Click the links below for answers to frequently asked questions regarding HPC accounts, using the HPC cluster, or software on HPC.
How do I get an HPC account?
To apply for a new HPC account, please complete the account application at www-rcf.usc.edu/rcfdocs/hpcc/allocations. You must access this application web page from a USC IP address. If you are off campus or otherwise using a non-USC network connection, you will need to use the USC Virtual Private Network (VPN) to access this page. For information on accessing the USC VPN, see the VPN Overview page on the ITS website.
For additional information on applying for an HPC account, see the Applying for an HPC Account page.
I forgot my password. How can I reset it?
HPC does not have access to passwords and cannot reset them. Please contact the ITS Customer Support Center at 213-740-5555 for assistance with resetting your account password.
What is a quota?
Every applicant for an HPC allocation must justify their request for storage resources. Once your allocation has been approved, you will be granted a certain amount of disk space. Please be aware that the amount allocated to you may not match your request.
Your HPC disk space will be located at
Inside this directory, every user of this allocation will have a subdirectory, identified by their RCF login. This disk space is called the project directory. All users associated with the allocation will share the disk quota for the project directory.
In addition to your space in the project directory, you will also be granted your own disk quota, located in your RCF home directory. This directory has a fixed 1 GB quota and is strictly for personal use. Always start your HPC jobs from the project directory: the most common cause of “disk quota exceeded” error messages is users starting jobs from their home directory instead of their project directory.
You can check the quota on your RCF home directory by running the following command:
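On most Linux systems, disk quotas are checked with the standard quota utility; whether this is the exact command intended here is an assumption, but it is the conventional choice. The -s flag prints sizes in human-readable units:

```shell
# Show your current disk usage and limits in human-readable units
quota -s
```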
You can also check your RCF home directory quota and your allocation’s project directory quota on the Checking Your Quotas page.
If your allocation’s project directory has become full, please send an email to email@example.com for assistance. Please note that HPC is unable to increase your RCF home directory’s quota.
How do I view and/or change my shell?
The most commonly used shells are /bin/bash and /bin/tcsh. We recommend using /bin/bash due to its widespread adoption and powerful feature set. It is the default shell in the vast majority of Linux distributions today, and is the default shell for new HPC users at USC.
You can view your current shell by typing:
[user@hpc-login2 ~]$ echo $0
-bash
You can change your default shell by logging into a headnode (e.g., hpc-login2), typing the command chsh, and responding to the prompts.
NOTE: It can take up to 30 minutes for changes to your shell to take effect.
Below is an example of a shell change from tcsh to bash:
[user@hpc-login2 ~]$ chsh
Password:
Changing nis shell for user on almaak-06.usc.edu...
Current password:
Please enter your University ID (e.g. 0123456789):
Valid shells:
/bin/sh
/bin/csh
/usr/bin/sh
/usr/bin/csh
/usr/usc/bin/bash
/usr/usc/bin/tcsh
/bin/ksh
/bin/tcsh
/bin/bash
/bin/rksh
Old shell: /bin/tcsh
New shell: /bin/bash
Connection to almaak closed.
Please allow up to 30 minutes for the change to take effect.
How do I use the temporary disk space on my account?
You must use either the /tmp or /scratch file system as your working directory for all jobs.
The /tmp File System
The /tmp file system is available locally on each node. Please refer to the Node Allocation page for the /tmp disk space available on each node. A directory is created exclusively for each job and is defined by the environment variable $TMPDIR. You have access to the /tmp directory of a particular node only when you are running a job on that node. All files created in /tmp are deleted before the next job starts.
The /scratch File System
The /scratch directory is a shared temporary file system. It is created when a new job starts and it is deleted at the end of the job. The /scratch file system should be used to store temporary files that need to be accessed from all the nodes. If you need to save the files in your /scratch file system, copy them to a permanent storage disk before your job ends.
Managing Files in the Temporary File Systems
Files located in the temporary file systems, /tmp and /scratch, are not backed up. It is your responsibility to copy important data to permanent file systems. While HPC will try to protect your files for the stated retention periods, hardware failures could result in data loss.
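A common pattern is to do a job's heavy I/O in the node-local $TMPDIR and copy results back to permanent storage before the job ends. The sketch below illustrates this; the project directory path is a placeholder reused from the PBS example elsewhere on this page:

```shell
#!/bin/bash
# Work in the node-local temporary directory created for this job;
# $TMPDIR is set by the batch system and deleted when the job ends
cd "$TMPDIR"

# ... run your computation here, writing output into the current directory ...

# Copy results back to permanent storage before the job finishes;
# /home/rcf-proj3/pv/test/ is a placeholder project directory
cp -r results/ /home/rcf-proj3/pv/test/
```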
If the performance of a computing resource is degraded by a temporary file system that has too little available free space, HPC management may compress, move, or delete files that have not been accessed within the last five days.
Using the HPC Cluster
How do I log in to the HPC cluster?
To log in to the Linux cluster resource, you will need to use ssh to access either hpc-login1 for the 32-bit head node or hpc-login2 for the 64-bit head node.
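For example, from a terminal (replace username with your RCF login; the fully qualified hostname is an assumption based on the head node names above):

```shell
# Connect to the 64-bit head node
ssh username@hpc-login2.usc.edu
```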
These head nodes should only be used for editing and compiling programs; any computing should be done on the compute nodes. Compute jobs submitted to the head nodes may be terminated before they complete. To submit jobs to the compute nodes, use the Portable Batch System (PBS).
How do I run jobs on the cluster?
To run a job on the HPC cluster, you will need to set up a Portable Batch System (PBS) file to define the commands and cluster resources you wish to use for the job. This PBS file is a simple text file that can be edited with a UNIX editor, such as vi, pico, or emacs. Information on UNIX editors can be found in the UNIX Overview section of the ITS website.
Setting Up a PBS File
Here is a sample PBS file, named myjobs.pbs, followed by an explanation of each line of the file.
#PBS -l nodes=1:ppn=2
#PBS -l walltime=00:00:59
The 1st line in the file sets which shell will be used for the job. Bash is used in this example, but csh or other valid shells will also work.
The 2nd line specifies the number of nodes and processors desired for this job. In this example, 1 node with 2 processors is requested.
The 3rd line in the PBS file states how much wall-clock time is requested. In this example, 59 seconds of wall-clock time are requested.
The 4th line tells the HPC cluster to access the directory where the data is located for this job. In this example, the cluster is instructed to cd into the /home/rcf-proj3/pv/test/ directory.
The 5th line tells the cluster which program you would like to use to analyze your data. In this case, the cluster sources the environment for SAS.
The 6th line tells the cluster to run the program. In this case, it runs SAS with my.sas as its argument, from the current directory, /home/rcf-proj3/pv/test, set in the previous line.
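Putting the six lines described above together, the complete myjobs.pbs would look roughly like the following. The SAS setup path and invocation are assumptions based on the /usr/usc setup-file convention used elsewhere on this page:

```shell
#!/bin/bash
#PBS -l nodes=1:ppn=2
#PBS -l walltime=00:00:59
cd /home/rcf-proj3/pv/test/
# Assumed path, following the /usr/usc convention described below
source /usr/usc/sas/default/setup.sh
sas my.sas
```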
Running Your Job
After creating your PBS file, type the following command to submit your job:
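Assuming the file name myjobs.pbs used in the example above:

```shell
# Submit the job script to the batch system
qsub myjobs.pbs
```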
Once you issue the command, you should receive a job ID, and the job will produce two output files, one for standard output and one for standard error, in traditional UNIX fashion.
For more details and examples on how to run jobs, please see the Running a Job on the HPC Cluster page.
How do I report a problem with a job submission?
If a job submission results in an error, please send an email to firstname.lastname@example.org. Be sure to include the job ID, error message, and any additional information you can provide.
How do I know which allocation I should use to submit a job if I am in multiple HPC allocations? How do I specify a certain allocation?
To see a listing of all of your available accounts and your current core hour allocations in these accounts, use the following command:
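Assuming the Gold allocation manager (which provides the glsuser command shown below), the usual way to list account balances is gbalance; the exact command is an assumption:

```shell
# List your accounts and remaining core-hour balances (Gold allocation manager)
gbalance -u <username>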
If you have accounts in multiple allocations, the following command will show you your default allocation:
glsuser [-u <username>]
The default HPC allocation is used to run a job when no allocation is specified in the qsub command line.
You can override this by using the following command:
qsub -A [hpcc_allocation_name] myjobs.pbs
For further details on qsub, please read the official man pages available by typing the following on any HPC login node:
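For example:

```shell
# Display the manual page for the qsub command
man qsub
```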
How do I set up a non-standard compiler?
If you wish to use a non-standard compiler, i.e., a compiler other than one of the MPI compilers, HPC has GNU, Portland Group, Intel, and Absoft compilers installed under the /usr/usc directory. To use one of these compilers, you must source the appropriate setup.csh or setup.sh file each time you log in.
For example, here is how you would set up the Portland Group compiler for use on your account.
Setting Up the Portland Group Compiler
The following command sets up the Portland Group compiler:
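Based on the setup path used in the .login snippet below (use setup.sh instead of setup.csh if your shell is sh or bash):

```shell
# Set up the Portland Group compiler environment for the current session
source /usr/usc/pgi/default/setup.csh
```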
If you are planning to use only the Portland Group compilers, add the following to your .login or .profile file to set up your environment.
if (-r /usr/usc/pgi/default/setup.csh) then
    source /usr/usc/pgi/default/setup.csh
endif
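If your login shell is sh or bash, a sketch of the equivalent for your .profile, assuming the same /usr/usc layout, would be:

```shell
# Source the Portland Group setup file at login if it is readable
if [ -r /usr/usc/pgi/default/setup.sh ]; then
    . /usr/usc/pgi/default/setup.sh
fi
```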
Software on HPC
How can I find out what software is available on the HPC?
A comprehensive list of all software installed in the HPC environment can be found in the /usr/usc directory. Most traditional UNIX utilities can be found in /usr/bin.
Why am I getting a “command not found” error when I try to run applications, such as MATLAB or SAS, on HPC?
In order to use software installed on HPC, its location must be defined in your PATH. Since /usr/usc is not in your PATH by default, the system is unable to find the requested software.
This can be resolved by sourcing the application’s environment. Below is an example of how to source MATLAB:
If you are using sh or bash, type:
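Assuming MATLAB follows the /usr/usc setup-file convention described earlier (the exact directory name is an assumption):

```shell
# Add MATLAB to your environment for sh/bash sessions
source /usr/usc/matlab/default/setup.sh
```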
If you are running csh or tcsh, type:
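Again assuming the standard /usr/usc layout (the directory name is an assumption):

```shell
# Add MATLAB to your environment for csh/tcsh sessions
source /usr/usc/matlab/default/setup.csh
```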
After running this command, your PATH will include MATLAB, and the program will be available to run.
If you wish to avoid sourcing setup files manually in the future, the source command can be placed in your .cshrc file if you are running csh or tcsh, or in your .bashrc file if you are running sh or bash.
If you have additional questions about HPC, please send an email to email@example.com.