HPC Infrastructure

HPC comprises a diverse mix of computing and data resources. Two Linux clusters constitute the principal computing resource. In addition, HPC has a central facility that provides more than 3 petabytes of combined disk storage and potential access to nearly a petabyte of tape storage.

Housed in USC’s state-of-the-art data center, HPC allows faculty members to maximize their individual research resources by contributing to HPC’s condominium-style compute environment. Under this arrangement, researchers may use their grant awards to acquire HPC nodes, which are accessed through the same interface as the standard HPC nodes. HPC staff members monitor and maintain the equipment on a round-the-clock basis, enabling researchers to focus on their primary research. Researchers have sole access to their condo nodes except for twice a year when HPC runs the LINPACK benchmarks.

Linux Clusters

The compute power of HPC enables USC researchers to tackle grand-challenge class problems in a broad range of scientific disciplines.

HPC has two Linux clusters. The center's newest cluster comprises:

  • 264 Hewlett-Packard SL250 nodes, each with dual 8-core 2.6-gigahertz Sandy Bridge Xeon processors, dual NVIDIA K20 GPUs (2,496 cores each), and 64 gigabytes of memory
  • 448 Hewlett-Packard SL230 nodes, each with dual 8-core 2.6-gigahertz Sandy Bridge Xeon processors and 64 gigabytes of memory
  • 288 Lenovo nx360m5 nodes, each with dual 8-core 2.6-gigahertz Haswell Xeon processors and 64 gigabytes of memory
  • 19 Lenovo nx360m5 nodes, each with dual 2.6-gigahertz processors, dual NVIDIA K40 GPUs (2,880 cores each), and 64 gigabytes of memory
  • 5 Lenovo nx360m5 nodes, each with dual 2.6-gigahertz processors, dual NVIDIA K80 GPUs (2 × 2,496 cores each), and 64 gigabytes of memory

This cluster runs on a 56-gigabit-per-second FDR InfiniBand backbone and achieved a LINPACK benchmark of 621.5 teraflops, or 621.5 trillion floating-point operations per second.
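For a rough sense of how such figures relate to the hardware specifications above, the nominal peak of one dual 8-core, 2.6-gigahertz CPU node can be estimated as cores × clock × flops-per-cycle. The core counts and clock speed come from the node list above; the 8 double-precision flops per cycle per core (AVX) is an assumption for illustration, not a figure from this document:

```python
# Back-of-the-envelope peak estimate for one dual-socket CPU node.
cores_per_node = 2 * 8   # dual-processor, 8 cores per processor (from the node list)
clock_ghz = 2.6          # per-core clock (from the node list)
flops_per_cycle = 8      # assumed AVX double-precision throughput per core per cycle

peak_gflops = cores_per_node * clock_ghz * flops_per_cycle
print(f"Theoretical CPU peak per node: {peak_gflops:.1f} GFLOPS")
# prints "Theoretical CPU peak per node: 332.8 GFLOPS"
```

Measured LINPACK results such as the 621.5-teraflop figure above are always below the sum of these theoretical peaks, since no real workload sustains every execution unit on every cycle.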

HPC’s other cluster contains 1,736 dual-processor nodes, with 4-, 6-, or 12-core processors, from Dell, Oracle Sun, Hewlett-Packard, and IBM on a 10-gigabit Myrinet backbone. It also includes 4 large-memory nodes, each with 1 terabyte of RAM and 4 10-core Intel Xeon processors.

On each cluster, a bidirectional, low-latency fiber network interconnects the nodes, allowing for the development of massive production jobs that require high-speed communications among computational elements.

HPC connects to USC’s multiple-gigabit-per-second campus network through interfaces that support massive data transfers and interactive access to the computing facilities at 10 gigabits per second, creating a campus grid environment.

HPC is home to the headquarters of Los Nettos, the regional computer network that provides redundant and reliable network bandwidth to the Los Nettos Consortium, comprising USC, the California Institute of Technology (Caltech), the Jet Propulsion Laboratory, Loyola Marymount University, the Claremont Colleges, and Occidental College.

Now in its 24th year of operation, Los Nettos has built a fiber infrastructure, comprising dark fiber and leased circuits, which uses leading-edge optical technologies to enhance the network’s flexibility and provide services such as private virtual local area networks (VLANs), dynamic interconnects, and partitioned wavelengths between member sites.

Through its affiliations with the Corporation for Education Network Initiatives in California (CENIC), Internet2, National LambdaRail, and Internet exchanges such as Pacific Wave, CIIX, and Any2, Los Nettos offers high-capacity access to many national and international research and education networks.

Pacific Wave

Pacific Wave has developed a distributed Internet exchange facility on the West Coast (currently in Seattle, Sunnyvale, and Los Angeles) to allow interconnection among U.S. and international research and education networks. The Pacific Wave exchange, operated by Pacific Northwest Gigapop (PnWgP) and the Corporation for Education Network Initiatives in California (CENIC), enables data to pass directly between major national and international networks, increasing the efficiency and speed of data transfer while eliminating the costs associated with routing data through multiple circuits and dedicated interconnects. This infrastructure supports a wide variety of advanced science and engineering applications. USC researchers have access to this Internet exchange and its international and national participants through the Los Nettos regional network.