HPCC

Center for High-Performance Computing and Communications

HPCC Computing Resource Overview

The HPCC computing resource consists of two shared head nodes and a number of compute nodes. Each compute node has 2 to 12 cores per processor on a dual-processor chip set and at least 60 GB of local /tmp space, and is attached to either the 10-gigabit Myrinet backbone or the 56-gigabit FDR InfiniBand backbone.
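
Once on a node, these figures can be verified directly with standard Linux utilities; a minimal sketch:

    nproc          # number of cores visible on the node
    df -h /tmp     # local /tmp scratch space (at least 60 GB)
    free -g        # installed memory, in gigabytes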

To edit, compile, and submit batch jobs to these compute nodes, you will need to log in to one of the head nodes, hpc-login1 or hpc-login2. For programs written in 32-bit i686 code, log in to hpc-login1; for programs written in 64-bit x86_64 code, log in to hpc-login2.
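
As a minimal sketch of that workflow (replace "username" with your account name; the fully qualified hostname is site-specific, and the qsub line assumes a PBS-style scheduler, so check the Node Allocation page for the actual submission commands):

    ssh username@hpc-login2      # the 64-bit x86_64 head node
    uname -m                     # should report x86_64 here; i686 work goes to hpc-login1
    gcc -O2 -o myprog myprog.c   # compile on the head node (illustrative file names)
    qsub myjob.pbs               # submit the batch job to the compute nodes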

System Details

The following table lists the hardware that makes up the HPCC computing resource and the backbone to which each node type is attached. Due to upgrades, modifications, and additions, these details are subject to change.

Vendor Model    Backbone       Description
Oracle x2200    10-gigabit     4-core/dual proc 2.3 GHz, 16 GB memory
Dell 1950e      10-gigabit     4-core/dual proc 2.33 GHz, 12 GB memory
Dell 1950e      10-gigabit     4-core/dual proc 2.5 GHz, 12 GB memory
Oracle x2200    10-gigabit     4-core/dual proc 2.5 GHz, 12 GB memory
IBM dx340       10-gigabit     4-core/dual proc 2.33 GHz, 16 GB memory
IBM dx360       10-gigabit     6-core/dual proc 2.66 GHz, 24 GB memory
HP sl160        10-gigabit     6-core/dual proc 3.0 GHz, 24 GB memory
HP dl165        10-gigabit     12-core/dual proc 2.33 GHz, 48 GB memory
Dell R910       10-gigabit     10-core/quad proc 2.0 GHz, 1 TB memory
HP SL250s       56-gigabit     8-core/dual proc 2.4 GHz, 64 GB memory, dual NVIDIA K20 GPUs
HP SL230s       56-gigabit     8-core/dual proc 2.4 GHz, 128 GB memory
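
Batch jobs can request resources that match the hardware above. The sketch below again assumes a PBS-style scheduler, and the specific values and file names are illustrative; the actual queues and node properties are documented on the Node Allocation page.

    #!/bin/bash
    #PBS -l nodes=1:ppn=8       # one node, 8 cores (e.g., an SL230s/SL250s)
    #PBS -l walltime=01:00:00   # one-hour wall-clock limit
    #PBS -l mem=16gb            # illustrative memory request
    cd $PBS_O_WORKDIR           # start in the directory the job was submitted from
    ./myprog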

Additional Information

For additional information on the Linux computing resource's infrastructure, including the computing queues and nodes, see the Node Allocation page. For information on CentOS 6, the operating system HPCC runs, see the HPCC Operating System page.