Center for High-Performance Computing and Communications (HPCC)

HPCC Computing Workshops

HPCC offers a variety of workshops to USC faculty, graduate students, and research staff on topics related to high-performance computing. Unless otherwise noted, workshops are free of charge.

Distributed Computing

The HPCC supercomputer is a distributed computing environment comprising more than 2,400 machines connected by high-speed interconnects, along with a variety of storage systems. The cluster runs CentOS Linux, and Torque and Moab handle job submission and scheduling.

Introduction to Unix/Linux

This workshop covers how to set up the Unix/Linux environment on your computer (login, file transfer, the X Window System), basic commands, text editors, compilers, and job submission to the HPCC cluster. This introductory workshop is most suitable for users who are new to parallel and cluster computing.

HPCCworkshop_LinuxIntro [PDF]

Advanced Use of the HPCC Cluster

This workshop focuses specifically on the HPCC cluster environment, covering topics such as compiling and installing software, submitting and monitoring jobs with PBS, and optimizing memory and file system usage. It is designed to provide a more in-depth understanding of the HPCC cluster and to give users the tools to obtain their results more quickly and efficiently.

HPCCworkshop_Advuse [PDF]

GPU Computing

Graphics Processing Units (GPUs) can deliver far more computing power and data-processing capability than conventional CPU-based architectures for highly parallel workloads. Continuous improvements in GPU computing technology have enabled researchers to achieve substantial performance gains in areas such as climate modeling, materials simulation, financial modeling, MRI, and health science applications.

In 2013, HPCC deployed a 264-node, GPU-based cluster in which each node pairs dual eight-core Intel Xeon processors with dual Nvidia K20 GPU boards.
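
For orientation, the short program below (a hypothetical sketch using the standard CUDA runtime API, not workshop material) shows how a user logged into one of these GPU nodes could list the devices visible to a job:

    // device_query.cu -- illustrative sketch; lists the GPUs visible on a node.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        int device_count = 0;
        cudaGetDeviceCount(&device_count);
        printf("GPUs visible on this node: %d\n", device_count);

        for (int d = 0; d < device_count; ++d) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, d);
            // Report the name, global memory, and compute capability of each device.
            printf("  device %d: %s, %.1f GB global memory, compute capability %d.%d\n",
                   d, prop.name, prop.totalGlobalMem / 1073741824.0, prop.major, prop.minor);
        }
        return 0;
    }

Compiled with nvcc (for example, nvcc device_query.cu -o device_query) and run on one of the nodes described above, it should report the two K20 boards.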

To help the USC community integrate GPU computing into their research, HPCC hosts several workshops on GPU programming with the Nvidia CUDA toolkit.

Introductory GPU Programming

This workshop, approximately three hours long, is part lecture and part hands-on session. The lecture provides background on GPU computing, the CUDA APIs, and a sample Monte Carlo (MC) code written in CUDA. In the hands-on session, participants learn how to write CUDA programs, submit jobs to the HPCC supercomputer, and evaluate the performance difference between the CPU and the GPU.

CUDA workshop presentation [PDF]
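
As a rough illustration of the kind of CUDA program written in the hands-on session, the sketch below estimates pi by Monte Carlo sampling with cuRAND, counting how many random points fall inside the unit quarter circle. It is a hypothetical example, not the workshop's sample code, and the grid size, sample counts, and names are illustrative:

    // mc_pi.cu -- illustrative Monte Carlo estimate of pi on the GPU.
    #include <cstdio>
    #include <curand_kernel.h>

    __global__ void count_hits(unsigned long long *hits, int samples_per_thread,
                               unsigned long long seed)
    {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        curandState state;
        curand_init(seed, tid, 0, &state);      // independent RNG stream per thread

        unsigned long long local_hits = 0;
        for (int i = 0; i < samples_per_thread; ++i) {
            float x = curand_uniform(&state);   // random point in the unit square
            float y = curand_uniform(&state);
            if (x * x + y * y <= 1.0f)          // inside the quarter circle?
                ++local_hits;
        }
        atomicAdd(hits, local_hits);            // accumulate hits across all threads
    }

    int main()
    {
        const int blocks = 256, threads = 256, samples_per_thread = 4096;
        unsigned long long *d_hits, h_hits = 0;

        cudaMalloc(&d_hits, sizeof(unsigned long long));
        cudaMemcpy(d_hits, &h_hits, sizeof(h_hits), cudaMemcpyHostToDevice);

        count_hits<<<blocks, threads>>>(d_hits, samples_per_thread, 1234ULL);
        cudaMemcpy(&h_hits, d_hits, sizeof(h_hits), cudaMemcpyDeviceToHost);

        double total = (double)blocks * threads * samples_per_thread;
        printf("pi ~= %f\n", 4.0 * h_hits / total);

        cudaFree(d_hits);
        return 0;
    }

A serial CPU version of the same sampling loop provides a natural baseline for the CPU-versus-GPU comparison done in the hands-on session. The program compiles with nvcc (for example, nvcc mc_pi.cu -o mc_pi) and can be submitted to a GPU node as an ordinary batch job.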

MATLAB

MATLAB is a numerical computing environment and fourth-generation programming language. MATLAB enables matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, and Fortran.

Parallel and GPU Computing with MATLAB

This workshop covers how to use MATLAB's parallel and GPU computing capabilities on the HPCC supercomputer's GPU-enabled nodes to increase processing speed.

Parallel Computing with MATLAB workshop presentation [PDF]