HPC Computing Workshops

HPC offers a variety of workshops to USC faculty, graduate students, and research staff on topics related to high-performance computing. Unless otherwise noted, workshops are free of charge.

Distributed Computing

The HPC supercomputer is a distributed computing environment comprising more than 2,400 machines connected by high-speed interconnects, along with a variety of storage systems. The cluster runs CentOS Linux; jobs are submitted through the Torque resource manager and scheduled by Moab.
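
For orientation, a minimal Torque batch script has the following shape; the job name, resource requests, and executable below are placeholders, and actual queue names and limits vary by allocation:

    #!/bin/bash
    #PBS -N example_job          # job name (placeholder)
    #PBS -l nodes=1:ppn=8        # request 1 node with 8 processor cores
    #PBS -l walltime=00:30:00    # maximum run time of 30 minutes

    # Run from the directory the job was submitted from
    cd $PBS_O_WORKDIR

    # Launch the program (placeholder executable)
    ./my_program

The script is submitted with qsub, which prints a job ID that can then be used to track the job with qstat.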

Introduction to Unix/Linux

This workshop covers how to set up the Unix/Linux environment on your computer (login, file transfer, the X Window System), basic commands, text editors, compilers, and job submission to the HPC cluster. It is most suitable for users who are new to parallel and cluster computing.

LinuxIntro
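
As a sketch of those topics, a first session on the cluster typically involves commands like the following; the hostname and file names are placeholders, not actual HPC addresses:

    # Log in to the cluster (placeholder hostname)
    ssh username@hpc-login.example.edu

    # Copy a file from your local machine to the cluster
    scp data.txt username@hpc-login.example.edu:~/project/

    # Basic commands for navigating the file system
    pwd              # print the current working directory
    ls -l            # list files with permissions and sizes
    mkdir project    # create a directory
    nano notes.txt   # open a simple text editor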

Introduction to HPC Cluster Computing

This workshop focuses on efficient use of the HPC computing resources, covering topics such as compiling and installing software, monitoring and submitting jobs with PBS, and optimizing memory and file system usage. Designed to provide a more in-depth understanding of the HPC cluster, this session gives users the tools to obtain their results more quickly and efficiently. It is intended for novice- and intermediate-level HPC users; if you have not worked in a Linux environment before, it is highly recommended that you first take the Introduction to Unix/Linux workshop.

Introduction to the HPC Cluster
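
By way of illustration, the compile, submit, and monitor cycle covered in this workshop resembles the sketch below; the module name, file names, and job ID are hypothetical:

    # Load a compiler toolchain (module name is hypothetical)
    module load gcc

    # Compile a program with optimization
    gcc -O2 -o my_app my_app.c

    # Submit a PBS batch script; qsub prints the new job's ID
    qsub my_app.pbs

    # Monitor your queued and running jobs
    qstat -u $USER

    # Delete a job by its ID if something goes wrong
    qdel 12345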

GPU Computing

Graphics Processing Units (GPUs) provide far greater parallel processing throughput than conventional CPU architectures for suitable workloads. Continuous improvements in GPU computing technology have enabled researchers to achieve substantial performance gains in areas such as climate modeling, materials simulation, financial modeling, MRI, and health science applications.

In 2013, HPC deployed a 264-node GPU cluster in which each node contains dual octa-core Intel Xeon processors and dual Nvidia K20 GPUs.

To help the USC community integrate GPU computing into their research, HPC hosts several workshops on GPU programming with the Nvidia CUDA Toolkit.

Introductory GPU Programming

This workshop, approximately three hours long, is part lecture and part hands-on session. The lecture provides background on GPU computing, the CUDA APIs, and a sample Monte Carlo (MC) code based on CUDA. In the hands-on session, participants learn how to write CUDA programs, submit jobs to the HPC supercomputer, and evaluate the performance difference between CPU and GPU implementations.

CUDA workshop presentation [PDF]
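
To give a flavor of the hands-on session, here is a minimal CUDA vector-addition program; it is an illustrative sketch, not taken from the workshop materials, and compiles with nvcc:

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    /* Kernel: each GPU thread adds one pair of elements. */
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        /* Allocate and fill host arrays. */
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

        /* Allocate device arrays and copy the inputs to the GPU. */
        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        /* Launch enough 256-thread blocks to cover all n elements. */
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);

        /* Copy the result back and check one element. */
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]);  /* expect 3.0 */

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

The key design point is that each GPU thread handles a single element, so the loop a CPU would execute serially is spread across thousands of lightweight threads.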

MATLAB

MATLAB is a numerical computing environment and fourth-generation programming language. It enables matrix manipulation, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, and Fortran.

Parallel and GPU Computing with MATLAB

This workshop covers how to use MATLAB on the HPC supercomputer’s GPU-enabled nodes to increase processing speed.

ParallelComputingWithMATLAB workshop presentation [PDF]
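
As a rough sketch of the approach, assuming MATLAB with the Parallel Computing Toolbox is available on the GPU nodes, offloading a computation looks like this:

    % Create a large matrix on the CPU, then copy it to the GPU
    A = rand(4000);            % host array
    G = gpuArray(A);           % device copy of A

    % Built-in functions run on the GPU when given gpuArray inputs
    H = fft(G);                % FFT executes on the GPU
    B = gather(H);             % copy the result back to host memory

    % Timing comparison; gputimeit synchronizes with the GPU correctly
    cpuTime = timeit(@() fft(A));
    gpuTime = gputimeit(@() fft(G));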

R Computing

Introduction to Parallel Computing in R Using the snowfall Package

This workshop covers the basics of the R statistical programming language, gives a brief overview of parallel program structure, introduces snowfall's major functions through basic examples, and concludes with a demonstration of how to use these functions to parallelize real statistical simulations.

ParallelComputingR_Rinstallation [PDF]

ParallelComputingR_Snowfall [PDF]
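
For a taste of the material, a minimal snowfall session, assuming the snowfall package is installed, might parallelize a toy simulation like this:

    library(snowfall)

    # Start a local cluster with 4 worker processes
    sfInit(parallel = TRUE, cpus = 4)

    # A toy simulation: the mean of one million random draws
    sim <- function(i) mean(rnorm(1e6))

    # Run 100 replications, distributed across the workers
    results <- sfLapply(1:100, sim)

    # Shut down the cluster and release the workers
    sfStop()

sfLapply is the parallel analogue of R's lapply, so converting a serial simulation is often just a matter of swapping one call for the other.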