HPC Computing Workshops

Current Workshops

Introduction to Linux and the HPC Cluster

This workshop is an introductory Linux class designed for new HPC account holders who have little or no prior experience with the Linux operating system. The emphasis is on preparing users to run jobs on HPC. The workshop will cover shells, basic commands, file permissions, text editing, scripting, utilities, and process management. It is most suitable for users who are new to working in a remote computing environment.
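
As a taste of the material, the session works through commands like the following (a minimal sketch; run.sh is a hypothetical script name):

    pwd                  # print the current working directory
    ls -l                # list files, showing permissions and sizes
    nano run.sh          # edit a file in a beginner-friendly text editor
    chmod u+x run.sh     # give the owner execute permission on the script
    ./run.sh             # run the script
    ps -u $USER          # list the processes you currently own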

Introduction to Linux [PDF] – March 2015

Introduction to HPC Cluster Computing

This workshop is designed to provide a more in-depth understanding of the HPC cluster. It will cover topics such as submitting and monitoring jobs with PBS and optimizing memory and file system usage. This workshop is intended for novice- and intermediate-level HPC users. It is highly recommended that you take the introductory Linux workshop first if you have not previously worked in a Linux environment.
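
To illustrate the kind of job submission covered, here is a minimal PBS batch script (the resource values and the program name, my_program, are placeholders to adjust for your own job):

    #!/bin/bash
    #PBS -l nodes=1:ppn=1        # one core on one node
    #PBS -l walltime=00:30:00    # 30-minute time limit
    #PBS -l mem=1gb              # memory request
    cd $PBS_O_WORKDIR            # start in the directory qsub was run from
    ./my_program                 # the executable to run

Submit the script with qsub myjob.pbs and check its progress with qstat -u $USER.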

Introduction to HPC Cluster Computing [PDF] – 2015

Running Parallel MATLAB on the HPC Cluster

This workshop is designed to teach researchers how to use MATLAB’s Parallel Computing Toolbox on the HPC cluster. This hands-on tutorial will cover topics such as creating and submitting parallel MATLAB job scripts with PBS, porting a serial MATLAB script to a parallel script, and optimizing CPU and memory usage. It is recommended that you take an introductory workshop first if you have not previously worked in a Linux or HPC environment.
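
A typical submission pattern looks something like the sketch below; my_parallel_script.m is a hypothetical MATLAB file that opens a worker pool (for example with parpool, or matlabpool in older releases) and uses parfor loops:

    #!/bin/bash
    #PBS -l nodes=1:ppn=8        # request 8 cores for the MATLAB workers
    #PBS -l walltime=01:00:00
    cd $PBS_O_WORKDIR
    matlab -nodisplay -nosplash -r "my_parallel_script; exit"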

Running Parallel MATLAB on the HPC Cluster [PDF] – 2014

Working with Pegasus Workflows on HPC

This workshop is designed to introduce researchers to using Pegasus workflows on HPC. This hands-on tutorial will cover workflow design, composition, and execution; executing an MPI application as part of a workflow; and monitoring, debugging, and analyzing workflows with Pegasus tools. It is recommended that you take an introductory workshop first if you have not previously worked in a Linux or HPC environment.
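
As a rough sketch of the tool flow (the file names, the site label condorpool, and run_dir are placeholders, and exact flags vary across Pegasus versions):

    pegasus-plan --conf pegasus.properties --dax workflow.dax \
        --dir submit --sites condorpool --output-site local --submit
    pegasus-status -l run_dir        # monitor workflow progress
    pegasus-analyzer run_dir         # debug failed jobs
    pegasus-statistics run_dir       # summarize runtimes after completion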

Scientific Workflows Through Pegasus WMS at USC HPC Cluster 

Installing Software on HPC

This workshop is designed for researchers who need to install software packages on HPC. Using examples, we will cover topics such as Python package installation, downloading software, linking to libraries, using Makefiles, compiling and optimizing code for HPC, and handling dependencies. It is recommended that you take the introductory Linux workshop first if you have not previously worked in a Linux environment.
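
Two patterns come up repeatedly and are sketched below: installing a Python package into your home directory, and building a source package under your own prefix (the package, tool name, and paths are hypothetical examples):

    # Install a Python package without administrator rights
    pip install --user numpy

    # Build a downloaded source package into your home directory
    tar xzf tool-1.0.tar.gz && cd tool-1.0
    ./configure --prefix=$HOME/sw/tool
    make && make install
    export PATH=$HOME/sw/tool/bin:$PATH    # make the new binary findable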

Optimizing Job Performance on HPC

This workshop is designed for users who are currently running jobs and want to learn new techniques for speeding up and parallelizing their computations. Using examples, it will cover topics such as passing arguments to PBS scripts, pbsdsh, job arrays, parallel math libraries, checkpointing, profilers, and compiler flags that enhance performance. This workshop is intended for intermediate- and advanced-level HPC users. It is recommended that you take the introductory HPC workshop first if you have not previously worked in an HPC environment.
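
As one example, a PBS job array runs many near-identical tasks from a single script; in TORQUE-style PBS the sketch below submits ten copies, each distinguished by its PBS_ARRAYID (the analyze program and input naming are hypothetical):

    #!/bin/bash
    #PBS -l nodes=1:ppn=1
    #PBS -l walltime=01:00:00
    #PBS -t 1-10                          # run tasks 1 through 10
    cd $PBS_O_WORKDIR
    ./analyze input_${PBS_ARRAYID}.dat    # each task gets its own input file

Arguments can similarly be passed at submission time, e.g. qsub -v PARAM=42 job.pbs.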

Working with Big and (Lots of) Little Data on HPC

This workshop is designed for researchers who must manage either very large datasets or very large numbers of small data files for their computations. Using examples, we will cover HPC file systems for data upload and staging, storage options, combining text files, the ZOT (zillions of tiny files) problem and the use of an image database, and general ways to improve performance. It is recommended that you take the introductory Linux workshop first if you have not previously worked in a Linux environment.
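
For instance, merging or bundling many tiny files before computation avoids per-file overhead (the paths here are illustrative; scratch locations differ by cluster):

    # Combine many small text files into one input file
    cat part_*.txt > combined.txt

    # Or bundle them into a single archive for staging
    tar czf parts.tar.gz part_*.txt
    cp parts.tar.gz /scratch/$USER/       # stage to fast scratch space
    cd /scratch/$USER && tar xzf parts.tar.gz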

Previous Workshops and Presentations

GPU Computing

Graphics Processing Units (GPUs) provide far more computing power and data-processing capability than conventional CPU architectures for highly parallel workloads. Continuous improvements in GPU computing technology have enabled researchers to achieve substantial performance gains in climate modeling, materials simulation, financial modeling, MRI, and health science applications.

In 2013, HPC deployed a 264-node GPU cluster in which each node pairs dual octa-core Intel Xeon processors with two Nvidia K20 GPUs.

To help the USC community integrate GPU computing into their research, HPC hosts several workshops on GPU programming with the Nvidia CUDA Toolkit.

Introductory GPU Programming

This workshop, approximately three hours long, is part lecture and part hands-on session. The lecture provides background on GPU computing, the CUDA APIs, and a sample Monte Carlo (MC) code written in CUDA. In the hands-on session, participants learn how to write CUDA programs, submit jobs to the HPC cluster, and compare the performance of CPU and GPU implementations.
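
The hands-on portion follows a pattern like the sketch below (the module name, GPU resource syntax, and the file mc_pi.cu vary by site and scheduler version):

    module load cuda               # load the CUDA toolkit
    nvcc -O2 -o mc_pi mc_pi.cu     # compile the Monte Carlo sample
    qsub gpu_job.pbs               # submit to a GPU node

where gpu_job.pbs requests a GPU and runs the compiled binary:

    #!/bin/bash
    #PBS -l nodes=1:ppn=1:gpus=1   # GPU request syntax varies by scheduler
    #PBS -l walltime=00:15:00
    cd $PBS_O_WORKDIR
    ./mc_pi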

CUDA workshop presentation [PDF]

R Computing

Introduction to Parallel Computing in R Using the snowfall package

This workshop covers the basics of the R statistical programming language, gives a brief overview of parallel program structure, introduces snowfall's major functions through basic examples, and concludes by demonstrating how to use these functions to parallelize real statistical simulations.
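
On the cluster, such a script is typically submitted through PBS; the sketch below assumes a hypothetical my_simulation.R that calls snowfall's sfInit(parallel = TRUE, cpus = 8), sfLapply(), and sfStop():

    #!/bin/bash
    #PBS -l nodes=1:ppn=8          # cores for the snowfall workers
    #PBS -l walltime=01:00:00
    cd $PBS_O_WORKDIR
    module load R                  # module name varies by site
    Rscript my_simulation.R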

ParallelComputingR_Rinstallation [PDF]

ParallelComputingR_Snowfall [PDF]