HPC Computing Workshops

Current Workshops

HPC workshop presentations are updated several times a year. The slides below are not necessarily the most recent versions. Copies of the most recent presentations can be found in /home/rcf-proj/workshop/handouts. Some of the presentations include example files. Unless otherwise noted, these files will reside in a subdirectory of /home/rcf-proj/workshop.

All workshops are approximately two hours in length and are offered several times a year. Workshops are announced through the HPC mailing list and are also posted at hpcc.usc.edu/education.

New User Presentation

This hour-long presentation introduces the top ten things that every new user should know about HPC. It covers topics such as HPC’s mission and services, user accounts and policies, HPC cluster resources, software installed on the cluster, and more.

HPC New User Presentation (April 2017) [PDF]
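
As a small preview of the software topic, account holders can explore the installed software stack from the command line. A minimal sketch, assuming the cluster uses environment modules (the package name here is illustrative):

    # List software available through the module system
    module avail

    # Load a package into your environment, then confirm what is loaded
    module load python
    module list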

Introduction to Linux and the HPC Cluster

This workshop is an introductory Linux class designed for new HPC account holders who have little or no prior experience with the Linux operating system. The emphasis is on preparing users to run jobs on HPC. The workshop will cover shells, basic commands, file permissions, text editing, scripting, utilities, and process management. This workshop is most suitable for users who are new to working in a remote computing environment.

Introduction to Linux and the HPC Cluster (January 2017) [PDF]
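
As a preview of the command-line material, here is a minimal sketch of the kinds of commands the workshop covers (directory and file names are illustrative):

    # Where am I, and what is here?
    pwd
    ls -l

    # Create a working directory and move into it
    mkdir myproject
    cd myproject

    # Write a one-line script, grant the owner execute permission, and run it
    echo 'echo "Hello from HPC"' > hello.sh
    chmod u+x hello.sh
    ./hello.sh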

Introduction to HPC Cluster Computing

This workshop is designed to provide a more in-depth understanding of the HPC cluster. It will cover topics such as submitting and monitoring jobs with PBS and optimizing memory and file system usage. This workshop is intended for novice- and intermediate-level HPC users. It is highly recommended that you take the introductory Linux workshop first if you have not previously worked in a Linux environment.

Introduction to the HPC Cluster (June 2017) [PDF]
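
As a taste of the job submission material, here is a minimal PBS job script (resource requests and the program name are illustrative):

    #!/bin/bash
    #PBS -N myjob                 # job name
    #PBS -l nodes=1:ppn=4         # one node, four processor cores
    #PBS -l walltime=01:00:00     # one-hour run-time limit
    #PBS -l mem=4gb               # memory request

    cd $PBS_O_WORKDIR             # start in the directory qsub was run from
    ./my_program                  # run the computation

Submit and monitor it with:

    qsub myjob.pbs                # submit the job to the scheduler
    qstat -u $USER                # check the status of your jobs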

Advanced Topics in HPC Cluster Computing

This workshop is designed for users who are currently running jobs and want to learn new techniques for speeding up and parallelizing their computations. Topics may include data upload, storage, and staging; efficient use of the file system; PBS arguments; pbsdsh and job arrays; parallel math libraries; checkpointing and profilers; and compiler arguments that enhance performance.

This workshop is intended for intermediate- and advanced-level HPC users. It is recommended that you take the introductory HPC workshop first if you have not previously worked in an HPC environment.

Advanced Topics in HPC Cluster Computing (June 2017) [PDF]
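
For example, job arrays let one script launch many similar tasks; a minimal sketch (the index range and file names are illustrative):

    #!/bin/bash
    #PBS -N array_job
    #PBS -l nodes=1:ppn=1,walltime=00:30:00
    #PBS -t 1-10                  # run ten copies of this job as array tasks

    cd $PBS_O_WORKDIR
    # Each array task processes the input file matching its index
    ./my_program input.$PBS_ARRAYID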

Running Parallel MATLAB on the HPC Cluster

This workshop is designed to teach researchers how to use MATLAB’s Parallel Computing Toolbox on the HPC cluster. This hands-on tutorial will cover topics such as creating and submitting parallel MATLAB job scripts with PBS, porting a serial MATLAB script to a parallel script, and optimizing CPU and memory usage. It is recommended that you take an introductory workshop first if you have not previously worked in a Linux or HPC environment.

Running Parallel MATLAB on the HPC Cluster (March 2017) [PDF]
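
A minimal sketch of a PBS script that launches a parallel MATLAB job (the module name, worker count, and script name are illustrative):

    #!/bin/bash
    #PBS -N matlab_parallel
    #PBS -l nodes=1:ppn=8,walltime=02:00:00

    cd $PBS_O_WORKDIR
    module load matlab            # assumed module name

    # Open a pool of eight local workers, run the script, then quit
    matlab -nodisplay -nosplash -r "parpool('local',8); my_parallel_script; exit"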

Working with Pegasus Workflows on HPC

This workshop is designed to introduce researchers to using Pegasus workflows on HPC. This hands-on tutorial will cover workflow design, composition, and execution; executing an MPI application as part of a workflow; and monitoring, debugging, and analyzing workflows using Pegasus tools. It is recommended that you take an introductory workshop first if you have not previously worked in a Linux or HPC environment.
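
A minimal sketch of the plan-monitor-debug cycle with the Pegasus command-line tools (file, site, and directory names are illustrative):

    # Plan the abstract workflow and submit it for execution
    pegasus-plan --conf pegasus.properties --dax workflow.dax \
                 --sites condorpool --output-site local --submit

    # Check the progress of a running workflow
    pegasus-status submit/dir

    # Investigate failures after a run completes
    pegasus-analyzer submit/dir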

Installing Software on HPC

This workshop is designed for researchers who need to install software packages on HPC. Using examples, we will cover concepts such as Python package installation, downloading software, linking to libraries, using Makefiles, compiling and optimizing code for HPC, and handling dependencies. It is recommended that you take the introductory Linux workshop first if you have not previously worked in a Linux environment.

Installing Software on HPC (April 2017) [PDF]
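
A minimal sketch of two common installation patterns covered in the workshop, a user-level Python package install and a from-source build into your home directory (the package names and URL are illustrative):

    # Install a Python package into your home directory (no root access needed)
    pip install --user numpy

    # Build a package from source into your own prefix
    wget https://example.org/mytool-1.0.tar.gz
    tar -xzf mytool-1.0.tar.gz
    cd mytool-1.0
    ./configure --prefix=$HOME/sw/mytool
    make
    make install

    # Make the newly installed tool visible to your shell
    export PATH=$HOME/sw/mytool/bin:$PATH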

Previous Workshops and Presentations

The presentations available here are out of date and are provided for reference purposes only.

GPU Computing

Graphics Processing Units (GPUs) provide far more computing power and data processing capability than conventional CPU architectures. Continuous improvements in GPU computing technology have enabled researchers to achieve substantial performance gains in areas such as climate modeling, materials simulation, financial modeling, MRI, and health science applications.

In 2013, HPC deployed a 264-node GPU cluster in which each node pairs dual octa-core Intel Xeon processors with dual Nvidia K20 GPUs.

To help the USC community integrate GPU computing into their research, HPC hosts several workshops on GPU programming with the Nvidia CUDA toolkit.

Introductory GPU Programming

This workshop, approximately three hours long, is part lecture and part hands-on session. The lecture provides background on GPU computing, the CUDA APIs, and a sample Monte Carlo (MC) code written in CUDA. In the hands-on session, participants learn how to write CUDA programs, submit jobs to the HPC cluster, and evaluate the performance difference between CPU and GPU implementations.

Introduction to GPU Programming (July 2013) [PDF]
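
A minimal sketch of compiling a CUDA program and submitting it to a GPU node (the file names are illustrative, and the GPU resource syntax may differ by site):

    # Compile the CUDA source with the Nvidia compiler
    nvcc -O3 -o mc_gpu mc_gpu.cu

A job script requesting GPUs might then look like this:

    #!/bin/bash
    #PBS -N mc_gpu
    #PBS -l nodes=1:ppn=16:gpus=2    # GPU request syntax varies by site
    #PBS -l walltime=01:00:00

    cd $PBS_O_WORKDIR
    ./mc_gpu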

R Computing

Introduction to Parallel Computing in R Using the snowfall package

This workshop covers the basics of the R statistical programming language, a brief overview of parallel programming structure, an introduction to snowfall’s major functions through basic examples, and a demonstration of using these functions to parallelize real statistical simulations.

Introduction to Parallel Computing in R Using the snowfall package (July 2014) [PDF]
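
A minimal sketch of a job script that runs a snowfall-based simulation, with the R code supplied inline (the worker count and the simulation itself are illustrative):

    #!/bin/bash
    #PBS -N snowfall_demo
    #PBS -l nodes=1:ppn=4,walltime=00:30:00

    cd $PBS_O_WORKDIR

    # Feed an inline R script to R in batch mode
    R --vanilla --quiet <<'EOF'
    library(snowfall)
    sfInit(parallel = TRUE, cpus = 4)   # start four worker processes

    # Run 100 independent simulation tasks across the workers
    results <- sfLapply(1:100, function(i) mean(rnorm(10000)))

    sfStop()                            # shut the workers down
    print(mean(unlist(results)))
    EOF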