Fei Sha

Fei Sha is an associate professor of computer science at USC. His research focuses on statistical machine learning and artificial intelligence (AI). The Sha lab has been both developing basic methodologies (such as learning models and computational statistics tools) and applying those methodologies to real-world AI problems in areas such as language processing, computer vision, robotics, and biomedical sciences.

In the past few years, both academic and industry communities have made significant advances in AI by applying learning techniques—in particular “deep learning”—to large-scale datasets. In deep learning, researchers train computer systems on extremely large datasets so that they learn by example, using models with hundreds of millions of parameters whose layered architecture is loosely inspired by the neural networks of the human brain. Large-scale, high-performance computing is key to making this happen.

In the research project “Novel Machine Learning Techniques for Speech Recognition of Languages with Low-Density Resources,” Professor Sha’s lab has used HPC resources to investigate speech recognition. The lab has been among the first to compare the two most important machine learning approaches—deep neural networks (DNNs) and kernel methods—by applying both to real-world, large-scale speech recognition tasks. This research has led to the surprising finding that kernel methods, often nicknamed “shallow learning,” can match deep learning methods on these tasks.
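To give a rough sense of what a kernel-method baseline of this kind looks like, the sketch below approximates an RBF kernel with random Fourier features and trains a simple regularized linear classifier on top—a “shallow” model whose capacity grows with the number of random features. This is a minimal toy example on synthetic data, not the lab’s actual pipeline; the feature dimensions, class count, kernel width, and regularization constant are all illustrative assumptions.

```python
# Minimal sketch: random Fourier features + linear classifier (a "shallow"
# kernel method). Synthetic stand-in data; all sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy acoustic-frame stand-ins: 1,000 frames of 40-dimensional features,
# each labeled with one of 10 hypothetical phone classes.
X = rng.normal(size=(1000, 40))
y = rng.integers(0, 10, size=1000)

# Random Fourier features approximating the RBF kernel
# k(x, x') = exp(-||x - x'||^2 / (2 * sigma^2)).
D, sigma = 2000, 1.0                       # number of random features, kernel width
W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], D))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)   # "kernelized" representation of the frames

# One-hot targets and a ridge-regression (regularized least-squares) classifier.
Y = np.eye(10)[y]
lam = 1e-2
A = Z.T @ Z + lam * np.eye(D)
coef = np.linalg.solve(A, Z.T @ Y)

pred = np.argmax(Z @ coef, axis=1)
print("training accuracy:", np.mean(pred == y))
```

In practice, the number of random features and the kernel width are tuned on held-out data, and the linear model is trained on far larger feature matrices than this toy example uses.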

Many other research projects in Professor Sha’s lab have also benefited from HPC support. For instance, the lab has been working on deep semantic understanding of visual signals (including images and videos) and on translating those signals into other modalities, such as text. Training deep learning models to translate images into text requires a great deal of computation, and the distributed computing environment at HPC provides a natural platform for carrying it out. Using the HPC environment, Sha’s lab has outperformed the Google Research lab on large-scale, zero-shot learning.
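As a rough illustration of the zero-shot setting, the toy sketch below learns a linear map from image features into a semantic class-embedding space using “seen” classes, then labels images from classes never encountered during training by finding the nearest unseen-class embedding. The synthetic data, the dimensions, and the ridge-regressed linear map are illustrative assumptions, not the lab’s published method.

```python
# Minimal sketch of zero-shot classification via a learned map from image
# features to a semantic embedding space. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

d_img, d_sem = 512, 50          # hypothetical image-feature / class-embedding sizes
n_seen, n_unseen = 8, 4         # counts of seen vs. unseen classes

# Stand-in class embeddings (in practice, e.g., word vectors for class names).
S_seen = rng.normal(size=(n_seen, d_sem))
S_unseen = rng.normal(size=(n_unseen, d_sem))

# Synthetic training images from seen classes: each feature vector is a noisy
# linear image of its class embedding, so the toy problem is learnable.
true_map = rng.normal(size=(d_sem, d_img))
y_train = rng.integers(0, n_seen, size=2000)
X_train = S_seen[y_train] @ true_map + 0.1 * rng.normal(size=(2000, d_img))

# Fit a ridge-regressed linear map from image features to class embeddings.
lam = 1e-1
A = X_train.T @ X_train + lam * np.eye(d_img)
V = np.linalg.solve(A, X_train.T @ S_seen[y_train])   # shape (d_img, d_sem)

# Zero-shot test: images drawn from classes never seen during training.
y_test = rng.integers(0, n_unseen, size=500)
X_test = S_unseen[y_test] @ true_map + 0.1 * rng.normal(size=(500, d_img))

# Project test images into semantic space and pick the nearest unseen-class embedding.
proj = X_test @ V
dists = np.linalg.norm(proj[:, None, :] - S_unseen[None, :, :], axis=2)
pred = np.argmin(dists, axis=1)
print("zero-shot accuracy:", np.mean(pred == y_test))
```

Real zero-shot systems replace the synthetic features with deep image representations and the random class embeddings with word vectors or attribute descriptions, which is where the large-scale computation comes in.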

ABOVE: A conceptual diagram of kernel machines from the Journal of Machine Learning Research.