Ram Nevatia is a professor of computer science and electrical engineering and the director of the Institute for Robotics and Intelligent Systems at the USC Viterbi School of Engineering.
Nevatia and his team develop algorithms to enable the automated indexing of video content. As the volume of public and private video data grows rapidly, with YouTube alone reporting more than 100 hours of video uploaded every minute, it becomes increasingly crucial to be able to quickly find specific moments within this expanding body of video data.
Nevatia and his team address the problem of efficiently locating specific portions within long expanses of video through semantic content extraction and content indexing. To this end, Nevatia and his team are developing a finer set of query terms than is currently available. Some of the key factors that affect the content indexing process include the movement of subjects and objects in and out of the picture, the camera’s viewing angle, and the overall illumination.
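The indexing idea described above can be pictured as an inverted index that maps extracted semantic labels to the video moments where they occur, so a fine-grained query can jump straight to matching portions of long videos. The sketch below is purely illustrative; the class, labels, and timestamps are invented for this example and do not reflect the team's actual system.

```python
from collections import defaultdict

# Toy semantic index: each extracted content label points to the
# (video_id, second) moments where it was detected. The labels and
# timestamps here are hypothetical examples, not a real schema.
class VideoIndex:
    def __init__(self):
        self._index = defaultdict(list)  # label -> [(video_id, second)]

    def add(self, video_id, second, labels):
        for label in labels:
            self._index[label].append((video_id, second))

    def query(self, *labels):
        # Return only the moments tagged with every requested label.
        sets = [set(self._index[label]) for label in labels]
        return sorted(set.intersection(*sets)) if sets else []

idx = VideoIndex()
idx.add("clip_01", 12, ["person", "enters_frame"])
idx.add("clip_01", 47, ["car"])
idx.add("clip_02", 5, ["person", "enters_frame", "low_light"])

print(idx.query("person", "enters_frame"))
# → [('clip_01', 12), ('clip_02', 5)]
```

A richer query vocabulary simply means more, finer-grained labels feeding such an index, which is why factors like subject movement, viewing angle, and illumination matter: they determine how reliably those labels can be extracted in the first place.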
The methods used by Nevatia and his team combine the two major threads of analysis currently in use in the field. The first is a statistical approach based on relatively simple, local features; it is robust to possible variations but produces only limited descriptions of the subjects, objects, and actions. The second is a structural, spatio-temporal approach; it produces more accurate descriptions but requires a detection system with a heightened sensitivity to motion and a more accurate detection of components. By combining these two approaches, Nevatia hopes to increase both the speed and the accuracy of video search.
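One simple way to combine two such analysis threads is late fusion: each approach scores a candidate event independently, and the scores are blended into a single confidence. The snippet below is a minimal sketch of that idea under assumed inputs; the weighting, function names, and scores are hypothetical and are not the team's published method.

```python
# Illustrative late fusion of the two threads described above: a cheap
# statistical score from local features and a more expensive structural
# (spatio-temporal) score, both assumed to lie in [0, 1].
def fuse_scores(statistical, structural, weight=0.4):
    """Weighted combination of the two detector confidences."""
    return weight * statistical + (1 - weight) * structural

def detect_action(stat_score, struct_score, threshold=0.5):
    # Declare a detection when the fused confidence clears a threshold.
    return fuse_scores(stat_score, struct_score) >= threshold

print(detect_action(0.9, 0.6))  # strong statistical cue, moderate structural cue
```

In practice the fast statistical pass can also act as a filter, so that the costlier structural analysis runs only on promising segments, which is one way a combination can improve both speed and accuracy.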
In order to validate the group's algorithms, Nevatia uses HPC resources to process massive quantities of data. He and his team recently participated in the annual TRECVID MED and MER evaluations conducted by the National Institute of Standards and Technology (NIST), where he used HPC resources to process 4,000 hours of video data.