Publications

Multimodal Intent Discovery from Livestream Videos

NAACL 2022 Findings

Publication date: July 11, 2022

Adyasha Maharana, Quan Hung Tran, David Seunghyun Yoon, Franck Dernoncourt, Trung Bui, Walter Chang, Mohit Bansal

Individuals, educational institutions, and businesses are prolific at generating instructional video content such as “how-to” and tutorial guides. While significant progress has been made in basic video understanding tasks, identifying procedural intent within these instructional videos remains a challenging and largely unexplored task, yet one that is essential to video summarization, search, and recommendation. This paper introduces the problem of instructional video intent identification and extraction. We construct and present a new multimodal dataset of instructional videos containing both detailed and abstract procedural intents, enabling the training and evaluation of joint video and text understanding models. We then introduce a multimodal cascaded cross-attention model that efficiently combines the weaker, noisier video signal with the more discriminative text signal. Our experiments show that the proposed model yields significant gains over strong baselines, including large-scale pretrained multimodal models. Our analysis further shows that the task benefits from both spatial and motion features extracted from videos, and provides insight into how the video signal is preferentially used for intent discovery. We also show that current models struggle to comprehend the nature of abstract intents, revealing important gaps in multimodal understanding and paving the way for future work.
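As a rough illustration of the fusion idea described above, the sketch below shows one common way to realize cascaded cross-attention: text-token queries attend to video keys and values over successive residual stages, so the noisier video signal only refines the stronger text representation. This is a minimal PyTorch sketch under stated assumptions; the class name, dimensions, stage count, and fusion order are hypothetical choices for illustration, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class CascadedCrossAttention(nn.Module):
        """Illustrative cascade: text queries attend to video keys/values."""

        def __init__(self, dim=768, heads=8, stages=2):
            super().__init__()
            # One cross-attention block per cascade stage, so video evidence
            # is folded into the text representation gradually.
            self.stages = nn.ModuleList(
                nn.MultiheadAttention(dim, heads, batch_first=True)
                for _ in range(stages)
            )
            self.norms = nn.ModuleList(nn.LayerNorm(dim) for _ in range(stages))

        def forward(self, text_feats, video_feats):
            # text_feats:  (batch, text_len, dim),  e.g. transcript token embeddings
            # video_feats: (batch, video_len, dim), e.g. per-clip spatial/motion features
            x = text_feats
            for attn, norm in zip(self.stages, self.norms):
                fused, _ = attn(query=x, key=video_feats, value=video_feats)
                x = norm(x + fused)  # residual: video signal refines text signal
            return x  # fused representation for downstream intent prediction

    # Example: fuse 32 text tokens with 64 video clip features.
    model = CascadedCrossAttention()
    text = torch.randn(2, 32, 768)
    video = torch.randn(2, 64, 768)
    fused = model(text, video)  # shape (2, 32, 768)

In this layout the text stream acts as the anchor modality: video evidence is incorporated one cross-attention stage at a time rather than concatenated with the text up front, which matches the paper's motivation of treating video as the weaker, noisier signal.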
