COMPOSITIONAL PROMPT TUNING WITH MOTION CUES FOR OPEN-VOCABULARY VIDEO RELATION DETECTION

Abstract

Prompt tuning with large-scale pretrained vision-language models empowers open-vocabulary predictions trained on limited base categories, e.g., object classification and detection. In this paper, we propose compositional prompt tuning with motion cues: an extended prompt tuning paradigm for compositional predictions on video data. In particular, we present Relation Prompt (RePro) for Open-vocabulary Video Visual Relation Detection (Open-VidVRD), where conventional prompt tuning is easily biased to certain subject-object combinations and motion patterns. To this end, RePro addresses the two technical challenges of Open-VidVRD: 1) the prompt tokens should respect the two different semantic roles of subject and object, and 2) the tuning should account for the diverse spatio-temporal motion patterns of the subject-object compositions. Without bells and whistles, our RePro achieves new state-of-the-art performance on two VidVRD benchmarks, not only on the base training object and predicate categories but also on unseen ones. Extensive ablations also demonstrate the effectiveness of the proposed compositional and multi-mode design of prompts.

1. INTRODUCTION



Figure 1: Examples of VidVRD. The relation graphs are w.r.t. the whole video clip. Dashed lines denote unseen new categories in the training data.

Video visual relation detection (VidVRD) aims to detect the visual relationships between object tracklets in videos as <subject, predicate, object> triplets (Shang et al., 2017; Chen et al., 2021; 2023; Gao et al., 2021; 2022), e.g., dog-towards-child shown in Figure 1. Compared to its counterpart in still images (Chen et al., 2019; Li et al., 2022b;c;d;e), due to the extra temporal axis, there are usually multiple relationships with different temporal scales, and a subject-object pair can have several predicates with ambiguous boundaries. For example, as shown in Figure 1, the action feed of child to dog co-occurs with several other predicates (e.g., away, towards). This characteristic makes VidVRD exhibit more plentiful and diverse relations between objects than its image counterpart. As a result, it is impractical to collect sufficient annotations for all categories, and to make VidVRD practical, a model trained on limited annotations must generalize to object and predicate classes unseen in the training data.
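The triplet formulation above can be made concrete with a minimal data-structure sketch: each relation is a <subject, predicate, object> triplet grounded on a temporal span, and one subject-object pair may carry several predicates with overlapping spans. All names below (class and field names, the frame spans) are illustrative assumptions, not from the paper or any VidVRD codebase.

```python
from dataclasses import dataclass

@dataclass
class RelationInstance:
    """A hypothetical VidVRD relation triplet grounded on a frame span."""
    subject: str           # subject tracklet category, e.g. "dog"
    predicate: str         # relation category, e.g. "towards"
    obj: str               # object tracklet category, e.g. "child"
    span: tuple            # (start_frame, end_frame) of the relation

def overlapping(a: RelationInstance, b: RelationInstance) -> bool:
    """Two relations co-occur if their frame spans intersect."""
    return a.span[0] < b.span[1] and b.span[0] < a.span[1]

# The same subject-object pair can carry multiple predicates whose
# temporal extents differ and overlap, as in Figure 1:
feed = RelationInstance("child", "feed", "dog", (30, 90))
towards = RelationInstance("dog", "towards", "child", (10, 60))

print(overlapping(feed, towards))  # prints True: the relations co-occur
```

This co-occurrence of predicates over shared tracklets is exactly what makes the predicate label space compositional rather than a flat classification target.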


