ON UNI-MODAL FEATURE LEARNING IN SUPERVISED MULTI-MODAL LEARNING

Abstract

We abstract the features of multi-modal data into 1) uni-modal features, which can be learned from uni-modal training, and 2) paired features, which can only be learned from cross-modal interaction. Multi-modal joint training is expected to benefit from cross-modal interaction on the basis of ensuring uni-modal feature learning. However, recent late-fusion training approaches still suffer from insufficient learning of uni-modal features on each modality, and we prove that this phenomenon hurts the model's generalization ability. Given a multi-modal task, we propose to choose a targeted late-fusion learning method, either Uni-Modal Ensemble (UME) or the proposed Uni-Modal Teacher (UMT), according to the distribution of uni-modal and paired features. We demonstrate that, under a simple guiding strategy, we can achieve results comparable to other complex late-fusion or intermediate-fusion methods on multi-modal datasets, including VGG-Sound, Kinetics-400, UCF101, and ModelNet40.



According to how the features of multi-modal data can be learned, we abstract them into two categories: (1) uni-modal features, which can be learned from uni-modal training, and (2) paired features, which can only be learned from cross-modal interaction. In this paper, we focus on multi-modal tasks where uni-modal priors are meaningful, i.e., tasks where predictions can be made according to one modality alone (Kay et al., 2017; Chen et al., 2020b). Ideally, we hope that multi-modal joint training can learn paired features through cross-modal interaction while ensuring that enough uni-modal features are learned.
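To make the late-fusion setting concrete, the following is a minimal sketch (plain Python, function names hypothetical) of a Uni-Modal Ensemble: the class probabilities of two independently trained uni-modal classifiers are averaged at prediction time, so only uni-modal features contribute and no paired features are learned, since the modalities never interact during training.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def uni_modal_ensemble(audio_logits, visual_logits):
    # Uni-Modal Ensemble (UME): average the class probabilities of two
    # independently trained uni-modal heads. Each head sees only its
    # own modality, so cross-modal (paired) features play no role.
    p_audio = softmax(audio_logits)
    p_visual = softmax(visual_logits)
    return [0.5 * (a + v) for a, v in zip(p_audio, p_visual)]

# Toy example with 3 classes: each head produces its own logits.
probs = uni_modal_ensemble([2.0, 0.5, 0.1], [0.2, 1.8, 0.3])
prediction = max(range(len(probs)), key=probs.__getitem__)
```

In contrast, a joint late-fusion model would backpropagate through a shared classification loss on the fused representation, which opens the door to cross-modal interaction but, as Figure 1 illustrates, also risks under-training the harder modality.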



Figure 1: Overview of Modality Laziness. Although multi-modal joint training provides the opportunity for cross-modal interaction to learn paired features, the model easily saturates and ignores the uni-modal features that are hard to learn but also important for generalization.

