LPT: LONG-TAILED PROMPT TUNING FOR IMAGE CLASSIFICATION

Abstract

For long-tailed classification tasks, most works pretrain a big model on a large-scale (often unlabeled) dataset and then fine-tune the whole pretrained model to adapt it to the long-tailed data. Though promising, fine-tuning the whole pretrained model suffers from high computational cost, the need to deploy a separate model per task, and weakened generalization caused by overfitting to certain features of the long-tailed data. To alleviate these issues, we propose an effective Long-tailed Prompt Tuning (LPT) method for long-tailed classification. LPT introduces several trainable prompts into a frozen pretrained model to adapt it to long-tailed data. For better effectiveness, we divide the prompts into two groups: 1) a shared prompt for the whole long-tailed dataset, which learns general features and adapts the pretrained model to the target long-tailed domain; and 2) group-specific prompts, which gather features shared by groups of similar samples and endow the pretrained model with fine-grained discrimination ability. We then design a two-phase training paradigm to learn these prompts. In the first phase, we train the shared prompt via conventional supervised prompt tuning to adapt the pretrained model to the desired long-tailed domain. In the second phase, we use the learned shared prompt as a query to select, from the group-specific prompt set, a small set of prompts best matched to a group of similar samples, so as to mine the common features of these samples; we then optimize these prompts with a dual sampling strategy and the asymmetric Gaussian Clouded Logit loss. By fine-tuning only a few prompts while keeping the pretrained model fixed, LPT reduces both training cost and deployment cost (only a few prompts need to be stored), and enjoys the strong generalization ability of the pretrained model.
Experiments on various long-tailed benchmarks show that, with only ∼1.1% extra trainable parameters, LPT achieves comparable or higher performance than previous methods that fine-tune the whole model, and is more robust to domain shift.
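The phase-two selection described above — using the shared-prompt query to pick the best-matched group-specific prompts for a sample — can be sketched as below. This is an illustrative NumPy sketch only: the function names, the cosine-similarity matching rule, and all dimensions are our assumptions, not the authors' implementation.

```python
import numpy as np

def select_group_prompts(query, prompt_keys, k=2):
    """Pick the k group-specific prompts whose keys best match the query.

    query:       (d,) feature obtained with the shared prompt attached.
    prompt_keys: (m, d) one learnable key per group-specific prompt.
    Returns indices of the top-k prompts by cosine similarity.
    """
    q = query / np.linalg.norm(query)
    keys = prompt_keys / np.linalg.norm(prompt_keys, axis=1, keepdims=True)
    sims = keys @ q                   # cosine similarity per prompt key
    return np.argsort(-sims)[:k]      # best-matched prompt indices

def assemble_input(patch_tokens, shared_prompt, group_prompts, chosen):
    """Prepend the shared prompt and the selected group-specific prompts
    to the patch token sequence, as in prompt tuning of a frozen ViT."""
    picked = group_prompts[chosen].reshape(-1, patch_tokens.shape[1])
    return np.concatenate([shared_prompt, picked, patch_tokens], axis=0)
```

Only the prompts (and their keys) would be trained in this scheme; the backbone producing `query` and `patch_tokens` stays frozen, which is what keeps the trainable-parameter count small.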

1. INTRODUCTION

Learning from long-tailed data (Cui et al., 2019; Kang et al., 2020; Zhang et al., 2021b) is very challenging in the deep learning era, since networks often overfit to the majority classes while ignoring the minority classes, owing to the overwhelming number of training samples in the majority classes. To eliminate this negative effect, previous methods focus on three aspects: 1) resampling the long-tailed data distribution (Kang et al., 2020; Li et al., 2022; 2021a; Ren et al., 2020) to balance the classes within each minibatch; 2) re-weighting the training loss (Cui et al., 2019; Li et al., 2022; Menon et al., 2021) to give heavier weights to minority classes; and 3) specially designed decoupled training (Kang et al., 2020), knowledge distillation (Li et al., 2021b), or ensemble learning (Zhou et al., 2020; Wang et al., 2020). Although these methods alleviate the negative effects of long-tailed learning to some extent and achieve better overall performance, they generally need to train both feature extractors and linear classifiers from scratch or from models pretrained on large-scale datasets, e.g. ImageNet (Deng et al., 2009), and thus suffer from three issues. Firstly, to adapt to long-tailed data, this whole-model fine-tuning incurs a much higher extra training cost. Secondly, fine-tuning the whole model also impairs the generalization ability of the pretrained model, since a model trained on a large-scale dataset sees abundant data and enjoys strong discriminative ability for various kinds of features.
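As an illustration of the re-weighting line of work mentioned above, the class-balanced loss of Cui et al. (2019) weights each class by the inverse of its "effective number" of samples, w_c ∝ (1 − β)/(1 − β^{n_c}). The NumPy sketch below (function names ours) shows such weights driving a weighted cross-entropy; it is a simplified illustration, not any paper's reference implementation.

```python
import numpy as np

def class_balanced_weights(counts, beta=0.999):
    """Per-class weights from the effective number of samples
    (Cui et al., 2019): w_c ∝ (1 - beta) / (1 - beta**n_c),
    normalized so the weights sum to the number of classes."""
    counts = np.asarray(counts, dtype=np.float64)
    eff_num = (1.0 - np.power(beta, counts)) / (1.0 - beta)
    w = 1.0 / eff_num                      # rarer classes get larger weights
    return w * len(counts) / w.sum()

def weighted_cross_entropy(logits, labels, weights):
    """Softmax cross-entropy with per-class weights, so minority-class
    samples contribute more to the loss."""
    z = logits - logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    per_sample = -log_probs[np.arange(len(labels)), labels]
    return float((weights[labels] * per_sample).mean())
```

For a two-class dataset with counts [1000, 10], the minority class receives the larger weight, counteracting its scarcity in each minibatch.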

Code is publicly available at https://github.com/DongSky/LPT.

