MEDICAL IMAGE UNDERSTANDING WITH PRE-TRAINED VISION LANGUAGE MODELS: A COMPREHENSIVE STUDY

Abstract

Large-scale pre-trained vision-language models (VLMs) have shown remarkable domain transfer capability on natural images. However, it remains unknown whether this capability also extends to the medical image domain. This paper thoroughly studies the knowledge transferability of pre-trained VLMs to the medical domain, where we show that well-designed medical prompts are the key to eliciting knowledge from pre-trained VLMs. We demonstrate that by prompting with expressive attributes that are shared between domains, the VLM can carry knowledge across domains and improve its generalization. This mechanism empowers VLMs to recognize novel objects with few or even no image samples. Furthermore, to avoid the laborious manual designing process, we develop three approaches for the automatic generation of medical prompts, which can inject expert-level medical knowledge and image-specific information into the prompts for fine-grained grounding. We conduct extensive experiments on thirteen medical datasets across various modalities, showing that our well-designed prompts greatly improve zero-shot performance compared to the default prompts, and that our fine-tuned models surpass supervised models by a significant margin.

1. INTRODUCTION

There may not exist another domain like medical imaging, which demands a high level of expert knowledge while acquiring expert-labeled data is quite expensive. In fact, the limited amount of well-labeled data is one of the factors that deters the medical image domain from moving toward the era of large-scale pre-trained models, and transfer learning therefore becomes a natural choice. Nevertheless, as argued in (Niu et al., 2021), the mismatch between domains may compromise the capability of pre-trained models to transfer from one domain to another (Raghu et al., 2019). Unfortunately, this mismatch also exists between the medical and natural image domains. Therefore, finding a data-efficient approach with superior domain transfer performance is essential for advancing medical image understanding.

Though pre-trained vision-language models (VLMs) have shown much success in domain transfer tasks, it is not known whether the knowledge learned from natural image-text pairs by large pre-trained VLMs can benefit the understanding of medical images. As pointed out by (Shen et al., 2022), large-scale VLMs perform well in recognizing common objects but may struggle with visual concepts that rarely appear in their pre-training data. This observation motivates us to seek an even stronger approach to bridge the domain gap.

In VLMs such as GLIP (Li et al., 2022), X-VLM (Zeng et al., 2021), and VinVL (Zhang et al., 2021), prompt learning also plays an essential role in enhancing the model's generalization. Instead of simply aligning text and image pairs, GLIP aims to ground image regions with the help of text prompts, and shows that prompts with expressive attributes can further improve the model's performance in domain transfer. We presume that a prompt integrated with expert-level knowledge and image-specific information could vastly help the domain transfer process because one key challenge in
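To make the idea of attribute-rich prompting concrete, the contrast between a default prompt and an expressive one can be sketched as a simple template fill. This is a minimal illustration, not the prompt-generation method of this paper; the templates, class name, and attribute list below are hypothetical placeholders.

```python
# Sketch: default vs. expressive (attribute-rich) text prompts for
# zero-shot transfer. Templates and attributes are illustrative only.

DEFAULT_TEMPLATE = "a photo of a {name}."
EXPRESSIVE_TEMPLATE = "a photo of a {name}, which is {attributes}."

def build_prompt(name, attributes=None):
    """Return a default prompt, or an expressive one when attributes are given.

    Expressive attributes (shape, color, texture) are shared between the
    natural and medical domains, so they can help a VLM carry knowledge
    across the domain gap.
    """
    if not attributes:
        return DEFAULT_TEMPLATE.format(name=name)
    return EXPRESSIVE_TEMPLATE.format(name=name, attributes=", ".join(attributes))

if __name__ == "__main__":
    # Default prompt: relies entirely on the class name.
    print(build_prompt("melanoma"))
    # Expressive prompt: grounds the class in visual attributes.
    print(build_prompt("melanoma",
                       ["asymmetric", "irregularly bordered", "darkly pigmented"]))
```

Both prompts would then be encoded by the VLM's text tower and matched against image features; the expressive variant gives the model domain-shared visual cues to ground on, rather than a rare class name alone.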

