CONTEXTUALIZED GENERATIVE RETRIEVAL

Abstract

The text retrieval task is mainly performed in two ways: the bi-encoder approach and the generative approach. The bi-encoder approach maps document and query embeddings to a common vector space and performs a nearest neighbor search. It consistently shows high performance and efficiency across domains but suffers from an embedding space bottleneck, as the query and document interact only in L2 or inner product space. The generative retrieval model retrieves by generating a target sequence and overcomes the embedding space bottleneck by interacting in the parametric space. However, it fails to retrieve information it has not seen during training, as it depends solely on the information encoded in its own model parameters. To leverage the advantages of both approaches, we propose the Contextualized Generative Retrieval model, which uses contextualized embeddings (output embeddings of a language model encoder) as vocab embeddings at the decoding step of generative retrieval. The model uses information encoded in both the non-parametric space of contextualized token embeddings and the parametric space of the generative retrieval model. Our approach of generative retrieval with contextualized vocab embeddings outperforms generative retrieval with only vanilla vocab embeddings on the document retrieval task: on average, 6% higher R-precision on KILT (NQ, TQA) and 18% (25%) higher Hits@1 (Hits@10) on NQ-320k, suggesting the benefits of using contextualized embeddings in generative retrieval models.1

1. INTRODUCTION

Text retrieval is often formulated as finding the most relevant items from a large corpus given an input query. The bi-encoder approach, which uses an encoder to map the documents and the query to a common vector space and performs a nearest neighbor search, has been common practice in text retrieval tasks (Karpukhin et al., 2020; Wu et al., 2020; Ni et al., 2021). Despite its high performance and popularity, it has an embedding space bottleneck (Luan et al., 2021; Lee et al., 2022; Cao et al., 2021): performance degrades as document length increases due to the limited expressiveness of fixed-size document embeddings, and fine-grained interaction between the query and the document is lost because they interact only in L2 or inner product space. The bi-encoder approach also requires large storage space to save all document embeddings. A recently proposed alternative is the generative retrieval model (Cao et al., 2021; Tay et al., 2022; Bevilacqua et al., 2022; Lee et al., 2022), which retrieves the most relevant item by generating it token by token, where the item is an identifier of the target sequence or the sequence itself (e.g., title, passage, document ID). These models achieve high performance with a low storage footprint by overcoming the embedding space bottleneck, interacting in the parametric space of the language model rather than just in the inner product space. However, because existing generative retrieval models rely solely on the information encoded in their own parameters, they cannot retrieve the correct target sequence if they have not seen such information during training.
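The two scoring schemes above can be contrasted with a minimal NumPy sketch (all embedding values below are hypothetical; a real system would produce them with learned encoders): the bi-encoder ranks documents by a single inner product between fixed-size embeddings, while a generative retriever with contextualized vocab embeddings scores the next token against encoder output embeddings, so the same surface token can receive a different score in each context it occurs in.

```python
import numpy as np

# --- Bi-encoder retrieval: one fixed-size embedding per document. ---
# Hypothetical 2-d embeddings; a real bi-encoder (e.g., DPR) learns these.
doc_embs = np.array([
    [0.9, 0.1],   # doc 0
    [0.1, 0.9],   # doc 1
    [0.7, 0.7],   # doc 2
])
query_emb = np.array([1.0, 0.0])

scores = doc_embs @ query_emb          # interaction in inner product space
ranking = np.argsort(-scores)          # exhaustive nearest neighbor search
print(ranking.tolist())                # doc 0 ranks first

# --- Generative retrieval with contextualized vocab embeddings. ---
# Next-token logits are inner products between the decoder hidden state and
# a vocab embedding matrix. With contextualized embeddings, the "vocab" rows
# are encoder outputs, so one surface token ("bank") owns several rows.
hidden = np.array([1.0, 0.2])          # decoder state at this decoding step
contextual_vocab = np.array([
    [0.2, 0.9],   # "river"
    [0.1, 0.8],   # "bank"  (as in "river bank")
    [0.9, 0.2],   # "bank"  (as in "bank account")
    [0.9, 0.1],   # "money"
])
logits = contextual_vocab @ hidden
print(int(np.argmax(logits)))          # the "bank account" row scores highest
```

The sketch only illustrates the scoring geometry; it omits the autoregressive loop, constrained decoding, and the training objective.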



1 We will make our code publicly available.

