CONTEXTUALIZED GENERATIVE RETRIEVAL

Abstract

The text retrieval task is mainly performed in two ways: the bi-encoder approach and the generative approach. The bi-encoder approach maps documents and queries to a common vector space and performs a nearest-neighbor search. It consistently shows high performance and efficiency across different domains but has an embedding space bottleneck, as the query and documents interact only in L2 or inner-product space. The generative retrieval model retrieves by generating a target sequence and overcomes the embedding space bottleneck by interacting in the parametric space. However, it fails to retrieve information it has not seen during the training process, as it depends solely on the information encoded in its own model parameters. To leverage the advantages of both approaches, we propose the Contextualized Generative Retrieval model, which uses contextualized embeddings (the output embeddings of a language model encoder) as vocab embeddings at the decoding step of generative retrieval. The model uses information encoded both in the non-parametric space of contextualized token embeddings and in the parametric space of the generative retrieval model. Our approach of generative retrieval with contextualized vocab embeddings outperforms generative retrieval with only vanilla vocab embeddings in the document retrieval task: on average, 6% higher R-precision on KILT (NQ, TQA) and 18% (25%) higher Hits@1 (@10) on NQ-320k, suggesting the benefits of using contextualized embeddings in generative retrieval models. 1

1. INTRODUCTION

Text retrieval is often formulated as finding the most relevant items from a large corpus given an input query. The bi-encoder approach, which uses an encoder to map the documents and the query to a common vector space and performs a nearest-neighbor search, has been a common practice in text retrieval tasks (Karpukhin et al., 2020; Wu et al., 2020; Ni et al., 2021). Despite its high performance and popularity, it has an embedding space bottleneck (Luan et al., 2021; Lee et al., 2022; Cao et al., 2021). Performance decreases as document length increases due to the limited expressiveness of fixed-size document embeddings. The approach also misses fine-grained interaction between the query and the document, as they interact only in L2 or inner-product space, and it requires large storage space to save all document embeddings. A recently proposed alternative to the bi-encoder approach is the generative retrieval model (Cao et al., 2021; Tay et al., 2022; Bevilacqua et al., 2022; Lee et al., 2022), which retrieves the most relevant item by generating it token-by-token, where the item is the identifier of the target sequence or the sequence itself (e.g., title, passage, document ID). These models show high performance with a low storage footprint by overcoming the embedding space bottleneck: they interact in the parametric space of the language model rather than just in the inner-product space. However, as existing generative retrieval models rely solely on the information encoded in their own parameters, they cannot retrieve the correct target sequence if they have not seen such information during the training process.
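The bi-encoder interaction described above can be sketched in a few lines. This is a minimal illustration (with random vectors standing in for encoder outputs, not any real encoder): the only query-document interaction is a single inner product per document, which is the embedding space bottleneck the paper refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed-size embeddings: 1000 documents and one query,
# each mapped by an encoder to a shared 128-dimensional vector space.
doc_embs = rng.standard_normal((1000, 128))
query_emb = rng.standard_normal(128)

# All query-document interaction collapses to one inner product per
# document, regardless of document length.
scores = doc_embs @ query_emb

# Exact nearest-neighbor search: top-5 documents by score.
top5 = np.argsort(-scores)[:5]
```

In practice the search is usually approximate (e.g., with an ANN index), since exact scoring over millions of documents is costly; the bottleneck argument is unchanged.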
To this end, we propose the Contextualized Generative Retrieval model (CGR), a retrieval model that overcomes the aforementioned limitations of existing generative retrieval models by leveraging contextualized vocab embeddings (the output embeddings of a language model encoder) to make use of non-parametric information from the context surrounding the vocab tokens. It uses not only the parametric space of the model, as in generative retrieval models, but also the non-parametric space of contextualized target embeddings (external memory), as in bi-encoder models. As shown in Figure 1, the model has two submodules: (1) an EMBedding model (EMB), an encoder model that outputs contextualized embeddings, and (2) a RETrieval model (RET), an encoder-decoder model that retrieves a target sequence given an input query. The model first constructs the contextualized embedding matrix from the output embeddings of EMB and uses the matrix as the decoder vocab embeddings when training RET. By utilizing the contextualized embedding matrix rather than the vanilla embedding matrix while generating a target sequence, RET uses both the information encoded in its own parameters, as in existing generative retrieval models, and the information encoded in the contextualized embeddings. Moreover, as RET uses the contextualized embeddings during both training and inference, it is optimized to utilize the information encoded in them. We show the importance of using the external memory (non-parametric space) of contextualized target embeddings in generative retrieval models by comparing the performance of CGR and GENRE (Cao et al., 2021), a generative retrieval model which operates only in the parametric space. CGR shows an average 6% improvement on Natural Questions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017) in KILT (Petroni et al., 2021) and 18% (25%) higher performance in Hits@1 (@10) on NQ-320k.
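The core mechanism above, replacing the decoder's static vocab embedding table with encoder output embeddings when scoring the next token, can be sketched as follows. All names here are ours for illustration, not the paper's API, and random tensors stand in for the actual EMB and RET models:

```python
import torch

torch.manual_seed(0)
hidden, vocab_size = 64, 100

# Vanilla generative retrieval: a static, learned vocab embedding table.
vanilla_vocab = torch.randn(vocab_size, hidden)

# CGR: each row is an output embedding of the encoder (EMB) for a token
# *in context*, so the same surface token can map to different rows when
# it appears in different documents (non-parametric external memory).
contextualized_vocab = torch.randn(vocab_size, hidden)

def decode_step(decoder_hidden, embedding_matrix):
    # Next-token scores: inner product between the decoder's hidden state
    # and each row of the (contextualized) vocab embedding matrix.
    return decoder_hidden @ embedding_matrix.T

h = torch.randn(hidden)  # stand-in for RET's decoder hidden state
vanilla_logits = decode_step(h, vanilla_vocab)
cgr_logits = decode_step(h, contextualized_vocab)
```

The decoding computation is identical in both cases; only the source of the embedding matrix changes, which is what lets CGR inject non-parametric corpus information into an otherwise standard generative retriever.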
We also compare the results with different baselines for a comprehensive understanding of the model performance. The main contributions of our paper are as follows:

• We present Contextualized Generative Retrieval (CGR), a generative retrieval model which uses the contextualized embedding matrix while generating a target sequence. It shows an average of 6% and 18% (25%) higher performance in KILT (NQ, TQA) R-precision and NQ-320k Hits@1 (@10), respectively, compared to GENRE in the same setting.

• We show that using contrastive learning as intermediate training further increases the performance of the contextualized generative retrieval model by a large margin.

• We perform extensive ablation studies and analyses over several variants of contextualized generative retrieval models for a comprehensive understanding of how to use contextualized embeddings and why using them is better than using vanilla vocab embeddings.
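The "contrastive learning as intermediate training" in the second contribution is commonly realized as an in-batch InfoNCE objective. The sketch below shows that standard formulation under our own assumptions (temperature value, in-batch negatives, random vectors in place of encoder outputs); the paper's exact setup may differ:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
batch, dim = 8, 32

# Stand-ins for encoder outputs: query embeddings and their aligned
# positive target embeddings, L2-normalized so scores are cosine similarities.
query_embs = F.normalize(torch.randn(batch, dim), dim=-1)
target_embs = F.normalize(torch.randn(batch, dim), dim=-1)

temperature = 0.05  # assumed hyperparameter, not taken from the paper
logits = query_embs @ target_embs.T / temperature  # (batch, batch) similarities

# The i-th query matches the i-th target; every other in-batch target
# serves as a negative.
labels = torch.arange(batch)
loss = F.cross_entropy(logits, labels)
```

Minimizing this loss pulls each query toward its target embedding and pushes it away from the other targets in the batch, which is the usual way an encoder is warmed up before being used to produce retrieval embeddings.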



1 We will make our code publicly available.



2. RELATED WORK

Generative Retrieval Existing generative retrieval models retrieve relevant items by generating either the identifiers or the entire sequences of the items. Cao et al. (2021) propose GENRE (Generative ENtity REtrieval), which retrieves a document by generating its title with constrained beam search. Tay et al. (2022) propose DSI (Differentiable Search Index), which assigns a unique ID to each item in the corpus and trains the model to encode all information about the document and its ID in the model parameters. During the inference step, DSI generates the ID of the most relevant document. Wang et al. (2022) propose NCI (Neural Corpus Indexer), which also retrieves by generating the document ID as in DSI, but improves performance with query generation and a prefix-aware weight-adaptive decoder. Bevilacqua et al. (2022) propose SEAL (Search Engines with Autoregressive LMs), which can retrieve any span from any position in the corpus by using a compressed full-text substring index (FM-Index).

In this work, we propose Contextualized Generative Retrieval, which generates the target sequence by utilizing the contextualized embedding matrix rather than the vanilla vocab embedding matrix as in the aforementioned generative retrieval models. The model therefore utilizes both the parametric space of generative retrieval and the non-parametric space of contextualized token embeddings. To the best of our knowledge, we are the first to utilize contextualized token embeddings in generative retrieval models.
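The constrained beam search used by GENRE-style retrievers is typically implemented with a prefix trie over valid identifiers: at each decoding step, only tokens that extend some identifier in the corpus are allowed. A minimal sketch, with hypothetical tokenized titles in place of a real tokenizer and corpus:

```python
def build_trie(sequences):
    """Build a nested-dict prefix trie over tokenized identifiers."""
    trie = {}
    for seq in sequences:
        node = trie
        for tok in seq:
            node = node.setdefault(tok, {})
        node["<eos>"] = {}  # mark a complete identifier
    return trie

def allowed_tokens(trie, prefix):
    """Tokens the decoder may emit next, given what it has generated so far."""
    node = trie
    for tok in prefix:
        if tok not in node:
            return []  # prefix does not match any identifier
        node = node[tok]
    return sorted(node)

# Hypothetical tokenized document titles.
titles = [["new", "york"], ["new", "jersey"], ["london"]]
trie = build_trie(titles)

allowed_tokens(trie, [])        # ['london', 'new']
allowed_tokens(trie, ["new"])   # ['jersey', 'york']
```

During beam search, the decoder's next-token distribution is masked to this allowed set, guaranteeing every generated sequence is a valid identifier in the corpus.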

