arXiv:2410.07722

DyVo: Dynamic Vocabularies for Learned Sparse Retrieval with Entities

Published on Oct 10, 2024 · Submitted by andrewyates on Oct 17, 2024

Abstract

Learned Sparse Retrieval (LSR) models use vocabularies from pre-trained transformers, which often split entities into nonsensical fragments. Splitting entities can reduce retrieval accuracy and limit the model's ability to incorporate up-to-date world knowledge not included in the training data. In this work, we enhance the LSR vocabulary with Wikipedia concepts and entities, enabling the model to resolve ambiguities more effectively and stay current with evolving knowledge. Central to our approach is a Dynamic Vocabulary (DyVo) head, which leverages existing entity embeddings and an entity retrieval component that identifies entities relevant to a query or document. We use the DyVo head to generate entity weights, which are then merged with word piece weights to create joint representations for efficient indexing and retrieval using an inverted index. In experiments across three entity-rich document ranking datasets, the resulting DyVo model substantially outperforms state-of-the-art baselines.
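
As a rough illustration of the mechanism the abstract describes, the Python sketch below scores candidate entities against contextual token vectors and merges the resulting entity weights with word piece weights into one sparse vector. The function names, the ReLU-plus-max-pooling scoring rule, and the `ENT:` key prefix are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of a DyVo-style head, assuming max-pooled dot-product
# scoring between token vectors and candidate entity embeddings. All names
# here (dyvo_entity_weights, joint_representation, the "ENT:" prefix) are
# hypothetical; the paper's actual head may differ.
import numpy as np

def dyvo_entity_weights(hidden_states: np.ndarray,
                        entity_embeddings: np.ndarray) -> np.ndarray:
    """Score each candidate entity against every token representation.

    hidden_states:     (seq_len, dim) contextual token vectors
    entity_embeddings: (n_entities, dim) embeddings of retrieved candidates
    returns:           (n_entities,) non-negative entity weights
    """
    scores = hidden_states @ entity_embeddings.T   # (seq_len, n_entities)
    return np.maximum(scores, 0.0).max(axis=0)     # ReLU, max-pool over tokens

def joint_representation(wordpiece_weights: dict[str, float],
                         entity_ids: list[str],
                         entity_weights: np.ndarray) -> dict[str, float]:
    """Merge word piece and entity weights into one sparse vector. Entity
    identifiers simply extend the vocabulary, so the result can be stored
    in an ordinary inverted index."""
    joint = dict(wordpiece_weights)
    for eid, w in zip(entity_ids, entity_weights):
        if w > 0.0:
            joint[f"ENT:{eid}"] = float(w)
    return joint
```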

Community

From the paper author and submitter:

We introduce DyVo, an approach for extending the vocabulary of learned sparse retrieval models by leveraging external embeddings. We use DyVo to enrich sparse representations of queries and documents with entities and concepts from Wikipedia, leading to consistent improvements on three entity-rich datasets. While we focus on entities in this work, DyVo is more general. We see dynamic vocabularies as a path towards creating richer sparse representations by adding domain-relevant tokens while maintaining their transparency advantage over dense retrieval.
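
To make the indexing point concrete: because a joint representation is just a sparse mapping from vocabulary entries to weights, retrieval reduces to a term-at-a-time sparse dot product over a standard inverted index. The toy `InvertedIndex` class, the example documents, and the Wikidata-style `ENT:Q42` key below are hypothetical, not from the paper.

```python
# Toy inverted index over a joint wordpiece + entity vocabulary.
from collections import defaultdict

class InvertedIndex:
    def __init__(self):
        self.postings = defaultdict(list)   # term -> [(doc_id, weight), ...]

    def add(self, doc_id: str, sparse_vec: dict[str, float]) -> None:
        for term, weight in sparse_vec.items():
            self.postings[term].append((doc_id, weight))

    def search(self, query_vec: dict[str, float], k: int = 10):
        scores = defaultdict(float)
        for term, q_w in query_vec.items():          # only shared terms score
            for doc_id, d_w in self.postings[term]:
                scores[doc_id] += q_w * d_w          # sparse dot product
        return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

index = InvertedIndex()
index.add("d1", {"douglas": 1.2, "adams": 0.9, "ENT:Q42": 1.5})  # entity match
index.add("d2", {"adams": 1.1})                                  # lexical only
print(index.search({"adams": 1.0, "ENT:Q42": 1.0}))
# d1 scores ~2.4 (0.9 + 1.5), d2 scores 1.1: the entity term disambiguates d1
```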
