SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval
paper available at https://arxiv.org/pdf/2207.02578
code available at https://github.com/microsoft/unilm/tree/master/simlm
Paper abstract
In this paper, we propose SimLM (Similarity matching with Language Model pre-training), a simple yet effective pre-training method for dense passage retrieval.
It employs a simple bottleneck architecture that learns to compress the passage information into a dense vector through self-supervised pre-training.
We use a replaced language modeling objective, which is inspired by ELECTRA,
to improve the sample efficiency and reduce the mismatch of the input distribution between pre-training and fine-tuning.
SimLM only requires access to an unlabeled corpus, and is therefore applicable even when no labeled data or queries are available.
We conduct experiments on several large-scale passage retrieval datasets, and show substantial improvements over strong baselines under various settings.
Remarkably, SimLM even outperforms multi-vector approaches such as ColBERTv2, which incur significantly higher storage costs.
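To make the replaced language modeling objective concrete, here is a toy sketch of how a pre-training input/label pair can be constructed. This is an illustration only, not the paper's implementation: in SimLM (as in ELECTRA) the replacement tokens come from a small generator network, whereas this sketch samples them uniformly from a made-up vocabulary, and the function name and token lists are invented for the example.

```python
import random

def build_rlm_example(tokens, vocab, replace_prob=0.3, seed=0):
    """Toy construction of a replaced-language-modeling example.

    A fraction of tokens is swapped for other vocabulary tokens
    (SimLM would use samples from a generator network instead of a
    uniform draw). The labels are the *original* tokens at every
    position: the shallow decoder must recover them conditioned on
    the encoder's single bottleneck vector, which forces the passage
    information into that dense representation.
    """
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < replace_prob:
            corrupted.append(rng.choice(vocab))  # replaced token
        else:
            corrupted.append(tok)                # kept as-is
        labels.append(tok)  # target is always the original token
    return corrupted, labels

passage = ["dense", "retrieval", "maps", "passages", "to", "vectors"]
vocab = ["dense", "retrieval", "maps", "passages", "to", "vectors",
         "sparse", "index", "query"]
corrupted, labels = build_rlm_example(passage, vocab)
```

Because the decoder sees a corrupted sequence at every position (not just masked slots), the pre-training input distribution is closer to the clean text seen at fine-tuning time, which is the mismatch the abstract refers to.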
Results on MS-MARCO passage ranking task
| Model | dev MRR@10 | dev R@50 | dev R@1k | TREC DL 2019 nDCG@10 | TREC DL 2020 nDCG@10 |
|---|---|---|---|---|---|
| RocketQAv2 | 38.8 | 86.2 | 98.1 | – | – |
| coCondenser | 38.2 | 86.5 | 98.4 | 71.7 | 68.4 |
| ColBERTv2 | 39.7 | 86.8 | 98.4 | – | – |
| SimLM (this model) | 41.1 | 87.8 | 98.7 | 71.4 | 69.7 |