MD2PR: A Multi-level Distillation based Dense Passage Retrieval Model
Abstract
Reranker and retriever are two important components in information retrieval. The retriever typically adopts a dual-encoder model, in which queries and documents are separately input into two pre-trained models and the resulting vectors are used for similarity calculation. The reranker often uses a cross-encoder model, in which concatenated query-document pairs are input into a single pre-trained model to obtain word-level similarities. However, the dual-encoder lacks interaction between queries and documents because of its independent encoding, while the cross-encoder requires substantial computational cost for attention calculation, making real-time retrieval difficult. In this paper, we propose MD2PR, a dense retrieval model based on multi-level knowledge distillation, in which the knowledge learned by the cross-encoder is distilled into the dual-encoder at both the sentence level and the word level. Sentence-level distillation improves the dual-encoder's ability to capture the themes and emotions of sentences. Word-level distillation improves its analysis of word semantics and relationships. As a result, the dual-encoder can be used independently for subsequent encoding and retrieval, avoiding the significant computational cost of involving the cross-encoder. Furthermore, we propose a dynamic false-negative filtering method that updates its threshold over multiple training iterations to identify false negatives effectively, yielding a more comprehensive semantic representation space. Experimental results on two standard datasets show that MD2PR outperforms 14 baseline models in terms of MRR and Recall.
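As background, the architectural difference described above can be sketched with a toy example. The encoders below are hypothetical bag-of-words stand-ins for pre-trained models, not the paper's actual components; they only illustrate why dual-encoder document vectors can be precomputed while a cross-encoder must see each query-document pair jointly.

```python
import math

def toy_encode(text):
    """Hypothetical stand-in for a pre-trained encoder: an
    L2-normalized bag-of-words vector keyed by token."""
    vec = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0.0) + 1.0
    norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
    return {t: v / norm for t, v in vec.items()}

def dual_encoder_score(query, doc):
    # Query and document are encoded independently; only the final
    # vectors interact via a dot product, so document vectors can be
    # precomputed and indexed offline.
    q, d = toy_encode(query), toy_encode(doc)
    return sum(w * d.get(t, 0.0) for t, w in q.items())

def cross_encoder_score(query, doc):
    # The pair is processed jointly; this stand-in encodes the
    # concatenation, mimicking full query-document interaction, which
    # is why a cross-encoder cannot reuse precomputed document vectors.
    joint = toy_encode(query + " " + doc)
    q = toy_encode(query)
    return sum(w * joint.get(t, 0.0) for t, w in q.items())
```

With this toy scorer, a topically related document receives a higher dual-encoder score than an unrelated one, e.g. `dual_encoder_score("dense passage retrieval", "dense retrieval model")` exceeds the score of an off-topic document.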
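The distillation objectives and the dynamic filtering step described in the abstract can be sketched roughly as follows. All function names, loss choices (KL divergence at the sentence level, mean squared error at the word level), and the quantile-based threshold schedule are illustrative assumptions, not the paper's exact formulation.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def kl_divergence(p, q):
    # KL(p || q): how far the student distribution q is from the
    # teacher distribution p over candidate documents.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def sentence_level_distill_loss(teacher_scores, student_scores):
    # Sentence level: align the dual-encoder's ranking distribution
    # over candidates with the cross-encoder's (KL on softmaxed scores).
    return kl_divergence(softmax(teacher_scores), softmax(student_scores))

def word_level_distill_loss(teacher_token_vecs, student_token_vecs):
    # Word level: pull the student's token representations toward the
    # teacher's (mean squared error, one vector per token).
    n = len(teacher_token_vecs)
    return sum(
        sum((t - s) ** 2 for t, s in zip(tv, sv)) / len(tv)
        for tv, sv in zip(teacher_token_vecs, student_token_vecs)
    ) / n

def filter_false_negatives(negatives, teacher_scores, threshold):
    # Negatives the teacher scores above the current threshold are
    # likely unlabeled positives (false negatives) and are dropped
    # from the contrastive loss.
    return [n for n, s in zip(negatives, teacher_scores) if s < threshold]

def update_threshold(teacher_scores, quantile=0.9):
    # Hypothetical dynamic schedule: re-estimate the threshold each
    # iteration as a score quantile, so filtering adapts as the
    # teacher's score distribution shifts during training.
    ranked = sorted(teacher_scores)
    idx = min(int(quantile * len(ranked)), len(ranked) - 1)
    return ranked[idx]
```

In a training loop, `update_threshold` would be called once per iteration over the teacher's scores for the sampled negatives, and `filter_false_negatives` applied before computing the contrastive loss.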
Related Results
A Comprehensive Review of Distillation in the Pharmaceutical Industry
Distillation processes play a pivotal role in the pharmaceutical industry for the purification of active pharmaceutical ingredients (APIs), intermediates, and solvent recovery. Thi...
Steam Distillation Studies For The Kern River Field
Abstract
The interactions of heavy oil and injected steam in the mature steamflood at the Kern River Field have been extensively studied to gain insight into the ...
Establishment and Application of the Multi-Peak Forecasting Model
Abstract
After the development of the oil field, it is an important task to predict the production and the recoverable reserve opportunely by the production data....
Dense Fog Burst Reinforcement over Eastern China
Fog can be hazardous weather. Dense and polluted fog is especially known to impact transportation, air quality, and public health. Low visibilities on fog days thr...
Improving Sentence Retrieval Using Sequence Similarity
Sentence retrieval is an information retrieval technique that aims to find sentences corresponding to an information need. It is used for tasks like question answering (QA) or nove...
New Research Progress in Image Retrieval
Image retrieval is generally divided into two categories: one is text-based Image Retrieval; another is content-based Image Retrieval. Early image retrieval technology is mainly ba...
A New Remote Sensing Image Retrieval Method Based on CNN and YOLO
Retrieving remote sensing images plays a key role in RS fields, which activates researchers to design a highly effective extraction method of image high-level features. How...
Dense gas in a giant molecular filament
Context. Recent surveys of the Galactic plane in the dust continuum and CO emission lines reveal that large (≳50 pc) and massive (≳10⁵ M⊙) filaments, known as giant molecular filame...

