DenserRetriever

MTEB Retrieval Experiments

We run this experiment on a server, which requires the Elasticsearch (ES) and Milvus installations specified here.

MTEB datasets

The MTEB retrieval benchmark consists of 15 datasets. The statistics of each dataset, including its name, corpus size, and train, dev, and test query counts, are listed in the following table.

| Name | #Corpus | #Train Query | #Dev Query | #Test Query |
| --- | ---: | ---: | ---: | ---: |
| ArguAna | 8,674 | 0 | 0 | 1,406 |
| ClimateFEVER | 5,416,593 | 0 | 0 | 1,535 |
| CQADupstack | 457,199 | 0 | 0 | 9,963 |
| DBPedia | 4,635,922 | 0 | 67 | 400 |
| FEVER | 5,416,568 | 109,810 | 6,666 | 6,666 |
| FiQA2018 | 57,638 | 5,500 | 500 | 648 |
| HotpotQA | 5,233,329 | 85,000 | 5,447 | 7,405 |
| MSMARCO | 8,841,823 | 502,939 | 6,980 | 43 |
| NFCorpus | 3,633 | 2,590 | 324 | 323 |
| NQ | 2,681,468 | 0 | 0 | 3,452 |
| QuoraRetrieval | 522,931 | 0 | 5,000 | 10,000 |
| SCIDOCS | 25,657 | 0 | 0 | 1,000 |
| SciFact | 5,183 | 809 | 0 | 300 |
| Touche2020 | 382,545 | 0 | 0 | 49 |
| TRECCOVID | 171,332 | 0 | 0 | 50 |

Train and test xgboost models

For each dataset in MTEB, we trained an xgboost model on the training split and tested on the test split. To speed up the experiments, we used up to 10k queries per dataset in training (max_query_size: 10000 in config_server.yaml). For datasets that have no training data, we used the development data to train. If neither training nor development data exists, we applied 3-fold cross-validation: we randomly split the test data into three folds, trained an xgboost model on two folds, and tested on the third. Repeating this process three times lets the whole test dataset be evaluated.
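As a concrete illustration of this fold logic, the following minimal sketch (not the project's actual code, which lives in experiments/train_and_test.py) shows how a dataset's test queries could be split into three folds so that each query is evaluated exactly once:

```python
# A minimal sketch of the 3-fold protocol described above; the query ids and
# the train/evaluate steps are placeholders for illustration.
import numpy as np
from sklearn.model_selection import KFold

query_ids = np.arange(300)  # stand-in for one dataset's test query ids
kf = KFold(n_splits=3, shuffle=True, random_state=42)
for fold, (train_idx, eval_idx) in enumerate(kf.split(query_ids)):
    # Train an xgboost ranker on two folds, then score the held-out fold.
    print(f"fold {fold}: train on {len(train_idx)} queries, "
          f"evaluate on {len(eval_idx)} queries")
# Across the three folds, every test query is scored exactly once, so
# NDCG@10 can be computed over the full test set.
```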

We fixed the xgboost training settings as follows. Specifically, we used the NDCG ranking objective (rank:ndcg) as the model update objective, a moderate learning rate (eta) of 0.1, a regularization parameter (gamma) of 1.0, a min_child_weight of 0.1, a maximum tree depth of 6, and ndcg@10 as the evaluation metric. We used a fixed number (100) of boosting iterations (num_boost_round), making no attempt to tune the training per dataset.

```python
params = {
    "objective": "rank:ndcg",    # learning-to-rank objective optimizing NDCG
    "eta": 0.1,                  # learning rate
    "gamma": 1.0,                # minimum loss reduction required to split
    "min_child_weight": 0.1,     # minimum hessian weight needed in a child
    "max_depth": 6,              # maximum tree depth
    "eval_metric": "ndcg@10",    # evaluation metric reported during training
}
```
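To make the setup concrete, here is a minimal, self-contained sketch of how these parameters feed into xgboost's learning-to-rank training; the synthetic query groups and features are stand-ins for the real ES/VS/RR signals produced by train_and_test.py:

```python
# Minimal training sketch using the params dict defined above; the data below
# is synthetic and only illustrates the expected DMatrix/group layout.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((200, 3))              # 200 query-document feature rows
y = rng.integers(0, 2, size=200)      # binary relevance labels
dtrain = xgb.DMatrix(X, label=y)
dtrain.set_group([20] * 10)           # 10 queries with 20 candidates each

bst = xgb.train(params, dtrain, num_boost_round=100,
                evals=[(dtrain, "train")], verbose_eval=False)
```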

The source code for the experiment can be found in train_and_test.py. We ran the following command to train 8 xgboost models (ES+VS, ES+RR, VS+RR, ES+VS+RR, ES+VS_n, ES+RR_n, VS+RR_n, and ES+VS+RR_n) using the MSMARCO training data; the definitions of these 8 models can be found at training. The command-line arguments are the config file, dataset name, train split, and test split, respectively. The hosts, users, and passwords for Elasticsearch and Milvus must be configured in the config file experiments/config_server.yaml (a sketch of the config follows below).

poetry run python experiments/train_and_test.py experiments/config_server.yaml mteb/msmarco train test

After training, we can find the models at /home/ubuntu/denser_output_retriever/exp_msmarco/models/xgb_*. Note that the prefix /home/ubuntu/denser_output_retriever/ is defined in the config_server.yaml file:

output_prefix: /home/ubuntu/denser_output_retriever/
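Putting these pieces together, the relevant portion of experiments/config_server.yaml might look like the sketch below. The output_prefix and max_query_size values are stated in this document; the connection fields and their key names are illustrative assumptions, so check the shipped config file for the real schema.

```yaml
# Illustrative sketch only; the elasticsearch/milvus key names are assumptions.
output_prefix: /home/ubuntu/denser_output_retriever/
max_query_size: 10000
elasticsearch:
  host: https://localhost:9200
  user: elastic
  password: YOUR_ES_PASSWORD
milvus:
  host: localhost
  port: 19530
  user: root
  password: YOUR_MILVUS_PASSWORD
```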

In addition to training, this experiment also evaluates the 8 trained models on the MSMARCO test data and reports their NDCG@10 accuracy. We expect an NDCG@10 of 47.23 for the ES+VS+RR_n model.

Test xgboost models

To evaluate a trained model on the 15 MTEB retrieval datasets, we need to specify the model in the config_server.yaml file:

model: PATH_TO_YOUR_TRAINED_MODEL
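For reference, a saved model can also be loaded and inspected directly with xgboost. The sketch below assumes the model was stored via xgboost's native Booster API and that the feature layout matches training; the path and feature count are illustrative.

```python
# Minimal sketch of loading a trained ranker and scoring candidates; the path
# and the 3-column feature layout (e.g. ES/VS/RR scores) are assumptions.
import numpy as np
import xgboost as xgb

bst = xgb.Booster()
bst.load_model("/home/ubuntu/denser_output_retriever/exp_msmarco/models/xgb_es_vs_rr_n")
features = np.random.random((5, 3))           # stand-in candidate features
scores = bst.predict(xgb.DMatrix(features))   # higher score = ranked higher
print(scores)
```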

We can then evaluate on an MTEB dataset (MSMARCO as an example) by running:

poetry run python experiments/test.py

We will get the NDCG@10 score after the evaluation.
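For readers unfamiliar with the metric, the following small function illustrates the standard (linear-gain) NDCG@10 computation over a ranked list of relevance labels; it is not the project's evaluation code.

```python
# Standard linear-gain NDCG@k over a ranked list of relevance labels.
import numpy as np

def ndcg_at_k(relevances, k=10):
    rel = np.asarray(relevances, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, k + 2))  # 1 / log2(rank + 1)
    top = rel[:k]
    dcg = float(np.sum(top * discounts[: len(top)]))
    ideal = np.sort(rel)[::-1][:k]                  # best possible ordering
    idcg = float(np.sum(ideal * discounts[: len(ideal)]))
    return dcg / idcg if idcg > 0 else 0.0

# Documents ranked 1 and 3 are relevant, the rest are not:
print(ndcg_at_k([1, 0, 1, 0, 0, 0, 0, 0, 0, 0]))  # ≈ 0.92
```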

Experiment results

We list the NDCG@10 scores of the different models in the following table. Ref is the reference NDCG@10 of snowflake-arctic-embed-m from the Hugging Face leaderboard, which is consistent with our reported VS accuracy; we use VS rather than Ref as the vector search baseline. The bold numbers mark the highest accuracy per dataset in our experiments. Delta and % are the absolute and relative NDCG@10 gains of the ES+VS+RR_n model over the VS baseline.

| Name | ES | VS | ES+VS/ES+VS_n | ES+RR/ES+RR_n | VS+RR/VS+RR_n | ES+VS+RR/ES+VS+RR_n | Ref | Delta/% |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| ArguAna | 42.93 | 56.49 | 56.68/57.27 | 47.45/48.21 | 56.32/56.44 | 56.81/**57.28** | 56.44 | 0.79/1.39% |
| ClimateFEVER | 18.10 | 39.12 | 39.21/39.01 | 28.20/28.34 | 39.06/38.71 | 39.11/**39.25** | 39.37 | 0.13/0.33% |
| CQADupstack | 25.13 | 42.23 | 42.40/42.51 | 37.68/37.54 | 43.92/44.25 | 43.85/**44.32** | 43.81 | 2.09/4.94% |
| DBPedia | 27.42 | 44.66 | 45.26/44.26 | 47.94/48.26 | 48.62/49.08 | 48.79/**49.13** | 44.73 | 4.47/10.00% |
| FEVER | 72.80 | 88.90 | 89.29/90.05 | 84.38/84.94 | 89.84/90.30 | 90.21/**91.00** | 89.02 | 2.10/2.36% |
| FiQA2018 | 23.89 | 42.29 | 42.57/42.79 | 36.62/36.31 | 43.04/43.09 | 43.19/**43.22** | 42.40 | 0.93/2.19% |
| HotpotQA | 54.94 | 73.65 | 74.74/75.01 | 74.93/75.39 | 77.64/78.07 | 77.95/**78.37** | 73.65 | 4.72/6.40% |
| MSMARCO | 21.84 | 41.77 | 41.65/41.72 | 46.93/47.15 | 47.11/**47.24** | 47.09/47.23 | 41.77 | 5.46/13.07% |
| NFCorpus | 31.40 | 36.74 | 37.37/37.63 | 34.51/35.36 | 37.32/37.31 | **37.70**/37.15 | 36.77 | 0.41/1.11% |
| NQ | 27.21 | 61.33 | 60.51/61.20 | 55.60/55.47 | 61.50/62.24 | 62.27/**62.35** | 62.43 | 1.02/1.66% |
| QuoraRetrieval | 74.23 | 80.73 | 86.64/86.91 | 84.14/84.40 | 87.76/88.10 | 88.39/**88.54** | 87.42 | 7.81/9.67% |
| SCIDOCS | 14.68 | **21.03** | 20.49/20.06 | 16.48/16.48 | 20.51/20.19 | 20.34/20.03 | 21.10 | -1.00/-4.75% |
| SciFact | 58.42 | 73.16 | 73.28/75.08 | 69.08/69.69 | 72.73/73.62 | 73.08/**75.33** | 73.55 | 2.17/2.96% |
| Touche2020 | 29.92 | 32.65 | 31.86/**34.26** | 29.76/29.93 | 30.47/29.30 | 31.51/30.98 | 31.47 | -1.67/-5.11% |
| TRECCOVID | 52.02 | 78.92 | 77.78/79.12 | 75.59/76.95 | 80.34/81.19 | 81.97/**83.01** | 79.65 | 4.09/5.18% |
| Average | 38.32 | 54.24 | 54.64/55.12 | 51.28/51.62 | 55.74/55.94 | 56.15/**56.47** | 54.91 | 2.23/4.11% |

The MTEB experiment results are summarized as follows.

Vector search with the snowflake-arctic-embed-m model significantly boosts the Elasticsearch NDCG@10 baseline from 38.32 to 54.24. Combining Elasticsearch, vector search, and a reranker via xgboost models improves the vector search baseline further: the ES+VS+RR_n model achieves the highest average NDCG@10 of 56.47, surpassing the vector search baseline (NDCG@10 of 54.24) by an absolute gain of 2.23 and a relative gain of 4.11%.

For the datasets that have training data (FEVER, FiQA2018, HotpotQA, MSMARCO, NFCorpus, and SciFact), combining Elasticsearch, vector search, and a reranker via xgboost models is even more beneficial, as the following table shows.

| Name | VS | ES+VS+RR_n | Delta | Delta% |
| --- | ---: | ---: | ---: | ---: |
| FEVER | 88.90 | 91.00 | 2.10 | 2.36 |
| FiQA2018 | 42.29 | 43.22 | 0.93 | 2.19 |
| HotpotQA | 73.65 | 78.37 | 4.72 | 6.40 |
| MSMARCO | 41.77 | 47.23 | 5.46 | 13.07 |
| NFCorpus | 36.74 | 37.15 | 0.41 | 1.11 |
| SciFact | 73.16 | 75.33 | 2.17 | 2.96 |
| Average | 59.41 | 62.05 | 2.63 | 4.68 |

The ES+VS+RR_n model (NDCG@10 of 62.05) improves on the vector search baseline (NDCG@10 of 59.41) by an absolute gain of 2.63 and a relative gain of 4.68% on these six datasets. It is worth noting that on the widely used MSMARCO benchmark, ES+VS+RR_n delivers a significant relative NDCG@10 gain of 13.07% over the vector search baseline.
