Update README.md
README.md
bi-encoder_msmarco_bert-base_german (new) | 0.5300 <br /> 🏆 | 0.7196 <br /> …
[BM25](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html#bm25) | 0.3196 | 0.5377 | 0.5740 | "lexical approach"
**It is important to note that the comparisons also include models based on other transformer architectures.**
For example, [deepset/gbert-base-germandpr-X](https://huggingface.co/deepset/gbert-base-germandpr-ctx_encoder) is, in theory, a more up-to-date approach, yet it is still outperformed here.
A direct comparison based on the same approach can be made with [svalabs/bi-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/bi-electra-ms-marco-german-uncased).
In this case, the model presented here outperforms its predecessor by up to 14 percentage points.
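Unlike the lexical BM25 baseline, a bi-encoder scores each query-passage pair by embedding query and passage independently and ranking passages by cosine similarity. A minimal sketch of that scoring step, using toy vectors in place of real model embeddings (the vectors and labels below are illustrative, not from the evaluation):

```python
import numpy as np

def cos_sim(a, b):
    # Bi-encoders embed query and passage independently,
    # then rank passages by cosine similarity to the query.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for real model output.
query_emb = [0.2, 0.7, 0.1]
passage_embs = {
    "relevant": [0.25, 0.65, 0.05],
    "off-topic": [0.9, 0.05, 0.4],
}
ranked = sorted(passage_embs,
                key=lambda k: cos_sim(query_emb, passage_embs[k]),
                reverse=True)
print(ranked)  # → ['relevant', 'off-topic']
```

With a real model, the toy vectors would come from encoding the texts, e.g. via `SentenceTransformer.encode`.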
Note: