FIX: typos in README.md
README.md
CHANGED
@@ -1779,7 +1779,7 @@ license: cc-by-nc-4.0
Linq-Embed-Mistral has been developed by building upon the foundations of the [E5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) and [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) models. We focus on improving text retrieval using advanced data refinement methods, including sophisticated data crafting, data filtering, and negative mining guided by teacher models, each highly tailored to its task, to improve the quality of the synthetic data generated by LLMs. These methods are applied to both existing benchmark datasets and highly tailored synthetic datasets generated via LLMs. Our efforts primarily aim to create high-quality triplet datasets (query, positive example, negative example), significantly improving text retrieval performance.
-
+
Linq-Embed-Mistral performs well in the MTEB benchmarks (as of May 29, 2024). The model excels in retrieval tasks, ranking <ins>**`1st`**</ins> among all models listed on the MTEB leaderboard with a performance score of <ins>**`60.2`**</ins>. This outstanding performance underscores its superior capability in enhancing search precision and reliability. The model achieves an average score of <ins>**`68.2`**</ins> across 56 datasets in the MTEB benchmarks, making it the highest-ranking publicly accessible model and third overall. (Please note that [NV-Embed-v1](https://huggingface.co/nvidia/NV-Embed-v1) and [voyage-large-2-instruct](https://docs.voyageai.com/embeddings/), ranked 1st and 2nd on the leaderboard as of May 29, reported their performance without releasing their models.)
This project is for research purposes only. Third-party datasets may be subject to additional terms and conditions under their associated licenses. Please refer to specific papers for more details:
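For context on the retrieval usage described above, here is a minimal sketch of how an E5-mistral-style embedding model is typically queried: a task instruction is prepended to the query, and query and passage embeddings are compared by cosine similarity. The repository id, loading path (sentence-transformers), and instruction text below are illustrative assumptions, not taken from this diff.

```python
# Minimal sketch. Assumptions: the model follows the E5-mistral-style
# instructed-query format and loads via sentence-transformers; the repo id
# and task instruction are illustrative only.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Linq-AI-Research/Linq-Embed-Mistral")  # assumed repo id

task = "Given a web search query, retrieve relevant passages that answer the query"
queries = [f"Instruct: {task}\nQuery: how do dense embeddings improve search?"]
passages = [
    "Dense embeddings map queries and documents into a shared vector space, "
    "so relevant documents can be ranked by vector similarity.",
    "The 2024 Olympic Games were held in Paris.",
]

# With normalized embeddings, the dot product equals cosine similarity.
query_emb = model.encode(queries, normalize_embeddings=True)
passage_emb = model.encode(passages, normalize_embeddings=True)
print(query_emb @ passage_emb.T)  # higher score = more relevant passage
```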