Combination of Embedding Models: Arctic S & BGE Small (en; v1.5)
Acknowledgement | Combination of Embedding Models | Usage | Citation | License
Acknowledgement
First of all, we want to acknowledge the original creators of the Snowflake/snowflake-arctic-embed-s and BAAI/bge-small-en-v1.5 models, which are used to create this model. Our model is simply a combination of these two models; we have not made any changes to the originals.
Furthermore, we want to acknowledge the Marqo team, who worked on the idea of combining two models through concatenation in parallel to us. Their initial effort allowed us to reuse existing pieces of code, in particular the modeling script for bringing the combined model to Hugging Face.
Combination of Embedding Models
Overview
Embedding models have become increasingly powerful and applicable across various use cases. However, the next significant challenge lies in enhancing their efficiency in terms of resource consumption. Our goal is to experiment with combining two embedding models to achieve better performance with fewer resources.
Key Insights
- Diversity Matters: Initial findings suggest that combining models with differing characteristics can complement each other, resulting in improved outcomes. To design an effective combination, the diversity of the models—evaluated by factors like MTEB performance, architecture, and training data—is crucial.
- Combination Technique:
- We combine the embeddings of two models using the most straightforward approach: concatenation (see the sketch after this list).
- Prior to concatenation, we normalize the embeddings so that they are on the same scale. This step is vital for coherent and meaningful results.
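To illustrate the technique (this is a minimal sketch of normalize-then-concatenate, not the packaged model's own code), the following assumes the sentence-transformers and numpy packages and uses the two original model IDs named above:

```python
# Minimal sketch: normalize each model's embeddings, then concatenate.
import numpy as np
from sentence_transformers import SentenceTransformer

arctic = SentenceTransformer("Snowflake/snowflake-arctic-embed-s")
bge = SentenceTransformer("BAAI/bge-small-en-v1.5")

sentences = ["A query about embedding models", "Another example sentence"]

# Encode with each model; L2-normalize so both sets of embeddings live on
# the unit sphere before concatenation.
emb_a = arctic.encode(sentences, normalize_embeddings=True)  # shape (n, 384)
emb_b = bge.encode(sentences, normalize_embeddings=True)     # shape (n, 384)

# Concatenate along the feature axis -> 768-dimensional embeddings.
combined = np.concatenate([emb_a, emb_b], axis=1)

# Optionally re-normalize the concatenated vectors; the cosine similarity of
# two such vectors is then the mean of the two models' cosine similarities.
combined /= np.linalg.norm(combined, axis=1, keepdims=True)
print(combined.shape)  # (2, 768)
```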
Implementation
We combined the following models:
- Snowflake/snowflake-arctic-embed-s
- BAAI/bge-small-en-v1.5
Model Details
- Output Embedding Dimensions: 768 (384 + 384)
- Total Parameters: 66M (33M + 33M)
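As an optional sanity check of these figures (an assumption on our part: both originals load as standard encoders via transformers), one could inspect the hidden size and parameter count of each base model:

```python
# Hypothetical check of the stated sizes using transformers.
from transformers import AutoModel

for name in ["Snowflake/snowflake-arctic-embed-s", "BAAI/bge-small-en-v1.5"]:
    model = AutoModel.from_pretrained(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(name, model.config.hidden_size, f"{n_params / 1e6:.0f}M params")

# Each model reports a 384-dimensional hidden size and roughly 33M parameters,
# so the concatenated embedding is 768-dimensional with ~66M parameters total.
```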
Future Directions
While the results are promising, we acknowledge the complexity of model combinations and the importance of looking beyond leaderboard rankings. That simply concatenating embeddings already yields tangible gains underscores the potential for further exploration in this area.
We look forward to conducting additional experiments and engaging in discussions to deepen our understanding of effective model combinations.
Usage
Please refer to our first concatenated model, Arctic M (v1.5) & BGE small, for more details on how to use this model.
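For orientation, loading might look roughly like the sketch below. This is a hedged example, not the authoritative usage: the repository ID is a placeholder, and trust_remote_code=True is assumed because the combination relies on a custom modeling script; see the Arctic M (v1.5) & BGE small card for the exact instructions.

```python
# Hedged sketch: loading the combined model via sentence-transformers.
# "your-org/arctic-s-bge-small-concat" is a placeholder repository ID.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer(
    "your-org/arctic-s-bge-small-concat",  # placeholder, not the real repo ID
    trust_remote_code=True,                # assumed: custom modeling script
)
embeddings = model.encode(
    ["What is a combined embedding model?"],
    normalize_embeddings=True,
)
print(embeddings.shape)  # expected: (1, 768)
```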
Citation
@misc{https://doi.org/10.48550/arxiv.2407.08275,
  doi       = {10.48550/ARXIV.2407.08275},
  url       = {https://arxiv.org/abs/2407.08275},
  author    = {Caspari, Laura and Dastidar, Kanishka Ghosh and Zerhoudi, Saber and Mitrovic, Jelena and Granitzer, Michael},
  title     = {Beyond Benchmarks: Evaluating Embedding Model Similarity for Retrieval Augmented Generation Systems},
  year      = {2024},
  copyright = {Creative Commons Attribution 4.0 International}
}
License
Note that Arctic S is licensed under Apache-2.0 and BGE Small (en; v1.5) under the MIT license. Please refer to the licenses of the original models for more details.
Evaluation results
All scores are self-reported results on the MTEB ArguAna (default) test set.

| Metric | Value |
| --- | --- |
| main_score | 61.696 |
| map_at_1 | 37.553 |
| map_at_10 | 53.488 |
| map_at_100 | 54.174 |
| map_at_1000 | 54.178 |
| map_at_20 | 54.045 |
| map_at_3 | 49.348 |
| map_at_5 | 51.901 |
| mrr_at_1 | 38.265 |
| mrr_at_10 | 53.738 |