BGE base Financial Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
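
The pooling module above takes the [CLS] token embedding, and the final Normalize() module L2-normalizes it. For reference, here is a minimal sketch of the equivalent manual encoding with the transformers library (assuming the checkpoint loads as a plain BertModel, which the architecture above indicates):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("girijesh/bge-base-financial-matryoshka")
model = AutoModel.from_pretrained("girijesh/bge-base-financial-matryoshka")

inputs = tokenizer(
    ["How does The Coca-Cola Company distribute its beverage products globally?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)
# CLS pooling (first token), then L2 normalization, mirroring modules (1) and (2) above
embeddings = F.normalize(outputs.last_hidden_state[:, 0], p=2, dim=1)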

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("girijesh/bge-base-financial-matryoshka")
# Run inference
sentences = [
    'We make our branded beverage products available to consumers throughout the world through our network of independent bottling partners, distributors, wholesalers and retailers as well as our consolidated bottling and distribution operations.',
    'How does The Coca-Cola Company distribute its beverage products globally?',
    "What accounting method is predominantly used to determine inventory costs in the Company's supermarket divisions before LIFO adjustments?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
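
Because the model was trained with MatryoshkaLoss (see Training Details below), its embeddings can be truncated to the smaller trained dimensions with only a modest quality drop (see the per-dimension metrics under Evaluation). A minimal sketch using the truncate_dim argument available in recent Sentence Transformers releases:

from sentence_transformers import SentenceTransformer

# Truncate output embeddings to one of the trained Matryoshka dimensions: 768, 512, 256, 128, or 64
model = SentenceTransformer("girijesh/bge-base-financial-matryoshka", truncate_dim=256)
embeddings = model.encode(["What drove the growth in marketplace revenue?"])
print(embeddings.shape)
# (1, 256)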

Evaluation

Metrics

Information Retrieval (dim_768)

Each of the five tables below reports retrieval quality at one Matryoshka embedding dimension (768, 512, 256, 128, and 64 in turn), matching the dim_*_cosine_map@100 columns in the Training Logs.

Metric Value
cosine_accuracy@1 0.7143
cosine_accuracy@3 0.8486
cosine_accuracy@5 0.8814
cosine_accuracy@10 0.9171
cosine_precision@1 0.7143
cosine_precision@3 0.2829
cosine_precision@5 0.1763
cosine_precision@10 0.0917
cosine_recall@1 0.7143
cosine_recall@3 0.8486
cosine_recall@5 0.8814
cosine_recall@10 0.9171
cosine_ndcg@10 0.8196
cosine_mrr@10 0.788
cosine_map@100 0.7915

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.7157
cosine_accuracy@3 0.8457
cosine_accuracy@5 0.8814
cosine_accuracy@10 0.92
cosine_precision@1 0.7157
cosine_precision@3 0.2819
cosine_precision@5 0.1763
cosine_precision@10 0.092
cosine_recall@1 0.7157
cosine_recall@3 0.8457
cosine_recall@5 0.8814
cosine_recall@10 0.92
cosine_ndcg@10 0.82
cosine_mrr@10 0.7878
cosine_map@100 0.7912

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.6914
cosine_accuracy@3 0.8471
cosine_accuracy@5 0.88
cosine_accuracy@10 0.91
cosine_precision@1 0.6914
cosine_precision@3 0.2824
cosine_precision@5 0.176
cosine_precision@10 0.091
cosine_recall@1 0.6914
cosine_recall@3 0.8471
cosine_recall@5 0.88
cosine_recall@10 0.91
cosine_ndcg@10 0.8088
cosine_mrr@10 0.7756
cosine_map@100 0.7799

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.6914
cosine_accuracy@3 0.83
cosine_accuracy@5 0.87
cosine_accuracy@10 0.9071
cosine_precision@1 0.6914
cosine_precision@3 0.2767
cosine_precision@5 0.174
cosine_precision@10 0.0907
cosine_recall@1 0.6914
cosine_recall@3 0.83
cosine_recall@5 0.87
cosine_recall@10 0.9071
cosine_ndcg@10 0.8025
cosine_mrr@10 0.7686
cosine_map@100 0.7729

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.6586
cosine_accuracy@3 0.8029
cosine_accuracy@5 0.8357
cosine_accuracy@10 0.8829
cosine_precision@1 0.6586
cosine_precision@3 0.2676
cosine_precision@5 0.1671
cosine_precision@10 0.0883
cosine_recall@1 0.6586
cosine_recall@3 0.8029
cosine_recall@5 0.8357
cosine_recall@10 0.8829
cosine_ndcg@10 0.7736
cosine_mrr@10 0.7384
cosine_map@100 0.7434
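
These tables follow the standard output of the Sentence Transformers InformationRetrievalEvaluator. A minimal sketch of running such an evaluation yourself (the query/document ids and texts below are hypothetical placeholders):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("girijesh/bge-base-financial-matryoshka")

# Hypothetical toy data: ids mapped to texts, plus relevance judgments per query
queries = {"q1": "How does The Coca-Cola Company distribute its beverage products globally?"}
corpus = {"d1": "We make our branded beverage products available to consumers throughout the world ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
results = evaluator(model)
print(results)  # includes accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100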

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 6,300 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min 8 tokens, mean 44.98 tokens, max 439 tokens
    • anchor: string; min 7 tokens, mean 20.31 tokens, max 45 tokens
  • Samples:
    • positive: Change in control events potentially triggering benefits under the CIC Plan and Mr. Begor’s agreement would occur, subject to certain exceptions, if (1) any person acquires 20% or more of our voting stock; (2) upon a merger or other business combination, our shareholders receive less than two-thirds of the common stock and combined voting power of the new company; (3) members of the current Board of Directors ceasing to constitute a majority of the Board of Directors, except for new directors that are regularly elected; (4) we sell or otherwise dispose of all or substantially all of our assets; or (5) we liquidate or dissolve.
      anchor: What events potentially trigger benefits under Mark W. Begor's change in control agreement and the CIC Plan?
    • positive: The growth in marketplace revenue was primarily due to the impact of the pricing update to increase our seller transaction fee for the Etsy marketplace from 5% to 6.5% beginning on April 11, 2022, and an increase in foreign currency payments, which we earn an additional transaction fee on, in the year ended December 31, 2023.
      anchor: What drove the growth in marketplace revenue for the year ended December 31, 2023?
    • positive: We are focused on ensuring that we efficiently allocate our resources to the areas with the highest potential for profitable growth. ... The uncertain macroeconomic environment in many of these markets is expected to continue and we aim to ensure our investments in these international markets are appropriate relative to the size of the opportunity.
      anchor: What are Hershey's goals for international expansion and how are they being approached?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
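
    As an illustration, here is a minimal sketch of constructing this loss with the Sentence Transformers API (the base model line mirrors Model Details above; this is not the full training script):

    from sentence_transformers import SentenceTransformer
    from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

    model = SentenceTransformer("BAAI/bge-base-en-v1.5")
    # The inner loss is applied at each truncated embedding size, weighted equally
    inner_loss = MultipleNegativesRankingLoss(model)
    loss = MatryoshkaLoss(
        model,
        inner_loss,
        matryoshka_dims=[768, 512, 256, 128, 64],
        matryoshka_weights=[1, 1, 1, 1, 1],
    )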
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch    Step   Training Loss   dim_768_cosine_map@100   dim_512_cosine_map@100   dim_256_cosine_map@100   dim_128_cosine_map@100   dim_64_cosine_map@100
0.9697   6      -               0.7527                   0.7516                   0.7454                   0.7253                   0.6808
1.6162   10     2.3351          -                        -                        -                        -                        -
1.9394   12     -               0.7740                   0.7699                   0.7707                   0.7474                   0.7188
2.9091   18     -               0.7784                   0.7790                   0.7735                   0.7575                   0.7275
3.2323   20     1.0519          -                        -                        -                        -                        -
3.8788   24     -               0.7818                   0.7784                   0.7763                   0.7581                   0.7293
0.9697   6      -               0.7836                   0.7826                   0.7817                   0.7664                   0.7353
1.6162   10     0.8132          -                        -                        -                        -                        -
1.9394   12     -               0.7887                   0.7887                   0.7837                   0.7714                   0.7409
2.9091   18     -               0.7897                   0.7902                   0.7798                   0.7721                   0.7410
3.2323   20     0.6098          -                        -                        -                        -                        -
3.8788   24     -               0.7915                   0.7912                   0.7799                   0.7729                   0.7434
  • The final row (epoch 3.8788, step 24) denotes the saved checkpoint; its map@100 values match the Evaluation tables above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.2.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 1.0.1
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}