metadata
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
  - en
library_name: sentence-transformers
license: apache-2.0
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_ndcg@100
  - cosine_mrr@10
  - cosine_map@100
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:10000
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: >-
      Enzalutamide ( brand name Xtandi ) is a synthetic non-steroidal
      antiandrogen ( NSAA ) which was developed by the pharmaceutical company
      Medivation for the treatment of metastatic , castration-resistant prostate
      cancer . Medivation has reported up to an 89 % decrease in serum prostate
      specific antigen ( PSA ) levels after a month of taking the drug .
      Research suggests that enzalutamide may also be effective in the treatment
      of certain types of breast cancer . In August 2012 , the United States (
      U.S. ) Food and Drug Administration ( FDA ) approved enzalutamide for the
      treatment of castration-resistant prostate cancer .
    sentences:
      - what type of cancer is enzalutamide
      - who is simon cho
      - who is dr william farone
  - source_sentence: >-
      Sohel Rana is a Bangladeshi footballer who plays as a midfielder . He
      currently plays for Sheikh Jamal Dhanmondi Club .
    sentences:
      - who is sohel rana
      - who is olympicos
      - who is roberto laserna
  - source_sentence: >-
      Qarah Qayeh ( قره قيه , also Romanized as Qareh Qīyeh ) is a village in
      Chaharduli Rural District , Keshavarz District , Shahin Dezh County , West
      Azerbaijan Province , Iran . At the 2006 census , its population was 465 ,
      in 93 families .
    sentences:
      - what was the knoxville riot
      - what language is kbif
      - where is qarah qayeh
  - source_sentence: >-
      Martin Severin Janus From ( 8 April 1828 -- 6 May 1895 ) was a Danish
      chess master .   Born in Nakskov , From received his first education at
      the grammar school of Nykøbing Falster . He entered the army as a
      volunteer during the Prussian-Danish War ( Schleswig-Holstein War of
      Succession ) , where he served in the brigade of Major-General Olaf Rye
      and partook in the Battle of Fredericia on July 6 , 1849 .   After the war
      From settled in Copenhagen . He was employed by the Statistical Bureau ,
      where he met Magnus Oscar Møllerstrøm , then the strongest chess player in
      Copenhagen . Next , he worked in the central office for prison management
      , and in 1890 he became an inspector of the penitentiary of Christianshavn
      . In 1891 he received the order Ridder af Dannebrog ( `` Knight of the
      Danish cloth '' , i.e. flag of Denmark ) , which is the second highest of
      Danish orders .   In 1895 Severin From died of cancer . He is interred at
      Vestre Cemetery , Copenhagen .
    sentences:
      - when did martin from die
      - what is hymenoxys lemmonii
      - where is macomb square il
  - source_sentence: >-
      The Recession of 1937 -- 1938 was an economic downturn that occurred
      during the Great Depression in the United States .   By the spring of 1937
      , production , profits , and wages had regained their 1929 levels .
      Unemployment remained high , but it was slightly lower than the 25 % rate
      seen in 1933 . The American economy took a sharp downturn in mid-1937 ,
      lasting for 13 months through most of 1938 . Industrial production
      declined almost 30 percent and production of durable goods fell even
      faster .   Unemployment jumped from 14.3 % in 1937 to 19.0 % in 1938 .
      Manufacturing output fell by 37 % from the 1937 peak and was back to 1934
      levels .  Producers reduced their expenditures on durable goods , and
      inventories declined , but personal income was only 15 % lower than it had
      been at the peak in 1937 . In most sectors , hourly earnings continued to
      rise throughout the recession , which partly compensated for the reduction
      in the number of hours worked . As unemployment rose , consumers
      expenditures declined , thereby leading to further cutbacks in production
      .
    sentences:
      - when did the great depression peak in the u.s. economy?
      - what is tom mount's specialty
      - where is poulton
model-index:
  - name: SentenceTransformer based on BAAI/bge-base-en-v1.5
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 768
          type: dim_768
        metrics:
          - type: cosine_accuracy@1
            value: 0.906
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.954
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.962
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.975
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.906
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.31799999999999995
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.19240000000000004
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09750000000000003
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.906
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.954
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.962
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.975
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9422297521305668
            name: Cosine Ndcg@10
          - type: cosine_ndcg@100
            value: 0.9458947974911144
            name: Cosine Ndcg@100
          - type: cosine_mrr@10
            value: 0.9315763888888889
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.9323383888065935
            name: Cosine Map@100

SentenceTransformer based on BAAI/bge-base-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
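
Concretely, the Pooling module keeps the [CLS] token embedding (pooling_mode_cls_token: True) and Normalize() L2-normalizes it, so dot product and cosine similarity coincide. A minimal sketch of the same pipeline with plain transformers, for illustration only (the supported path is the SentenceTransformer usage below):

import torch
from transformers import AutoModel, AutoTokenizer

repo = "MugheesAwan11/bge-base-climate_fever-dataset-10k-2k-e2"
tokenizer = AutoTokenizer.from_pretrained(repo)
bert = AutoModel.from_pretrained(repo)

inputs = tokenizer(["a test sentence"], padding=True, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**inputs).last_hidden_state  # (batch, seq_len, 768)

cls_embedding = token_embeddings[:, 0]  # CLS pooling: keep the first token
embedding = torch.nn.functional.normalize(cls_embedding, p=2, dim=1)  # Normalize()
print(embedding.shape)  # torch.Size([1, 768])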

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("MugheesAwan11/bge-base-climate_fever-dataset-10k-2k-e2")
# Run inference
sentences = [
    'The Recession of 1937 -- 1938 was an economic downturn that occurred during the Great Depression in the United States .   By the spring of 1937 , production , profits , and wages had regained their 1929 levels . Unemployment remained high , but it was slightly lower than the 25 % rate seen in 1933 . The American economy took a sharp downturn in mid-1937 , lasting for 13 months through most of 1938 . Industrial production declined almost 30 percent and production of durable goods fell even faster .   Unemployment jumped from 14.3 % in 1937 to 19.0 % in 1938 . Manufacturing output fell by 37 % from the 1937 peak and was back to 1934 levels .  Producers reduced their expenditures on durable goods , and inventories declined , but personal income was only 15 % lower than it had been at the peak in 1937 . In most sectors , hourly earnings continued to rise throughout the recession , which partly compensated for the reduction in the number of hours worked . As unemployment rose , consumers expenditures declined , thereby leading to further cutbacks in production .',
    'when did the great depression peak in the u.s. economy?',
    'where is poulton',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
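
Because the model ends in a Normalize() module, embeddings are unit-length and cosine similarity reduces to a dot product. A small retrieval sketch building on the model loaded above (the query and corpus are illustrative placeholders):

# Rank a tiny placeholder corpus against a query.
query = "when did the recession of 1937 start"
corpus = [
    "The Recession of 1937 -- 1938 was an economic downturn in the United States .",
    "Sohel Rana is a Bangladeshi footballer who plays as a midfielder .",
]

query_embedding = model.encode([query])   # shape (1, 768)
corpus_embeddings = model.encode(corpus)  # shape (2, 768)
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, 2]
print(corpus[scores.argmax().item()])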

Evaluation

Metrics

Information Retrieval

Metric                Value
cosine_accuracy@1     0.906
cosine_accuracy@3     0.954
cosine_accuracy@5     0.962
cosine_accuracy@10    0.975
cosine_precision@1    0.906
cosine_precision@3    0.318
cosine_precision@5    0.1924
cosine_precision@10   0.0975
cosine_recall@1       0.906
cosine_recall@3       0.954
cosine_recall@5       0.962
cosine_recall@10      0.975
cosine_ndcg@10        0.9422
cosine_ndcg@100       0.9459
cosine_mrr@10         0.9316
cosine_map@100        0.9323
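
These metric names match the library's InformationRetrievalEvaluator, run here on the dim_768 evaluation set. A hedged sketch of how such an evaluation is wired up (the queries, corpus, and relevance mapping below are placeholders):

from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder data: query id -> text, doc id -> text,
# and query id -> set of relevant doc ids.
queries = {"q1": "when did the great depression peak in the u.s. economy?"}
corpus = {"d1": "The Recession of 1937 -- 1938 was an economic downturn in the United States ."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_768",
)
results = evaluator(model)  # dict of metric name -> value, e.g. cosine_ndcg@10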

Training Details

Training Dataset

Unnamed Dataset

  • Size: 10,000 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
                 positive                 anchor
    type         string                   string
    details      min: 2 tokens            min: 4 tokens
                 mean: 116.45 tokens      mean: 8.6 tokens
                 max: 512 tokens          max: 19 tokens
  • Samples:
    - positive: Professor Maurice Cockrill , RA , FBA ( 8 October 1936 -- 1 December 2013 ) was a British painter and poet . Born in Hartlepool , County Durham , he studied at Wrexham School of Art , north east Wales , then Denbigh Technical College and later the University of Reading from 1960 -- 64 . In Liverpool , where he lived for nearly twenty years from 1964 , he taught at Liverpool College of Art and Liverpool Polytechnic . He was a central figure in Liverpool 's artistic life , regularly exhibiting at the Walker Art Gallery , before his departure for London in 1982 . Cockrill 's Liverpool work was in line with that of John Baum , Sam Walsh and Adrian Henri , employing Pop and Photo-Realist styles , but later he moved towards Romantic Expressionism , as it was shown in his retrospective at the Walker Art Gallery , Liverpool in 1995 . His poetry was published in magazines such as Ambit '' and Poetry Review '' . He was formerly the Keeper of the Royal Academy , and as such managed the RA Schools of the Establishment as well as being a member of the Board and Executive Committee .
      anchor: who was maurice cockrill
    - positive: Nowa Dąbrowa -LSB- nowa-dom browa -RSB- is a village in the administrative district of Gmina Kwilcz , within Międzychód County , Greater Poland Voivodeship , in west-central Poland . It lies approximately 16 km south-east of Międzychód and 59 km west of the regional capital Poznań . The village has a population of 40 .
      anchor: where is nowa dbrowa poland
    - positive: Hymenoxys lemmonii is a species of flowering plant in the daisy family known by the common names Lemmon 's rubberweed , Lemmon 's bitterweed , and alkali hymenoxys . It is native to the western United States in and around the Great Basin in Utah , Nevada , northern California , and southeastern Oregon . Hymenoxys lemmonii is a biennial or perennial herb with one or more branching stems growing erect to a maximum height near 50 centimeters ( 20 inches ) . It produces straight , dark green leaves up to 9 centimeters ( 3.6 inches ) long and divided into a number of narrow , pointed lobes . The foliage and stem may be hairless to quite woolly . The daisy-like flower head is generally at least 1.5 centimeters ( 0.6 inches ) wide , with a center of 50 -- 125 thick golden disc florets and a shaggy fringe of 9 -- 12 golden ray florets . The species is named for John Gill Lemmon , husband of prominent American botanist Sarah Plummer Lemmon .
      anchor: what is hymenoxys lemmonii
  • Loss: MatryoshkaLoss with these parameters (a construction sketch follows the parameter block):
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768
        ],
        "matryoshka_weights": [
            1
        ],
        "n_dims_per_step": -1
    }
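
In code, this configuration wraps MultipleNegativesRankingLoss in MatryoshkaLoss; with a single dimension of 768 and weight 1 it reduces to the base loss. A minimal construction sketch:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
base_loss = MultipleNegativesRankingLoss(model)
# matryoshka_dims=[768] supervises only the full embedding; smaller dims
# (e.g. [768, 512, 256]) would also train truncated embeddings.
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[768], matryoshka_weights=[1])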
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • num_train_epochs: 2
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
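
Expressed as sentence-transformers v3 training arguments, the non-default values above would look roughly like the following sketch (output_dir is a placeholder, and save_strategy is an assumption needed by load_best_model_at_end):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed: must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=2e-05,
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)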

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_768_cosine_map@100
0.0319 10 0.1626 -
0.0639 20 0.1168 -
0.0958 30 0.0543 -
0.1278 40 0.1227 -
0.1597 50 0.061 -
0.1917 60 0.0537 -
0.2236 70 0.0693 -
0.2556 80 0.1115 -
0.2875 90 0.0541 -
0.3195 100 0.0774 -
0.3514 110 0.0639 -
0.3834 120 0.0639 -
0.4153 130 0.0567 -
0.4473 140 0.0385 -
0.4792 150 0.0452 -
0.5112 160 0.0641 -
0.5431 170 0.042 -
0.5751 180 0.0243 -
0.6070 190 0.0405 -
0.6390 200 0.062 -
0.6709 210 0.0366 -
0.7029 220 0.0399 -
0.7348 230 0.0382 -
0.7668 240 0.0387 -
0.7987 250 0.0575 -
0.8307 260 0.0391 -
0.8626 270 0.0776 -
0.8946 280 0.0258 -
0.9265 290 0.0493 -
0.9585 300 0.037 -
0.9904 310 0.0499 -
1.0 313 - 0.9397
0.0319 10 0.0111 -
0.0639 20 0.007 -
0.0958 30 0.0023 -
0.1278 40 0.0109 -
0.1597 50 0.0046 -
0.1917 60 0.0043 -
0.2236 70 0.0037 -
0.2556 80 0.0118 -
0.2875 90 0.0026 -
0.3195 100 0.0079 -
0.3514 110 0.0045 -
0.3834 120 0.0163 -
0.4153 130 0.0058 -
0.4473 140 0.0154 -
0.4792 150 0.0051 -
0.5112 160 0.0152 -
0.5431 170 0.0058 -
0.5751 180 0.0041 -
0.6070 190 0.0118 -
0.6390 200 0.0165 -
0.6709 210 0.0088 -
0.7029 220 0.014 -
0.7348 230 0.0195 -
0.7668 240 0.024 -
0.7987 250 0.0472 -
0.8307 260 0.0341 -
0.8626 270 0.0684 -
0.8946 280 0.0193 -
0.9265 290 0.0488 -
0.9585 300 0.0388 -
0.9904 310 0.0485 -
1.0 313 - 0.9349
1.0224 320 0.0119 -
1.0543 330 0.013 -
1.0863 340 0.0024 -
1.1182 350 0.012 -
1.1502 360 0.0042 -
1.1821 370 0.0091 -
1.2141 380 0.0041 -
1.2460 390 0.0096 -
1.2780 400 0.0053 -
1.3099 410 0.0043 -
1.3419 420 0.0059 -
1.3738 430 0.0138 -
1.4058 440 0.0132 -
1.4377 450 0.0124 -
1.4696 460 0.0049 -
1.5016 470 0.0043 -
1.5335 480 0.0045 -
1.5655 490 0.0037 -
1.5974 500 0.0081 -
1.6294 510 0.0038 -
1.6613 520 0.0055 -
1.6933 530 0.003 -
1.7252 540 0.0022 -
1.7572 550 0.0042 -
1.7891 560 0.0158 -
1.8211 570 0.0088 -
1.8530 580 0.0154 -
1.8850 590 0.0057 -
1.9169 600 0.0086 -
1.9489 610 0.0069 -
1.9808 620 0.0076 -
2.0 626 - 0.9323
  • The saved checkpoint corresponds to the final row (epoch 2.0, step 626), whose dim_768_cosine_map@100 of 0.9323 matches the evaluation metrics reported above.

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}