
SentenceTransformer based on BAAI/bge-small-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-small-en-v1.5. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-small-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 33.4M parameters (F32, safetensors)

Model Sources

  • Documentation: https://www.sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
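
In words: a BERT encoder produces token embeddings (inputs lowercased and truncated to 512 tokens), the sentence embedding is the CLS token's hidden state, and the result is L2-normalized. For illustration, here is a minimal sketch of the equivalent forward pass with plain transformers, assuming only the model ID shown in the Usage section below:

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "smokxy/embedding-finetuned"  # model ID from the Usage section below
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer(
    ["What does the term 'shareholder members' refer to?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

# CLS pooling: take the hidden state of the first token of each sequence
embeddings = outputs.last_hidden_state[:, 0]
# L2-normalize, matching the Normalize() module
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 384])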

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("smokxy/embedding-finetuned")
# Run inference
sentences = [
    "What does the term 'shareholder members' refer to?",
    "'Date:  To, (i) The Managing Director Small Farmers' Agri-Business Consortium (SFAC), NCUI Auditorium, August Kranti Marg, Hauz Khas, New Delhi 110016. (ii)The Managing Director National Co-operative Development Corporation (NCDC), 4, Siri Institutional Area, Hauz Khas, New Delhi 110016. (iii) The Chief General Manager National Bank for Agriculture and Rural Development (NABARD), Regional Office --------------------------------------------------------------- (iv) To any other additional Implementing Agency allowed/designated, as the case may be. Sub: Application for Equity Grant under scheme of Formation and Promotion of 10,000  Farmer Producer Organizations (FPOs)  Dear Sir/Madam, We herewith apply for Equity Grant as per the provisions under the captioned scheme.  1. The details of the FPO are as under-   S. No.  Particulars to be furnished  Details  1.   Name of the FPO  2.   Correspondence address of FPO  3.   Contact details of FPO  4.   Registration Number  5.   Date of registration/incorporation of FPO  6.   Brief account of business of FPO  7.   Number of Shareholder Members  8.    Number of Small, Marginal and Landless Shareholder Members'",
    "'19.1   It has been seen, during first two years of implementation of PMFBY, there are various types of yield disputes, which unnecessarily delays the claim settlement. Following figure shows the procedures to  be adopted in various cases.    Figure. Procedures to be followed in different yield dispute cases     19.2   Wherever the yield estimates reported at IU level are abnormally low or high vis-à-vis the general crop  condition the Insurance Company in consultation with State Govt. can make use of various products (e.g. Satellite based Vegetation Index, Weather parameters, etc.) or other technologies (including  statistical test, crop models etc.) to confirm yield estimates. If Insurance Company witnesses any  anomaly/deficiency in the actual yield data(partial /consolidated) received from the State Govt., the  same shall be brought into the notice of concerned State department within 7 days from date of receipt of yield data with specific observations/remarks under intimation to Govt. of India and anomaly, if any, may be resolved  in next 7 days by the  State Level Coordination Committee (SLCC)  headed by Additional Chief Secretary/Principal Secretary/Secretary of the concerned department. This committee shall be authorized to decide all such cases and the decision in such cases shall be final. The SLCC may refer the case to State Level Technical Advisory Committee (STAC) for dispute resolution (Constitution of STAC is defined in Para 19.5). In case the matter stands unresolved even after examination by STAC, it may be escalated to TAC along with all relevant documents including minutes of meetings/records of discussion and report of the STAC and SLCC. Reference to TAC can be made thereafter only in conditions specified in Para 19.7.1 However, data with anomalies which is not reported within 7 days will be treated as accepted to insurance company.'",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
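
Because the embeddings are L2-normalized, cosine similarity is the model's default similarity function, and the same call works for semantic search. A short sketch reusing the sentences list above to rank the two document passages against the query:

# Rank the two passages against the query (the first sentence above)
query_emb = model.encode([sentences[0]])        # shape (1, 384)
doc_embs = model.encode(sentences[1:])          # shape (2, 384)
scores = model.similarity(query_emb, doc_embs)  # tensor of shape [1, 2]
best = int(scores.argmax())
print(f"Best match: passage {best} with score {scores[0, best]:.4f}")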

Evaluation

Metrics

Information Retrieval

Metric                 Value
cosine_accuracy@1      0.43
cosine_accuracy@5      0.87
cosine_accuracy@10     0.92
cosine_precision@1     0.43
cosine_precision@5     0.174
cosine_precision@10    0.092
cosine_recall@1        0.43
cosine_recall@5        0.87
cosine_recall@10       0.92
cosine_ndcg@5          0.6779
cosine_ndcg@10         0.6934
cosine_ndcg@100        0.7122
cosine_mrr@5           0.6127
cosine_mrr@10          0.6188
cosine_mrr@100         0.6234
cosine_map@100         0.6234
dot_accuracy@1         0.43
dot_accuracy@5         0.87
dot_accuracy@10        0.92
dot_precision@1        0.43
dot_precision@5        0.174
dot_precision@10       0.092
dot_recall@1           0.43
dot_recall@5           0.87
dot_recall@10          0.92
dot_ndcg@5             0.6779
dot_ndcg@10            0.6934
dot_ndcg@100           0.7122
dot_mrr@5              0.6127
dot_mrr@10             0.6188
dot_mrr@100            0.6234
dot_map@100            0.6234

Because the model L2-normalizes its embeddings (the Normalize() module above), the dot-product metrics are identical to the cosine metrics.
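
These metric names match the output of the Sentence Transformers InformationRetrievalEvaluator (the val_evaluator_cosine_map@100 column in the Training Logs below points to the same evaluator). A minimal sketch of such an evaluation, with hypothetical toy data standing in for the unpublished validation set:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("smokxy/embedding-finetuned")

# Hypothetical toy data; the real queries and corpus are not published with this card
queries = {"q1": "What does the term 'shareholder members' refer to?"}
corpus = {
    "d1": "Number of Shareholder Members of the FPO, including small and marginal farmers.",
    "d2": "Yield disputes under PMFBY are resolved by the State Level Coordination Committee.",
}
relevant_docs = {"q1": {"d1"}}  # maps each query ID to the IDs of its relevant documents

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="val_evaluator")
results = evaluator(model)
print(results)  # cosine_accuracy@k, cosine_precision@k, cosine_ndcg@k, cosine_map@100, ...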

Training Details

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • gradient_accumulation_steps: 4
  • learning_rate: 1e-05
  • weight_decay: 0.01
  • num_train_epochs: 1.0
  • warmup_ratio: 0.1
  • load_best_model_at_end: True

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 8
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 4
  • eval_accumulation_steps: None
  • learning_rate: 1e-05
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1.0
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch   Step  Training Loss  Validation Loss  val_evaluator_cosine_map@100
0.531   15    0.511          0.1405           0.6234
0.9912  28    -              0.1405           0.6234
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.1
  • PyTorch: 2.3.0+cu121
  • Accelerate: 0.27.2
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

GISTEmbedLoss

@misc{solatorio2024gistembed,
    title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning}, 
    author={Aivin V. Solatorio},
    year={2024},
    eprint={2402.16829},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}