
SentenceTransformer based on intfloat/multilingual-e5-small

This is a sentence-transformers model finetuned from intfloat/multilingual-e5-small. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: intfloat/multilingual-e5-small
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
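
The modules above amount to: run the BERT backbone, mean-pool the token embeddings over the attention mask, and L2-normalize the pooled vector. For intuition, here is a minimal sketch of the same computation using transformers and torch directly (it assumes the checkpoint also loads as a plain BertModel via AutoModel):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("srikarvar/fine_tuned_model_4")
backbone = AutoModel.from_pretrained("srikarvar/fine_tuned_model_4")

batch = tokenizer(
    ["Who is the current US President?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = backbone(**batch).last_hidden_state  # (0) Transformer: (1, seq_len, 384)

# (1) Pooling: mean over non-padding tokens only
mask = batch["attention_mask"].unsqueeze(-1).float()
embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# (2) Normalize: unit length, so dot product equals cosine similarity
embedding = F.normalize(embedding, p=2, dim=1)
print(embedding.shape)  # torch.Size([1, 384])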

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("srikarvar/fine_tuned_model_4")
# Run inference
sentences = [
    'Who is the President of the United States?',
    'Who is the current US President?',
    'What is the velocity of sound?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
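
To use the model as a paraphrase/duplicate detector, threshold the cosine similarity. A small follow-up sketch reusing the sentences and similarities above; the 0.78 cutoff roughly matches the cosine F1 threshold reported in the test evaluation below and should be re-tuned for other data:

# Decide "paraphrase" vs "distinct" by thresholding the cosine score
THRESHOLD = 0.78  # roughly the cosine_f1_threshold from the test evaluation; dataset-dependent
for i, j in [(0, 1), (0, 2)]:
    score = float(similarities[i][j])
    verdict = "paraphrase" if score > THRESHOLD else "distinct"
    print(f"{sentences[i]!r} vs {sentences[j]!r}: {score:.3f} -> {verdict}")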

Evaluation

Metrics

Binary Classification

  • Dataset: pair-class-dev (the development split; its max_ap matches the pair-class-dev_max_ap column in the training logs)

Metric Value
cosine_accuracy 0.6207
cosine_accuracy_threshold 0.9036
cosine_f1 0.7193
cosine_f1_threshold 0.9036
cosine_precision 0.5827
cosine_recall 0.9394
cosine_ap 0.6366
dot_accuracy 0.6207
dot_accuracy_threshold 0.9036
dot_f1 0.7193
dot_f1_threshold 0.9036
dot_precision 0.5827
dot_recall 0.9394
dot_ap 0.6366
manhattan_accuracy 0.6176
manhattan_accuracy_threshold 6.5018
manhattan_f1 0.7232
manhattan_f1_threshold 7.1429
manhattan_precision 0.5724
manhattan_recall 0.9818
manhattan_ap 0.6414
euclidean_accuracy 0.6207
euclidean_accuracy_threshold 0.4391
euclidean_f1 0.7193
euclidean_f1_threshold 0.4391
euclidean_precision 0.5827
euclidean_recall 0.9394
euclidean_ap 0.6366
max_accuracy 0.6207
max_accuracy_threshold 6.5018
max_f1 0.7232
max_f1_threshold 7.1429
max_precision 0.5827
max_recall 0.9818
max_ap 0.6414

Binary Classification

  • Dataset: pair-class-test (the test split; its max_ap matches the pair-class-test_max_ap column in the training logs)

Metric Value
cosine_accuracy 0.8934
cosine_accuracy_threshold 0.777
cosine_f1 0.9034
cosine_f1_threshold 0.775
cosine_precision 0.8503
cosine_recall 0.9636
cosine_ap 0.9467
dot_accuracy 0.8934
dot_accuracy_threshold 0.777
dot_f1 0.9034
dot_f1_threshold 0.775
dot_precision 0.8503
dot_recall 0.9636
dot_ap 0.9467
manhattan_accuracy 0.8903
manhattan_accuracy_threshold 9.9086
manhattan_f1 0.9003
manhattan_f1_threshold 10.4374
manhattan_precision 0.8495
manhattan_recall 0.9576
manhattan_ap 0.9452
euclidean_accuracy 0.8934
euclidean_accuracy_threshold 0.6678
euclidean_f1 0.9034
euclidean_f1_threshold 0.6708
euclidean_precision 0.8503
euclidean_recall 0.9636
euclidean_ap 0.9467
max_accuracy 0.8934
max_accuracy_threshold 9.9086
max_f1 0.9034
max_f1_threshold 10.4374
max_precision 0.8503
max_recall 0.9636
max_ap 0.9467
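
Metrics like those in the two tables above can be recomputed with the BinaryClassificationEvaluator from Sentence Transformers. A minimal sketch using two of the evaluation samples listed further below; the real evaluation ran over the full 319-pair split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("srikarvar/fine_tuned_model_4")

# 1 = paraphrase/duplicate pair, 0 = distinct pair
evaluator = BinaryClassificationEvaluator(
    sentences1=["How many bones are in the human body?", "What is the price of an iPhone 12?"],
    sentences2=["Total bones in an adult human", "What is the price of an iPhone 11?"],
    labels=[1, 0],
    name="pair-class-test",
)
results = evaluator(model)  # computes accuracy, F1, precision, recall, and AP per similarity function
print(results)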

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,273 training samples
  • Columns: sentence1, label, and sentence2
  • Approximate statistics based on the first 1000 samples:
    • sentence1: string, min 6 / mean 10.93 / max 28 tokens
    • label: int, 0: ~48.90%, 1: ~51.10%
    • sentence2: string, min 5 / mean 10.29 / max 22 tokens
  • Samples:
    • sentence1: What are the main ingredients in a traditional pizza Margherita? | label: 1 | sentence2: What ingredients are used in a classic pizza Margherita?
    • sentence1: Release date of the iPhone 14 | label: 0 | sentence2: Release date of the iPhone 13
    • sentence1: Who won the first Nobel Prize in Literature? | label: 0 | sentence2: Who won the first Nobel Prize in Peace?
  • Loss: OnlineContrastiveLoss

Evaluation Dataset

Unnamed Dataset

  • Size: 319 evaluation samples
  • Columns: sentence1, label, and sentence2
  • Approximate statistics based on the first 1000 samples:
    • sentence1: string, min 6 / mean 11.12 / max 22 tokens
    • label: int, 0: ~48.28%, 1: ~51.72%
    • sentence2: string, min 4 / mean 10.52 / max 21 tokens
  • Samples:
    • sentence1: How many bones are in the human body? | label: 1 | sentence2: Total bones in an adult human
    • sentence1: What is the price of an iPhone 12? | label: 0 | sentence2: What is the price of an iPhone 11?
    • sentence1: What are the different types of renewable energy? | label: 1 | sentence2: What are the various forms of renewable energy?
  • Loss: OnlineContrastiveLoss
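
Both splits share the same three-column layout. A minimal sketch of assembling such pair data with the datasets library, using the sample rows above (the actual splits contain 1,273 and 319 pairs); OnlineContrastiveLoss expects two text columns plus an integer label:

from datasets import Dataset

train_dataset = Dataset.from_dict({
    "sentence1": [
        "What are the main ingredients in a traditional pizza Margherita?",
        "Release date of the iPhone 14",
        "Who won the first Nobel Prize in Literature?",
    ],
    "sentence2": [
        "What ingredients are used in a classic pizza Margherita?",
        "Release date of the iPhone 13",
        "Who won the first Nobel Prize in Peace?",
    ],
    "label": [1, 0, 0],  # 1 = paraphrase pair, 0 = distinct pair
})

eval_dataset = Dataset.from_dict({
    "sentence1": ["How many bones are in the human body?"],
    "sentence2": ["Total bones in an adult human"],
    "label": [1],
})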

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • gradient_accumulation_steps: 2
  • num_train_epochs: 4
  • warmup_ratio: 0.1
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 2
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss pair-class-dev_max_ap pair-class-test_max_ap
0 0 - - 0.6414 -
0.5 10 1.9407 - - -
1.0 20 0.9729 0.6810 - -
1.475 30 0.4822 - - -
1.975 40 0.4062 - - -
2.025 41 - 0.5953 - -
2.45 50 0.2894 - - -
2.95 60 0.1977 - - -
3.0 61 - 0.5318 - -
3.425 70 0.1999 - - -
3.925 80 0.1491 0.5159 - 0.9467
  • The final row (epoch 3.925, step 80) corresponds to the saved checkpoint.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}