---
base_model: srikarvar/fine_tuned_model_5
library_name: sentence-transformers
metrics:
  - cosine_accuracy
  - cosine_accuracy_threshold
  - cosine_f1
  - cosine_f1_threshold
  - cosine_precision
  - cosine_recall
  - cosine_ap
  - dot_accuracy
  - dot_accuracy_threshold
  - dot_f1
  - dot_f1_threshold
  - dot_precision
  - dot_recall
  - dot_ap
  - manhattan_accuracy
  - manhattan_accuracy_threshold
  - manhattan_f1
  - manhattan_f1_threshold
  - manhattan_precision
  - manhattan_recall
  - manhattan_ap
  - euclidean_accuracy
  - euclidean_accuracy_threshold
  - euclidean_f1
  - euclidean_f1_threshold
  - euclidean_precision
  - euclidean_recall
  - euclidean_ap
  - max_accuracy
  - max_accuracy_threshold
  - max_f1
  - max_f1_threshold
  - max_precision
  - max_recall
  - max_ap
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:560
  - loss:OnlineContrastiveLoss
widget:
  - source_sentence: >-
      The `Garage` class has a `to_services` method which is used to transform
      tasks into a list of `ServiceRecord` objects that are scheduled.
    sentences:
      - >-
        The `to_services` method in the Garage class is used to convert Garage
        tasks to a list of scheduled `ServiceRecord` objects.
      - It returns a `Recipe` for the specified serving size.
      - >-
        The AI community is a group of individuals who collaborate on models,
        datasets, and tools to advance artificial intelligence research.
  - source_sentence: >-
      The main version of the guide contains the INSTALLATION page. Click the
      link to be directed there.
    sentences:
      - You can bake bread by following the Bake bread tutorial.
      - >-
        The base class for documents generated from a data stream is
        StreamBasedBuilder.
      - >-
        You can find the INSTALLATION page in the main version of the guide.
        Click on the provided link to redirect to the main version.
  - source_sentence: >-
      A major distinction between a ProductList and an InventoryList is that a
      ProductList allows for random access to the items, while an InventoryList
      updates gradually as it is navigated.
    sentences:
      - >-
        The how-to guides for the platform include Setup, Processing, Streaming,
        TensorFlow integration, PyTorch integration, Cache management, Cloud
        storage, Search index, Analytics, and Data Pipelines.
      - >-
        Yes, there is a tutorial for analyzing stock market data. You can find
        it at the link provided: /docs/stocks/v2.10.0/data_analysis.
      - >-
        The main difference between a ProductList and an InventoryList is that a
        ProductList provides random access to the items, while an InventoryList
        updates progressively as you browse the list.
  - source_sentence: >-
      ImageFolder is a dataset builder that eliminates the need for coding to
      quickly load a dataset with thousands of image files. It will
      automatically incorporate any extra data such as resolution, format, or
      tags, provided that it is included in a metadata file
      (metadata.csv/metadata.jsonl).
    sentences:
      - The function `calc_and_sum` returns the calculated value and sum.
      - >-
        Some examples of supported network drives are Network File System (NFS),
        Server Message Block (SMB), and WebDAV.
      - >-
        ImageFolder is a dataset builder designed to quickly load an image
        dataset with several thousand image files without requiring you to write
        any code. It automatically loads any additional information about your
        dataset, such as image resolution, format, or image tags, as long as you
        include this information in a metadata file
        (metadata.csv/metadata.jsonl).
  - source_sentence: The `num_services` method gives the quantity of services in the garage.
    sentences:
      - >-
        A signature in the sales database is a unique identifier for a
        transaction that is updated every time a change is made. It is computed
        by combining the previous signature and a hash of the latest update
        applied.
      - The `num_services` method returns the number of services in the garage.
      - It returns the number of entries in the dataset.
model-index:
  - name: SentenceTransformer based on srikarvar/fine_tuned_model_5
    results:
      - task:
          type: binary-classification
          name: Binary Classification
        dataset:
          name: pair class dev
          type: pair-class-dev
        metrics:
          - type: cosine_accuracy
            value: 0.9821428571428571
            name: Cosine Accuracy
          - type: cosine_accuracy_threshold
            value: 0.9922685623168945
            name: Cosine Accuracy Threshold
          - type: cosine_f1
            value: 0.9909909909909909
            name: Cosine F1
          - type: cosine_f1_threshold
            value: 0.9922685623168945
            name: Cosine F1 Threshold
          - type: cosine_precision
            value: 1
            name: Cosine Precision
          - type: cosine_recall
            value: 0.9821428571428571
            name: Cosine Recall
          - type: cosine_ap
            value: 1
            name: Cosine Ap
          - type: dot_accuracy
            value: 0.9821428571428571
            name: Dot Accuracy
          - type: dot_accuracy_threshold
            value: 0.9922685623168945
            name: Dot Accuracy Threshold
          - type: dot_f1
            value: 0.9909909909909909
            name: Dot F1
          - type: dot_f1_threshold
            value: 0.9922685623168945
            name: Dot F1 Threshold
          - type: dot_precision
            value: 1
            name: Dot Precision
          - type: dot_recall
            value: 0.9821428571428571
            name: Dot Recall
          - type: dot_ap
            value: 1
            name: Dot Ap
          - type: manhattan_accuracy
            value: 0.9821428571428571
            name: Manhattan Accuracy
          - type: manhattan_accuracy_threshold
            value: 1.8805665969848633
            name: Manhattan Accuracy Threshold
          - type: manhattan_f1
            value: 0.9909909909909909
            name: Manhattan F1
          - type: manhattan_f1_threshold
            value: 1.8805665969848633
            name: Manhattan F1 Threshold
          - type: manhattan_precision
            value: 1
            name: Manhattan Precision
          - type: manhattan_recall
            value: 0.9821428571428571
            name: Manhattan Recall
          - type: manhattan_ap
            value: 1
            name: Manhattan Ap
          - type: euclidean_accuracy
            value: 0.9821428571428571
            name: Euclidean Accuracy
          - type: euclidean_accuracy_threshold
            value: 0.12164457887411118
            name: Euclidean Accuracy Threshold
          - type: euclidean_f1
            value: 0.9909909909909909
            name: Euclidean F1
          - type: euclidean_f1_threshold
            value: 0.12164457887411118
            name: Euclidean F1 Threshold
          - type: euclidean_precision
            value: 1
            name: Euclidean Precision
          - type: euclidean_recall
            value: 0.9821428571428571
            name: Euclidean Recall
          - type: euclidean_ap
            value: 1
            name: Euclidean Ap
          - type: max_accuracy
            value: 0.9821428571428571
            name: Max Accuracy
          - type: max_accuracy_threshold
            value: 1.8805665969848633
            name: Max Accuracy Threshold
          - type: max_f1
            value: 0.9909909909909909
            name: Max F1
          - type: max_f1_threshold
            value: 1.8805665969848633
            name: Max F1 Threshold
          - type: max_precision
            value: 1
            name: Max Precision
          - type: max_recall
            value: 0.9821428571428571
            name: Max Recall
          - type: max_ap
            value: 1
            name: Max Ap
      - task:
          type: binary-classification
          name: Binary Classification
        dataset:
          name: pair class test
          type: pair-class-test
        metrics:
          - type: cosine_accuracy
            value: 0.9821428571428571
            name: Cosine Accuracy
          - type: cosine_accuracy_threshold
            value: 0.9922685623168945
            name: Cosine Accuracy Threshold
          - type: cosine_f1
            value: 0.9909909909909909
            name: Cosine F1
          - type: cosine_f1_threshold
            value: 0.9922685623168945
            name: Cosine F1 Threshold
          - type: cosine_precision
            value: 1
            name: Cosine Precision
          - type: cosine_recall
            value: 0.9821428571428571
            name: Cosine Recall
          - type: cosine_ap
            value: 1
            name: Cosine Ap
          - type: dot_accuracy
            value: 0.9821428571428571
            name: Dot Accuracy
          - type: dot_accuracy_threshold
            value: 0.9922685623168945
            name: Dot Accuracy Threshold
          - type: dot_f1
            value: 0.9909909909909909
            name: Dot F1
          - type: dot_f1_threshold
            value: 0.9922685623168945
            name: Dot F1 Threshold
          - type: dot_precision
            value: 1
            name: Dot Precision
          - type: dot_recall
            value: 0.9821428571428571
            name: Dot Recall
          - type: dot_ap
            value: 1
            name: Dot Ap
          - type: manhattan_accuracy
            value: 0.9821428571428571
            name: Manhattan Accuracy
          - type: manhattan_accuracy_threshold
            value: 1.8805665969848633
            name: Manhattan Accuracy Threshold
          - type: manhattan_f1
            value: 0.9909909909909909
            name: Manhattan F1
          - type: manhattan_f1_threshold
            value: 1.8805665969848633
            name: Manhattan F1 Threshold
          - type: manhattan_precision
            value: 1
            name: Manhattan Precision
          - type: manhattan_recall
            value: 0.9821428571428571
            name: Manhattan Recall
          - type: manhattan_ap
            value: 1
            name: Manhattan Ap
          - type: euclidean_accuracy
            value: 0.9821428571428571
            name: Euclidean Accuracy
          - type: euclidean_accuracy_threshold
            value: 0.12164457887411118
            name: Euclidean Accuracy Threshold
          - type: euclidean_f1
            value: 0.9909909909909909
            name: Euclidean F1
          - type: euclidean_f1_threshold
            value: 0.12164457887411118
            name: Euclidean F1 Threshold
          - type: euclidean_precision
            value: 1
            name: Euclidean Precision
          - type: euclidean_recall
            value: 0.9821428571428571
            name: Euclidean Recall
          - type: euclidean_ap
            value: 1
            name: Euclidean Ap
          - type: max_accuracy
            value: 0.9821428571428571
            name: Max Accuracy
          - type: max_accuracy_threshold
            value: 1.8805665969848633
            name: Max Accuracy Threshold
          - type: max_f1
            value: 0.9909909909909909
            name: Max F1
          - type: max_f1_threshold
            value: 1.8805665969848633
            name: Max F1 Threshold
          - type: max_precision
            value: 1
            name: Max Precision
          - type: max_recall
            value: 0.9821428571428571
            name: Max Recall
          - type: max_ap
            value: 1
            name: Max Ap
---

# SentenceTransformer based on srikarvar/fine_tuned_model_5

This is a sentence-transformers model finetuned from srikarvar/fine_tuned_model_5 on the json dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

  • Model Type: Sentence Transformer
  • Base model: srikarvar/fine_tuned_model_5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json

### Model Sources

  • Documentation: [Sentence Transformers Documentation](https://sbert.net)
  • Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
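
For orientation, the Transformer → Pooling → Normalize stack printed above can also be assembled by hand from the library's building blocks. This is only an illustrative sketch based on the printout; in practice, load the published checkpoint from the Hub as shown in the Usage section below.

```python
from sentence_transformers import SentenceTransformer, models

# Illustrative only: rebuild the same module stack from its components.
transformer = models.Transformer("srikarvar/fine_tuned_model_12", max_seq_length=512)
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 384 for this model
    pooling_mode_mean_tokens=True,               # mean pooling, as in the printout above
)
normalize = models.Normalize()                   # L2-normalize the sentence embeddings

model = SentenceTransformer(modules=[transformer, pooling, normalize])
print(model.get_sentence_embedding_dimension())  # 384
```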

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("srikarvar/fine_tuned_model_12")
# Run inference
sentences = [
    'The `num_services` method gives the quantity of services in the garage.',
    'The `num_services` method returns the number of services in the garage.',
    'It returns the number of entries in the dataset.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
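
Since the model was tuned for pair classification, a common follow-on step is turning similarity scores into duplicate/non-duplicate decisions. Below is a minimal sketch assuming a cosine threshold close to the `cosine_accuracy_threshold` reported in the Evaluation section (~0.99); the threshold and the example pairs are illustrative and should be re-tuned on your own validation pairs.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("srikarvar/fine_tuned_model_12")

# Illustrative threshold, roughly the cosine_accuracy_threshold reported below (~0.99).
THRESHOLD = 0.99

pairs = [
    ("The `num_services` method gives the quantity of services in the garage.",
     "The `num_services` method returns the number of services in the garage."),
    ("The `num_services` method gives the quantity of services in the garage.",
     "It returns the number of entries in the dataset."),
]
left = model.encode([a for a, _ in pairs])
right = model.encode([b for _, b in pairs])

# model.similarity returns a full score matrix; the diagonal holds the pairwise scores.
scores = model.similarity(left, right).diagonal()
is_duplicate = scores >= THRESHOLD
print(scores.tolist())        # one high score, one low score expected
print(is_duplicate.tolist())  # [True, False] expected for these pairs
```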

## Evaluation

### Metrics

#### Binary Classification

  • Dataset: `pair-class-dev`

| Metric                       | Value  |
|:-----------------------------|:-------|
| cosine_accuracy              | 0.9821 |
| cosine_accuracy_threshold    | 0.9923 |
| cosine_f1                    | 0.991  |
| cosine_f1_threshold          | 0.9923 |
| cosine_precision             | 1.0    |
| cosine_recall                | 0.9821 |
| cosine_ap                    | 1.0    |
| dot_accuracy                 | 0.9821 |
| dot_accuracy_threshold       | 0.9923 |
| dot_f1                       | 0.991  |
| dot_f1_threshold             | 0.9923 |
| dot_precision                | 1.0    |
| dot_recall                   | 0.9821 |
| dot_ap                       | 1.0    |
| manhattan_accuracy           | 0.9821 |
| manhattan_accuracy_threshold | 1.8806 |
| manhattan_f1                 | 0.991  |
| manhattan_f1_threshold       | 1.8806 |
| manhattan_precision          | 1.0    |
| manhattan_recall             | 0.9821 |
| manhattan_ap                 | 1.0    |
| euclidean_accuracy           | 0.9821 |
| euclidean_accuracy_threshold | 0.1216 |
| euclidean_f1                 | 0.991  |
| euclidean_f1_threshold       | 0.1216 |
| euclidean_precision          | 1.0    |
| euclidean_recall             | 0.9821 |
| euclidean_ap                 | 1.0    |
| max_accuracy                 | 0.9821 |
| max_accuracy_threshold       | 1.8806 |
| max_f1                       | 0.991  |
| max_f1_threshold             | 1.8806 |
| max_precision                | 1.0    |
| max_recall                   | 0.9821 |
| max_ap                       | 1.0    |

#### Binary Classification

  • Dataset: `pair-class-test`

| Metric                       | Value  |
|:-----------------------------|:-------|
| cosine_accuracy              | 0.9821 |
| cosine_accuracy_threshold    | 0.9923 |
| cosine_f1                    | 0.991  |
| cosine_f1_threshold          | 0.9923 |
| cosine_precision             | 1.0    |
| cosine_recall                | 0.9821 |
| cosine_ap                    | 1.0    |
| dot_accuracy                 | 0.9821 |
| dot_accuracy_threshold       | 0.9923 |
| dot_f1                       | 0.991  |
| dot_f1_threshold             | 0.9923 |
| dot_precision                | 1.0    |
| dot_recall                   | 0.9821 |
| dot_ap                       | 1.0    |
| manhattan_accuracy           | 0.9821 |
| manhattan_accuracy_threshold | 1.8806 |
| manhattan_f1                 | 0.991  |
| manhattan_f1_threshold       | 1.8806 |
| manhattan_precision          | 1.0    |
| manhattan_recall             | 0.9821 |
| manhattan_ap                 | 1.0    |
| euclidean_accuracy           | 0.9821 |
| euclidean_accuracy_threshold | 0.1216 |
| euclidean_f1                 | 0.991  |
| euclidean_f1_threshold       | 0.1216 |
| euclidean_precision          | 1.0    |
| euclidean_recall             | 0.9821 |
| euclidean_ap                 | 1.0    |
| max_accuracy                 | 0.9821 |
| max_accuracy_threshold       | 1.8806 |
| max_f1                       | 0.991  |
| max_f1_threshold             | 1.8806 |
| max_precision                | 1.0    |
| max_recall                   | 0.9821 |
| max_ap                       | 1.0    |
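
The metric layout above (accuracy/F1/precision/recall/AP with per-metric thresholds across cosine, dot, Manhattan, Euclidean, and max) matches what the library's `BinaryClassificationEvaluator` reports. A minimal sketch of computing such numbers on your own labelled pairs follows; the two pairs are hypothetical stand-ins, since the actual pair-class-dev/test splits are not included with this card.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("srikarvar/fine_tuned_model_12")

# Hypothetical labelled pairs: 1 = paraphrase/duplicate, 0 = unrelated.
sentences1 = [
    "The `num_services` method gives the quantity of services in the garage.",
    "The `num_services` method gives the quantity of services in the garage.",
]
sentences2 = [
    "The `num_services` method returns the number of services in the garage.",
    "It returns the number of entries in the dataset.",
]
labels = [1, 0]

evaluator = BinaryClassificationEvaluator(sentences1, sentences2, labels, name="pair-class-dev")
results = evaluator(model)  # in recent versions, a dict of cosine/dot/manhattan/euclidean/max metrics
print(results)
```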

## Training Details

### Training Dataset

#### json

  • Dataset: json
  • Size: 560 training samples
  • Columns: label, sentence2, and sentence1
  • Approximate statistics based on the first 560 samples:

  |         | label      | sentence2                                         | sentence1                                        |
  |:--------|:-----------|:--------------------------------------------------|:--------------------------------------------------|
  | type    | int        | string                                             | string                                             |
  | details | 1: 100.00% | min: 9 tokens, mean: 30.18 tokens, max: 98 tokens  | min: 8 tokens, mean: 30.0 tokens, max: 98 tokens   |

  • Samples:

  | label | sentence2                                                                              | sentence1                                                                           |
  |:------|:----------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
  | 1     | It is not available in v2.10.0.                                                          | No, it doesn't exist in v2.10.0.                                                       |
  | 1     | You can become a member of the research forum and pose questions to the AI community.    | You can join and ask questions in the AI research forum.                               |
  | 1     | No information regarding initializing a project for PyTorch is included in the guide.    | The guide does not provide information on how to initialize a project for PyTorch.     |

  • Loss: OnlineContrastiveLoss
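
For reference, a pair dataset with this column layout can be built directly with the `datasets` library. The sketch below uses two of the sample rows above as placeholder data; the actual 560-sample json dataset is not published with this card.

```python
from datasets import Dataset

# Minimal sketch of the expected column layout (label / sentence2 / sentence1),
# using sample rows from the table above as placeholder data.
train_dataset = Dataset.from_dict({
    "sentence1": [
        "No, it doesn't exist in v2.10.0.",
        "You can join and ask questions in the AI research forum.",
    ],
    "sentence2": [
        "It is not available in v2.10.0.",
        "You can become a member of the research forum and pose questions to the AI community.",
    ],
    "label": [1, 1],  # 1 = positive (paraphrase) pair
})
print(train_dataset)
```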

### Evaluation Dataset

#### json

  • Dataset: json
  • Size: 560 evaluation samples
  • Columns: label, sentence2, and sentence1
  • Approximate statistics based on the first 560 samples:

  |         | label      | sentence2                                           | sentence1                                           |
  |:--------|:-----------|:-----------------------------------------------------|:-----------------------------------------------------|
  | type    | int        | string                                                | string                                                |
  | details | 1: 100.00% | min: 15 tokens, mean: 32.29 tokens, max: 82 tokens    | min: 14 tokens, mean: 31.96 tokens, max: 82 tokens    |

  • Samples:

  | label | sentence2 | sentence1 |
  |:------|:----------|:----------|
  | 1     | The how-to guides for the platform include instructions for Setup, Processing, Streaming, TensorFlow integration, PyTorch integration, Caching, Cloud storage, Indexing, Analytics, and Data Pipelines. | The how-to guides for the platform include Setup, Processing, Streaming, TensorFlow integration, PyTorch integration, Cache management, Cloud storage, Search index, Analytics, and Data Pipelines. |
  | 1     | In the absence of a model script, all files in the supported formats will be loaded. However, if a model script is present, it will be downloaded and executed in order to download and prepare the model. | If there’s no model script, all the files in the supported formats are loaded. If there’s a model script, it is downloaded and executed to download and prepare the model. |
  | 1     | React, Angular, and Vue are compatible with the Plugin library. | The Plugin library can be used with React, Angular, and Vue. |

  • Loss: OnlineContrastiveLoss

### Training Hyperparameters

#### Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • gradient_accumulation_steps: 2
  • num_train_epochs: 4
  • warmup_ratio: 0.1
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
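
Put together, the non-default hyperparameters above correspond roughly to the following Sentence Transformers 3.x training setup. This is a minimal sketch, assuming `train_dataset` and `eval_dataset` are pair datasets as described in the Training/Evaluation Dataset sections; the output directory and `save_strategy="epoch"` (needed so `load_best_model_at_end` can pair checkpoints with the epoch-level evaluation) are assumptions, not values taken from this card.

```python
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import OnlineContrastiveLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("srikarvar/fine_tuned_model_5")  # base model
loss = OnlineContrastiveLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="fine_tuned_model_12",   # placeholder output path
    eval_strategy="epoch",
    save_strategy="epoch",              # assumption: required to match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,
    num_train_epochs=4,
    warmup_ratio=0.1,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # pair dataset with sentence1/sentence2/label columns
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```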

#### All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 2
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

### Training Logs

| Epoch | Step | Training Loss | loss   | pair-class-dev_max_ap | pair-class-test_max_ap |
|:-----:|:----:|:-------------:|:------:|:---------------------:|:----------------------:|
| 0     | 0    | -             | -      | 1.0                   | -                      |
| 1.0   | 8    | -             | 0.0028 | 1.0                   | -                      |
| 1.25  | 10   | 0.1425        | -      | -                     | -                      |
| 2.0   | 16   | -             | 0.0003 | 1.0                   | -                      |
| 2.5   | 20   | 0.002         | -      | -                     | -                      |
| 3.0   | 24   | -             | 0.0001 | 1.0                   | -                      |
| 3.75  | 30   | 0.0008        | -      | -                     | -                      |
| 4.0   | 32   | -             | 0.0001 | 1.0                   | 1.0                    |

  • The bold row denotes the saved checkpoint.

### Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.0
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.34.2
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```