SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-MiniLM-L6-v2
- Maximum Sequence Length: 256 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
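In order, the three modules encode the input with a BertModel (truncated at 256 tokens), mean-pool the token embeddings into a single 384-dimensional sentence vector, and L2-normalize the result, so cosine similarity between outputs reduces to a dot product. As a minimal sketch, here is the equivalent computation in plain transformers, assuming the repository exposes the underlying BertModel weights at its root, as Sentence Transformers checkpoints typically do:

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("LeoChiuu/all-MiniLM-L6-v2")
bert = AutoModel.from_pretrained("LeoChiuu/all-MiniLM-L6-v2")

inputs = tokenizer(["Are these your footprints?"], padding=True,
                   truncation=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**inputs).last_hidden_state  # (batch, seq, 384)

# Mean pooling over non-padding tokens, then L2 normalization (Normalize()).
mask = inputs["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
sentence_embedding = F.normalize(sentence_embedding, p=2, dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 384])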
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("LeoChiuu/all-MiniLM-L6-v2")
# Run inference
sentences = [
'Do you see your scarf in the watering can?',
'Are these your footprints?',
'Magic user',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
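The model also covers the paraphrase-mining use case mentioned in the introduction. A minimal sketch using the library's util.paraphrase_mining helper, with a made-up corpus:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("LeoChiuu/all-MiniLM-L6-v2")
corpus = [
    "Do you see your scarf in the watering can?",
    "Is your scarf in the watering can?",
    "Are these your footprints?",
]
# Yields (score, i, j) triples, sorted by decreasing cosine similarity.
for score, i, j in util.paraphrase_mining(model, corpus):
    print(f"{score:.3f}  {corpus[i]!r}  <->  {corpus[j]!r}")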
Evaluation
Metrics
Binary Classification
- Dataset: custom-arc-semantics-data
- Evaluated with BinaryClassificationEvaluator
Metric | Value |
---|---|
cosine_accuracy | 0.9286 |
cosine_accuracy_threshold | 0.4293 |
cosine_f1 | 0.9425 |
cosine_f1_threshold | 0.227 |
cosine_precision | 0.9111 |
cosine_recall | 0.9762 |
cosine_ap | 0.9721 |
dot_accuracy | 0.9286 |
dot_accuracy_threshold | 0.4293 |
dot_f1 | 0.9425 |
dot_f1_threshold | 0.227 |
dot_precision | 0.9111 |
dot_recall | 0.9762 |
dot_ap | 0.9721 |
manhattan_accuracy | 0.9286 |
manhattan_accuracy_threshold | 16.6308 |
manhattan_f1 | 0.9432 |
manhattan_f1_threshold | 19.7401 |
manhattan_precision | 0.9022 |
manhattan_recall | 0.9881 |
manhattan_ap | 0.9728 |
euclidean_accuracy | 0.9286 |
euclidean_accuracy_threshold | 1.0682 |
euclidean_f1 | 0.9425 |
euclidean_f1_threshold | 1.2433 |
euclidean_precision | 0.9111 |
euclidean_recall | 0.9762 |
euclidean_ap | 0.9721 |
max_accuracy | 0.9286 |
max_accuracy_threshold | 16.6308 |
max_f1 | 0.9432 |
max_f1_threshold | 19.7401 |
max_precision | 0.9111 |
max_recall | 0.9881 |
max_ap | 0.9728 |
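BinaryClassificationEvaluator sweeps each similarity function over candidate thresholds and reports the best accuracy and F1 it finds; the max_* rows take the best value across the four similarity functions, and the cosine and dot rows coincide because the model L2-normalizes its outputs. A minimal sketch of re-running the evaluator is below; the sentence pairs are hypothetical stand-ins in the style of the samples under Training Details, since the actual evaluation split is not published with this card:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("LeoChiuu/all-MiniLM-L6-v2")

# Hypothetical pairs; label 1 = same meaning, 0 = different.
evaluator = BinaryClassificationEvaluator(
    sentences1=["Let's check inside", "This wine glass is related."],
    sentences2=["Let's search inside", "This sword looks important."],
    labels=[1, 0],
    name="custom-arc-semantics-data",
)
print(evaluator(model))  # dict of accuracy/F1/precision/recall/AP per similarity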
Training Details
Training Dataset
Unnamed Dataset
- Size: 560 training samples
- Columns: text1, text2, and label
- Approximate statistics based on the first 1000 samples:

 | text1 | text2 | label |
---|---|---|---|
type | string | string | int |
details | min: 3 tokens, mean: 7.2 tokens, max: 18 tokens | min: 3 tokens, mean: 7.26 tokens, max: 18 tokens | 0: ~36.07%, 1: ~63.93% |
- Samples:

text1 | text2 | label |
---|---|---|
When it was dinner | Dinner time | 1 |
Did you cook chicken noodle last night? | Did you make chicken noodle for dinner? | 1 |
Someone who can change item | Someone who uses magic that turns something into something. | 1 |
- Loss: CoSENTLoss with these parameters: { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } (a wiring sketch follows below)
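With these parameters, CoSENTLoss scales pairwise cosine similarities by 20.0 and optimizes a ranking objective: pairs labeled 1 should end up with higher cosine similarity than pairs labeled 0. A minimal sketch of wiring such a dataset to the loss, with rows copied from the samples above (the real dataset has 560 such pairs); the trainer hookup follows under Training Hyperparameters:

from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CoSENTLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Columns must match the dataset layout above: text1, text2, label.
train_dataset = Dataset.from_dict({
    "text1": ["When it was dinner", "Did you cook chicken noodle last night?"],
    "text2": ["Dinner time", "Did you make chicken noodle for dinner?"],
    "label": [1, 1],
})

# scale=20.0 as listed above; pairwise_cos_sim is the default similarity_fct.
loss = CoSENTLoss(model, scale=20.0)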
Evaluation Dataset
Unnamed Dataset
- Size: 140 evaluation samples
- Columns: text1, text2, and label
- Approximate statistics based on the first 1000 samples:

 | text1 | text2 | label |
---|---|---|---|
type | string | string | int |
details | min: 3 tokens, mean: 6.99 tokens, max: 18 tokens | min: 3 tokens, mean: 7.29 tokens, max: 18 tokens | 0: ~40.00%, 1: ~60.00% |
- Samples:

text1 | text2 | label |
---|---|---|
Sohpie, are you okay? | Sophie Are you pressured? | 0 |
Let's check inside | Let's search inside | 1 |
This wine glass is related. | This sword looks important. | 0 |
- Loss: CoSENTLoss with these parameters: { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- learning_rate: 2e-05
- num_train_epochs: 13
- warmup_ratio: 0.1
- fp16: True
- batch_sampler: no_duplicates
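As a sketch, the non-default values above map onto SentenceTransformerTrainingArguments as follows, reusing model, train_dataset, and loss from the CoSENTLoss sketch earlier; output_dir and the eval rows are placeholders:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.training_args import BatchSamplers

# Placeholder eval split with the same text1/text2/label columns.
eval_dataset = Dataset.from_dict({
    "text1": ["Let's check inside"],
    "text2": ["Let's search inside"],
    "label": [1],
})

args = SentenceTransformerTrainingArguments(
    output_dir="all-MiniLM-L6-v2-finetuned",  # placeholder path
    eval_strategy="epoch",
    learning_rate=2e-05,
    num_train_epochs=13,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()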
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 8
- per_device_eval_batch_size: 8
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 13
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- eval_use_gather_object: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss | Validation Loss | custom-arc-semantics-data_max_ap |
---|---|---|---|---|
None | 0 | - | - | 0.9254 |
1.0 | 70 | 2.9684 | 1.4087 | 0.9425 |
2.0 | 140 | 1.4461 | 1.0942 | 0.9629 |
3.0 | 210 | 0.6005 | 0.8398 | 0.9680 |
4.0 | 280 | 0.3021 | 0.7577 | 0.9703 |
5.0 | 350 | 0.2412 | 0.7216 | 0.9715 |
6.0 | 420 | 0.1816 | 0.7538 | 0.9722 |
7.0 | 490 | 0.1512 | 0.8049 | 0.9726 |
8.0 | 560 | 0.1208 | 0.7602 | 0.9726 |
9.0 | 630 | 0.0915 | 0.7286 | 0.9729 |
10.0 | 700 | 0.0553 | 0.7072 | 0.9729 |
11.0 | 770 | 0.0716 | 0.6984 | 0.9730 |
12.0 | 840 | 0.0297 | 0.7063 | 0.9725 |
13.0 | 910 | 0.0462 | 0.6997 | 0.9728 |
Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 2.20.0
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
CoSENTLoss
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}