---
base_model: indobenchmark/indobert-large-p2
datasets:
- quarkss/stsb-indo-mt
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5749
- loss:CosineSimilarityLoss
widget:
- source_sentence: Dua ekor anjing berenang di kolam renang.
  sentences:
  - Anjing-anjing sedang berenang di kolam renang.
  - Seekor binatang sedang berjalan di atas tanah.
  - Seorang pria sedang menyeka pinggiran mangkuk.
- source_sentence: Seorang anak perempuan sedang mengiris mentega menjadi dua bagian.
  sentences:
  - Seorang wanita sedang mengiris tahu.
  - Dua orang berkelahi.
  - Seorang pria sedang menari.
- source_sentence: Seorang gadis sedang makan kue mangkuk.
  sentences:
  - Seorang pria sedang mengiris bawang putih dengan alat pengiris mandolin.
  - Seorang pria sedang memotong dan memotong bawang.
  - Seorang wanita sedang makan kue mangkuk.
- source_sentence: Sebuah helikopter mendarat di landasan helikopter.
  sentences:
  - Seorang pria sedang mengiris mentimun.
  - Seorang pria sedang memotong batang pohon dengan kapak.
  - Sebuah helikopter mendarat.
- source_sentence: Seorang pria sedang berjalan dengan seekor kuda.
  sentences:
  - Seorang pria sedang menuntun seekor kuda dengan tali kekang.
  - Seorang pria sedang menembakkan pistol.
  - Seorang wanita sedang memetik tomat.
model-index:
- name: SentenceTransformer based on indobenchmark/indobert-large-p2
  results:
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: pearson_cosine
      value: 0.8691840566814281
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.8676618157111291
      name: Spearman Cosine
    - type: pearson_manhattan
      value: 0.8591936899214765
      name: Pearson Manhattan
    - type: spearman_manhattan
      value: 0.8625729388794413
      name: Spearman Manhattan
    - type: pearson_euclidean
      value: 0.8599101625523397
      name: Pearson Euclidean
    - type: spearman_euclidean
      value: 0.8632992102966184
      name: Spearman Euclidean
    - type: pearson_dot
      value: 0.8440663965451926
      name: Pearson Dot
    - type: spearman_dot
      value: 0.8392116432595296
      name: Spearman Dot
    - type: pearson_max
      value: 0.8691840566814281
      name: Pearson Max
    - type: spearman_max
      value: 0.8676618157111291
      name: Spearman Max
    - type: pearson_cosine
      value: 0.8401688802461491
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.8365597846163649
      name: Spearman Cosine
    - type: pearson_manhattan
      value: 0.8276067064758832
      name: Pearson Manhattan
    - type: spearman_manhattan
      value: 0.8315689286193226
      name: Spearman Manhattan
    - type: pearson_euclidean
      value: 0.8277930159560367
      name: Pearson Euclidean
    - type: spearman_euclidean
      value: 0.831557090168861
      name: Spearman Euclidean
    - type: pearson_dot
      value: 0.8170329546065831
      name: Pearson Dot
    - type: spearman_dot
      value: 0.8083098402255348
      name: Spearman Dot
    - type: pearson_max
      value: 0.8401688802461491
      name: Pearson Max
    - type: spearman_max
      value: 0.8365597846163649
      name: Spearman Max
---

# SentenceTransformer based on indobenchmark/indobert-large-p2
This is a [sentence-transformers](https://www.sbert.net) model finetuned from [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## STSB Test
| Model | Spearman Correlation |
|:------|---------------------:|
| quarkss/indobert-large-stsb | 0.8366 |
| quarkss/indobert-base-stsb | 0.8123 |
| sentence-transformers/all-MiniLM-L6-v2 | 0.5952 |
| indobenchmark/indobert-large-p2 | 0.5673 |
| sentence-transformers/all-mpnet-base-v2 | 0.5531 |
| sentence-transformers/stsb-bert-base | 0.5349 |
| indobenchmark/indobert-base-p2 | 0.5309 |
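
For reference, the test-set Spearman above can be recomputed from raw embeddings. A minimal sketch, assuming the `test` split of `quarkss/stsb-indo-mt` exposes `sentence1`, `sentence2`, and `score` columns:

```python
import numpy as np
from datasets import load_dataset
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer

# Split and column names are assumptions; adjust to the dataset's actual schema.
test = load_dataset("quarkss/stsb-indo-mt", split="test")
model = SentenceTransformer("quarkss/indobert-large-stsb")

emb1 = model.encode(test["sentence1"])
emb2 = model.encode(test["sentence2"])

# Cosine similarity per sentence pair, then rank correlation with the gold scores.
cosine = (emb1 * emb2).sum(axis=1) / (
    np.linalg.norm(emb1, axis=1) * np.linalg.norm(emb2, axis=1)
)
corr, _ = spearmanr(cosine, test["score"])
print(f"Spearman (cosine): {corr:.4f}")
```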
## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** indobenchmark/indobert-large-p2
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
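
The `Pooling` block applies mean pooling (`pooling_mode_mean_tokens: True`) over the BERT token embeddings. As an illustration, here is a hedged sketch of the equivalent computation with plain `transformers`, assuming the repository's weights load via `AutoModel`:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("quarkss/indobert-large-stsb")
encoder = AutoModel.from_pretrained("quarkss/indobert-large-stsb")

batch = tokenizer(
    ["Seorang pria sedang berjalan dengan seekor kuda."],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq, 1024)

# Mean pooling: average the token vectors, ignoring padding positions.
mask = batch["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 1024])
```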
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("quarkss/indobert-large-stsb")
# Run inference
sentences = [
    'Seorang pria sedang berjalan dengan seekor kuda.',
    'Seorang pria sedang menuntun seekor kuda dengan tali kekang.',
    'Seorang pria sedang menembakkan pistol.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
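
`model.similarity` also works between two different sets of embeddings, e.g. to rank candidates against a query. The sentences and scores below are illustrative only:

```python
query_embedding = model.encode(["Dua ekor anjing berenang di kolam renang."])
candidate_embeddings = model.encode([
    "Anjing-anjing sedang berenang di kolam renang.",
    "Seorang pria sedang menembakkan pistol.",
])
# One row per query, one column per candidate; higher means more similar.
print(model.similarity(query_embedding, candidate_embeddings))
# e.g. tensor([[0.93, 0.12]])  -- values shown here are illustrative
```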
## Evaluation

### Metrics

#### Semantic Similarity

- Evaluated with `EmbeddingSimilarityEvaluator`

| Metric | Value |
|:-------|------:|
| pearson_cosine | 0.8692 |
| spearman_cosine | 0.8677 |
| pearson_manhattan | 0.8592 |
| spearman_manhattan | 0.8626 |
| pearson_euclidean | 0.8599 |
| spearman_euclidean | 0.8633 |
| pearson_dot | 0.8441 |
| spearman_dot | 0.8392 |
| pearson_max | 0.8692 |
| spearman_max | 0.8677 |
#### Semantic Similarity

- Evaluated with `EmbeddingSimilarityEvaluator`

| Metric | Value |
|:-------|------:|
| pearson_cosine | 0.8402 |
| spearman_cosine | 0.8366 |
| pearson_manhattan | 0.8276 |
| spearman_manhattan | 0.8316 |
| pearson_euclidean | 0.8278 |
| spearman_euclidean | 0.8316 |
| pearson_dot | 0.8170 |
| spearman_dot | 0.8083 |
| pearson_max | 0.8402 |
| spearman_max | 0.8366 |
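
The two metric sets above come from separate `EmbeddingSimilarityEvaluator` runs. A minimal sketch of such an evaluation, again assuming the `quarkss/stsb-indo-mt` column names and gold scores already scaled to [0, 1]:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Split and column names are assumptions about the dataset's schema.
test = load_dataset("quarkss/stsb-indo-mt", split="test")

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=test["sentence1"],
    sentences2=test["sentence2"],
    scores=test["score"],  # gold similarities, assumed to be in [0, 1]
    name="stsb-indo-test",
)

model = SentenceTransformer("quarkss/indobert-large-stsb")
# Returns Pearson/Spearman correlations for cosine, dot, Euclidean and
# Manhattan similarities, matching the tables above.
print(evaluator(model))
```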
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 5,749 training samples
- Columns: `sentence1`, `sentence2`, and `score`
- Approximate statistics based on the first 1000 samples:

| | sentence1 | sentence2 | score |
|:--|:----------|:----------|:------|
| type | string | string | float |
| details | min: 6 tokens<br>mean: 9.65 tokens<br>max: 25 tokens | min: 6 tokens<br>mean: 9.59 tokens<br>max: 24 tokens | min: 0.0<br>mean: 0.54<br>max: 1.0 |

- Samples:

| sentence1 | sentence2 | score |
|:----------|:----------|:------|
| Sebuah pesawat sedang lepas landas. | Sebuah pesawat terbang sedang lepas landas. | 1.0 |
| Seorang pria sedang memainkan seruling besar. | Seorang pria sedang memainkan seruling. | 0.76 |
| Seorang pria sedang mengoleskan keju parut di atas pizza. | Seorang pria sedang mengoleskan keju parut di atas pizza yang belum matang. | 0.76 |

- Loss: `CosineSimilarityLoss` with these parameters:

  ```json
  {
      "loss_fct": "torch.nn.modules.loss.MSELoss"
  }
  ```
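
`CosineSimilarityLoss` regresses the cosine similarity of each embedding pair onto the gold score using the `MSELoss` shown above. A minimal sketch of the objective (illustrative, not the library's exact code):

```python
import torch
import torch.nn.functional as F

def cosine_similarity_loss(emb1: torch.Tensor, emb2: torch.Tensor,
                           score: torch.Tensor) -> torch.Tensor:
    # Cosine similarity of each sentence pair, regressed onto the
    # gold score in [0, 1] with mean squared error.
    cos = F.cosine_similarity(emb1, emb2, dim=-1)
    return F.mse_loss(cos, score)
```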
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
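
For reference, a hedged sketch of how these non-default values could be wired into a comparable run with the Sentence Transformers v3 trainer; the output directory and dataset split names are assumptions:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CosineSimilarityLoss

# Split names are assumptions; columns are expected to be sentence1/sentence2/score.
train = load_dataset("quarkss/stsb-indo-mt", split="train")
dev = load_dataset("quarkss/stsb-indo-mt", split="validation")

# Loading a plain BERT checkpoint adds a mean-pooling head by default.
model = SentenceTransformer("indobenchmark/indobert-large-p2")

args = SentenceTransformerTrainingArguments(
    output_dir="indobert-large-stsb",  # assumed
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    weight_decay=0.01,
    num_train_epochs=5,
    warmup_ratio=0.1,
    fp16=True,  # requires a CUDA GPU
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train,
    eval_dataset=dev,
    loss=CosineSimilarityLoss(model),
)
trainer.train()
```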
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>
### Training Logs

| Epoch | Step | Training Loss | spearman_cosine | spearman_max |
|:------|:-----|:--------------|:----------------|:-------------|
| 0.2778 | 100 | 0.0867 | - | - |
| 0.5556 | 200 | 0.0351 | - | - |
| 0.8333 | 300 | 0.0303 | - | - |
| 1.1111 | 400 | 0.0202 | - | - |
| 1.3889 | 500 | 0.0154 | 0.8612 | - |
| 1.6667 | 600 | 0.0136 | - | - |
| 1.9444 | 700 | 0.0145 | - | - |
| 2.2222 | 800 | 0.0082 | - | - |
| 2.5 | 900 | 0.0072 | - | - |
| 2.7778 | 1000 | 0.0068 | 0.8660 | - |
| 3.0556 | 1100 | 0.0065 | - | - |
| 3.3333 | 1200 | 0.0044 | - | - |
| 3.6111 | 1300 | 0.0044 | - | - |
| 3.8889 | 1400 | 0.0045 | - | - |
| 4.1667 | 1500 | 0.0038 | 0.8677 | - |
| 4.4444 | 1600 | 0.0038 | - | - |
| 4.7222 | 1700 | 0.0035 | - | - |
| 5.0 | 1800 | 0.0034 | - | 0.8366 |
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.0.1+cu117
- Accelerate: 0.32.1
- Datasets: 2.17.0
- Tokenizers: 0.19.1
## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```