MPNet base trained on GooAQ triplets
This is a sentence-transformers model finetuned from microsoft/mpnet-base on the gooaq-hard-negatives dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: microsoft/mpnet-base
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: gooaq-hard-negatives
- Language: en
- License: apache-2.0
Model Sources
- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
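The Pooling module above corresponds to plain masked mean pooling over the final token embeddings. As a minimal sketch of what it computes, here is the equivalent calculation using transformers directly (variable names are illustrative, not part of this model's API):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the underlying MPNet encoder and tokenizer straight from the repo
tokenizer = AutoTokenizer.from_pretrained("tomaarsen/mpnet-base-nq-cgist-triplet-3-gte")
encoder = AutoModel.from_pretrained("tomaarsen/mpnet-base-nq-cgist-triplet-3-gte")

encoded = tokenizer(
    ["what energy is released when coal is burned?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**encoded).last_hidden_state  # (batch, seq_len, 768)

# Masked mean pooling: zero out padding positions, then average the rest
mask = encoded["attention_mask"].unsqueeze(-1).float()       # (batch, seq_len, 1)
embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)  # torch.Size([1, 768])
```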
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/mpnet-base-nq-cgist-triplet-3-gte")
# Run inference
sentences = [
    'what energy is released when coal is burned?',
    'When coal is burned, it reacts with the oxygen in the air. This chemical reaction converts the stored solar energy into thermal energy, which is released as heat. But it also produces carbon dioxide and methane.',
    'When coal is burned it releases a number of airborne toxins and pollutants. They include mercury, lead, sulfur dioxide, nitrogen oxides, particulates, and various other heavy metals.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
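The same embeddings can power semantic search by ranking candidate passages against a query. A minimal sketch; the query and corpus strings below are only illustrative:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("tomaarsen/mpnet-base-nq-cgist-triplet-3-gte")

query = "how long should a dog fast before sedation?"  # illustrative query
corpus = [
    "Guidelines are aimed towards 6-8 hours of fasting before surgery.",
    "The yin-yang symbol holds its roots in Taoism/Daoism.",
    "Loins are lean, while pork shoulders have marbled fat inside.",
]

query_embedding = model.encode(query)
corpus_embeddings = model.encode(corpus)

# model.similarity uses this model's configured similarity function (cosine)
scores = model.similarity(query_embedding, corpus_embeddings)  # shape: [1, 3]
best = int(scores.argmax())
print(f"{corpus[best]!r} (score: {scores[0, best]:.4f})")
```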
Evaluation
Metrics
Information Retrieval
- Datasets: NanoClimateFEVER, NanoDBPedia, NanoFEVER, NanoFiQA2018, NanoHotpotQA, NanoMSMARCO, NanoNFCorpus, NanoNQ, NanoQuoraRetrieval, NanoSCIDOCS, NanoArguAna, NanoSciFact, and NanoTouche2020
- Evaluated with `InformationRetrievalEvaluator`
Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cosine_accuracy@1 | 0.22 | 0.46 | 0.38 | 0.28 | 0.34 | 0.1 | 0.26 | 0.14 | 0.82 | 0.34 | 0.18 | 0.38 | 0.5306 |
cosine_accuracy@3 | 0.44 | 0.62 | 0.54 | 0.5 | 0.52 | 0.28 | 0.38 | 0.36 | 0.9 | 0.48 | 0.56 | 0.46 | 0.7551 |
cosine_accuracy@5 | 0.52 | 0.76 | 0.58 | 0.52 | 0.62 | 0.52 | 0.44 | 0.44 | 0.92 | 0.54 | 0.62 | 0.48 | 0.8571 |
cosine_accuracy@10 | 0.72 | 0.82 | 0.68 | 0.58 | 0.72 | 0.68 | 0.5 | 0.58 | 0.96 | 0.66 | 0.84 | 0.62 | 0.9388 |
cosine_precision@1 | 0.22 | 0.46 | 0.38 | 0.28 | 0.34 | 0.1 | 0.26 | 0.14 | 0.82 | 0.34 | 0.18 | 0.38 | 0.5306 |
cosine_precision@3 | 0.1667 | 0.3867 | 0.18 | 0.22 | 0.1933 | 0.0933 | 0.2133 | 0.12 | 0.3667 | 0.2467 | 0.1867 | 0.1667 | 0.4558 |
cosine_precision@5 | 0.12 | 0.388 | 0.12 | 0.164 | 0.144 | 0.104 | 0.196 | 0.088 | 0.244 | 0.212 | 0.124 | 0.104 | 0.4041 |
cosine_precision@10 | 0.094 | 0.344 | 0.07 | 0.098 | 0.092 | 0.068 | 0.138 | 0.06 | 0.134 | 0.148 | 0.084 | 0.068 | 0.3367 |
cosine_recall@1 | 0.0933 | 0.0307 | 0.37 | 0.1372 | 0.17 | 0.1 | 0.0112 | 0.13 | 0.7207 | 0.0707 | 0.18 | 0.345 | 0.0388 |
cosine_recall@3 | 0.195 | 0.0773 | 0.52 | 0.3227 | 0.29 | 0.28 | 0.0205 | 0.34 | 0.8553 | 0.1537 | 0.56 | 0.44 | 0.1001 |
cosine_recall@5 | 0.2333 | 0.1459 | 0.57 | 0.3682 | 0.36 | 0.52 | 0.0308 | 0.41 | 0.8993 | 0.2187 | 0.62 | 0.46 | 0.1398 |
cosine_recall@10 | 0.3723 | 0.2216 | 0.66 | 0.4307 | 0.46 | 0.68 | 0.0422 | 0.55 | 0.9567 | 0.3047 | 0.84 | 0.605 | 0.2297 |
cosine_ndcg@10 | 0.2744 | 0.3921 | 0.5157 | 0.342 | 0.3723 | 0.3608 | 0.1655 | 0.3322 | 0.8807 | 0.2897 | 0.4973 | 0.4701 | 0.3934 |
cosine_mrr@10 | 0.3594 | 0.567 | 0.4757 | 0.3841 | 0.4571 | 0.2616 | 0.3367 | 0.2734 | 0.8617 | 0.4286 | 0.3891 | 0.4409 | 0.6553 |
cosine_map@100 | 0.2018 | 0.2815 | 0.4762 | 0.2826 | 0.2995 | 0.2722 | 0.049 | 0.2765 | 0.8526 | 0.2299 | 0.3967 | 0.4384 | 0.3134 |
Nano BEIR
- Dataset: NanoBEIR_mean
- Evaluated with `NanoBEIREvaluator`
Metric | Value |
---|---|
cosine_accuracy@1 | 0.3408 |
cosine_accuracy@3 | 0.5227 |
cosine_accuracy@5 | 0.6013 |
cosine_accuracy@10 | 0.7153 |
cosine_precision@1 | 0.3408 |
cosine_precision@3 | 0.2304 |
cosine_precision@5 | 0.1855 |
cosine_precision@10 | 0.1334 |
cosine_recall@1 | 0.1844 |
cosine_recall@3 | 0.3196 |
cosine_recall@5 | 0.3828 |
cosine_recall@10 | 0.4887 |
cosine_ndcg@10 | 0.4066 |
cosine_mrr@10 | 0.4531 |
cosine_map@100 | 0.3362 |
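These scores should be reproducible with the NanoBEIREvaluator that ships in sentence-transformers 3.4+ (the version used to train this model). A minimal sketch, with a small subset of the thirteen datasets chosen for speed:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import NanoBEIREvaluator

model = SentenceTransformer("tomaarsen/mpnet-base-nq-cgist-triplet-3-gte")

# Omit dataset_names to evaluate on all thirteen NanoBEIR datasets
evaluator = NanoBEIREvaluator(dataset_names=["msmarco", "nq", "quoraretrieval"])
results = evaluator(model)

# Aggregated scores are keyed like "NanoBEIR_mean_cosine_ndcg@10"
print(results)
```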
Training Details
Training Dataset
gooaq-hard-negatives
- Dataset: gooaq-hard-negatives at 87594a1
- Size: 50,000 training samples
- Columns: question, answer, and negative
- Approximate statistics based on the first 1000 samples:
| | question | answer | negative |
|---|---|---|---|
| type | string | string | string |
| details | min: 8 tokens, mean: 11.53 tokens, max: 28 tokens | min: 14 tokens, mean: 59.79 tokens, max: 150 tokens | min: 15 tokens, mean: 58.76 tokens, max: 143 tokens |
- Samples:
| question | answer | negative |
|---|---|---|
| what is the difference between calories from fat and total fat? | Fat has more than twice as many calories per gram as carbohydrates and proteins. A gram of fat has about 9 calories, while a gram of carbohydrate or protein has about 4 calories. In other words, you could eat twice as much carbohydrates or proteins as fat for the same amount of calories. | Fat has more than twice as many calories per gram as carbohydrates and proteins. A gram of fat has about 9 calories, while a gram of carbohydrate or protein has about 4 calories. In other words, you could eat twice as much carbohydrates or proteins as fat for the same amount of calories. |
| what is the difference between return transcript and account transcript? | A tax return transcript usually meets the needs of lending institutions offering mortgages and student loans. ... Tax Account Transcript - shows basic data such as return type, marital status, adjusted gross income, taxable income and all payment types. It also shows changes made after you filed your original return. | Trial balance is not a financial statement whereas a balance sheet is a financial statement. Trial balance is solely used for internal purposes whereas a balance sheet is used for purposes other than internal i.e. external. In a trial balance, each and every account is divided into debit (dr.) and credit (cr.) |
| how long does my dog need to fast before sedation? | Now, guidelines are aimed towards 6-8 hours before surgery. This pre-op fasting time is much more beneficial for your pets because you have enough food in there to neutralize the stomach acid, preventing it from coming up the esophagus that causes regurgitation under anesthetic. | Try not to let your pooch rapidly wolf down his/her food! Do not let the dog play or exercise (e.g. go for a walk) for at least two hours after having a meal. Ensure continuous fresh water is available to avoid your pet gulping down a large amount after eating. |
- Loss: `CachedGISTEmbedLoss` with these parameters:

```
{'guide': SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
), 'temperature': 0.01}
```
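CachedGISTEmbedLoss combines the guided in-batch negative filtering of GISTEmbedLoss with gradient caching, which is what makes the 2048-sample training batches used here feasible on a single GPU. A construction sketch; the guide checkpoint is a stand-in, since the card records only the guide's architecture (a 384-dimensional normalized BERT encoder with mean pooling), not its name:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedGISTEmbedLoss

model = SentenceTransformer("microsoft/mpnet-base")

# Stand-in guide: a small normalized BERT encoder matching the printed
# architecture (384-dim, mean pooling, Normalize); the actual guide may differ.
guide = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# The guide scores in-batch candidates to filter out likely false negatives;
# mini_batch_size is the gradient-caching chunk size (illustrative value).
loss = CachedGISTEmbedLoss(model, guide=guide, temperature=0.01, mini_batch_size=32)
```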
Evaluation Dataset
gooaq-hard-negatives
- Dataset: gooaq-hard-negatives at 87594a1
- Size: 10,048,700 evaluation samples
- Columns: question, answer, and negative
- Approximate statistics based on the first 1000 samples:
| | question | answer | negative |
|---|---|---|---|
| type | string | string | string |
| details | min: 8 tokens, mean: 11.61 tokens, max: 21 tokens | min: 16 tokens, mean: 58.16 tokens, max: 131 tokens | min: 14 tokens, mean: 57.98 tokens, max: 157 tokens |
- Samples:
| question | answer | negative |
|---|---|---|
| how is height width and length written? | The Graphics' industry standard is width by height (width x height). Meaning that when you write your measurements, you write them from your point of view, beginning with the width. | The Graphics' industry standard is width by height (width x height). Meaning that when you write your measurements, you write them from your point of view, beginning with the width. That's important. |
| what is the difference between pork shoulder and loin? | All the recipes I've found for pulled pork recommends a shoulder/butt. Shoulders take longer to cook than a loin, because they're tougher. Loins are lean, while shoulders have marbled fat inside. | They are extracted from the loin, which runs from the hip to the shoulder, and it has a small strip of meat called the tenderloin. Unlike other pork, this pork chop is cut from four major sections, which are the shoulder, also known as the blade chops, ribs chops, loin chops, and the last, which is the sirloin chops. |
| is the yin yang symbol religious? | The ubiquitous yin-yang symbol holds its roots in Taoism/Daoism, a Chinese religion and philosophy. The yin, the dark swirl, is associated with shadows, femininity, and the trough of a wave; the yang, the light swirl, represents brightness, passion and growth. | Yin energy is in the calm colors around you, in the soft music, in the soothing sound of a water fountain, or the relaxing images of water. Yang (active energy) is the feng shui energy expressed in strong, vibrant sounds and colors, bright lights, upward moving energy, tall plants, etc. |
- Loss: `CachedGISTEmbedLoss` with these parameters:

```
{'guide': SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
), 'temperature': 0.01}
```
Training Hyperparameters
Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 2048
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
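As a sketch, these non-default values map onto SentenceTransformerTrainingArguments as follows (output_dir is a placeholder):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="models/mpnet-base-gooaq-cgist",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
)
```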
All Hyperparameters
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 2048
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
Training Logs
Epoch | Step | Training Loss | Validation Loss | NanoClimateFEVER_cosine_ndcg@10 | NanoDBPedia_cosine_ndcg@10 | NanoFEVER_cosine_ndcg@10 | NanoFiQA2018_cosine_ndcg@10 | NanoHotpotQA_cosine_ndcg@10 | NanoMSMARCO_cosine_ndcg@10 | NanoNFCorpus_cosine_ndcg@10 | NanoNQ_cosine_ndcg@10 | NanoQuoraRetrieval_cosine_ndcg@10 | NanoSCIDOCS_cosine_ndcg@10 | NanoArguAna_cosine_ndcg@10 | NanoSciFact_cosine_ndcg@10 | NanoTouche2020_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0.04 | 1 | 11.5141 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.2 | 5 | 9.4407 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.4 | 10 | 5.6005 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.6 | 15 | 3.7323 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.8 | 20 | 2.7976 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
1.0 | 25 | 2.1899 | 1.3429 | 0.2744 | 0.3921 | 0.5157 | 0.3420 | 0.3723 | 0.3608 | 0.1655 | 0.3322 | 0.8807 | 0.2897 | 0.4973 | 0.4701 | 0.3934 | 0.4066 |
Environmental Impact
Carbon emissions were measured using CodeCarbon.
- Energy Consumed: 0.105 kWh
- Carbon Emitted: 0.041 kg of CO2
- Hours Used: 0.3 hours
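For context, dividing the two figures above gives an implied carbon intensity of roughly 0.39 kg of CO2 per kWh (0.041 kg / 0.105 kWh).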
Training Hardware
- On Cloud: No
- GPU Model: 1 x NVIDIA GeForce RTX 3090
- CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
- RAM Size: 31.78 GB
Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.4.0.dev0
- Transformers: 4.46.2
- PyTorch: 2.5.0+cu121
- Accelerate: 0.35.0.dev0
- Datasets: 2.20.0
- Tokenizers: 0.20.3
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```