---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5600
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: The Federal Energy Regulatory Commission (“FERC”) has also taken
steps to enable the participation of energy storage in wholesale energy markets.
sentences:
- What segment-specific regulations apply to CVS Health Corporation's Pharmacy &
Consumer Wellness segment?
- What types of contracts does the company have for its health insurance plans,
and how does premium revenue recognition function under these contracts?
- What federal agency has taken steps to facilitate energy storage participation
in wholesale energy markets?
- source_sentence: Investments in subsidiaries and partnerships which we do not control
but have significant influence are accounted for under the equity method.
sentences:
- How does the company aim to protect the health and well-being of the communities
it operates in?
- What are the key factors affecting the evaluation of the Economic Value of Equity
(EVE) at the Charles Schwab Corporation?
- What accounting method does the company use to account for investments in subsidiaries
and partnerships where it does not control but has significant influence?
- source_sentence: Item 8 of IBM's 2023 Annual Report includes financial statements
and supplementary data spanning pages 44 through 121.
sentences:
- What entities are included among the Guarantors that guarantee each other’s debt
securities as described in Comcast’s 2023 Annual Report?
- What uncertainties exist regarding projections of future cash needs and cash flows?
- How many pages in IBM's 2023 Annual Report to Stockholders are dedicated to financial
statements and supplementary data?
- source_sentence: 'Our compensation philosophy creates the framework for our rewards
strategy, which focuses on five key elements: pay-for-performance, external market-based
research, internal equity, fiscal responsibility, and legal compliance.'
sentences:
- What financial instruments does the company invest in that are sensitive to interest
rates?
- What elements are included in the company's compensation programs?
- What is the expected maximum potential loss from hurricane events for Chubb as
of the end of 2023?
- source_sentence: Outside of the U.S., many countries have established vehicle safety
standards and regulations and are likely to adopt additional, more stringent requirements
in the future.
sentences:
- What percentage of the company's sales categories in fiscal 2023 were failure
and maintenance related?
- What competitive factors influence Chubb International's international operations?
- What changes are occurring with vehicle safety regulations outside of the U.S.?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.6885714285714286
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8278571428571428
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8728571428571429
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9164285714285715
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6885714285714286
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.275952380952381
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17457142857142854
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09164285714285714
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6885714285714286
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8278571428571428
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8728571428571429
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9164285714285715
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8042449175537354
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.768181405895692
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7712863400405022
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.6864285714285714
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8292857142857143
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8728571428571429
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9135714285714286
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6864285714285714
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2764285714285714
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17457142857142854
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09135714285714285
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6864285714285714
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8292857142857143
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8728571428571429
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9135714285714286
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8024352620004916
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7665753968253971
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7697268174707245
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.68
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.825
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8635714285714285
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9042857142857142
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.68
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.275
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1727142857142857
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09042857142857141
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.68
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.825
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8635714285714285
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9042857142857142
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7955058944909328
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7603066893424041
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7637281364444245
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.6621428571428571
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7964285714285714
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8457142857142858
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8907142857142857
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6621428571428571
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2654761904761905
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16914285714285712
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08907142857142857
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6621428571428571
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7964285714285714
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8457142857142858
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8907142857142857
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7772894744328753
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7408999433106581
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7449491476160666
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.6285714285714286
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7635714285714286
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8057142857142857
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8642857142857143
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6285714285714286
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2545238095238095
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16114285714285712
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08642857142857142
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6285714285714286
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7635714285714286
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8057142857142857
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8642857142857143
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7447153698860624
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7067037981859416
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7112341263725279
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model fine-tuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on a JSON dataset of 5,600 financial question/passage pairs (see Training Details below). It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("YxBxRyXJx/bge-base-financial-matryoshka")
# Run inference
sentences = [
'Outside of the U.S., many countries have established vehicle safety standards and regulations and are likely to adopt additional, more stringent requirements in the future.',
'What changes are occurring with vehicle safety regulations outside of the U.S.?',
"What competitive factors influence Chubb International's international operations?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
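Since the model was trained with MatryoshkaLoss, you can trade accuracy for speed and storage by truncating the embeddings to 512, 256, 128, or 64 dimensions (see the Evaluation section for the quality at each size). A minimal sketch, assuming a sentence-transformers release (v2.7 or later) that supports the `truncate_dim` argument:
```python
from sentence_transformers import SentenceTransformer

# Load the model so that it emits 256-dimensional embeddings
model = SentenceTransformer(
    "YxBxRyXJx/bge-base-financial-matryoshka",
    truncate_dim=256,
)
embeddings = model.encode([
    "What federal agency has taken steps to facilitate energy storage participation in wholesale energy markets?",
])
print(embeddings.shape)
# (1, 256)
```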
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.6886 | 0.6864 | 0.68 | 0.6621 | 0.6286 |
| cosine_accuracy@3 | 0.8279 | 0.8293 | 0.825 | 0.7964 | 0.7636 |
| cosine_accuracy@5 | 0.8729 | 0.8729 | 0.8636 | 0.8457 | 0.8057 |
| cosine_accuracy@10 | 0.9164 | 0.9136 | 0.9043 | 0.8907 | 0.8643 |
| cosine_precision@1 | 0.6886 | 0.6864 | 0.68 | 0.6621 | 0.6286 |
| cosine_precision@3 | 0.276 | 0.2764 | 0.275 | 0.2655 | 0.2545 |
| cosine_precision@5 | 0.1746 | 0.1746 | 0.1727 | 0.1691 | 0.1611 |
| cosine_precision@10 | 0.0916 | 0.0914 | 0.0904 | 0.0891 | 0.0864 |
| cosine_recall@1 | 0.6886 | 0.6864 | 0.68 | 0.6621 | 0.6286 |
| cosine_recall@3 | 0.8279 | 0.8293 | 0.825 | 0.7964 | 0.7636 |
| cosine_recall@5 | 0.8729 | 0.8729 | 0.8636 | 0.8457 | 0.8057 |
| cosine_recall@10 | 0.9164 | 0.9136 | 0.9043 | 0.8907 | 0.8643 |
| **cosine_ndcg@10** | **0.8042** | **0.8024** | **0.7955** | **0.7773** | **0.7447** |
| cosine_mrr@10 | 0.7682 | 0.7666 | 0.7603 | 0.7409 | 0.7067 |
| cosine_map@100 | 0.7713 | 0.7697 | 0.7637 | 0.7449 | 0.7112 |
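These numbers can be recomputed, or the same protocol applied to your own data, with the evaluator linked above. A minimal sketch, where the `queries`, `corpus`, and `relevant_docs` entries are hypothetical placeholders for a real evaluation set:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("YxBxRyXJx/bge-base-financial-matryoshka", truncate_dim=256)

# Map ids to texts; relevant_docs maps each query id to its set of relevant corpus ids
queries = {"q1": "What is the purpose of Z-net in AutoZone stores?"}
corpus = {"d1": "Z-net is AutoZone's proprietary electronic catalog ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_256")
results = evaluator(model)
print(results["dim_256_cosine_ndcg@10"])
```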
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 5,600 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 44.34 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 20.46 tokens</li><li>max: 46 tokens</li></ul> |
* Samples:
| positive | anchor |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Z-net is AutoZone's proprietary electronic catalog and enables AutoZoners to efficiently look up parts that customers need, providing complete job solutions and information based on vehicle specifics. It also tracks inventory availability across different locations.</code> | <code>What is the purpose of Z-net in AutoZone stores?</code> |
| <code>In 2023, the allowance for loan and lease losses was $13.3 billion on total loans and leases of $1,050.2 billion, which excludes loans accounted for under the fair value option.</code> | <code>What was the total amount of loans and leases at Bank of America by the end of 2023, excluding those accounted for under the fair value option?</code> |
| <code>We significantly improved features in Service Manager™, which installers can use from their mobile devices to get service instantly. We continue to provide 24/7 support for installers and Enphase system owners globally across our phone, online chat, and email communications channel. We continue to train our customer service agents with a goal of reducing average customer wait times to under one minute, and we continue to expand our network of field service technicians in the United States, Europe and Australia to provide direct homeowner assistance.</code> | <code>What measures has Enphase Energy, Inc. taken to improve customer service in 2023?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
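In code, this configuration corresponds to wrapping `MultipleNegativesRankingLoss` in `MatryoshkaLoss`. A minimal sketch:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
inner_loss = MultipleNegativesRankingLoss(model)  # in-batch negatives ranking loss
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],  # default weights give each dim equal weight
)
```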
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
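As a rough guide, the non-default hyperparameters above translate into a `SentenceTransformerTrainer` run along these lines. This is a hedged sketch, not the exact training script: the `train_dataset` rows are placeholders standing in for the 5,600-pair JSON dataset, and evaluation/checkpointing arguments are omitted for brevity:
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Placeholder rows with the same "positive"/"anchor" columns the card describes
train_dataset = Dataset.from_dict({
    "positive": ["Z-net is AutoZone's proprietary electronic catalog ..."],
    "anchor": ["What is the purpose of Z-net in AutoZone stores?"],
})

loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",
    num_train_epochs=2,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=16,  # effective batch size of 512
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```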
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.9143 | 10 | 1.4537 | 0.7992 | 0.7952 | 0.7900 | 0.7703 | 0.7350 |
| **1.8286** | **20** | **0.6857** | **0.8042** | **0.8024** | **0.7955** | **0.7773** | **0.7447** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.0
- Transformers: 4.46.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```