SentenceTransformer based on BAAI/bge-small-en-v1.5
This is a sentence-transformers model finetuned from BAAI/bge-small-en-v1.5 on the csv dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-small-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset:
- csv
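These properties can be verified directly once the model is loaded. A minimal check, assuming the model id shown in the Usage section below:
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("jebish7/bge_MNSR")
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 384
print(model.similarity_fn_name)                  # "cosine"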
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
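The stack above is a Transformer encoder (BertModel), CLS-token pooling over the 384-dimensional token embeddings, and L2 normalisation of the sentence embedding. As a rough sketch (not the saved configuration), an equivalent untrained pipeline could be assembled from the base model like this:
from sentence_transformers import SentenceTransformer, models

# BERT backbone with the max sequence length listed above
word_embedding = models.Transformer("BAAI/bge-small-en-v1.5", max_seq_length=512, do_lower_case=True)
# CLS-token pooling instead of mean pooling
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),  # 384
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
)
# L2-normalise embeddings so dot product equals cosine similarity
normalize = models.Normalize()

model = SentenceTransformer(modules=[word_embedding, pooling, normalize])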
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jebish7/bge_MNSR")
# Run inference
sentences = [
'How does ADGM ensure that FinTech Participants remain compliant with evolving regulatory standards, particularly in the context of new and developing technologies?',
'The Guidance is applicable to the following Persons:\n(a)\tan applicant for a Financial Services Permission to carry on the Regulated Activity of Developing Financial Technology Services within the RegLab in or from ADGM; and/or\n(b)\ta FinTech Participant.',
'DIGITAL SECURITIES – INTERMEDIARIES\nConventional Intermediaries – Digital Securities\nIntermediaries intending to operate solely, in the context of Digital Securities, as a broker or dealer for Clients (including the operation of an OTC broking or dealing desk) are not permitted to structure their broking / dealing service or platform in such a way that would have it be considered as operating a RIE or MTF. The FSRA would consider features such as allowing for price discovery, displaying a public trading order book (accessible to any member of the public, regardless of whether they are Clients), and allowing trades to automatically be matched using an exchange-type matching engine as characteristic of a RIE or MTF, and not activities acceptable for an Digital Securities intermediary to undertake.\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
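Because the embeddings are normalised and scored with cosine similarity, the same model also works for a simple semantic-search style ranking. A short follow-on sketch, reusing the model and sentences from the example above:
# Rank the two passages against the question (index 0)
query_embedding = model.encode(sentences[0])
passage_embeddings = model.encode(sentences[1:])
scores = model.similarity(query_embedding, passage_embeddings)
print(scores)  # shape [1, 2]; the higher score marks the better-matching passage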
Training Details
Training Dataset
csv
- Dataset: csv
- Size: 29,545 training samples
- Columns: anchor and positive
- Approximate statistics based on the first 1000 samples:
|  | anchor | positive |
|:---|:---|:---|
| type | string | string |
| details | min: 19 tokens, mean: 34.47 tokens, max: 70 tokens | min: 23 tokens, mean: 113.88 tokens, max: 512 tokens |
- Samples:
| anchor | positive |
|:---|:---|
| In the case of a cross-border transaction involving jurisdictions with differing sanctions regimes, how should a Relevant Person prioritize and reconcile these requirements? | Sanctions. UNSC Sanctions and Sanctions issued or administered by the U.A.E., including Targeted Financial Sanctions, apply in the ADGM. Relevant Persons must comply with Targeted Financial Sanctions. Sanctions compliance is emphasised by specific obligations contained in the AML Rulebook requiring Relevant Persons to establish and maintain effective systems and controls to comply with applicable Sanctions, including in particular Targeted Financial Sanctions, as set out in Chapter 11. |
| How does the FSRA monitor and assess the deployment scalability of a FinTech proposal within the UAE and ADGM beyond the RegLab validity period? | Evaluation Criteria. To qualify for authorisation under the RegLab framework, the applicant must demonstrate how it satisfies the following evaluation criteria: (a) the FinTech Proposal promotes FinTech innovation, in terms of the business application and deployment model of the technology. (b) the FinTech Proposal has the potential to: i. promote significant growth, efficiency or competition in the financial sector; ii. promote better risk management solutions and regulatory outcomes for the financial industry; or iii. improve the choices and welfare of clients. (c) the FinTech Proposal is at a sufficiently advanced stage of development to mount a live test. (d) the FinTech Proposal can be deployed in the ADGM and the UAE on a broader scale or contribute to the development of ADGM as a financial centre, and, if so, how the applicant intends to do so on completion of the validity period. |
| How does the ADGM define "distinct risks" that arise from conducting business entirely in an NFTF manner compared to a mix of face-to-face and NFTF interactions, and what specific risk mitigation strategies should be employed in these scenarios? | The risk assessment under Rule 6.2.1(c) should identify actions to mitigate risks associated with undertaking NFTF business generally, and the use of eKYC specifically. This is because distinct risks are often likely to arise where business is conducted entirely in an NFTF manner, compared to when the business relationship includes a mix of face-to-face and NFTF interactions. The assessment should make reference to risk mitigation measures recommended by the Regulator, a competent authority of the U.A.E., FATF, and other relevant bodies. |
- Loss: MultipleNegativesSymmetricRankingLoss with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim" }
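For illustration, this loss could be instantiated as follows; a hedged sketch mirroring the parameters above, not the exact training script:
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesSymmetricRankingLoss
from sentence_transformers.util import cos_sim

model = SentenceTransformer("BAAI/bge-small-en-v1.5")
# scale and similarity_fct match the reported parameters
loss = MultipleNegativesSymmetricRankingLoss(model, scale=20.0, similarity_fct=cos_sim)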
Evaluation Dataset
csv
- Dataset: csv
- Size: 3,676 evaluation samples
- Columns: anchor and positive
- Approximate statistics based on the first 1000 samples:
|  | anchor | positive |
|:---|:---|:---|
| type | string | string |
| details | min: 18 tokens, mean: 34.98 tokens, max: 63 tokens | min: 3 tokens, mean: 114.8 tokens, max: 512 tokens |
- Samples:
| anchor | positive |
|:---|:---|
| How should our firm approach the development and implementation of a risk management system that addresses the full spectrum of risks listed, including technology, compliance, and legal risks? | Management of particular risks. Without prejudice to the generality of Rule 2.4(1, a Captive Insurer must develop, implement and maintain a risk management system to identify and address risks, including but not limited to: (a) reserving risk; (b) investment risk (including risks associated with the use of Derivatives); (c) underwriting risk; (d) market risk; (e) liquidity management risk; (f) credit quality risk; (g) fraud and other fiduciary risks; (h) compliance risk; (i) outsourcing risk; and (j) reinsurance risk. Reinsurance risk refers to risks associated with the Captive Insurer's use of reinsurance arrangements as Cedant. |
| What measures could an Authorised Person take to ensure non-repudiation and accountability, so that individuals or systems processing information cannot deny their actions? | In establishing its systems and controls to address information security risks, an Authorised Person should have regard to: a. confidentiality: information should be accessible only to Persons or systems with appropriate authority, which may require firewalls within a system, as well as entry restrictions; b. the risk of loss or theft of customer data; c. integrity: safeguarding the accuracy and completeness of information and its processing; d. non repudiation and accountability: ensuring that the Person or system that processed the information cannot deny their actions; and e. internal security: including premises security, staff vetting; access rights and portable media, staff internet and email access, encryption, safe disposal of customer data, and training and awareness. |
| What authority does the Regulator have over the terms and conditions applied to the escrow account holding funds from a Prospectus Offer? | The Regulator may, during the Offer Period or such other longer period as specified, impose a requirement that the monies held by a Person making a Prospectus Offer or his agent pursuant to the Prospectus Offer or issuance are held in an escrow account for a specified period and on specified terms. |
- Loss: MultipleNegativesSymmetricRankingLoss with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim" }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 64
- learning_rate: 2e-05
- num_train_epochs: 10
- warmup_ratio: 0.1
- load_best_model_at_end: True
- batch_sampler: no_duplicates
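Taken together, these settings could be reproduced with the SentenceTransformerTrainer roughly as sketched below. The output directory, CSV path, and train/eval split are illustrative assumptions rather than details from this card, and save_strategy="epoch" is added because load_best_model_at_end requires the save and evaluation strategies to match.
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesSymmetricRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("BAAI/bge-small-en-v1.5")

# Hypothetical file name; the card only states a csv dataset with
# "anchor" and "positive" columns.
dataset = load_dataset("csv", data_files="pairs.csv")["train"]
split = dataset.train_test_split(test_size=3676, seed=42)  # sizes follow the card; split method assumed

args = SentenceTransformerTrainingArguments(
    output_dir="bge_MNSR",                  # assumption
    eval_strategy="epoch",
    save_strategy="epoch",                  # assumed, required by load_best_model_at_end
    per_device_train_batch_size=64,
    learning_rate=2e-5,
    num_train_epochs=10,
    warmup_ratio=0.1,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=split["train"],
    eval_dataset=split["test"],
    loss=MultipleNegativesSymmetricRankingLoss(model),
)
trainer.train()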
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 8
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:---|:---|:---|:---|
| 0.8658 | 200 | 1.6059 | - |
| 1.2684 | 293 | - | 0.4773 |
| 1.4632 | 400 | 0.8247 | - |
| 2.2684 | 586 | - | 0.4313 |
| 2.0606 | 600 | 0.7352 | - |
| 2.9264 | 800 | 1.0011 | - |
| 3.2684 | 879 | - | 0.4038 |
| 3.5238 | 1000 | 0.646 | - |
| 4.2684 | 1172 | - | 0.3926 |
| 4.1212 | 1200 | 0.6207 | - |
| 4.9870 | 1400 | 0.8652 | - |
| 5.2684 | 1465 | - | 0.3769 |
| 5.5844 | 1600 | 0.5708 | - |
| 6.2684 | 1758 | - | 0.3691 |
| 6.1818 | 1800 | 0.5588 | - |
| 7.0476 | 2000 | 0.7551 | - |
| 7.2684 | 2051 | - | 0.3608 |
| 7.6450 | 2200 | 0.5758 | - |
| **8.1212** | **2310** | **-** | **0.3561** |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.4.0
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}