
zenml/finetuned-snowflake-arctic-embed-m

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-m. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-m
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
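
The Pooling module uses the CLS token (pooling_mode_cls_token: True), and the trailing Normalize() module scales each embedding to unit length, so dot-product and cosine similarity coincide. A quick check (the sentence is a placeholder):

from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m")

# Embeddings pass through the Normalize() module, so they come out unit-length
emb = model.encode(["How do I register a ZenML stack?"])
print(np.linalg.norm(emb[0]))  # ~1.0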

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m")
# Run inference
sentences = [
    'What is the expiration time for the GCP OAuth2 token in the ZenML configuration?',
    '━━━━━┛\n\nConfiguration\n\n┏━━━━━━━━━━━━┯━━━━━━━━━━━━┓┃ PROPERTY   │ VALUE      ┃\n\n┠────────────┼────────────┨\n\n┃ project_id │ zenml-core ┃\n\n┠────────────┼────────────┨\n\n┃ token      │ [HIDDEN]   ┃\n\n┗━━━━━━━━━━━━┷━━━━━━━━━━━━┛\n\nNote the temporary nature of the Service Connector. It will expire and become unusable in 1 hour:\n\nzenml service-connector list --name gcp-oauth2-token\n\nExample Command Output\n\n┏━━━━━━━━┯━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━┓\n\n┃ ACTIVE │ NAME             │ ID                                   │ TYPE   │ RESOURCE TYPES        │ RESOURCE NAME │ SHARED │ OWNER   │ EXPIRES IN │ LABELS ┃\n\n┠────────┼──────────────────┼──────────────────────────────────────┼────────┼───────────────────────┼───────────────┼────────┼─────────┼────────────┼────────┨\n\n┃        │ gcp-oauth2-token │ ec4d7d85-c71c-476b-aa76-95bf772c90da │ 🔵 gcp │ 🔵 gcp-generic        │ <multiple>    │ ➖     │ default │ 59m35s     │        ┃\n\n┃        │                  │                                      │        │ 📦 gcs-bucket         │               │        │         │            │        ┃\n\n┃        │                  │                                      │        │ 🌀 kubernetes-cluster │               │        │         │            │        ┃\n\n┃        │                  │                                      │        │ 🐳 docker-registry    │               │        │         │            │        ┃\n\n┗━━━━━━━━┷━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━┛\n\nAuto-configuration\n\nThe GCP Service Connector allows auto-discovering and fetching credentials and configuration set up by the GCP CLI on your local host.',
    'Can you list the steps to set up a Docker registry on a Kubernetes cluster?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
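
Since the model was finetuned on question-to-documentation-chunk retrieval data (see Training Details), a typical application is semantic search over a chunked corpus. Below is a minimal sketch using sentence_transformers.util.semantic_search; the corpus and query are illustrative placeholders:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m")

# Illustrative corpus of documentation chunks (placeholders)
corpus = [
    "ZenML Service Connectors manage authentication to cloud providers.",
    "A ZenML stack combines an orchestrator, an artifact store, and other components.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("How do I authenticate to GCP?", convert_to_tensor=True)

# Return the top-k most similar corpus entries for the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")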

Evaluation

Metrics

The four tables below report the same retrieval benchmark with embeddings truncated to each of the four Matryoshka dimensionalities the model was trained for; the map@100 values correspond to the dim_384, dim_256, dim_128, and dim_64 columns of the Training Logs.

Information Retrieval (dim_384)

Metric Value
cosine_accuracy@1 0.2952
cosine_accuracy@3 0.5241
cosine_accuracy@5 0.5843
cosine_accuracy@10 0.6867
cosine_precision@1 0.2952
cosine_precision@3 0.1747
cosine_precision@5 0.1169
cosine_precision@10 0.0687
cosine_recall@1 0.2952
cosine_recall@3 0.5241
cosine_recall@5 0.5843
cosine_recall@10 0.6867
cosine_ndcg@10 0.4908
cosine_mrr@10 0.4284
cosine_map@100 0.4358

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.259
cosine_accuracy@3 0.506
cosine_accuracy@5 0.5783
cosine_accuracy@10 0.6446
cosine_precision@1 0.259
cosine_precision@3 0.1687
cosine_precision@5 0.1157
cosine_precision@10 0.0645
cosine_recall@1 0.259
cosine_recall@3 0.506
cosine_recall@5 0.5783
cosine_recall@10 0.6446
cosine_ndcg@10 0.4548
cosine_mrr@10 0.3935
cosine_map@100 0.4034

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.2711
cosine_accuracy@3 0.4699
cosine_accuracy@5 0.5663
cosine_accuracy@10 0.6145
cosine_precision@1 0.2711
cosine_precision@3 0.1566
cosine_precision@5 0.1133
cosine_precision@10 0.0614
cosine_recall@1 0.2711
cosine_recall@3 0.4699
cosine_recall@5 0.5663
cosine_recall@10 0.6145
cosine_ndcg@10 0.4443
cosine_mrr@10 0.3894
cosine_map@100 0.3989

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.2169
cosine_accuracy@3 0.4217
cosine_accuracy@5 0.5181
cosine_accuracy@10 0.5843
cosine_precision@1 0.2169
cosine_precision@3 0.1406
cosine_precision@5 0.1036
cosine_precision@10 0.0584
cosine_recall@1 0.2169
cosine_recall@3 0.4217
cosine_recall@5 0.5181
cosine_recall@10 0.5843
cosine_ndcg@10 0.3964
cosine_mrr@10 0.3365
cosine_map@100 0.3466
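
Metrics like these are typically produced with sentence_transformers.evaluation.InformationRetrievalEvaluator. A minimal sketch, with hypothetical query/corpus ids, texts, and relevance judgments:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder evaluation data
queries = {"q1": "What is the expiration time for the GCP OAuth2 token?"}
corpus = {
    "d1": "Note the temporary nature of the Service Connector. It will expire in 1 hour.",
    "d2": "A ZenML stack combines an orchestrator with an artifact store.",
}
relevant_docs = {"q1": {"d1"}}  # which corpus documents answer each query

# truncate_dim selects the Matryoshka dimensionality being evaluated
model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m", truncate_dim=384)
evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_384")
print(evaluator(model))  # accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100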

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,490 training samples
  • Columns: positive, anchor, and negative
  • Approximate statistics based on the first 1000 samples:
    • positive: string · min: 9 tokens · mean: 21.02 tokens · max: 64 tokens
    • anchor: string · min: 23 tokens · mean: 375.16 tokens · max: 512 tokens
    • negative: string · min: 10 tokens · mean: 17.51 tokens · max: 31 tokens
  • Samples (first three rows; positive is a question, anchor is the documentation chunk that answers it, negative is an unrelated question):

    Sample 1

    positive: What details can you provide about the mlflow_training_pipeline runs listed in the ZenML documentation?

    anchor: mlflow_training_pipeline', ┃┃ │ │ │ 'zenml_pipeline_run_uuid': 'a5d4faae-ef70-48f2-9893-6e65d5e51e98', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.005'} ┃

    ┠────────────────────────┼───────────────┼─────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨

    ┃ tensorflow-mnist-model │ 2 │ Run #2 of the mlflow_training_pipeline. │ {'zenml_version': '0.34.0', 'zenml_run_name': 'mlflow_training_pipeline-2023_03_01-08_09_08_467212', 'zenml_pipeline_name': 'mlflow_training_pipeline', ┃

    ┃ │ │ │ 'zenml_pipeline_run_uuid': '11858dcf-3e47-4b1a-82c5-6fa25ba4e037', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.003'} ┃

    ┠────────────────────────┼───────────────┼─────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨

    ┃ tensorflow-mnist-model │ 1 │ Run #1 of the mlflow_training_pipeline. │ {'zenml_version': '0.34.0', 'zenml_run_name': 'mlflow_training_pipeline-2023_03_01-08_08_52_398499', 'zenml_pipeline_name': 'mlflow_training_pipeline', ┃

    ┃ │ │ │ 'zenml_pipeline_run_uuid': '29fb22c1-6e0b-4431-9e04-226226506d16', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.001'} ┃
    negative: Can you explain how to configure the TensorFlow settings for a different project?

    Sample 2

    positive: How do you register a GCP Service Connector that uses account impersonation to access the zenml-bucket-sl GCS bucket?

    anchor: esource-id zenml-bucket-sl

    Example Command OutputError: Service connector 'gcp-empty-sa' verification failed: connector authorization failure: failed to fetch GCS bucket

    zenml-bucket-sl: 403 GET https://storage.googleapis.com/storage/v1/b/zenml-bucket-sl?projection=noAcl&prettyPrint=false:

    empty-connectors@zenml-core.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket.

    Permission 'storage.buckets.get' denied on resource (or it may not exist).

    Next, we'll register a GCP Service Connector that actually uses account impersonation to access the zenml-bucket-sl GCS bucket and verify that it can actually access the bucket:

    zenml service-connector register gcp-impersonate-sa --type gcp --auth-method impersonation --service_account_json=@empty-connectors@zenml-core.json --project_id=zenml-core --target_principal=zenml-bucket-sl@zenml-core.iam.gserviceaccount.com --resource-type gcs-bucket --resource-id gs://zenml-bucket-sl

    Example Command Output

    Expanding argument value service_account_json to contents of file /home/stefan/aspyre/src/zenml/empty-connectors@zenml-core.json.

    Successfully registered service connector gcp-impersonate-sa with access to the following resources:

    ┏━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┓

    ┃ RESOURCE TYPE │ RESOURCE NAMES ┃

    ┠───────────────┼──────────────────────┨

    ┃ 📦 gcs-bucket │ gs://zenml-bucket-sl ┃

    ┗━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┛

    External Account (GCP Workload Identity)

    Use GCP workload identity federation to authenticate to GCP services using AWS IAM credentials, Azure Active Directory credentials or generic OIDC tokens.
    negative: What is the process for setting up a ZenML pipeline using AWS IAM credentials?

    Sample 3

    positive: Can you explain how data validation helps in detecting data drift and model drift in ZenML pipelines?

    anchor: of your models at different stages of development.if you have pipelines that regularly ingest new data, you should use data validation to run regular data integrity checks to signal problems before they are propagated downstream.

    in continuous training pipelines, you should use data validation techniques to compare new training data against a data reference and to compare the performance of newly trained models against previous ones.

    when you have pipelines that automate batch inference or if you regularly collect data used as input in online inference, you should use data validation to run data drift analyses and detect training-serving skew, data drift and model drift.

    Data Validator Flavors

    Data Validator are optional stack components provided by integrations. The following table lists the currently available Data Validators and summarizes their features and the data types and model types that they can be used with in ZenML pipelines:

    Data Validator Validation Features Data Types Model Types Notes Flavor/Integration Deepchecks data quality
    data drift
    model drift
    model performance tabular: pandas.DataFrame CV: torch.utils.data.dataloader.DataLoader tabular: sklearn.base.ClassifierMixin CV: torch.nn.Module Add Deepchecks data and model validation tests to your pipelines deepchecks Evidently data quality
    data drift
    model drift
    model performance tabular: pandas.DataFrame N/A Use Evidently to generate a variety of data quality and data/model drift reports and visualizations evidently Great Expectations data profiling
    data quality tabular: pandas.DataFrame N/A Perform data testing, documentation and profiling with Great Expectations great_expectations Whylogs/WhyLabs data drift tabular: pandas.DataFrame N/A Generate data profiles with whylogs and upload them to WhyLabs whylogs

    If you would like to see the available flavors of Data Validator, you can use the command:

    zenml data-validator flavor list

    How to use it
    negative: What are the best practices for deploying web applications using Docker and Kubernetes?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "TripletLoss",
        "matryoshka_dims": [
            384,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
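
Because MatryoshkaLoss trains the leading 384, 256, 128, and 64 dimensions to remain useful on their own, embeddings can be truncated at inference time to cut storage and search cost, at the modest accuracy cost visible in the per-dimension metrics above. A minimal sketch, assuming sentence-transformers >= 2.7 for the truncate_dim argument:

from sentence_transformers import SentenceTransformer

# Truncate all output embeddings to their first 256 dimensions
model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m", truncate_dim=256)

embeddings = model.encode(["How do I register a GCP Service Connector?"])
print(embeddings.shape)  # (1, 256)

# Truncated vectors are no longer unit-length, but model.similarity()
# defaults to cosine similarity, which re-normalizes, so they stay comparable.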
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
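
For orientation, here is a sketch of how these non-default hyperparameters would plug into a sentence-transformers v3 training run. The dataset contents, output path, and save_strategy are assumptions made so the example runs; this is not the original training script:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, TripletLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

# Placeholder triplet data with the card's column order (positive, anchor, negative)
train_dataset = Dataset.from_dict({
    "positive": ["What is a ZenML stack?"],
    "anchor": ["A stack is the configuration of tools and infrastructure that your pipelines run on."],
    "negative": ["How do I deploy a web app with Docker?"],
})

# MatryoshkaLoss wraps TripletLoss, as described under Training Dataset
loss = MatryoshkaLoss(model, TripletLoss(model), matryoshka_dims=[384, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-snowflake-arctic-embed-m",  # placeholder path
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed: must match eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; use a held-out split in practice
    loss=loss,
)
trainer.train()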

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: True
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step dim_128_cosine_map@100 dim_256_cosine_map@100 dim_384_cosine_map@100 dim_64_cosine_map@100
0.6667 1 0.3884 0.4332 0.4464 0.3140
2.0 3 0.4064 0.4195 0.4431 0.3553
2.6667 4 0.3989 0.4034 0.4358 0.3466
  • The epoch 2.6667 (step 4) row is the saved checkpoint; its map@100 values match the evaluation metrics reported above.

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.3.1+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification}, 
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}