
SetFit with BAAI/bge-base-en-v1.5

This is a SetFit model for text classification. It uses BAAI/bge-base-en-v1.5 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
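
Step 2 can be sketched in isolation: once the body is fine-tuned, the head is an ordinary LogisticRegression fit on sentence embeddings. The sketch below uses mock 768-dimensional vectors in place of real bge-base-en-v1.5 output, purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Mock embeddings stand in for the fine-tuned Sentence Transformer's output;
# bge-base-en-v1.5 produces 768-dimensional sentence vectors.
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(8, 768))
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# The classification head: plain logistic regression on the embeddings.
head = LogisticRegression(max_iter=1000)
head.fit(embeddings, labels)

preds = head.predict(embeddings)
```

In the real pipeline, SetFit's Trainer performs both steps: it first fine-tunes the embedding body with contrastive pairs, then fits this head on the resulting embeddings.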

Model Details

Model Description

Model Sources

Model Labels

Label Examples
Label 1
  • 'Reasoning:\nThe answer correctly identifies Joan Gaspart as the individual who resigned from the presidency of Barcelona after the team's poor showing in the 2003 season. This is directly supported by the document, which explicitly states that "club president Joan Gaspart resigned, his position having been made completely untenable by such a disastrous season on top of the club's overall decline in fortunes since he became president three years prior." The answer is concise and directly relevant to the question without including any extraneous information.\n\nEvaluation:'
  • "Reasoning:\nThe provided answer directly addresses the question of why it is recommended to hire a professional residential electrician like O'Hara Electric for electrical work in your house. The answer highlights key points such as the hazards of working with electricity, the potential for injury, and the long-term implications of improperly done electrical work. It also mentions the risk involved even in seemingly simple tasks like smoke detector installation and emphasizes the benefits of having the job done correctly the first time by a professional. The details are well-supported by the document.\n\nEvaluation:"
  • 'Reasoning:\nThe answer "The title of Aerosmith's 1987 comeback album was 'Permanent Vacation'" is directly supported by the provided document. The document explicitly states, "Aerosmith's comeback album Permanent Vacation (1987) would begin a decade long revival of their popularity." The answer is directly related to the question asked and does not deviate into unrelated topics, ensuring conciseness and relevance.\n\nEvaluation:'
Label 0
  • 'Reasoning:\nThe answer provides a well-supported response that aligns directly with the content presented in the document. It addresses various strategies to combat smoking cravings, such as identifying and avoiding triggers, using distractions, and engaging in alternative activities. Specific triggers, like daily routines and social situations, are described in both the answer and the document. Additionally, the advice on using chewing licorice root and engaging in smoke-free activities is related to the suggestions given in the document. The answer is clear, concise, and stays relevant to the question throughout.\n\nFinal Evaluation: \nEvaluation:'
  • "Reasoning:\nThe provided answer accurately captures the challenges Amy Bloom faces when starting a significant writing project, as detailed in the document. Notably, it mentions the difficulty of getting started, the need to clear mental space, and to recalibrate her daily life, which are all points grounded in the text. The answer also covers her becoming less involved in everyday life and spending less time on domestic concerns, which aligns well with the provided passage. However, the part about traveling to a remote island with no internet access is not mentioned in the document and appears to be fabricated, which detracts from the answer's context grounding.\n\nFinal Result:"
  • 'Reasoning:\nThe provided answer incorrectly states the price and location of the 6 bedroom detached house. According to the document, the 6 bedroom detached house is for sale at a price of £950,000 and is located at Willow Drive, Twyford, Reading, Berkshire, RG10. The answer gives a different price and an incorrect location.\n\nFinal Evaluation:'

Evaluation

Metrics

Label Accuracy
all 0.9333
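
Label accuracy is plain agreement between predicted and gold labels. For reference, with hypothetical labels (not this model's actual outputs):

```python
# Hypothetical gold labels and predictions, for illustration only.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 1, 0, 0]

# Label accuracy: fraction of exact matches (here 5 of 6).
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```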

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_rag_ds_gpt-4o_improved-cot-instructions_chat_few_shot_generated_remove_fi")
# Run inference (the input spans multiple lines, so use a triple-quoted string)
preds = model("""Reasoning:
The answer directly addresses the question by stating that China's Ning Zhongyan won the gold medal in the men's 1,500m final at the speed skating World Cup. This information is clearly found in the document, which confirms Ning's achievement at the event in Stavanger, Norway. The answer is concise, relevant, and well-supported by the given context, avoiding extraneous details.

Final Evaluation:""")

Training Details

Training Set Metrics

Training set Min Median Max
Word count 33 76.9045 176
Label Training Sample Count
0 95
1 104

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (1, 1)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
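
These values correspond to the fields of SetFit's TrainingArguments. A configuration sketch (field names as in SetFit 1.1.0; the loss and distance_metric entries above are also configurable and are omitted here for brevity):

```python
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),                # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),  # (contrastive phase, end-to-end phase)
    head_learning_rate=2e-05,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```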

Training Results

Epoch Step Training Loss Validation Loss
0.0020 1 0.2375 -
0.1004 50 0.2548 -
0.2008 100 0.2339 -
0.3012 150 0.0973 -
0.4016 200 0.0347 -
0.5020 250 0.0125 -
0.6024 300 0.0058 -
0.7028 350 0.004 -
0.8032 400 0.0033 -
0.9036 450 0.0023 -

Framework Versions

  • Python: 3.10.14
  • SetFit: 1.1.0
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.0
  • PyTorch: 2.4.0+cu121
  • Datasets: 3.0.0
  • Tokenizers: 0.19.1

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
Model size: 109M params (Safetensors, F32)