
SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2

This is a SetFit model that can be used for Text Classification. This SetFit model uses sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
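A minimal sketch of what this two-part structure looks like once the model is loaded. The attribute names follow SetFit 1.0.x and are an assumption for illustration, not something stated in this card:

from setfit import SetFitModel

model = SetFitModel.from_pretrained("Tarssio/modelo_setfit_politica_BA")

# Artifact of step 1: the contrastively fine-tuned Sentence Transformer body
print(type(model.model_body))  # sentence_transformers.SentenceTransformer
# Artifact of step 2: the classification head fitted on the body's embeddings
print(type(model.model_head))  # sklearn.linear_model.LogisticRegression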

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
  • Classification head: a LogisticRegression instance
  • Number of Classes: 2 (Positive, Negative)
  • Model size: ~118M parameters (F32)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055

Model Labels

Label Examples
Positive
  • 'Enfim,Bonfim 🥳🥳🥳🥳🥳'
  • '👏👏👏👏'
  • 'Pequenas ações fazem sonhos realidades #OhBrabo 💙💙💙'
Negative
  • '@jeronimorodriguesba quando terá uma segunda convocação do concurso SECBA?'
  • 'Cadê a MP do piso da enfermagem ministro'
  • 'Sim !! A escola municipal aqui do bairro liberdade,30 crianças esperando até hoje as profissionais ADI para crianças que necessita acompanhamento..'

Evaluation

Metrics

Label Accuracy
all 0.9043
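
The accuracy above was computed on a held-out evaluation split that is not distributed with this card. A minimal sketch of how such a number is obtained, using hypothetical eval_texts / eval_labels placeholders:

from setfit import SetFitModel

model = SetFitModel.from_pretrained("Tarssio/modelo_setfit_politica_BA")

# Hypothetical held-out examples; replace with the real evaluation split.
eval_texts = ["👏👏👏👏", "Cadê a MP do piso da enfermagem ministro"]
eval_labels = ["Positive", "Negative"]

# Predicted labels (strings here, since the card lists string labels)
preds = model.predict(eval_texts)
accuracy = sum(str(p) == y for p, y in zip(preds, eval_labels)) / len(eval_labels)
print(f"accuracy: {accuracy:.4f}")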

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Tarssio/modelo_setfit_politica_BA")
# Run inference
preds = model("👏👏👏")
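
The model also accepts a batch of texts, and the classification head's class probabilities can be inspected as well. This is a sketch; exact return types depend on the SetFit version:

# Batch inference: pass a list of texts
preds = model(["👏👏👏", "Cadê a MP do piso da enfermagem ministro"])
print(preds)

# Probabilities from the LogisticRegression head, one row per input text
probas = model.predict_proba(["👏👏👏"])
print(probas)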

Training Details

Training Set Metrics

Training set   Min   Median    Max
Word count     1     19.4813   313

Label      Training Sample Count
Negative   175
Positive   199

Training Hyperparameters

  • batch_size: (4, 4)
  • num_epochs: (4, 4)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 5
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: True
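
For reference, these settings map onto SetFit's TrainingArguments roughly as shown below. This is a sketch of a comparable setup, not the exact training script: the tiny inline dataset, the per-epoch evaluation/save strategy, and the base checkpoint choice are assumptions for illustration, and loss, distance_metric, and margin are left at the listed defaults.

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Start from the same base Sentence Transformer used by this model
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
)

# Hypothetical toy dataset; the real training data is not distributed with this card
train_ds = Dataset.from_dict({
    "text": ["👏👏👏👏", "Cadê a MP do piso da enfermagem ministro"],
    "label": ["Positive", "Negative"],
})

args = TrainingArguments(
    batch_size=(4, 4),                  # (embedding phase, head phase)
    num_epochs=(4, 4),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    num_iterations=5,
    warmup_proportion=0.1,
    end_to_end=False,
    use_amp=False,
    seed=42,
    # Assumption: evaluate and save each epoch, matching the per-epoch
    # validation losses reported in the Training Results table below.
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=train_ds,              # stand-in; use a separate validation split in practice
    metric="accuracy",
)
trainer.train()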

Training Results

Epoch Step Training Loss Validation Loss
0.0011 1 0.3616 -
0.0535 50 0.3129 -
0.1070 100 0.2912 -
0.1604 150 0.191 -
0.2139 200 0.0907 -
0.2674 250 0.0086 -
0.3209 300 0.0042 -
0.3743 350 0.0161 -
0.4278 400 0.0007 -
0.4813 450 0.0403 -
0.5348 500 0.0055 -
0.5882 550 0.0057 -
0.6417 600 0.0002 -
0.6952 650 0.0002 -
0.7487 700 0.0 -
0.8021 750 0.0026 -
0.8556 800 0.0002 -
0.9091 850 0.0002 -
0.9626 900 0.0004 -
1.0 935 - 0.1724
1.0160 950 0.0001 -
1.0695 1000 0.0006 -
1.1230 1050 0.0001 -
1.1765 1100 0.0008 -
1.2299 1150 0.0002 -
1.2834 1200 0.0001 -
1.3369 1250 0.0002 -
1.3904 1300 0.0002 -
1.4439 1350 0.0002 -
1.4973 1400 0.0002 -
1.5508 1450 0.0 -
1.6043 1500 0.0002 -
1.6578 1550 0.2178 -
1.7112 1600 0.0002 -
1.7647 1650 0.0001 -
1.8182 1700 0.0001 -
1.8717 1750 0.0003 -
1.9251 1800 0.0359 -
1.9786 1850 0.0001 -
2.0 1870 - 0.1601
2.0321 1900 0.0001 -
2.0856 1950 0.0002 -
2.1390 2000 0.0001 -
2.1925 2050 0.0001 -
2.2460 2100 0.0002 -
2.2995 2150 0.0002 -
2.3529 2200 0.0003 -
2.4064 2250 0.0001 -
2.4599 2300 0.0002 -
2.5134 2350 0.0001 -
2.5668 2400 0.0 -
2.6203 2450 0.0001 -
2.6738 2500 0.0 -
2.7273 2550 0.0001 -
2.7807 2600 0.0001 -
2.8342 2650 0.0 -
2.8877 2700 0.0 -
2.9412 2750 0.0 -
2.9947 2800 0.0001 -
3.0 2805 - 0.1568
3.0481 2850 0.0001 -
3.1016 2900 0.0001 -
3.1551 2950 0.0001 -
3.2086 3000 0.0001 -
3.2620 3050 0.0001 -
3.3155 3100 0.0045 -
3.3690 3150 0.0 -
3.4225 3200 0.0001 -
3.4759 3250 0.0002 -
3.5294 3300 0.0 -
3.5829 3350 0.0002 -
3.6364 3400 0.0 -
3.6898 3450 0.0 -
3.7433 3500 0.0002 -
3.7968 3550 0.0 -
3.8503 3600 0.0 -
3.9037 3650 0.0005 -
3.9572 3700 0.0001 -
4.0 3740 - 0.1574
  • With load_best_model_at_end enabled, the saved checkpoint corresponds to the row with the lowest validation loss (epoch 3.0, step 2805, validation loss 0.1568).

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.0.3
  • Sentence Transformers: 2.2.2
  • Transformers: 4.35.2
  • PyTorch: 2.1.0+cu121
  • Datasets: 2.16.1
  • Tokenizers: 0.15.0

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}