---
license: mit
base_model: xlm-roberta-base
tags:
  - generated_from_trainer
metrics:
  - f1
  - accuracy
model-index:
  - name: mymodel-classify-category-news
    results: []
---

# mymodel-classify-category-news

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on a Vietnamese news dataset (details to be added in the Training and evaluation data section). It achieves the following results on the evaluation set:

- Loss: 0.0370
- F1: 0.9443
- Roc Auc: 0.9677
- Accuracy: 0.9401

## Model description

Predicts the category of Vietnamese news articles :D
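
A minimal usage sketch (the example headline is illustrative; the label names in the output come from the model's config):

```python
from transformers import pipeline

# Load the classification pipeline for this model
pipe = pipeline("text-classification", model="duwuonline/mymodel-classify-category-news")

# Classify a short Vietnamese news snippet (illustrative example text)
print(pipe("Đội tuyển Việt Nam giành chiến thắng trong trận đấu tối qua"))
# e.g. [{'label': '...', 'score': 0.99}]
```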

## Intended uses & limitations

The input limit is 512 tokens, so the model will raise an error when predicting on longer texts. One workaround is to split the text into chunks, classify each chunk, and take the most common label:

```python
from transformers import pipeline

# `prompt` holds the long input text to classify
prompt = "..."  # your Vietnamese news text

# Split the text into 512-character chunks as a rough proxy for the
# tokenizer's 512-token limit (512 characters usually tokenize to
# fewer than 512 tokens)
chunk_size = 512
chunks = [prompt[i:i + chunk_size] for i in range(0, len(prompt), chunk_size)]

# Load the classification pipeline
pipe = pipeline("text-classification", model="duwuonline/mymodel-classify-category-news")

# Classify each chunk and collect the predictions
results = []
for chunk in chunks:
    result = pipe(chunk)
    results.append(result)

# Return the label predicted most often across chunks
def get_most_common_label(results_list):
    label_counts = {}
    for result in results_list:
        label = result[0]['label']
        label_counts[label] = label_counts.get(label, 0) + 1

    most_common_label = max(label_counts, key=label_counts.get)
    return most_common_label

most_common_label = get_most_common_label(results)
print("The most common label is:", most_common_label)
```

## Training and evaluation data

I will update this section later.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough TrainingArguments sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
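
A minimal sketch of these settings as TrainingArguments; the `output_dir` and Trainer wiring are assumptions, not taken from the original training script:

```python
from transformers import TrainingArguments

# Mirrors the reported hyperparameters; Adam with betas=(0.9, 0.999),
# epsilon=1e-08 and a linear schedule are the transformers defaults
training_args = TrainingArguments(
    output_dir="mymodel-classify-category-news",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```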

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log        | 1.0   | 225  | 0.0466          | 0.9354 | 0.9560  | 0.9157   |
| No log        | 2.0   | 450  | 0.0505          | 0.9215 | 0.9526  | 0.9113   |
| 0.0418        | 3.0   | 675  | 0.0426          | 0.9330 | 0.9607  | 0.9268   |
| 0.0418        | 4.0   | 900  | 0.0397          | 0.9410 | 0.9664  | 0.9379   |
| 0.0202        | 5.0   | 1125 | 0.0370          | 0.9443 | 0.9677  | 0.9401   |

### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3