---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: mymodel-classify-category-news
results: []
pipeline_tag: text-classification
widget:
- text: "Is this review positive or negative? Review: Best cast iron skillet you will ever buy."
  example_title: "Sentiment analysis"
- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had ..."
  example_title: "Coreference resolution"
- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book ..."
  example_title: "Logic puzzles"
- text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night ..."
  example_title: "Reading comprehension"
---
# mymodel-classify-category-news
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on a Vietnamese news dataset (see "Training and evaluation data" below).
It achieves the following results on the evaluation set:
- Loss: 0.0370
- F1: 0.9443
- Roc Auc: 0.9677
- Accuracy: 0.9401
## Model description
Predicts the category of Vietnamese news articles.
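For inputs under the 512-token limit, the model can be called directly through the `pipeline` API. A minimal sketch; the example text is a placeholder:

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="duwuonline/mymodel-classify-category-news")
print(pipe("..."))  # replace "..." with a Vietnamese news snippet
```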
## Intended uses & limitations
The input limit is 512 tokens, so the model raises an error when asked to classify longer texts. One workaround is to split a long text into chunks and aggregate the per-chunk predictions:
```python
from transformers import pipeline

# Your long input text (placeholder)
prompt = "..."

# Split the text into chunks of 512 characters. Note: this only approximates
# the tokenizer's 512-token limit, since characters and tokens do not map 1:1.
chunk_size = 512
chunks = [prompt[i:i + chunk_size] for i in range(0, len(prompt), chunk_size)]

# Load the classification pipeline
pipe = pipeline("text-classification", model="duwuonline/mymodel-classify-category-news")

# Predict each chunk and collect the results
results = []
for chunk in chunks:
    result = pipe(chunk)
    results.append(result)

# Return the label that appears most often across the chunks
def get_most_common_label(results_list):
    label_counts = {}
    for result in results_list:
        label = result[0]['label']
        label_counts[label] = label_counts.get(label, 0) + 1
    most_common_label = max(label_counts, key=label_counts.get)
    return most_common_label

most_common_label = get_most_common_label(results)
print("The most frequent label is:", most_common_label)
```
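Because the character-based split above only approximates the token limit, a chunk can still overflow 512 tokens. A more precise variant chunks on the tokenizer's own token ids. A sketch, assuming the model's bundled tokenizer; `chunk_by_tokens` is a hypothetical helper, not part of the library:

```python
from transformers import AutoTokenizer, pipeline

model_id = "duwuonline/mymodel-classify-category-news"
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline("text-classification", model=model_id, tokenizer=tokenizer)

def chunk_by_tokens(text, max_tokens=510):  # leave room for special tokens
    # Encode without special tokens, split the id sequence, decode back to text
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    return [
        tokenizer.decode(ids[i:i + max_tokens])
        for i in range(0, len(ids), max_tokens)
    ]

text = "..."  # your long Vietnamese news article
predictions = [pipe(chunk)[0]["label"] for chunk in chunk_by_tokens(text)]
```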
## Training and evaluation data
I will update this section later.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
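A minimal sketch of how these settings map onto the standard `Trainer` API. The Adam betas and epsilon listed above are the `TrainingArguments` defaults, and `output_dir` and the per-epoch evaluation strategy are assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mymodel-classify-category-news",  # assumption
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch results table
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults, so not set here
)
```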
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 225 | 0.0466 | 0.9354 | 0.9560 | 0.9157 |
| No log | 2.0 | 450 | 0.0505 | 0.9215 | 0.9526 | 0.9113 |
| 0.0418 | 3.0 | 675 | 0.0426 | 0.9330 | 0.9607 | 0.9268 |
| 0.0418 | 4.0 | 900 | 0.0397 | 0.9410 | 0.9664 | 0.9379 |
| 0.0202 | 5.0 | 1125 | 0.0370 | 0.9443 | 0.9677 | 0.9401 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3 |