---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: true
datasets:
- Indo4B
---
# IndoBERT-Lite Large Model (phase2 - uncased) Finetuned on the IndoNLU SmSA dataset

Finetuned the IndoBERT-Lite Large (phase2 - uncased) model on the IndoNLU SmSA sentiment analysis dataset, following the procedure described in the paper [IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding](https://arxiv.org/pdf/2009.05387.pdf).
## How to use
```python
from transformers import pipeline
classifier = pipeline("text-classification",
                      model="tyqiangz/indobert-lite-large-p2-smsa",
                      return_all_scores=True)
text = "Penyakit koronavirus 2019"
prediction = classifier(text)
prediction
"""
Output:
[[{'label': 'positive', 'score': 0.0006000096909701824},
{'label': 'neutral', 'score': 0.01223431620746851},
{'label': 'negative', 'score': 0.987165629863739}]]
"""
```
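If you prefer not to use the `pipeline` helper, the same prediction can be made with the model and tokenizer directly. This is a minimal sketch; the label names are read from the model's own `id2label` config:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the finetuned classifier and its tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained("tyqiangz/indobert-lite-large-p2-smsa")
model = AutoModelForSequenceClassification.from_pretrained("tyqiangz/indobert-lite-large-p2-smsa")

text = "Penyakit koronavirus 2019"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the 3 classes, then map each class id to its label name
probs = torch.softmax(logits, dim=-1)[0]
for class_id, score in enumerate(probs.tolist()):
    print(model.config.id2label[class_id], round(score, 4))
```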
**Finetuning hyperparameters:**
- learning rate: 2e-5
- batch size: 16
- no. of epochs: 5
- max sequence length: 512
- random seed: 42
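For reference, the hyperparameters above correspond to a configuration roughly like the one below. This is an illustrative sketch using the Hugging Face `Trainer` API, not the exact IndoNLU training script; the maximum sequence length of 512 is applied at tokenization time.

```python
from transformers import TrainingArguments

# Illustrative only: mirrors the hyperparameters listed above, not the original IndoNLU script
training_args = TrainingArguments(
    output_dir="indobert-lite-large-p2-smsa",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=5,
    seed=42,
)
# The max sequence length is set when tokenizing, e.g.
# tokenizer(texts, truncation=True, max_length=512)
```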
**Classes:**
- 0: positive
- 1: neutral
- 2: negative
**Performance metrics on the SmSA validation dataset:**
- Validation accuracy: 0.94
- Validation F1: 0.91
- Validation recall: 0.91
- Validation precision: 0.93
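The metrics above can be re-checked with a script along these lines. It assumes the SmSA validation split is loaded from the `indonlu` dataset on the Hub with `text`/`label` columns and uses macro averaging for F1, recall, and precision; treat those details as assumptions rather than the exact evaluation setup.

```python
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from transformers import pipeline

# Assumption: the SmSA validation split is available as the "smsa" config of the `indonlu` dataset
dataset = load_dataset("indonlu", "smsa", split="validation")

classifier = pipeline("text-classification", model="tyqiangz/indobert-lite-large-p2-smsa")

# Map predicted label names back to the class ids listed above (0: positive, 1: neutral, 2: negative)
label_names = ["positive", "neutral", "negative"]
predictions = [label_names.index(out["label"]) for out in classifier(list(dataset["text"]))]

print("accuracy :", accuracy_score(dataset["label"], predictions))
print("F1       :", f1_score(dataset["label"], predictions, average="macro"))      # macro averaging assumed
print("recall   :", recall_score(dataset["label"], predictions, average="macro"))
print("precision:", precision_score(dataset["label"], predictions, average="macro"))
```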