# xlm-roberta-base-finetuned-panx-all-langs
This model is a fine-tuned version of xlm-roberta-base on the PAN-X subset of the XTREME dataset. It achieves the following results on the evaluation set:
- Loss: 0.3089
- F1 Score: 0.8140
## Model description

This model follows the O'Reilly book "Natural Language Processing with Transformers". It is a named entity recognition (NER) model fine-tuned from xlm-roberta-base.

F1 score per language:
- ko: f1_score = 0.8611821192789028
- en: f1_score = 0.7868391074180795
- ja: f1_score = 0.6440401846320934
- es: f1_score = 0.8533862565120316
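As a minimal usage sketch (not part of the original card; the model ID is this repository's, and the example sentence is illustrative), the checkpoint can be loaded with the transformers token-classification pipeline:

```python
from transformers import pipeline

# Usage sketch: load this checkpoint for NER via the token-classification
# pipeline; aggregation_strategy="simple" merges subword pieces into spans.
ner = pipeline(
    "token-classification",
    model="tommyjin/xlm-roberta-base-finetuned-panx-all-langs",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean works at Google in Mountain View."))
```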
## Intended uses & limitations

The training code was adapted from, and closely follows, the book above.
## Training and evaluation data

The XTREME dataset was used; specifically, the PAN-X subset.
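For illustration, the PAN-X subsets can be loaded with the Hugging Face datasets library; the config names below assume the standard "xtreme" dataset builder and the four languages evaluated above:

```python
from datasets import load_dataset

# Sketch: load the PAN-X splits for the four evaluated languages.
# Config names follow the Hugging Face "xtreme" dataset builder.
langs = ["ko", "en", "ja", "es"]
panx = {lang: load_dataset("xtreme", name=f"PAN-X.{lang}") for lang in langs}
print(panx["ko"]["train"][0])  # {"tokens": [...], "ner_tags": [...], "langs": [...]}
```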
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
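A hedged reconstruction of these settings as transformers TrainingArguments (the output directory and evaluation strategy are assumptions, not stated above):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-all-langs",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    optim="adamw_torch",            # AdamW, betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=3,
    eval_strategy="epoch",          # assumed: matches the per-epoch results below
)
```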
## Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5093        | 1.0   | 917  | 0.3530          | 0.7679   |
| 0.309         | 2.0   | 1834 | 0.3101          | 0.8029   |
| 0.2176        | 3.0   | 2751 | 0.3089          | 0.8140   |
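The F1 values above are presumably entity-level micro F1 in the seqeval style, which is the metric the referenced book uses for PAN-X; a minimal sketch of computing it over IOB2-tagged sequences:

```python
from seqeval.metrics import f1_score

# Hypothetical example: entity-level micro F1 over IOB2-tagged sequences,
# the metric conventionally used for PAN-X NER evaluation.
y_true = [["B-PER", "I-PER", "O", "B-ORG"]]
y_pred = [["B-PER", "I-PER", "O", "B-LOC"]]
print(f1_score(y_true, y_pred))  # 0.5: one of two entities matched exactly
```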
## Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3