---
base_model: CAMeL-Lab/bert-base-arabic-camelbert-msa
library_name: transformers
license: apache-2.0
metrics:
- accuracy
- f1
- precision
- recall
tags:
- generated_from_trainer
model-index:
- name: Monglish_Arabic_FAQ-V2
  results: []
---

# Monglish_Arabic_FAQ-V2

This model is a fine-tuned version of [CAMeL-Lab/bert-base-arabic-camelbert-msa](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2239
- Accuracy: 0.9569
- F1: 0.9577
- Precision: 0.9598
- Recall: 0.9569

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.6886        | 1.0   | 135  | 1.6094          | 0.8448   | 0.8290 | 0.8701    | 0.8448 |
| 0.3702        | 2.0   | 270  | 0.5415          | 0.9138   | 0.9146 | 0.9376    | 0.9138 |
| 0.1273        | 3.0   | 405  | 0.2414          | 0.9569   | 0.9546 | 0.9741    | 0.9569 |
| 0.1635        | 4.0   | 540  | 0.2381          | 0.9569   | 0.9581 | 0.9767    | 0.9569 |
| 0.0423        | 5.0   | 675  | 0.1610          | 0.9569   | 0.9576 | 0.9595    | 0.9569 |
| 0.0164        | 6.0   | 810  | 0.2177          | 0.9397   | 0.9418 | 0.9491    | 0.9397 |
| 0.1931        | 7.0   | 945  | 0.2232          | 0.9569   | 0.9577 | 0.9598    | 0.9569 |
| 0.0148        | 8.0   | 1080 | 0.2226          | 0.9483   | 0.9498 | 0.9537    | 0.9483 |
| 0.0097        | 9.0   | 1215 | 0.2225          | 0.9569   | 0.9577 | 0.9598    | 0.9569 |
| 0.0131        | 10.0  | 1350 | 0.2239          | 0.9569   | 0.9577 | 0.9598    | 0.9569 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Tokenizers 0.19.1
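
## How to use

The card does not yet document usage, so the following is a minimal inference sketch, assuming this is a standard `transformers` sequence-classification checkpoint. The repository id `<your-username>/Monglish_Arabic_FAQ-V2` and the example Arabic question are placeholders, not part of the original card.

```python
from transformers import pipeline

# Placeholder repository id: replace with the actual Hub id of this checkpoint.
classifier = pipeline(
    "text-classification",
    model="<your-username>/Monglish_Arabic_FAQ-V2",
)

# Example Arabic FAQ-style query ("How can I register for the course?").
print(classifier("كيف يمكنني التسجيل في الدورة؟"))
# -> [{'label': '...', 'score': ...}] with whatever label set the model was trained on
```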
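
## Training setup sketch

For readers who want to see how the hyperparameters listed above map onto a `Trainer` run, here is a hedged sketch. The dataset files, column names (`text`, `label`), and the number of labels are assumptions; the training data for this model is not documented in the card.

```python
import numpy as np
from datasets import load_dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "CAMeL-Lab/bert-base-arabic-camelbert-msa"
NUM_LABELS = 10  # assumption: the actual FAQ label count is not documented

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForSequenceClassification.from_pretrained(BASE_MODEL, num_labels=NUM_LABELS)

# Placeholder CSV files with "text" and "label" columns (assumption).
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    # Weighted metrics, matching the accuracy/F1/precision/recall reported above.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average="weighted")
    return {"accuracy": accuracy_score(labels, preds), "f1": f1, "precision": precision, "recall": recall}

# Hyperparameters taken from the "Training hyperparameters" section; Adam with
# betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default.
args = TrainingArguments(
    output_dir="Monglish_Arabic_FAQ-V2",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    eval_strategy="epoch",  # "evaluation_strategy" on transformers < 4.41
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```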