---
license: apache-2.0
base_model: facebook/convnextv2-large-1k-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnextv2-large-1k-224-finetuned-cassava-leaf-disease
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8691588785046729
---
# convnextv2-large-1k-224-finetuned-cassava-leaf-disease
This model is a fine-tuned version of [facebook/convnextv2-large-1k-224](https://huggingface.co/facebook/convnextv2-large-1k-224) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.4210
- Accuracy: 0.8692
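
The card does not yet include a usage example, so here is a minimal inference sketch with the Transformers `pipeline` API. The repository id and image path are placeholders (assumptions), not values taken from this card.

```python
from transformers import pipeline
from PIL import Image

# Placeholder repository id -- substitute the actual Hub id of this checkpoint.
classifier = pipeline(
    "image-classification",
    model="<your-username>/convnextv2-large-1k-224-finetuned-cassava-leaf-disease",
)

image = Image.open("cassava_leaf.jpg").convert("RGB")  # any leaf photo
for prediction in classifier(image):  # list of {"label", "score"} dicts, best first
    print(prediction["label"], round(prediction["score"], 4))
```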
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 240
- eval_batch_size: 240
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 960
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
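
For reference, these values map onto a `TrainingArguments` configuration roughly like the sketch below. This is an assumed reconstruction, not the original training script; `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Hyperparameter values copied from the list above; everything else is left at
# Trainer defaults, which include AdamW with betas=(0.9, 0.999) and epsilon=1e-08.
training_args = TrainingArguments(
    output_dir="convnextv2-large-1k-224-finetuned-cassava-leaf-disease",
    learning_rate=5e-5,
    per_device_train_batch_size=240,
    per_device_eval_batch_size=240,
    gradient_accumulation_steps=4,   # effective train batch size: 240 * 4 = 960
    num_train_epochs=20,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
```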
### Training results
Training Loss | Epoch | Step | Validation Loss | Accuracy |
---|---|---|---|---|
8.2962 | 0.49 | 10 | 5.4110 | 0.0033 |
3.1666 | 0.99 | 20 | 2.0615 | 0.5883 |
1.4693 | 1.48 | 30 | 1.0935 | 0.6084 |
0.8718 | 1.98 | 40 | 0.7291 | 0.7463 |
0.6252 | 2.47 | 50 | 0.5894 | 0.7916 |
0.5198 | 2.96 | 60 | 0.5204 | 0.8299 |
0.4517 | 3.46 | 70 | 0.4658 | 0.8393 |
0.4266 | 3.95 | 80 | 0.4664 | 0.8407 |
0.4049 | 4.44 | 90 | 0.4337 | 0.8579 |
0.3817 | 4.94 | 100 | 0.4247 | 0.8523 |
0.3696 | 5.43 | 110 | 0.4146 | 0.8621 |
0.3577 | 5.93 | 120 | 0.4058 | 0.8607 |
0.3577 | 6.42 | 130 | 0.4047 | 0.8636 |
0.3354 | 6.91 | 140 | 0.3985 | 0.8617 |
0.3356 | 7.41 | 150 | 0.4025 | 0.8645 |
0.3286 | 7.9 | 160 | 0.4054 | 0.8673 |
0.3225 | 8.4 | 170 | 0.4062 | 0.8631 |
0.317 | 8.89 | 180 | 0.4007 | 0.8692 |
0.3101 | 9.38 | 190 | 0.3931 | 0.8701 |
0.293 | 9.88 | 200 | 0.3928 | 0.8682 |
0.2992 | 10.37 | 210 | 0.3942 | 0.8668 |
0.2968 | 10.86 | 220 | 0.3892 | 0.8692 |
0.2794 | 11.36 | 230 | 0.3988 | 0.8701 |
0.2707 | 11.85 | 240 | 0.3865 | 0.8762 |
0.2883 | 12.35 | 250 | 0.4040 | 0.8640 |
0.2784 | 12.84 | 260 | 0.3930 | 0.8692 |
0.2667 | 13.33 | 270 | 0.3985 | 0.8701 |
0.2642 | 13.83 | 280 | 0.4160 | 0.8668 |
0.2612 | 14.32 | 290 | 0.4086 | 0.8687 |
0.2586 | 14.81 | 300 | 0.3990 | 0.8668 |
0.2483 | 15.31 | 310 | 0.4111 | 0.8720 |
0.254 | 15.8 | 320 | 0.4082 | 0.8748 |
0.2283 | 16.3 | 330 | 0.4165 | 0.8668 |
0.246 | 16.79 | 340 | 0.4264 | 0.8692 |
0.2365 | 17.28 | 350 | 0.4185 | 0.8692 |
0.2388 | 17.78 | 360 | 0.4152 | 0.8650 |
0.2401 | 18.27 | 370 | 0.4169 | 0.8659 |
0.2334 | 18.77 | 380 | 0.4187 | 0.8696 |
0.2245 | 19.26 | 390 | 0.4192 | 0.8692 |
0.2291 | 19.75 | 400 | 0.4210 | 0.8692 |
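
The accuracy column above is the kind of metric produced by a `compute_metrics` callback passed to the `Trainer`. A minimal sketch using the `evaluate` library (an assumption about the setup, not the original script):

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels) for the evaluation set.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```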
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.1