
mobilenet_v2_1.0_224-cxr-view

This model is a fine-tuned version of google/mobilenet_v2_1.0_224 on an imagefolder-format dataset. It achieves the following results on the evaluation set (a usage sketch follows these metrics):

  • Loss: 0.2278
  • Accuracy: 0.9294
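
The card does not include a usage example, so the snippet below is a minimal inference sketch. The repository id is a placeholder (the Hub namespace is not given here), and the image path is illustrative.

```python
from transformers import pipeline

# Placeholder repository id: replace "<namespace>" with the actual Hub account.
classifier = pipeline(
    "image-classification",
    model="<namespace>/mobilenet_v2_1.0_224-cxr-view",
)

# The pipeline accepts a file path, URL, or PIL.Image; the path here is illustrative.
predictions = classifier("example_chest_xray.png")
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.4f}")
```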

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch reproducing them follows the list):

  • learning_rate: 5e-06
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 30
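
The hyperparameters above map onto Hugging Face TrainingArguments roughly as in the sketch below. The output directory, number of labels, and the checkpoint-selection settings (evaluation_strategy, load_best_model_at_end, metric_for_best_model) are assumptions not stated in the card; the Adam betas and epsilon listed above are the optimizer defaults, so they are left implicit.

```python
from transformers import AutoModelForImageClassification, TrainingArguments

model = AutoModelForImageClassification.from_pretrained(
    "google/mobilenet_v2_1.0_224",
    num_labels=2,                   # assumed number of CXR view classes
    ignore_mismatched_sizes=True,   # swap in a fresh classification head
)

training_args = TrainingArguments(
    output_dir="mobilenet_v2_1.0_224-cxr-view",
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,  # effective train batch size of 16
    num_train_epochs=30,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",    # assumed: evaluate and save once per epoch
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
)
# The model, args, datasets, and a metric function would then be passed
# to transformers.Trainer for the actual fine-tuning run.
```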

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7049 | 1.0 | 109 | 0.6746 | 0.7449 |
| 0.6565 | 2.0 | 219 | 0.6498 | 0.6743 |
| 0.5699 | 3.0 | 328 | 0.5730 | 0.7995 |
| 0.5702 | 4.0 | 438 | 0.5119 | 0.8087 |
| 0.4849 | 5.0 | 547 | 0.4356 | 0.8679 |
| 0.356 | 6.0 | 657 | 0.4641 | 0.8087 |
| 0.3713 | 7.0 | 766 | 0.3407 | 0.8679 |
| 0.4571 | 8.0 | 876 | 0.4896 | 0.7813 |
| 0.3896 | 9.0 | 985 | 0.3124 | 0.8884 |
| 0.3422 | 10.0 | 1095 | 0.2791 | 0.9271 |
| 0.3358 | 11.0 | 1204 | 0.3998 | 0.8246 |
| 0.3658 | 12.0 | 1314 | 0.2716 | 0.9066 |
| 0.4547 | 13.0 | 1423 | 0.5828 | 0.7973 |
| 0.2615 | 14.0 | 1533 | 0.3446 | 0.8542 |
| 0.377 | 15.0 | 1642 | 0.6322 | 0.7312 |
| 0.2846 | 16.0 | 1752 | 0.2621 | 0.9248 |
| 0.3433 | 17.0 | 1861 | 0.3709 | 0.8383 |
| 0.2851 | 18.0 | 1971 | 0.8134 | 0.7312 |
| 0.2298 | 19.0 | 2080 | 0.4324 | 0.8314 |
| 0.3916 | 20.0 | 2190 | 0.3631 | 0.8360 |
| 0.3049 | 21.0 | 2299 | 0.3405 | 0.8633 |
| 0.3068 | 22.0 | 2409 | 0.2585 | 0.9021 |
| 0.3091 | 23.0 | 2518 | 0.2278 | 0.9294 |
| 0.2749 | 24.0 | 2628 | 0.2963 | 0.9043 |
| 0.3543 | 25.0 | 2737 | 0.2637 | 0.8975 |
| 0.3024 | 26.0 | 2847 | 0.2966 | 0.8998 |
| 0.2593 | 27.0 | 2956 | 0.3842 | 0.8542 |
| 0.1979 | 28.0 | 3066 | 0.2711 | 0.8884 |
| 0.2549 | 29.0 | 3175 | 0.3145 | 0.8633 |
| 0.3216 | 29.86 | 3270 | 0.4565 | 0.8155 |
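
The accuracy column comes from the Trainer's per-epoch evaluation loop; the card does not show the metric function, so the following is a typical compute_metrics sketch using the evaluate library, not the exact code used for this run.

```python
import numpy as np
import evaluate

# Hypothetical metric function of the kind passed to Trainer(compute_metrics=...).
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)   # predicted class per image
    return accuracy.compute(predictions=predictions, references=labels)
```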

Framework versions

  • Transformers 4.28.0
  • PyTorch 2.0.1+cu117
  • Datasets 2.14.4
  • Tokenizers 0.13.3