
segformer-b5-finetuned-Hiking

This model is a fine-tuned version of nvidia/mit-b5 on the twdent/Hiking dataset. It achieves the following results on the evaluation set (a usage sketch is given after the metrics):

  • Loss: 0.1401
  • Mean Iou: 0.6237
  • Mean Accuracy: 0.9673
  • Overall Accuracy: 0.9683
  • Accuracy Unlabeled: nan
  • Accuracy Traversable: 0.9641
  • Accuracy Non-traversable: 0.9705
  • Iou Unlabeled: 0.0
  • Iou Traversable: 0.9178
  • Iou Non-traversable: 0.9532
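
The snippet below is a minimal inference sketch, not part of the original card: it loads the checkpoint with 🤗 Transformers and produces a per-pixel traversability map. The `trail.jpg` filename is a placeholder, and the example assumes the repository ships an image-processor config (otherwise the nvidia/mit-b5 processor can be substituted).

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "twdent/segformer-b5-finetuned-Hiking"
processor = AutoImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("trail.jpg")  # placeholder: any hiking-scene RGB image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred = upsampled.argmax(dim=1)[0]  # per-pixel class ids (e.g. traversable / non-traversable)
```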

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 6e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
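
For reference, the list above maps onto a `transformers.TrainingArguments` configuration roughly as sketched below. This is a reconstruction under assumptions, not the exact training script; `output_dir` and any options not listed above are placeholders.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="segformer-b5-finetuned-Hiking",  # placeholder path
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    # Adam settings as reported in the card
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```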

Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Traversable | Accuracy Non-traversable | Iou Unlabeled | Iou Traversable | Iou Non-traversable |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:--------------------:|:------------------------:|:-------------:|:---------------:|:-------------------:|
| 0.2675 | 1.33  | 20  | 0.2742 | 0.6058 | 0.9616 | 0.9550 | nan | 0.9826 | 0.9406 | 0.0 | 0.8853 | 0.9321 |
| 0.1418 | 2.67  | 40  | 0.1827 | 0.6073 | 0.9562 | 0.9566 | nan | 0.9548 | 0.9575 | 0.0 | 0.8858 | 0.9360 |
| 0.0949 | 4.0   | 60  | 0.1561 | 0.6002 | 0.9382 | 0.9523 | nan | 0.8931 | 0.9832 | 0.0 | 0.8692 | 0.9314 |
| 0.0684 | 5.33  | 80  | 0.1364 | 0.6135 | 0.9556 | 0.9614 | nan | 0.9369 | 0.9742 | 0.0 | 0.8967 | 0.9437 |
| 0.0627 | 6.67  | 100 | 0.1289 | 0.6122 | 0.9506 | 0.9610 | nan | 0.9177 | 0.9836 | 0.0 | 0.8928 | 0.9438 |
| 0.0625 | 8.0   | 120 | 0.1097 | 0.6208 | 0.9610 | 0.9658 | nan | 0.9458 | 0.9762 | 0.0 | 0.9113 | 0.9510 |
| 0.0371 | 9.33  | 140 | 0.1361 | 0.6130 | 0.9551 | 0.9610 | nan | 0.9361 | 0.9741 | 0.0 | 0.8959 | 0.9431 |
| 0.0409 | 10.67 | 160 | 0.1239 | 0.6194 | 0.9615 | 0.9653 | nan | 0.9494 | 0.9737 | 0.0 | 0.9086 | 0.9494 |
| 0.0457 | 12.0  | 180 | 0.0993 | 0.6281 | 0.9715 | 0.9713 | nan | 0.9723 | 0.9707 | 0.0 | 0.9264 | 0.9579 |
| 0.0368 | 13.33 | 200 | 0.1354 | 0.6146 | 0.9563 | 0.9617 | nan | 0.9389 | 0.9737 | 0.0 | 0.8993 | 0.9446 |
| 0.0667 | 14.67 | 220 | 0.1208 | 0.6171 | 0.9565 | 0.9644 | nan | 0.9316 | 0.9815 | 0.0 | 0.9032 | 0.9482 |
| 0.029  | 16.0  | 240 | 0.0946 | 0.6291 | 0.9695 | 0.9724 | nan | 0.9606 | 0.9785 | 0.0 | 0.9276 | 0.9596 |
| 0.0467 | 17.33 | 260 | 0.1188 | 0.6224 | 0.9655 | 0.9676 | nan | 0.9589 | 0.9721 | 0.0 | 0.9151 | 0.9522 |
| 0.0449 | 18.67 | 280 | 0.1201 | 0.6212 | 0.9638 | 0.9667 | nan | 0.9545 | 0.9731 | 0.0 | 0.9125 | 0.9511 |
| 0.0353 | 20.0  | 300 | 0.1285 | 0.6234 | 0.9687 | 0.9681 | nan | 0.9706 | 0.9668 | 0.0 | 0.9174 | 0.9527 |
| 0.025  | 21.33 | 320 | 0.1292 | 0.6204 | 0.9641 | 0.9659 | nan | 0.9582 | 0.9699 | 0.0 | 0.9114 | 0.9500 |
| 0.0244 | 22.67 | 340 | 0.1352 | 0.6208 | 0.9665 | 0.9664 | nan | 0.9667 | 0.9662 | 0.0 | 0.9124 | 0.9501 |
| 0.035  | 24.0  | 360 | 0.1260 | 0.6252 | 0.9699 | 0.9693 | nan | 0.9718 | 0.9681 | 0.0 | 0.9211 | 0.9544 |
| 0.0295 | 25.33 | 380 | 0.1190 | 0.6244 | 0.9669 | 0.9688 | nan | 0.9607 | 0.9730 | 0.0 | 0.9190 | 0.9543 |
| 0.032  | 26.67 | 400 | 0.1258 | 0.6253 | 0.9694 | 0.9695 | nan | 0.9693 | 0.9695 | 0.0 | 0.9211 | 0.9547 |
| 0.0241 | 28.0  | 420 | 0.1255 | 0.6230 | 0.9658 | 0.9678 | nan | 0.9593 | 0.9723 | 0.0 | 0.9164 | 0.9527 |
| 0.0246 | 29.33 | 440 | 0.1273 | 0.6238 | 0.9675 | 0.9683 | nan | 0.9651 | 0.9699 | 0.0 | 0.9179 | 0.9534 |
| 0.0214 | 30.67 | 460 | 0.1321 | 0.6233 | 0.9670 | 0.9675 | nan | 0.9652 | 0.9687 | 0.0 | 0.9171 | 0.9527 |
| 0.0236 | 32.0  | 480 | 0.1289 | 0.6241 | 0.9687 | 0.9685 | nan | 0.9695 | 0.9679 | 0.0 | 0.9189 | 0.9534 |
| 0.0238 | 33.33 | 500 | 0.1309 | 0.6234 | 0.9664 | 0.9680 | nan | 0.9612 | 0.9716 | 0.0 | 0.9172 | 0.9529 |
| 0.0204 | 34.67 | 520 | 0.1271 | 0.6249 | 0.9681 | 0.9693 | nan | 0.9643 | 0.9719 | 0.0 | 0.9201 | 0.9547 |
| 0.0243 | 36.0  | 540 | 0.1264 | 0.6248 | 0.9679 | 0.9693 | nan | 0.9636 | 0.9723 | 0.0 | 0.9196 | 0.9547 |
| 0.0259 | 37.33 | 560 | 0.1305 | 0.6226 | 0.9656 | 0.9679 | nan | 0.9582 | 0.9730 | 0.0 | 0.9154 | 0.9525 |
| 0.0341 | 38.67 | 580 | 0.1277 | 0.6245 | 0.9674 | 0.9690 | nan | 0.9623 | 0.9725 | 0.0 | 0.9192 | 0.9543 |
| 0.0275 | 40.0  | 600 | 0.1369 | 0.6221 | 0.9653 | 0.9672 | nan | 0.9590 | 0.9715 | 0.0 | 0.9147 | 0.9516 |
| 0.0303 | 41.33 | 620 | 0.1380 | 0.6235 | 0.9674 | 0.9681 | nan | 0.9650 | 0.9698 | 0.0 | 0.9175 | 0.9530 |
| 0.0207 | 42.67 | 640 | 0.1389 | 0.6237 | 0.9677 | 0.9682 | nan | 0.9662 | 0.9692 | 0.0 | 0.9180 | 0.9531 |
| 0.0231 | 44.0  | 660 | 0.1369 | 0.6243 | 0.9679 | 0.9688 | nan | 0.9652 | 0.9707 | 0.0 | 0.9190 | 0.9538 |
| 0.0249 | 45.33 | 680 | 0.1379 | 0.6237 | 0.9672 | 0.9683 | nan | 0.9640 | 0.9705 | 0.0 | 0.9179 | 0.9532 |
| 0.0382 | 46.67 | 700 | 0.1384 | 0.6239 | 0.9677 | 0.9685 | nan | 0.9650 | 0.9704 | 0.0 | 0.9182 | 0.9534 |
| 0.0238 | 48.0  | 720 | 0.1420 | 0.6230 | 0.9668 | 0.9677 | nan | 0.9640 | 0.9697 | 0.0 | 0.9166 | 0.9524 |
| 0.0212 | 49.33 | 740 | 0.1401 | 0.6237 | 0.9673 | 0.9683 | nan | 0.9641 | 0.9705 | 0.0 | 0.9178 | 0.9532 |

Framework versions

  • Transformers 4.35.0.dev0
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.5
  • Tokenizers 0.14.0