groderg committed
Commit a2aedd0
1 Parent(s): a35c0d1

Upload README.md

Files changed (1)
  1. README.md +148 -83

README.md CHANGED
@@ -1,103 +1,168 @@
 
  ---
- library_name: transformers
- license: apache-2.0
- base_model: facebook/dinov2-large
  tags:
  - generated_from_trainer
  model-index:
  - name: Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs
    results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs

- This model is a fine-tuned version of [facebook/dinov2-large](https://huggingface.co/facebook/dinov2-large) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.6195
- - Rmse: 0.3419
- - Mae: 0.3068
- - R2: -1.6131
- - Explained Variance: 0.2071
- - Learning Rate: 1e-05

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 0.001
- - train_batch_size: 32
- - eval_batch_size: 32
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - num_epochs: 150
- - mixed_precision_training: Native AMP
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | Rmse | Mae | R2 | Explained Variance | Rate |
- |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:--------:|:------------------:|:------:|
- | No log | 1.0 | 2 | 0.7150 | 0.4100 | 0.3849 | -20.2909 | 0.0364 | 0.001 |
- | No log | 2.0 | 4 | 0.7314 | 0.4163 | 0.3895 | -21.2182 | 0.0241 | 0.001 |
- | No log | 3.0 | 6 | 0.7726 | 0.4321 | 0.4041 | -24.8224 | -0.0469 | 0.001 |
- | No log | 4.0 | 8 | 0.7917 | 0.4380 | 0.4095 | -26.5816 | -0.0667 | 0.001 |
- | No log | 5.0 | 10 | 0.7853 | 0.4318 | 0.4021 | -26.9559 | -0.1362 | 0.001 |
- | No log | 6.0 | 12 | 0.7648 | 0.4224 | 0.3905 | -24.4015 | -0.1297 | 0.001 |
- | No log | 7.0 | 14 | 0.7392 | 0.4103 | 0.3760 | -22.5579 | -0.1098 | 0.001 |
- | No log | 8.0 | 16 | 0.7115 | 0.3983 | 0.3639 | -20.0674 | -0.1054 | 0.0001 |
- | No log | 9.0 | 18 | 0.6897 | 0.3879 | 0.3535 | -18.1665 | -0.0925 | 0.0001 |
- | No log | 10.0 | 20 | 0.6777 | 0.3818 | 0.3468 | -16.9447 | -0.1029 | 0.0001 |
- | No log | 11.0 | 22 | 0.6702 | 0.3780 | 0.3424 | -16.0375 | -0.1169 | 0.0001 |
- | No log | 12.0 | 24 | 0.6639 | 0.3744 | 0.3389 | -15.6052 | -0.1121 | 0.0001 |
- | No log | 13.0 | 26 | 0.6565 | 0.3703 | 0.3346 | -14.8051 | -0.1065 | 0.0001 |
- | No log | 14.0 | 28 | 0.6501 | 0.3668 | 0.3310 | -14.2312 | -0.0958 | 0.0001 |
- | No log | 15.0 | 30 | 0.6468 | 0.3648 | 0.3289 | -14.0799 | -0.0855 | 0.0001 |
- | No log | 16.0 | 32 | 0.6471 | 0.3650 | 0.3289 | -14.2557 | -0.0823 | 0.0001 |
- | No log | 17.0 | 34 | 0.6435 | 0.3631 | 0.3268 | -14.0598 | -0.0810 | 0.0001 |
- | No log | 18.0 | 36 | 0.6438 | 0.3634 | 0.3270 | -14.0369 | -0.0799 | 0.0001 |
- | No log | 19.0 | 38 | 0.6400 | 0.3614 | 0.3250 | -13.8152 | -0.0888 | 0.0001 |
- | No log | 20.0 | 40 | 0.6392 | 0.3609 | 0.3246 | -13.7104 | -0.0935 | 0.0001 |
- | No log | 21.0 | 42 | 0.6387 | 0.3606 | 0.3246 | -13.8099 | -0.0993 | 0.0001 |
- | No log | 22.0 | 44 | 0.6388 | 0.3606 | 0.3243 | -13.8497 | -0.1056 | 0.0001 |
- | No log | 23.0 | 46 | 0.6362 | 0.3590 | 0.3228 | -13.5622 | -0.1035 | 0.0001 |
- | No log | 24.0 | 48 | 0.6354 | 0.3585 | 0.3223 | -13.6453 | -0.1058 | 0.0001 |
- | No log | 25.0 | 50 | 0.6345 | 0.3578 | 0.3214 | -13.6023 | -0.1036 | 0.0001 |
- | No log | 26.0 | 52 | 0.6349 | 0.3581 | 0.3212 | -13.6304 | -0.1173 | 0.0001 |
- | No log | 27.0 | 54 | 0.6333 | 0.3571 | 0.3201 | -13.5613 | -0.1148 | 0.0001 |
- | No log | 28.0 | 56 | 0.6295 | 0.3548 | 0.3177 | -13.2331 | -0.1083 | 0.0001 |
- | No log | 29.0 | 58 | 0.6285 | 0.3543 | 0.3173 | -13.1623 | -0.1047 | 0.0001 |
- | No log | 30.0 | 60 | 0.6263 | 0.3532 | 0.3163 | -12.7132 | -0.0926 | 0.0001 |
- | No log | 31.0 | 62 | 0.6273 | 0.3538 | 0.3167 | -12.8739 | -0.0893 | 0.0001 |
- | No log | 32.0 | 64 | 0.6294 | 0.3550 | 0.3181 | -12.9355 | -0.0790 | 0.0001 |
- | No log | 33.0 | 66 | 0.6299 | 0.3554 | 0.3185 | -12.9352 | -0.0752 | 0.0001 |
- | No log | 34.0 | 68 | 0.6321 | 0.3564 | 0.3193 | -13.2672 | -0.0702 | 0.0001 |
- | No log | 35.0 | 70 | 0.6279 | 0.3541 | 0.3175 | -12.9995 | -0.0487 | 0.0001 |
- | No log | 36.0 | 72 | 0.6280 | 0.3541 | 0.3174 | -13.0074 | -0.0466 | 0.0001 |
- | No log | 37.0 | 74 | 0.6304 | 0.3554 | 0.3187 | -13.2310 | -0.0494 | 1e-05 |
- | No log | 38.0 | 76 | 0.6297 | 0.3551 | 0.3183 | -12.9830 | -0.0439 | 1e-05 |
- | No log | 39.0 | 78 | 0.6308 | 0.3558 | 0.3193 | -13.1598 | -0.0430 | 1e-05 |
- | No log | 40.0 | 80 | 0.6292 | 0.3548 | 0.3183 | -13.0698 | -0.0435 | 1e-05 |
-
-
- ### Framework versions
-
- - Transformers 4.44.2
- - Pytorch 2.4.1+cu121
- - Datasets 3.0.0
- - Tokenizers 0.19.1
+
  ---
+ language:
+ - eng
+ license: cc0-1.0
  tags:
+ - multilabel-image-classification
+ - multilabel
  - generated_from_trainer
+ base_model: Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs
  model-index:
  - name: Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs
    results: []
  ---

+ Ziboiai is a fine-tuned version of [Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs](https://huggingface.co/Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs). It achieves the following results on the test set:
+
+ - Loss: 0.6188
+ - F1 Micro: 0.9261
+ - F1 Macro: 0.8546
+ - Accuracy: 0.1600
+ - RMSE: 0.3415
+ - MAE: 0.3061
+ - R2: -1.5955
+
+ | Class | F1 per class |
+ |----------|-------|
+ | Acropore_branched | 0.8966 |
+ | Acropore_digitised | 0.6301 |
+ | Acropore_tabular | 1.0000 |
+ | Algae | 1.0000 |
+ | Dead_coral | 0.8395 |
+ | Fish | 0.8861 |
+ | Millepore | 1.0000 |
+ | No_acropore_encrusting | 1.0000 |
+ | No_acropore_massive | 0.0000 |
+ | No_acropore_sub_massive | 0.8571 |
+ | Rock | 1.0000 |
+ | Rubble | 1.0000 |
+ | Sand | 1.0000 |
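The low Accuracy next to the high F1 scores is consistent with exact-match (subset) accuracy, which counts an image as correct only when every one of the 13 labels is predicted correctly. For reference, here is a sketch of how such multilabel metrics can be computed with scikit-learn; the arrays are illustrative stand-ins, not this model's outputs, and whether RMSE/MAE/R2 were computed on probabilities is an assumption:

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score, f1_score, mean_absolute_error, mean_squared_error, r2_score,
)

# Illustrative stand-ins: rows are images, columns are classes (13 in this card).
y_true = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
y_pred = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
probs = np.array([[0.9, 0.2, 0.8], [0.1, 0.7, 0.4], [0.8, 0.6, 0.2]])

print(f1_score(y_true, y_pred, average="micro"))  # F1 Micro
print(f1_score(y_true, y_pred, average="macro"))  # F1 Macro
print(f1_score(y_true, y_pred, average=None))     # F1 per class
print(accuracy_score(y_true, y_pred))             # exact-match (subset) accuracy

# Regression-style metrics over the predicted probabilities (an assumption).
print(np.sqrt(mean_squared_error(y_true, probs)))  # RMSE
print(mean_absolute_error(y_true, probs))          # MAE
print(r2_score(y_true, probs))                     # R2
```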

+ ---

+ # Model description
+
+ Ziboiai is a model built on top of the Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs model for underwater multilabel image classification. The classification head is a combination of linear, ReLU, batch normalization, and dropout layers.
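The card does not state the head's exact layer sizes, so the following is only a minimal PyTorch sketch of a head built from the layer types named above. The hidden width and dropout rate are assumptions; the 13 outputs match the classes listed in this card, and 1024 matches the DINOv2-large embedding size.

```python
import torch
import torch.nn as nn

class MultilabelHead(nn.Module):
    """Sketch of the head described above: linear -> batch norm -> ReLU -> dropout -> linear."""

    def __init__(self, embed_dim=1024, hidden_dim=512, num_classes=13, dropout=0.1):
        # hidden_dim and dropout are assumed placeholders, not documented values.
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, num_classes),  # one logit per class; sigmoid at inference
        )

    def forward(self, features):
        return self.head(features)

head = MultilabelHead()
print(head(torch.randn(4, 1024)).shape)  # torch.Size([4, 13])
```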
+
+ The source code for training the model can be found in this [Git repository](https://github.com/SeatizenDOI/DinoVdeau).
+
+ - **Developed by:** [lombardata](https://huggingface.co/lombardata), credits to [César Leblanc](https://huggingface.co/CesarLeblanc) and [Victor Illien](https://huggingface.co/groderg)
+
+ ---
+
+ # Intended uses & limitations
+
+ You can use the raw model to classify diverse marine species, encompassing coral morphotype classes taken from the Global Coral Reef Monitoring Network (GCRMN), habitat classes, and seagrass species.
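No usage snippet ships with the card; the sketch below assumes the checkpoint loads through the standard `transformers` image-classification classes. The repo id, image path, and 0.5 decision threshold are illustrative assumptions:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Hypothetical hub path; substitute the actual repo id of this checkpoint.
repo_id = "Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

image = Image.open("reef_quadrat.jpg").convert("RGB")  # any underwater photo
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Multilabel output: an independent sigmoid per class rather than a softmax;
# 0.5 is an assumed decision threshold.
probs = torch.sigmoid(logits)[0]
labels = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(labels)
```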
+
+ ---
+
+ # Training and evaluation data
+
+ Details on the estimated number of images for each class are given in the following table:
+ | Class | train | test | val | Total |
+ |:------------------------|--------:|-------:|------:|--------:|
+ | Acropore_branched | 41 | 32 | 33 | 106 |
+ | Acropore_digitised | 15 | 14 | 14 | 43 |
+ | Acropore_tabular | 5 | 8 | 7 | 20 |
+ | Algae | 50 | 50 | 50 | 150 |
+ | Dead_coral | 25 | 28 | 30 | 83 |
+ | Fish | 34 | 24 | 31 | 89 |
+ | Millepore | 1 | 0 | 0 | 1 |
+ | No_acropore_encrusting | 1 | 0 | 0 | 1 |
+ | No_acropore_massive | 2 | 5 | 5 | 12 |
+ | No_acropore_sub_massive | 27 | 28 | 27 | 82 |
+ | Rock | 45 | 47 | 45 | 137 |
+ | Rubble | 40 | 45 | 44 | 129 |
+ | Sand | 42 | 46 | 45 | 133 |
+
+ ---

+ # Training procedure

+ ## Training hyperparameters

  The following hyperparameters were used during training:
+
+ - **Number of Epochs**: 40.0
+ - **Learning Rate**: 0.001
+ - **Train Batch Size**: 32
+ - **Eval Batch Size**: 32
+ - **Optimizer**: Adam
+ - **LR Scheduler Type**: ReduceLROnPlateau with a patience of 5 epochs and a factor of 0.1
+ - **Freeze Encoder**: Yes
+ - **Data Augmentation**: Yes
+
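A runnable miniature of this optimization setup follows; the encoder and head modules and the validation loss are placeholders, and the actual training loop lives in the DinoVdeau repository linked above:

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the frozen DINOv2 encoder and the trainable head.
encoder = nn.Linear(8, 8)
head = nn.Linear(8, 13)
for p in encoder.parameters():  # Freeze Encoder: Yes
    p.requires_grad = False

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)  # Learning Rate: 0.001
# ReduceLROnPlateau: patience of 5 epochs, factor of 0.1, stepped on validation loss.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=5
)

for epoch in range(40):  # Number of Epochs: 40
    # ... one pass over the training set with batch size 32 would go here ...
    val_loss = 1.0 / (epoch + 1)  # stand-in for the measured validation loss
    scheduler.step(val_loss)      # cuts the lr tenfold after 5 epochs without improvement
```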
+
+ ## Data Augmentation
+
+ Data were augmented using the following transformations:
+
+ Train Transforms
+ - **PreProcess**: No additional parameters
+ - **Resize**: probability=1.00
+ - **RandomHorizontalFlip**: probability=0.25
+ - **RandomVerticalFlip**: probability=0.25
+ - **ColorJiggle**: probability=0.25
+ - **RandomPerspective**: probability=0.25
+ - **Normalize**: probability=1.00
+
+ Val Transforms
+ - **PreProcess**: No additional parameters
+ - **Resize**: probability=1.00
+ - **Normalize**: probability=1.00
+
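The transform names (notably `ColorJiggle`) match Kornia's augmentation API, so a plausible reconstruction of both pipelines is sketched below; the image size, jitter strengths, and normalization statistics are assumptions, not values documented in this card:

```python
import torch
import kornia.augmentation as K

# Assumed values: a 518x518 input and ImageNet statistics are common DINOv2
# settings, not documented in this card.
mean = torch.tensor([0.485, 0.456, 0.406])
std = torch.tensor([0.229, 0.224, 0.225])

train_transforms = torch.nn.Sequential(
    K.Resize((518, 518)),
    K.RandomHorizontalFlip(p=0.25),
    K.RandomVerticalFlip(p=0.25),
    K.ColorJiggle(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1, p=0.25),
    K.RandomPerspective(distortion_scale=0.5, p=0.25),
    K.Normalize(mean=mean, std=std),
)

# Validation keeps only the deterministic steps.
val_transforms = torch.nn.Sequential(
    K.Resize((518, 518)),
    K.Normalize(mean=mean, std=std),
)

# Both pipelines operate on float image batches of shape (B, C, H, W) in [0, 1].
batch = torch.rand(2, 3, 600, 600)
augmented = train_transforms(batch)
```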
+ ## Training results
+
+ | Epoch | Validation Loss | MAE | RMSE | R2 | Learning Rate |
+ |:-----:|:---------------:|:------:|:------:|:--------:|:-------------:|
+ | 1 | 0.7150 | 0.3849 | 0.4100 | -20.2909 | 0.001 |
+ | 2 | 0.7314 | 0.3895 | 0.4163 | -21.2182 | 0.001 |
+ | 3 | 0.7726 | 0.4041 | 0.4321 | -24.8224 | 0.001 |
+ | 4 | 0.7917 | 0.4095 | 0.4380 | -26.5816 | 0.001 |
+ | 5 | 0.7853 | 0.4021 | 0.4318 | -26.9559 | 0.001 |
+ | 6 | 0.7648 | 0.3905 | 0.4224 | -24.4015 | 0.001 |
+ | 7 | 0.7392 | 0.3760 | 0.4103 | -22.5579 | 0.001 |
+ | 8 | 0.7115 | 0.3639 | 0.3983 | -20.0674 | 0.0001 |
+ | 9 | 0.6897 | 0.3535 | 0.3879 | -18.1665 | 0.0001 |
+ | 10 | 0.6777 | 0.3468 | 0.3818 | -16.9447 | 0.0001 |
+ | 11 | 0.6702 | 0.3424 | 0.3780 | -16.0375 | 0.0001 |
+ | 12 | 0.6639 | 0.3389 | 0.3744 | -15.6052 | 0.0001 |
+ | 13 | 0.6565 | 0.3346 | 0.3703 | -14.8051 | 0.0001 |
+ | 14 | 0.6501 | 0.3310 | 0.3668 | -14.2312 | 0.0001 |
+ | 15 | 0.6468 | 0.3289 | 0.3648 | -14.0799 | 0.0001 |
+ | 16 | 0.6471 | 0.3289 | 0.3650 | -14.2557 | 0.0001 |
+ | 17 | 0.6435 | 0.3268 | 0.3631 | -14.0598 | 0.0001 |
+ | 18 | 0.6438 | 0.3270 | 0.3634 | -14.0369 | 0.0001 |
+ | 19 | 0.6400 | 0.3250 | 0.3614 | -13.8152 | 0.0001 |
+ | 20 | 0.6392 | 0.3246 | 0.3609 | -13.7104 | 0.0001 |
+ | 21 | 0.6387 | 0.3246 | 0.3606 | -13.8099 | 0.0001 |
+ | 22 | 0.6388 | 0.3243 | 0.3606 | -13.8497 | 0.0001 |
+ | 23 | 0.6362 | 0.3228 | 0.3590 | -13.5622 | 0.0001 |
+ | 24 | 0.6354 | 0.3223 | 0.3585 | -13.6453 | 0.0001 |
+ | 25 | 0.6345 | 0.3214 | 0.3578 | -13.6023 | 0.0001 |
+ | 26 | 0.6349 | 0.3212 | 0.3581 | -13.6304 | 0.0001 |
+ | 27 | 0.6333 | 0.3201 | 0.3571 | -13.5613 | 0.0001 |
+ | 28 | 0.6295 | 0.3177 | 0.3548 | -13.2331 | 0.0001 |
+ | 29 | 0.6285 | 0.3173 | 0.3543 | -13.1623 | 0.0001 |
+ | 30 | 0.6263 | 0.3163 | 0.3532 | -12.7132 | 0.0001 |
+ | 31 | 0.6273 | 0.3167 | 0.3538 | -12.8739 | 0.0001 |
+ | 32 | 0.6294 | 0.3181 | 0.3550 | -12.9355 | 0.0001 |
+ | 33 | 0.6299 | 0.3185 | 0.3554 | -12.9352 | 0.0001 |
+ | 34 | 0.6321 | 0.3193 | 0.3564 | -13.2672 | 0.0001 |
+ | 35 | 0.6279 | 0.3175 | 0.3541 | -12.9995 | 0.0001 |
+ | 36 | 0.6280 | 0.3174 | 0.3541 | -13.0074 | 0.0001 |
+ | 37 | 0.6304 | 0.3187 | 0.3554 | -13.2310 | 1e-05 |
+ | 38 | 0.6297 | 0.3183 | 0.3551 | -12.9830 | 1e-05 |
+ | 39 | 0.6308 | 0.3193 | 0.3558 | -13.1598 | 1e-05 |
+ | 40 | 0.6292 | 0.3183 | 0.3548 | -13.0698 | 1e-05 |
+
+ ---
+
+ # Framework Versions
+
+ - **Transformers**: 4.44.2
+ - **Pytorch**: 2.4.1+cu121
+ - **Datasets**: 3.0.0
+ - **Tokenizers**: 0.19.1