---
language:
- eng
license: cc0-1.0
tags:
- multilabel-image-classification
- multilabel
- generated_from_trainer
base_model: Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs
model-index:
- name: Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs
  results: []
---

Ziboiai is a fine-tuned version of [Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs](https://huggingface.co/Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs). It achieves the following results on the test set:

- Loss: 0.6188
- F1 Micro: 0.9261
- F1 Macro: 0.8546
- Accuracy: 0.1600
- RMSE: 0.3415
- MAE: 0.3061
- R2: -1.5955

| Class                    | F1 per class |
|:-------------------------|-------------:|
| Acropore_branched        | 0.8966 |
| Acropore_digitised       | 0.6301 |
| Acropore_tabular         | 1.0000 |
| Algae                    | 1.0000 |
| Dead_coral               | 0.8395 |
| Fish                     | 0.8861 |
| Millepore                | 1.0000 |
| No_acropore_encrusting   | 1.0000 |
| No_acropore_massive      | 0.0000 |
| No_acropore_sub_massive  | 0.8571 |
| Rock                     | 1.0000 |
| Rubble                   | 1.0000 |
| Sand                     | 1.0000 |

---

# Model description

Ziboiai is a model built on top of the Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs model for underwater multilabel image classification. The classification head is a combination of linear, ReLU, batch normalization, and dropout layers.
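The head described above can be sketched as follows. This is a minimal illustration assuming one common ordering (linear → batch norm → ReLU → dropout → linear); the actual layer sizes, ordering, and dropout rate in the released checkpoint may differ.

```python
import torch
from torch import nn

class ClassificationHead(nn.Module):
    """Hypothetical sketch of a linear + BatchNorm + ReLU + Dropout head."""

    def __init__(self, in_features: int, num_classes: int,
                 hidden: int = 512, dropout: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden, num_classes),  # one logit per label (13 classes here)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Batch of 4 pooled backbone embeddings (embedding size 1024 is an assumption).
head = ClassificationHead(in_features=1024, num_classes=13)
logits = head(torch.randn(4, 1024))
```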

The source code for training the model can be found in this [Git repository](https://github.com/SeatizenDOI/DinoVdeau).

- **Developed by:** [lombardata](https://huggingface.co/lombardata), credits to [César Leblanc](https://huggingface.co/CesarLeblanc) and [Victor Illien](https://huggingface.co/groderg)

---

# Intended uses & limitations

You can use the raw model to classify diverse marine species, encompassing coral morphotype classes taken from the Global Coral Reef Monitoring Network (GCRMN), habitat classes, and seagrass species.
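Since this is a multilabel classifier, each class receives an independent logit. A minimal sketch of turning logits into predicted labels, assuming a sigmoid activation and a 0.5 decision threshold (the threshold is an assumption, not part of the released model):

```python
import torch

# Classes in the order used by the per-class F1 table above.
CLASSES = ["Acropore_branched", "Acropore_digitised", "Acropore_tabular",
           "Algae", "Dead_coral", "Fish", "Millepore",
           "No_acropore_encrusting", "No_acropore_massive",
           "No_acropore_sub_massive", "Rock", "Rubble", "Sand"]

def predict_labels(logits: torch.Tensor, threshold: float = 0.5) -> list[str]:
    """Keep every class whose independent sigmoid probability clears the threshold."""
    probs = torch.sigmoid(logits)
    keep = (probs >= threshold).nonzero().flatten()
    return [CLASSES[i] for i in keep]

# Made-up logits for illustration only.
labels = predict_labels(torch.tensor([3.0, -2.0, 0.1, 4.0, -1.0, 2.5, -3.0,
                                      -2.0, -4.0, 1.2, 5.0, 0.8, 2.0]))
```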

---

# Training and evaluation data

Details on the estimated number of images for each class are given in the following table:

| Class                    | train | test | val | Total |
|:-------------------------|------:|-----:|----:|------:|
| Acropore_branched        | 41 | 32 | 33 | 106 |
| Acropore_digitised       | 15 | 14 | 14 | 43 |
| Acropore_tabular         | 5 | 8 | 7 | 20 |
| Algae                    | 50 | 50 | 50 | 150 |
| Dead_coral               | 25 | 28 | 30 | 83 |
| Fish                     | 34 | 24 | 31 | 89 |
| Millepore                | 1 | 0 | 0 | 1 |
| No_acropore_encrusting   | 1 | 0 | 0 | 1 |
| No_acropore_massive      | 2 | 5 | 5 | 12 |
| No_acropore_sub_massive  | 27 | 28 | 27 | 82 |
| Rock                     | 45 | 47 | 45 | 137 |
| Rubble                   | 40 | 45 | 44 | 129 |
| Sand                     | 42 | 46 | 45 | 133 |

---

# Training procedure

## Training hyperparameters

The following hyperparameters were used during training:

- **Number of Epochs**: 40.0
- **Learning Rate**: 0.001
- **Train Batch Size**: 32
- **Eval Batch Size**: 32
- **Optimizer**: Adam
- **LR Scheduler Type**: ReduceLROnPlateau with a patience of 5 epochs and a factor of 0.1
- **Freeze Encoder**: Yes
- **Data Augmentation**: Yes
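The reduce-on-plateau policy above (patience 5, factor 0.1) can be illustrated with a small standalone sketch: the learning rate is multiplied by 0.1 whenever the validation loss fails to improve for more than 5 consecutive epochs, mirroring the behavior of `torch.optim.lr_scheduler.ReduceLROnPlateau`. The loss values below are made up for illustration; this is not the training code itself.

```python
def reduce_on_plateau(losses, lr=0.001, patience=5, factor=0.1):
    """Return the learning rate used at each epoch under a plateau policy."""
    best = float("inf")
    bad_epochs = 0
    schedule = []
    for loss in losses:
        if loss < best:
            best, bad_epochs = loss, 0       # improvement: reset the counter
        else:
            bad_epochs += 1
            if bad_epochs > patience:        # plateau exceeded: cut the LR
                lr *= factor
                bad_epochs = 0
        schedule.append(lr)
    return schedule

# Six non-improving epochs in a row trigger one reduction from 1e-3 to 1e-4.
lrs = reduce_on_plateau([0.72, 0.73, 0.77, 0.79, 0.78, 0.76, 0.74, 0.71])
```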

## Data Augmentation

Data were augmented using the following transformations:

Train Transforms
- **PreProcess**: No additional parameters
- **Resize**: probability=1.00
- **RandomHorizontalFlip**: probability=0.25
- **RandomVerticalFlip**: probability=0.25
- **ColorJiggle**: probability=0.25
- **RandomPerspective**: probability=0.25
- **Normalize**: probability=1.00

Val Transforms
- **PreProcess**: No additional parameters
- **Resize**: probability=1.00
- **Normalize**: probability=1.00
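The probabilities listed above gate each transform independently: Resize and Normalize always run, while each random augmentation fires with its own probability. A generic sketch of that gating logic (transform names are placeholders; `ColorJiggle` and `RandomPerspective` suggest a Kornia-based pipeline, but this is not the actual training code):

```python
import random

# (probability, transform name) in application order, matching the list above.
TRAIN_PIPELINE = [
    (1.00, "Resize"),
    (0.25, "RandomHorizontalFlip"),
    (0.25, "RandomVerticalFlip"),
    (0.25, "ColorJiggle"),
    (0.25, "RandomPerspective"),
    (1.00, "Normalize"),
]

def augment(image, pipeline=TRAIN_PIPELINE, rng=random.random):
    """Apply each transform independently with its configured probability."""
    applied = []
    for p, name in pipeline:
        if rng() < p:              # p=1.00 transforms always fire
            applied.append(name)   # a real pipeline would transform `image` here
    return image, applied

_, applied = augment(object())
```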

## Training results

Epoch | Validation Loss | MAE | RMSE | R2 | Learning Rate
--- | --- | --- | --- | --- | ---
1 | 0.7150455713272095 | 0.3848940134048462 | 0.40997111797332764 | -20.29086685180664 | 0.001
2 | 0.7314126491546631 | 0.3895121216773987 | 0.4163060486316681 | -21.218204498291016 | 0.001
3 | 0.7726277112960815 | 0.40413352847099304 | 0.4320966601371765 | -24.822391510009766 | 0.001
4 | 0.7917326092720032 | 0.4094983637332916 | 0.4379725754261017 | -26.581586837768555 | 0.001
5 | 0.7852649092674255 | 0.402120441198349 | 0.43184274435043335 | -26.95589256286621 | 0.001
6 | 0.7647674679756165 | 0.3905399441719055 | 0.42244094610214233 | -24.40153694152832 | 0.001
7 | 0.7391812205314636 | 0.376028835773468 | 0.41028541326522827 | -22.557889938354492 | 0.001
8 | 0.7115270495414734 | 0.36385056376457214 | 0.39825379848480225 | -20.067392349243164 | 0.0001
9 | 0.6896975040435791 | 0.35347798466682434 | 0.3878582715988159 | -18.16646385192871 | 0.0001
10 | 0.6777035593986511 | 0.34683120250701904 | 0.3818005323410034 | -16.94469451904297 | 0.0001
11 | 0.6701759099960327 | 0.3423532247543335 | 0.3779585659503937 | -16.037521362304688 | 0.0001
12 | 0.663905918598175 | 0.3388546407222748 | 0.37438222765922546 | -15.605177879333496 | 0.0001
13 | 0.656491219997406 | 0.3345881700515747 | 0.3702985942363739 | -14.805088996887207 | 0.0001
14 | 0.6501385569572449 | 0.33100754022598267 | 0.3668138384819031 | -14.231175422668457 | 0.0001
15 | 0.6467865705490112 | 0.32885220646858215 | 0.36475783586502075 | -14.07986831665039 | 0.0001
16 | 0.6471170783042908 | 0.3288896679878235 | 0.3650059998035431 | -14.255745887756348 | 0.0001
17 | 0.6435126662254333 | 0.3268200755119324 | 0.36310678720474243 | -14.059813499450684 | 0.0001
18 | 0.6437923908233643 | 0.3269612491130829 | 0.36342939734458923 | -14.036934852600098 | 0.0001
19 | 0.6399621367454529 | 0.3249860107898712 | 0.36136963963508606 | -13.81522274017334 | 0.0001
20 | 0.6391971707344055 | 0.3246455192565918 | 0.3608955144882202 | -13.710391998291016 | 0.0001
21 | 0.6386714577674866 | 0.32462170720100403 | 0.3606450855731964 | -13.809860229492188 | 0.0001
22 | 0.6388444304466248 | 0.3243348002433777 | 0.36056435108184814 | -13.849721908569336 | 0.0001
23 | 0.6361631155014038 | 0.3227779269218445 | 0.35895633697509766 | -13.562189102172852 | 0.0001
24 | 0.635435163974762 | 0.3223152160644531 | 0.35847193002700806 | -13.645319938659668 | 0.0001
25 | 0.6344550848007202 | 0.32144099473953247 | 0.35783687233924866 | -13.602314949035645 | 0.0001
26 | 0.6348865628242493 | 0.3211889863014221 | 0.3580625355243683 | -13.630416870117188 | 0.0001
27 | 0.6332749724388123 | 0.32009246945381165 | 0.3570806384086609 | -13.561347007751465 | 0.0001
28 | 0.6295092701911926 | 0.31767499446868896 | 0.35479238629341125 | -13.23308277130127 | 0.0001
29 | 0.6285346746444702 | 0.3173280954360962 | 0.35434553027153015 | -13.162256240844727 | 0.0001
30 | 0.6263097524642944 | 0.31627562642097473 | 0.3532228171825409 | -12.713174819946289 | 0.0001
31 | 0.6272528767585754 | 0.316723495721817 | 0.35376670956611633 | -12.873921394348145 | 0.0001
32 | 0.6294133067131042 | 0.31807586550712585 | 0.3550169765949249 | -12.935453414916992 | 0.0001
33 | 0.6299176216125488 | 0.3185364603996277 | 0.35538923740386963 | -12.93520736694336 | 0.0001
34 | 0.6320692300796509 | 0.3193182349205017 | 0.35644862055778503 | -13.267191886901855 | 0.0001
35 | 0.6279481649398804 | 0.31752488017082214 | 0.3541102707386017 | -12.99951171875 | 0.0001
36 | 0.6280075907707214 | 0.31736499071121216 | 0.35407301783561707 | -13.00741195678711 | 0.0001
37 | 0.6303659081459045 | 0.3187006115913391 | 0.35543760657310486 | -13.230977058410645 | 1e-05
38 | 0.6297122836112976 | 0.31833118200302124 | 0.3550592064857483 | -12.983016967773438 | 1e-05
39 | 0.630845308303833 | 0.3193325996398926 | 0.35580796003341675 | -13.159842491149902 | 1e-05
40 | 0.6291573643684387 | 0.3182610869407654 | 0.3547934889793396 | -13.069788932800293 | 1e-05

---

# Framework Versions

- **Transformers**: 4.44.2
- **PyTorch**: 2.4.1+cu121
- **Datasets**: 3.0.0
- **Tokenizers**: 0.19.1