---
library_name: transformers
base_model: openai/clip-vit-large-patch14-336
tags:
- generated_from_trainer
model-index:
- name: clip-finetuned-csu-p14-336-e4l57-l
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# clip-finetuned-csu-p14-336-e4l57-l

This model is a fine-tuned version of [openai/clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1766
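
Since the checkpoint keeps the standard CLIP architecture of its base model, it should load through the usual `CLIPModel`/`CLIPProcessor` API. The snippet below is a minimal sketch of zero-shot image-text scoring; the repository id, image path, and candidate captions are illustrative placeholders, not part of this card.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Placeholder id: substitute the actual Hub repo id or local checkpoint path.
model_id = "clip-finetuned-csu-p14-336-e4l57-l"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")                 # placeholder image
texts = ["a photo of a cat", "a photo of a dog"]  # placeholder captions

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores with shape (1, len(texts)).
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```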

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` reconstruction follows the list):
- learning_rate: 5e-07
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
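
The original training script is not part of this card; the sketch below reconstructs only the settings listed above with `transformers.TrainingArguments`. The output directory is a placeholder, the per-device mapping assumes single-device training (the card reports a total train batch size of 128), and every unlisted argument keeps its library default.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the reported run settings; not the author's script.
training_args = TrainingArguments(
    output_dir="clip-finetuned-csu-p14-336-e4l57-l",  # placeholder
    learning_rate=5e-7,
    per_device_train_batch_size=128,  # assumes a single device
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                   # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=4.0,
)
```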

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.271 | 0.0921 | 500 | 1.0693 |
| 0.2493 | 0.1842 | 1000 | 0.9427 |
| 0.2348 | 0.2763 | 1500 | 0.8727 |
| 0.1552 | 0.3685 | 2000 | 0.8326 |
| 0.1753 | 0.4606 | 2500 | 0.7550 |
| 0.1659 | 0.5527 | 3000 | 0.7192 |
| 0.105 | 0.6448 | 3500 | 0.7118 |
| 0.1336 | 0.7369 | 4000 | 0.6953 |
| 0.1154 | 0.8290 | 4500 | 0.6745 |
| 0.108 | 0.9211 | 5000 | 0.6560 |
| 0.1106 | 1.0133 | 5500 | 0.6367 |
| 0.0591 | 1.1054 | 6000 | 0.6259 |
| 0.0745 | 1.1975 | 6500 | 0.6210 |
| 0.0502 | 1.2896 | 7000 | 0.6133 |
| 0.079 | 1.3817 | 7500 | 0.6007 |
| 0.0776 | 1.4738 | 8000 | 0.5866 |
| 0.0492 | 1.5660 | 8500 | 0.5679 |
| 0.0794 | 1.6581 | 9000 | 0.5762 |
| 0.0677 | 1.7502 | 9500 | 0.5566 |
| 0.0566 | 1.8423 | 10000 | 0.5482 |
| 0.0828 | 1.9344 | 10500 | 0.5500 |
| 0.0573 | 2.0265 | 11000 | 0.5342 |
| 0.0401 | 2.1186 | 11500 | 0.5351 |
| 0.0152 | 2.2108 | 12000 | 0.5349 |
| 0.0638 | 2.3029 | 12500 | 0.5318 |
| 0.0488 | 2.3950 | 13000 | 0.5306 |
| 0.0456 | 2.4871 | 13500 | 0.5211 |
| 0.0264 | 2.5792 | 14000 | 0.5194 |
| 0.0381 | 2.6713 | 14500 | 0.5206 |
| 0.0413 | 2.7634 | 15000 | 0.5168 |
| 0.0392 | 2.8556 | 15500 | 0.5149 |
| 0.0352 | 2.9477 | 16000 | 0.5112 |
| 0.0467 | 3.0398 | 16500 | 0.5098 |
| 0.0366 | 3.1319 | 17000 | 0.5089 |
| 0.0454 | 3.2240 | 17500 | 0.5104 |
| 0.0209 | 3.3161 | 18000 | 0.5071 |
| 0.0636 | 3.4083 | 18500 | 0.5045 |
| 0.0159 | 3.5004 | 19000 | 0.5019 |
| 0.0303 | 3.5925 | 19500 | 0.4985 |
| 0.0353 | 3.6846 | 20000 | 0.4975 |
| 0.0261 | 3.7767 | 20500 | 0.4962 |
| 0.0291 | 3.8688 | 21000 | 0.4956 |
| 0.0338 | 3.9609 | 21500 | 0.4956 |
| 0.0432 | 3.6845 | 22000 | 0.1784 |
| 0.0461 | 3.7682 | 22500 | 0.1767 |
| 0.0513 | 3.8520 | 23000 | 0.1774 |
| 0.0326 | 3.9357 | 23500 | 0.1766 |
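
The card does not state the training objective. If the run used the standard contrastive loss that `CLIPModel` computes when called with `return_loss=True` (an assumption, since the training script is unknown), the validation numbers above correspond to the symmetric cross-entropy over matched image-text pairs sketched below.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of CLIP's symmetric contrastive (InfoNCE) loss.
# image_embeds and text_embeds are L2-normalized, shape (batch, dim);
# logit_scale is the model's learned temperature (already exponentiated).
def clip_contrastive_loss(image_embeds: torch.Tensor,
                          text_embeds: torch.Tensor,
                          logit_scale: torch.Tensor) -> torch.Tensor:
    logits = logit_scale * image_embeds @ text_embeds.t()  # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_images = F.cross_entropy(logits, targets)     # image -> text matching
    loss_texts = F.cross_entropy(logits.t(), targets)  # text -> image matching
    return (loss_images + loss_texts) / 2
```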

### Framework versions

The run was produced with the following versions (a quick environment check follows the list):

- Transformers 4.45.0.dev0
- PyTorch 1.12.1
- Datasets 2.21.0
- Tokenizers 0.19.1
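
To reproduce or debug this environment, the snippet below simply prints the locally installed versions of the four packages listed above for comparison; it assumes nothing else about the setup.

```python
# Print locally installed versions to compare against the card's list above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # card reports 4.45.0.dev0
print("PyTorch:", torch.__version__)              # card reports 1.12.1
print("Datasets:", datasets.__version__)          # card reports 2.21.0
print("Tokenizers:", tokenizers.__version__)      # card reports 0.19.1
```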