---
datasets:
- pcuenq/oxford-pets
metrics:
- accuracy
pipeline_tag: image-classification
---

# CLIP ViT Base Patch32 Fine-tuned on Oxford Pets

This model is a fine-tuned version of OpenAI's CLIP model on the Oxford Pets dataset, intended for pet classification.

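The model is used like any CLIP checkpoint: the image embedding is scored against one text embedding per class prompt, and the highest-scoring class wins. A minimal sketch of that scoring step, using hypothetical 4-d embeddings and numpy instead of the real checkpoint:

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs, logit_scale=100.0):
    """Score an image against each class prompt by scaled cosine similarity."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = logit_scale * (txt @ img)
    # numerically stable softmax over the class logits
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

# hypothetical embeddings; a real run would come from the CLIP encoders
image = np.array([1.0, 0.0, 0.0, 0.0])
classes = np.array([[0.9, 0.1, 0.0, 0.0],   # e.g. "a photo of a cat"
                    [0.0, 1.0, 0.0, 0.0]])  # e.g. "a photo of a dog"
probs = zero_shot_scores(image, classes)
print(probs.argmax())  # class 0 scores highest for this toy image embedding
```

With the actual checkpoint, the image and text embeddings would instead come from the model's vision and text towers via the CLIP processor.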
## Training Information

- **Training Epochs**: 4
- **Batch Size**: 256
- **Learning Rate**: 3e-6
- **Test Accuracy**: 93.74%

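The hyperparameters above map onto a standard Hugging Face `Trainer` setup; a hypothetical configuration sketch (the output directory name is made up, and all other arguments are left at their defaults):

```python
from transformers import TrainingArguments

# epochs, batch size, and learning rate taken from the table above;
# output_dir is a hypothetical name
training_args = TrainingArguments(
    output_dir="clip-vit-base-patch32-oxford-pets",
    num_train_epochs=4,
    per_device_train_batch_size=256,
    learning_rate=3e-6,
)
```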
## Parameters Information

Trainable params: 151.2773M || All params: 151.2773M || Trainable%: 100.00%

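The 100% trainable figure means this was a full fine-tune (no frozen towers or adapters). A small sketch of how such a summary line can be computed for any PyTorch model, shown here on a toy module rather than the actual CLIP checkpoint:

```python
import torch.nn as nn

def summarize_params(model: nn.Module) -> str:
    """Report trainable vs. total parameter counts in millions."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return (f"Trainable params: {trainable / 1e6:.4f}M || "
            f"All params: {total / 1e6:.4f}M || "
            f"Trainable%: {100 * trainable / total:.2f}%")

# toy stand-in; running this on the actual model yields the figures above
print(summarize_params(nn.Linear(8, 4)))
```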
## Bias, Risks, and Limitations

Refer to the original [CLIP repository](https://huggingface.co/openai/clip-vit-base-patch32) for bias, risks, and limitations.

## License

[MIT]