Update README.md
AkshatSurolia committed • Commit 610e542 • Parent(s): 90a02b8
README.md CHANGED
@@ -6,9 +6,9 @@ datasets:
 - Face-Mask18K
 ---
 
-#
+# Vision Transformer (ViT) for Face Mask Detection
 
-
+Vision Transformer (ViT) model model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al.
 Vision Transformer (ViT) model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Dosovitskiy et al.
 
 ## Model description
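For context on the model-card paragraph changed above, here is a minimal inference sketch using the Hugging Face Transformers Auto classes. The repo id and image path are assumptions for illustration only; this commit does not state the checkpoint's exact Hub id, so substitute the real one from the model page. The two output classes correspond to the "2 classes" of the Face-Mask18K dataset mentioned in the description.

```python
# Minimal inference sketch for the face-mask classifier described above.
# Assumptions (not stated in this commit): the fine-tuned checkpoint is published on the
# Hugging Face Hub under a repo id like "AkshatSurolia/ViT-FaceMask-18K", and "photo.jpg"
# is any local image file.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "AkshatSurolia/ViT-FaceMask-18K"  # hypothetical id, replace with the actual one
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("photo.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")  # resizes/normalizes to 224x224
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 2): one score per class (mask / no mask)

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```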