rawalkhirodkar committed
Commit 1f64c65
Parent(s): 857e99f
Update model card for Sapiens with architecture details
README.md CHANGED
@@ -3,32 +3,31 @@ language: en
 license: cc-by-nc-4.0
 ---
 
-# Sapiens-
+# Sapiens-2B-torchscript
 
-## Model Card
-
-
+## Model Card
+- **Embedding Dimensions:** N/A
+- **Num Layers:** N/A
+- **Num Heads:** N/A
+- **Feedforward Channels:** N/A
+- **Num Parameters:** 2B
+- **Input Image Size:** 1024 x 1024
+- **Patch Size:** 16 x 16
 
 ## Model Details
-
-
-Sapiens-2b natively support 1K high-resolution inference and are extremely easy to adapt for individual tasks by simply fine-tuning models pretrained on over 300 million in-the-wild human images. The resulting models exhibit remarkable generalization to in-the-wild data, even when labeled data is scarce or entirely synthetic. Our simple model design also brings scalability - model performance across tasks improves as we scale the parameters from 0.3 to 2 billion. Sapiens consistently surpasses existing baselines across various human-centric benchmarks.
+Sapiens is a family of vision transformers pretrained on 300 million human images at 1024 x 1024 image resolution. The pretrained models, when fine-tuned for human-centric vision tasks, generalize to in-the-wild conditions.
+Sapiens-2B natively supports 1K high-resolution inference. The resulting models exhibit remarkable generalization to in-the-wild data, even when labeled data is scarce or entirely synthetic.
 
 - **Developed by:** Meta
 - **Model type:** Vision Transformer
 - **License:** Creative Commons Attribution-NonCommercial 4.0
-- **Model Size:** 2b
 - **Task:** pretrain
 - **Format:** torchscript
 - **File:** sapiens_2b_epoch_660_torchscript.pt2
 
-
-
 ### Model Sources
-
 - **Repository:** [https://github.com/facebookresearch/sapiens](https://github.com/facebookresearch/sapiens)
 - **Paper:** [https://arxiv.org/abs/2408.12569](https://arxiv.org/abs/2408.12569)
 
 ## Uses
-
-Pretrained 2b model can be used for feature extraction, fine-tuning, or as a starting point for training new models.
+The pretrained 2B model can be used for feature extraction, fine-tuning, or as a starting point for training new models.
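To make the "Uses" line concrete, here is a minimal feature-extraction sketch. Only the TorchScript format, the `sapiens_2b_epoch_660_torchscript.pt2` file name, and the 1024 x 1024 input size come from the card; the normalization statistics, the example image path, and the assumption that the exported module returns a single feature tensor are illustrative, not documented behavior. (With the listed 1024 x 1024 input and 16 x 16 patches, the encoder would see 64 x 64 = 4096 patch tokens.)

```python
# Minimal sketch: load the Sapiens-2B TorchScript export and extract features.
# Assumptions (not from the model card): ImageNet normalization stats, the example
# image path, and that the exported module returns a single feature tensor.
import torch
from PIL import Image
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# File name and format ("torchscript") are taken from the model card.
model = torch.jit.load("sapiens_2b_epoch_660_torchscript.pt2", map_location=device)
model.eval()

# The card lists a 1024 x 1024 input size; the normalization values are assumed.
preprocess = transforms.Compose([
    transforms.Resize((1024, 1024)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("example_person.jpg").convert("RGB")  # hypothetical input image
batch = preprocess(image).unsqueeze(0).to(device)        # shape: (1, 3, 1024, 1024)

with torch.inference_mode():
    features = model(batch)  # pretrained encoder output, usable as image features

print(tuple(features.shape))
```

From here, the extracted features can be cached for downstream probing, or the checkpoint can serve as initialization for task-specific fine-tuning as described in the repository linked above.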