Update README.md
Update the training epochs for this model.
README.md CHANGED
@@ -39,7 +39,7 @@ metrics:
 
 EfficientFormer-L3, developed by [Snap Research](https://github.com/snap-research), is one of three EfficientFormer models. The EfficientFormer models were released as part of an effort to prove that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance.
 
-This checkpoint of EfficientFormer-L3 was trained for
+This checkpoint of EfficientFormer-L3 was trained for 300 epochs.
 
 - Developed by: Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren
 - Language(s): English
@@ -118,7 +118,7 @@ See the [data card](https://huggingface.co/datasets/imagenet-1k) for additional
 #### Training Procedure
 
 * Parameters: 31.4 M
-* Train. Epochs:
+* Train. Epochs: 300
 
 Trained on a cluster with NVIDIA A100 and V100 GPUs.
 
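For anyone who wants to try the 300-epoch checkpoint this commit documents, here is a minimal sketch using the `transformers` Auto classes. The hub id `snap-research/efficientformer-l3-300` and the sample image URL are assumptions for illustration, not part of this commit; a `transformers` release that still ships EfficientFormer support is also assumed.

```python
# Minimal sketch: classify one image with the 300-epoch EfficientFormer-L3
# checkpoint. The hub id below is an assumption based on the naming pattern
# of the released EfficientFormer checkpoints.
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

checkpoint = "snap-research/efficientformer-l3-300"  # assumed hub id
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(checkpoint)

# Rough sanity check against the card's "Parameters: 31.4 M".
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f} M parameters")

# Any RGB image works; this COCO val2017 image is just an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```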