BEiT (base-sized model, fine-tuned on ImageNet-1k after being intermediately fine-tuned on ImageNet-22k)
BEiT (BERT pre-training of Image Transformers) model pre-trained in a self-supervised way on ImageNet-22k (14 million images, 21,841 classes) at resolution 224x224, intermediately fine-tuned on the same dataset at the same resolution, and then fine-tuned on ImageNet-1k (1 million images, 1,000 classes), also at resolution 224x224. It was introduced in the paper BEiT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei and first released in this repository.
Disclaimer: The team releasing BEiT did not write a model card for this model, so this model card has been written by the Hugging Face team.
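The snippet below is a minimal sketch of how a checkpoint like this can be used for image classification with the Transformers library. It assumes the model is published on the Hub as microsoft/beit-base-patch16-224 (replace with the actual repository name if it differs) and classifies an example image from the COCO dataset.

```python
from transformers import BeitImageProcessor, BeitForImageClassification
from PIL import Image
import requests

# Example image (two cats on a couch) from the COCO dataset
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Checkpoint name is an assumption; swap in the actual Hub repository if it differs
checkpoint = "microsoft/beit-base-patch16-224"
processor = BeitImageProcessor.from_pretrained(checkpoint)
model = BeitForImageClassification.from_pretrained(checkpoint)

# Preprocess the image and run a forward pass
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# The model predicts one of the 1,000 ImageNet-1k classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```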