Update README.md
README.md
@@ -16,7 +16,7 @@ widget:
 
 # ConvNeXt V2 (base-sized model)
 
-ConvNeXt V2 model pretrained using the FCMAE framework and fine-tuned on the ImageNet-22K dataset at resolution
+ConvNeXt V2 model pretrained using the FCMAE framework and fine-tuned on the ImageNet-22K dataset at resolution 384x384. It was introduced in the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Woo et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt-V2).
 
 Disclaimer: The team releasing ConvNeXT V2 did not write a model card for this model so this model card has been written by the Hugging Face team.
 
@@ -43,8 +43,8 @@ from datasets import load_dataset
 dataset = load_dataset("huggingface/cats-image")
 image = dataset["test"]["image"][0]
 
-preprocessor = AutoImageProcessor.from_pretrained("facebook/convnextv2-base-22k-
-model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-base-22k-
+preprocessor = AutoImageProcessor.from_pretrained("facebook/convnextv2-base-22k-384")
+model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-base-22k-384")
 
 inputs = preprocessor(image, return_tensors="pt")
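For reference, here is a minimal sketch of how the corrected snippet runs end to end, assuming the standard `transformers` image-classification pattern. The `torch.no_grad()` forward pass and the `id2label` lookup are not part of this diff; they are added only to make the example self-contained.

```python
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification
import torch
from datasets import load_dataset

# Load a sample image (first test image of the cats-image dataset, as in the README)
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

# Checkpoint name taken from the "+" lines of this diff
preprocessor = AutoImageProcessor.from_pretrained("facebook/convnextv2-base-22k-384")
model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-base-22k-384")

inputs = preprocessor(image, return_tensors="pt")

# Forward pass without gradient tracking (inference only)
with torch.no_grad():
    logits = model(**inputs).logits

# The 22k checkpoint predicts one of the ImageNet-22K classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

The `-384` suffix in the checkpoint name matches the 384x384 fine-tuning resolution added to the model description in the first hunk.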