Error occurs when loading models
I tried to load the model in three ways, but none of them worked.
I followed the Quick Start in the GitHub repo, which is:
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor
# hf_repo = "nvidia/E-RADIO" # For E-RADIO.
hf_repo = "nvidia/RADIO" # For RADIO.
image_processor = CLIPImageProcessor.from_pretrained(hf_repo)
model = AutoModel.from_pretrained(hf_repo, trust_remote_code=True)
model.eval().cuda()
image = Image.open('./assets/radio.png').convert('RGB')
pixel_values = image_processor(images=image, return_tensors='pt', do_resize=True).pixel_values
pixel_values = pixel_values.cuda()
summary, features = model(pixel_values)
First, I set hf_repo = "nvidia/RADIO" and got this error:
No pretrained configuration specified for vit_huge_patch16_224 model. Using a default. Please add a config to the model pretrained_cfg registry or pass explicitly.
Then I switched to hf_repo = "nvidia/E-RADIO", but got an error quite similar to the one reported in another discussion:
ModuleNotFoundError: No module named 'transformers_modules.nvidia.E-RADIO.91daee99be8a46e408afd869884bd30b9c6fcddf.vit_patch_generator'
Finally, I downloaded the weights and tried to load the model offline. Another error occurred:
ModuleNotFoundError: No module named 'transformers_modules.local.enable_cpe_support'
Has anyone successfully loaded the model? I'd really appreciate it if somebody could help me out.
Hello, thanks for trying RADIO out! The first message you quote (No pretrained configuration specified for vit_huge_patch16_224 model.) is in fact just a warning, so you should be able to use the RADIO model anyway.
I am unable to reproduce the second error (after the switch to hf_repo = "nvidia/E-RADIO"). For what it's worth, I am using transformers==4.37.2.
Apologies for the delayed response.
Thank you for the patient reply!
I tried hf_repo = "nvidia/RADIO" again on another machine and it worked pretty well!
I really appreciate the excellent work!