How to use NVLM for text-only tasks? [wrong pixel_values size: torch.Size([1, 5])]

#28 opened by vedantbahel

Getting the error: ValueError: wrong pixel_values size: torch.Size([1, 5])
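A likely cause, based on the InternVL-style remote code that NVLM-D builds on (an assumption, not verified against the model's modeling file): the custom generate() takes pixel_values as its first positional parameter, so the generate() call near the end of my code below binds the token IDs to pixel_values, and a 5-token prompt then shows up as a "pixel_values" tensor of shape [1, 5]. A minimal sketch of that same call rewritten with keyword arguments only, under that assumption:

# Sketch only: pass everything by keyword so nothing is bound to pixel_values
# positionally. Parameter names are assumed from InternVL-style remote code;
# check the model repository's modeling file to confirm them.
with torch.no_grad():
    outputs = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_new_tokens=1024,
    )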

Here is my code. I need to run it on the CPU.

from transformers import AutoModel, AutoTokenizer
import torch

# Load model with device_map set to "cpu" for CPU usage

path = "nvidia/NVLM-D-72B"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    use_flash_attn=False,
    trust_remote_code=True,
    device_map={"": "cpu"}  # Ensure everything is on CPU
).eval()

# Set device to CPU, since CUDA is not available

device = 'cpu'  # Force device to CPU if no GPU available
print(device)  # Ensure it prints 'cpu'
model = model.to(device)  # Explicitly set the model to CPU

# Load tokenizer (no need to move it to a device)

tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

# Set up generation configuration

generation_config = dict(max_new_tokens=1024, do_sample=False)

# Query for the model

query = 'What is transformer model?'

# Tokenize the query

inputs = tokenizer(query, return_tensors="pt").to(device)

# Generate a response with the model

with torch.no_grad():
    outputs = model.generate(inputs["input_ids"], max_length=1024)

# Decode and print the response

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
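For a text-only query, InternVL-style models (which NVLM-D's remote code follows) are usually driven through their chat() helper with pixel_values set to None rather than by calling generate() directly, and the NVLM-D model card's pure-text example uses that pattern. A minimal sketch of it, reusing the objects defined above; the chat() signature, including the history and return_history keywords, is assumed from the InternVL-style remote code and should be checked against the model's files:

# Sketch of a pure-text query via the model's chat() helper (pixel_values=None).
# The signature is assumed from InternVL-style remote code; verify it against
# the NVLM-D repository before relying on it.
response, history = model.chat(tokenizer, None, query, generation_config,
                               history=None, return_history=True)
print(response)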
