Cannot download the model.
#2 opened by hbh234
When I tried to run the example code, an error happened on this line:

model = AutoModel.from_pretrained('castorini/repllama-v1-mrl-7b-lora-passage')

It raises:

castorini/repllama-v1-mrl-7b-lora-passage is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'. If this is a private repository, make sure to pass a token having permission to this repo, either by logging in with huggingface-cli login or by passing token=<your_token>.
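For reference, this is what the error message's suggestion of passing a token looks like (a sketch only; 'hf_xxx' is a placeholder, not a real token):

from transformers import AutoModel

# Pass a Hugging Face access token explicitly, as the error suggests
# ('hf_xxx' is a placeholder value)
model = AutoModel.from_pretrained(
    'castorini/repllama-v1-mrl-7b-lora-passage',
    token='hf_xxx',
)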
By the way, we noticed the following in the source code:
class RepLLaMA(EncoderModel):
    def __init__(self,
                 lm_q: PreTrainedModel,
                 lm_p: PreTrainedModel,
                 pooler: nn.Module = None,
                 untie_encoder: bool = False,
                 negatives_x_device: bool = False
                 ):
        super().__init__(lm_q, lm_p, pooler, untie_encoder, negatives_x_device)
        self.config = lm_q.config

    def encode_passage(self, psg):
        if psg is None:
            return None
        psg_out = self.lm_p(**psg, output_hidden_states=True)
        p_hidden = psg_out.hidden_states[-1]
        attention_mask = psg['attention_mask']
        # p_reps is the representation of the last non-padding token
        sequence_lengths = attention_mask.sum(dim=1)
        last_token_indices = sequence_lengths - 1
        p_reps = p_hidden[torch.arange(p_hidden.size(0)), last_token_indices]
        p_reps = nn.functional.normalize(p_reps, p=2, dim=-1)
        return p_reps
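In case it is useful, here is a minimal, self-contained sketch (toy tensors, not tevatron code) of the last-token pooling that encode_passage performs:

import torch

# Toy hidden states: batch of 2 sequences, length 4, hidden size 3
hidden = torch.randn(2, 4, 3)
# The second sequence ends with one padding token
attention_mask = torch.tensor([[1, 1, 1, 1],
                               [1, 1, 1, 0]])

# Index of the last non-padding token in each sequence
last_token_indices = attention_mask.sum(dim=1) - 1  # tensor([3, 2])
reps = hidden[torch.arange(hidden.size(0)), last_token_indices]
reps = torch.nn.functional.normalize(reps, p=2, dim=-1)
print(reps.shape)  # torch.Size([2, 3])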
The query LM (lm_q) and passage LM (lm_p) seem to be different. However, the example code seems to use one model to encode both of them:
import torch
from transformers import AutoModel, AutoTokenizer

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')
model = AutoModel.from_pretrained('castorini/repllama-v1-mrl-7b-lora-passage')
dim = 512

# Define query and passage inputs
query = "What is llama?"
title = "Llama"
passage = "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era."
query_input = tokenizer(f'query: {query}</s>', return_tensors='pt')
passage_input = tokenizer(f'passage: {title} {passage}</s>', return_tensors='pt')

# Run the model forward to compute embeddings and query-passage similarity score
with torch.no_grad():
    # compute query embedding
    query_outputs = model(**query_input)
    query_embedding = query_outputs.last_hidden_state[0][-1][:dim]
    query_embedding = torch.nn.functional.normalize(query_embedding, p=2, dim=0)

    # compute passage embedding
    passage_outputs = model(**passage_input)
    passage_embeddings = passage_outputs.last_hidden_state[0][-1][:dim]
    passage_embeddings = torch.nn.functional.normalize(passage_embeddings, p=2, dim=0)

    # compute similarity score
    score = torch.dot(query_embedding, passage_embeddings)
    print(score)
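One more observation from my side (not from the model card): last_hidden_state[0][-1] only picks the right token here because the batch size is 1 and there is no padding. With batched, padded inputs the last non-padding token has to be selected via the attention mask, as encode_passage does above. A sketch under that assumption:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')
tokenizer.pad_token = tokenizer.eos_token  # Llama defines no pad token by default
tokenizer.padding_side = 'right'           # so that sum(mask) - 1 is the last real token
model = AutoModel.from_pretrained('castorini/repllama-v1-mrl-7b-lora-passage')
dim = 512

texts = [
    'query: What is llama?</s>',
    'passage: Llama The llama is a domesticated South American camelid.</s>',
]
inputs = tokenizer(texts, return_tensors='pt', padding=True)

with torch.no_grad():
    outputs = model(**inputs)
    # Select the last non-padding token of each sequence, then truncate to dim
    last_indices = inputs['attention_mask'].sum(dim=1) - 1
    reps = outputs.last_hidden_state[torch.arange(len(texts)), last_indices][:, :dim]
    reps = torch.nn.functional.normalize(reps, p=2, dim=-1)
    print(reps[0] @ reps[1])  # query-passage similarity score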
Hi @hbh234,
- For the first issue, it looks like there is a typo in the model name you load: v1 should be v1.1, i.e. castorini/repllama-v1.1-mrl-7b-lora-passage.
- As for lm_q and lm_p, they point to the same model (not a clone), as the sketch below illustrates: https://github.com/texttron/tevatron/blob/7d298b428234f1c1065e98244827824753361815/examples/repllama/repllama.py#L93C1-L94C26
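Roughly speaking, the build step does something like this when the encoder is tied (a simplified sketch, not the exact tevatron code):

import copy

class Encoder:
    def __init__(self, lm, untie_encoder=False):
        self.lm_q = lm
        # With untie_encoder=False, lm_p is the very same object as lm_q
        self.lm_p = lm if not untie_encoder else copy.deepcopy(lm)

enc = Encoder(lm=object())
print(enc.lm_q is enc.lm_p)  # True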
Thank you! Your answer has solved all my problems!