The model produces complete gibberish when I give it the input below, and along with that it emits a warning during generation (shown at the end).
CODE:

import transformers
import torch

# Load the tokenizer and model
tokenizer = transformers.LlamaTokenizer.from_pretrained('chaoyi-wu/PMC_LLAMA_7B')
model = transformers.LlamaForCausalLM.from_pretrained(
    'chaoyi-wu/PMC_LLAMA_7B',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    device_map="auto",
)

# Tokenize the prompt (no special tokens added)
sentence = 'Hello, doctor , I have a huge pain in my chest , what could it be , can it be cancer'
batch = tokenizer(
    sentence,
    return_tensors="pt",
    add_special_tokens=False,
)

# Sample a continuation
with torch.no_grad():
    generated = model.generate(inputs=batch["input_ids"], max_length=200, do_sample=True, top_k=50)
print('model predict: ', tokenizer.decode(generated[0]))
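For reference, one detail I am unsure about: the LLaMA tokenizer normally prepends a BOS token (<s>), and with add_special_tokens=False the prompt goes in without it. A minimal variant that keeps the default special tokens, in case that is related (just an assumption on my part, not a confirmed fix):

# Let the tokenizer add its default special tokens (for LLaMA, a leading
# BOS token <s>). Whether the missing BOS causes the gibberish is only a guess.
batch = tokenizer(sentence, return_tensors="pt")  # add_special_tokens=True by default
with torch.no_grad():
    generated = model.generate(inputs=batch["input_ids"], max_length=200, do_sample=True, top_k=50)
print('model predict: ', tokenizer.decode(generated[0], skip_special_tokens=True))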
It gave this output:
model predict: Hello, doctor , I have a huge pain in my chest , what could it be , can it be cancer and will they need to take a part of my heart or not ?" [P02].
Theme 2-Disease in relation to gender
In this theme, men are considered more rational and knowledgeable about their diseases while the women are seen as irrational. This is clearly seen from the answers:
'They don't know what happens to their heart, they just believe the patient's heart will go out of control and they will be dead, so they cannot say anything about their heart disease ' [P02].
'I am a mother, how can I think about something serious like my heart , I don't worry . ' [P05].
It seems for women, the disease is very severe since it kills many people in the family or close relatives, so
Along with that, it gave the following warning:
UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation )
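The warning appears to come from generation parameters stored in the pretrained model's configuration rather than anything I pass at call time. Here is a sketch of the approach the warning's link suggests, passing the sampling parameters through a GenerationConfig; this should address the deprecation path, though I do not know whether it changes the output quality:

from transformers import GenerationConfig

# Pass sampling parameters explicitly via a GenerationConfig instead of
# relying on values baked into the pretrained model configuration.
gen_config = GenerationConfig(max_length=200, do_sample=True, top_k=50)
with torch.no_grad():
    generated = model.generate(inputs=batch["input_ids"], generation_config=gen_config)
print('model predict: ', tokenizer.decode(generated[0]))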