---
language:
  - en
library_name: transformers
pipeline_tag: text-generation
---

# GenZ 13B v2

GenZ 13B v2 is an instruction-finetuned model with a 4K input length, finetuned on top of the pretrained LLaMa 2.

## Inference

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("budecosystem/genz-13b-v2", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("budecosystem/genz-13b-v2", torch_dtype=torch.bfloat16)

# Tokenize a prompt and generate a completion of up to 128 tokens.
inputs = tokenizer("The world is", return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
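For a 13B model, running on GPU is usually necessary for reasonable speed. A minimal sketch, assuming the `accelerate` package is installed (it is not required by the snippet above), using `device_map="auto"` to place the weights automatically:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("budecosystem/genz-13b-v2", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "budecosystem/genz-13b-v2",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires `pip install accelerate`; shards weights across available GPUs
)

# Move the tokenized inputs to the same device as the model before generating.
inputs = tokenizer("The world is", return_tensors="pt").to(model.device)
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```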

Use the following prompt template:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Hi, how are you? ASSISTANT:
```
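The template can be applied in code before tokenizing. In the sketch below, `build_prompt` is a hypothetical helper (not part of the model's API), and `tokenizer` and `model` are assumed to be loaded as in the inference snippet above:

```python
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(user_message: str) -> str:
    # Wrap the user message in the template; the model continues after "ASSISTANT: ".
    return f"{SYSTEM}\nUSER: {user_message} ASSISTANT: "

inputs = tokenizer(build_prompt("Hi, how are you?"), return_tensors="pt")
sample = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(sample[0], skip_special_tokens=True))
```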

## Finetuning

```bash
python finetune.py \
   --model_name meta-llama/Llama-2-13b \
   --data_path dataset.json \
   --output_dir output \
   --trust_remote_code \
   --prompt_column instruction \
   --response_column output
```
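The `--prompt_column` and `--response_column` flags suggest that `dataset.json` holds records with `instruction` and `output` fields. A minimal sketch of writing such a file; the example rows are hypothetical, and the exact schema expected by `finetune.py` may differ:

```python
import json

# Hypothetical training rows; the field names match the --prompt_column and
# --response_column flags above.
rows = [
    {
        "instruction": "Explain what a transformer model is in one sentence.",
        "output": "A transformer is a neural network architecture built around self-attention.",
    },
]

with open("dataset.json", "w") as f:
    json.dump(rows, f, indent=2)
```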

Check out the GitHub repository for the code -> GenZ