---
datasets:
  - benchang1110/Guanaco-Taide
  - benchang1110/TaiwanChat-Taide
  - benchang1110/Belle-Taide
  - benchang1110/ChatTaiwan
library_name: transformers
---

# Model Card for Model ID

This model is the instruction-finetuned version of benchang1110/SmolLM-135M-Taiwan.

## Usage

The script below runs an interactive chat loop on the command line: it loads the model and tokenizer, applies the chat template to each user prompt, and streams the generated reply to stdout. Type `exit` to quit.

```python
import torch
import transformers


def generate_response():
    # Load the instruction-tuned model and its tokenizer from the Hub.
    model = transformers.AutoModelForCausalLM.from_pretrained(
        "benchang1110/SmolLM-135M-Taiwan-Instruct-v0.1"
    ).to(device)
    tokenizer = transformers.AutoTokenizer.from_pretrained(
        "benchang1110/SmolLM-135M-Taiwan-Instruct-v0.1"
    )
    # Stream generated tokens to stdout as they are produced, skipping the prompt.
    streamer = transformers.TextStreamer(tokenizer, skip_prompt=True)
    while True:
        prompt = input("USER: ")
        if prompt == "exit":
            break
        print("Assistant: ")
        messages = [
            {"content": prompt, "role": "user"},
        ]
        # Format the conversation with the model's chat template and append
        # the generation prompt so the model continues as the assistant.
        formatted_chat = tokenizer.apply_chat_template(
            messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
        ).to(device)
        _ = model.generate(
            formatted_chat,
            streamer=streamer,
            use_cache=True,
            max_new_tokens=1024,
            do_sample=True,
        )


if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    generate_response()
```