
Template Prompt

#20
by sadra - opened

Is there any recommended template to get the best out of the model?

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nvidia/Llama-3.1-Nemotron-70B-Instruct-HF")

# default_chat_template was deprecated and later removed in transformers;
# the template shipped with the model lives on tokenizer.chat_template,
# and tokenizer.apply_chat_template(messages) renders it for you.
print(tokenizer.chat_template)
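For reference, here is a minimal local sketch of what the rendered prompt looks like, assuming this model follows the standard Llama 3.1 chat layout (header and `<|eot_id|>` special tokens). The `format_llama31_prompt` helper is illustrative only; in practice, `tokenizer.apply_chat_template` should be preferred because it uses the exact template shipped with the model.

```python
# Illustrative sketch of the Llama 3.1 Instruct chat layout, assuming the
# model uses the standard Llama 3.1 template. No network access needed.

def format_llama31_prompt(messages):
    """Render a list of {"role", "content"} dicts in Llama 3.1 chat style."""
    prompt = "<|begin_of_text|>"
    for m in messages:
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # A trailing assistant header cues the model to generate its reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]
print(format_llama31_prompt(messages))
```

With `apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` you would get the same structure, produced directly from the model's own template.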
