Chat template question
Hello IBM Granite team,
I'm trying to extract the parsed template so that this model can be used by different inference software like llama.cpp.
I've noticed that you use the following jinja chat template in tokenizer_config.json:
"chat_template": "{%- if tools %}\n {{- '<|start_of_role|>available_tools<|end_of_role|>\n' }}\n {%- for tool in tools %}\n {{- tool | tojson(indent=4) }}\n {%- if not loop.last %}\n {{- '\n\n' }}\n {%- endif %}\n {%- endfor %}\n {{- '<|end_of_text|>\n' }}\n{%- endif %}\n{%- for message in messages %}\n {%- if message['role'] == 'system' %}\n {{- '<|start_of_role|>system<|end_of_role|>' + message['content'] + '<|end_of_text|>\n' }}\n {%- elif message['role'] == 'user' %}\n {{- '<|start_of_role|>user<|end_of_role|>' + message['content'] + '<|end_of_text|>\n' }}\n {%- elif message['role'] == 'assistant' %}\n {{- '<|start_of_role|>assistant<|end_of_role|>' + message['content'] + '<|end_of_text|>\n' }}\n {%- elif message['role'] == 'assistant_tool_call' %}\n {{- '<|start_of_role|>assistant<|end_of_role|><|tool_call|>' + message['content'] + '<|end_of_text|>\n' }}\n {%- elif message['role'] == 'tool_response' %}\n {{- '<|start_of_role|>tool_response<|end_of_role|>' + message['content'] + '<|end_of_text|>\n' }}\n {%- endif %}\n {%- if loop.last and add_generation_prompt %}\n {{- '<|start_of_role|>assistant<|end_of_role|>' }}\n {%- endif %}\n{%- endfor %}",
So I wanted to verify with you: am I right in interpreting the rendered template (when no tools are used) as either of these two options?
Option 1:
\n\n\n\n<|start_of_role|>system<|end_of_role|>You are granite, an AI model by IBM<|end_of_text|>
\n\n\n\n\n<|start_of_role|>user<|end_of_role|>Hello!<|end_of_text|>
\n\n\n\n\n<|start_of_role|>assistant<|end_of_role|>Hi there How can I help you?<|end_of_text|>
Option 2:
\n\n\n\n<|start_of_role|>system<|end_of_role|>You are granite, an AI model by IBM<|end_of_text|>
\n\n\n\n\n<|start_of_role|>user<|end_of_role|>Hello!<|end_of_text|>
\n\n\n\n\n<|start_of_role|>assistant<|end_of_role|>Hi there How can I help you?<|end_of_text|>
\n\n\n
As you can see, I'm trying to figure out whether the 4th line (\n\n\n) is needed before starting the next message. Or am I misunderstanding the newline characters completely?
My second question is: what system prompt was this model trained with? (e.g., "You are granite, an AI model by IBM")
Thank you for your time.
Hi, thanks for your interest in the Granite models; we are excited to see what you build!
The newlines in the chat template source are just there for formatting in the Jinja language and are not part of the rendered output itself: the {%- and {{- markers strip the surrounding whitespace when the template is applied. You can view the formatted template string with print(tokenizer.chat_template).
In your example, it would actually just be:
<|start_of_role|>system<|end_of_role|>You are granite, an AI model by IBM<|end_of_text|>
<|start_of_role|>user<|end_of_role|>Hello!<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>Hi there How can I help you?<|end_of_text|>
with no extra newlines at the beginning. Additionally, the generation prompt is simply <|start_of_role|>assistant<|end_of_role|>, with no newline after <|end_of_role|>.
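If it helps, you can also verify the exact rendered string through the tokenizer itself. A minimal sketch follows; the model id below is only an example, so substitute the Granite checkpoint you are actually using:

```python
from transformers import AutoTokenizer

# Example model id; swap in the Granite checkpoint you are working with.
tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-3.0-8b-instruct")

messages = [
    {"role": "system", "content": "You are granite, an AI model by IBM"},
    {"role": "user", "content": "Hello!"},
]

# tokenize=False returns the rendered prompt string rather than token ids;
# add_generation_prompt=True appends <|start_of_role|>assistant<|end_of_role|>.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```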
No specific system prompt is recommended at this time, and a system prompt may not be needed for many use cases. However, of course, one can be added :)
Thank you very much for your help; this is exactly the information I was looking for!
It's also really helpful to know about print(tokenizer.chat_template); I did not know that existed.
I look forward to using this model and I hope you have a great day ahead 👍