Missing ChatML tokens and unexpected symbols in the response
Running llama.cpp with "--chatml", I observed output in three parts: the response intermixed with strings of "/******/", then a list of words related to the response intermixed with strings of "/:******/", and finally, after a run of "###########", the response again as normal text.
Investigating further, I found that although the model card states that this model was trained on a dataset following the ChatML template, the model itself lacks the ChatML special tokens that llama.cpp supports. For comparison, OpenHermes was trained with an expanded "vocab_size" of 32002 and an "added_tokens.json" containing:
{ "<|im_end|>": 32000, "<|im_start|>": 32001 }
OpenHermes-2.5-Mistral-7B uses a specific chat template, called ChatML. Here is an example of a conversation formatted with this template:
```
<|im_start|>system
You are a helpful chatbot assistant.<|im_end|>
<|im_start|>user
Hi<|im_end|>
<|im_start|>assistant
Hi, how can I help you?<|im_end|>
```
As you can see, ChatML defines different roles (system, user, assistant) and uses the special tokens <|im_start|> and <|im_end|> to separate them. Moreover, DPOTrainer requires a specific format with three columns: prompt, chosen, and rejected.
Our dataset contains four columns: system, question, chatgpt, and llama2-13b-chat. We'll simply concatenate the system and question columns into the prompt column. We'll also map the chatgpt column to "chosen" and the llama2-13b-chat column to "rejected". To format the dataset in a reliable way, we'll use the tokenizer's apply_chat_template() function, which already uses ChatML.
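A rough sketch of that mapping, assuming a transformers tokenizer whose chat template is ChatML; the column names come from the dataset described above, and the appended "<|im_end|>\n" terminator on the completions is an assumption about the intended formatting:

```python
def format_row(row, tokenizer):
    # Build the prompt from the system and question columns.
    prompt_messages = [
        {"role": "system", "content": row["system"]},
        {"role": "user", "content": row["question"]},
    ]
    # apply_chat_template renders the messages with the tokenizer's chat
    # template; add_generation_prompt=True appends the assistant header.
    prompt = tokenizer.apply_chat_template(
        prompt_messages, tokenize=False, add_generation_prompt=True
    )
    return {
        "prompt": prompt,
        # Close each completion with the ChatML end-of-turn marker
        # (assumed here to match the template used during training).
        "chosen": row["chatgpt"] + "<|im_end|>\n",
        "rejected": row["llama2-13b-chat"] + "<|im_end|>\n",
    }
```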
To be more specific, given "<|im_start|>user", this model's tokenizer outputs 8 tokens:
'<':28789, '|':28766, 'im':321, '_':28730, 'start':2521, '|':28766, '>':28767, 'user':1838
because the vocabulary, and with it the model's input and output weights, was not expanded to include those special ChatML tokens before this model was trained.
In contrast, for models like OpenHermes, where those special tokens were added to the vocabulary and to the input and output weights, the tokenizer outputs just 2 tokens for "<|im_start|>user":
'<|im_start|>':32001, 'user':1838
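This difference is easy to reproduce. A short sketch, assuming transformers and using the base Mistral tokenizer to stand in for a model without the added tokens (model identifiers are illustrative; substitute the model in question):

```python
from transformers import AutoTokenizer

text = "<|im_start|>user"
for name in ("mistralai/Mistral-7B-v0.1",           # no ChatML tokens added
             "teknium/OpenHermes-2.5-Mistral-7B"):  # ChatML tokens added
    tok = AutoTokenizer.from_pretrained(name)
    ids = tok.encode(text, add_special_tokens=False)
    # Print each token alongside its ID to see how the header is split.
    print(name, len(ids), "tokens:",
          list(zip(tok.convert_ids_to_tokens(ids), ids)))
```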
I mention this because these missing tokens may explain the unexpected behavior I observed while using a ChatML template with this model.