Stop the fine-tuned model from generating tokens endlessly
#45 opened by AlbelTec
Hello,
I fine-tuned phi-1.5, but when running inference on a question I get the answer and then the model keeps generating tokens until it reaches max_length. As a newbie, I wonder how to prevent this behavior. Any insights?
Kind regards,
Are you setting the EOS token? What template did you use for your fine-tuning data?
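For reference, here is a minimal sketch of what "setting the EOS token" looks like at inference time, assuming the standard transformers API; the model path and the prompt below are just placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path to the fine-tuned checkpoint
model = AutoModelForCausalLM.from_pretrained("path/to/finetuned-phi-1_5")
tokenizer = AutoTokenizer.from_pretrained("path/to/finetuned-phi-1_5")

prompt = "question: What is the capital of France?\nanswer:"
inputs = tokenizer(prompt, return_tensors="pt")

# Tell generate() which token ends a sequence; without this (and without the model
# having learned to emit EOS), generation only stops at max_length / max_new_tokens.
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,  # phi-1.5 ships without a pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```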
Same here.
Actually, I used a subset of the OpenOrca dataset, and my template is based on:
system_prompt:
question:
answer:
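One likely cause with a template like this is that the EOS token is never appended after the answer, so the model has no stop signal to learn during fine-tuning. A minimal sketch of appending it, assuming the transformers tokenizer API; `format_example` and the column names (`system_prompt`, `question`, `response`) are assumptions to adjust to the actual dataset:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

def format_example(row):
    # Append the tokenizer's EOS token so the model learns to stop after the answer.
    return (
        f"system_prompt: {row['system_prompt']}\n"
        f"question: {row['question']}\n"
        f"answer: {row['response']}{tokenizer.eos_token}"
    )
```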
AlbelTec changed discussion status to closed
Did you ever find an answer to this question?