Can llama3.1 8b base model stop generation naturally?

#50 by Sudai

I've been experimenting with the generation parameters to get the model to stop generating naturally, but it seems to stop only when the max token length is reached.
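For reference, this is roughly what I'm running (a minimal sketch with the `transformers` library; the prompt and generation settings are just placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B"  # base model, not Instruct

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)

# Even with eos_token_id set explicitly, the base model keeps
# generating until max_new_tokens is exhausted.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=False,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```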

I also found this statement in the Llama GitHub repo:

" max_gen_len is optional because finetuned models are able to stop generations naturally."

Does this mean that the Instruct model is able to stop naturally?
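In other words, would something like the following stop on its own before hitting `max_new_tokens`? (A sketch adapted from the usage pattern on the Llama 3.1 Instruct model card; the `<|eot_id|>` end-of-turn terminator is the part I'm unsure about.)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# The instruct model is trained to emit an end-of-turn token,
# so generation should be able to stop well before max_new_tokens.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]
outputs = model.generate(input_ids, max_new_tokens=256, eos_token_id=terminators)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```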
