EOS edit
#3 opened by LLuke777
Some inference engines only take the EOS token from generation_config.json. This file should therefore also include the ChatML stop token (id 32000) to avoid looping issues.
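A minimal sketch of the proposed change, assuming the tokenizer's original EOS id is 2 (typical for Llama/Mistral-style `</s>` tokens; check your model's tokenizer_config.json) and that 32000 is the added ChatML stop token mentioned above:

```json
{
  "eos_token_id": [2, 32000]
}
```

Passing a list for `eos_token_id` makes `transformers`-based generation stop on either token, and engines that read only generation_config.json pick up the ChatML token as well.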