mpt-7b-8k-chat-gptq / generation_config.json
Last commit: GPTQ quantized MPT model (081aec9)
{
  "_from_model_config": true,
  "transformers_version": "4.30.2",
  "use_cache": false,
  "eos_token_id": [0, 50278]
}
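
For reference, a minimal usage sketch showing how this generation config is consumed. It assumes the Hugging Face repo id matches the page title, and that transformers plus a GPTQ backend (e.g. auto-gptq with optimum) and accelerate are installed. The eos_token_id list [0, 50278] lets generation stop at either token; in the MPT chat tokenizer, 0 is <|endoftext|> and 50278 is likely the ChatML <|im_end|> token.

# Minimal sketch, not an official loading recipe for this repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "casperhansen/mpt-7b-8k-chat-gptq"  # assumption: repo id from the page title

tokenizer = AutoTokenizer.from_pretrained(model_id)
# MPT uses custom modeling code, hence trust_remote_code=True;
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    device_map="auto",
)

# generation_config.json is loaded automatically with the model, so
# generate() stops at either eos token (0 or 50278) without extra arguments.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))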