---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
base_model:
- Qwen/Qwen2.5-72B-Instruct
base_model_relation: quantized
tags:
- VPTQ
- Quantized
- Quantization
---
**Disclaimer**:

This model is reproduced based on the paper *VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models* ([GitHub](https://github.com/microsoft/vptq), [arXiv](https://arxiv.org/abs/2409.17066)).

The model itself is sourced from a community release.

It is intended only for experimental purposes.

Users are responsible for any consequences arising from the use of this model.
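
For convenience, the following is a minimal, illustrative sketch of loading a VPTQ-quantized checkpoint for inference with the `vptq` package from the repository linked above. The repository id below is a placeholder and should be replaced with this model's actual Hugging Face id.

```python
# Minimal sketch: load a VPTQ-quantized checkpoint and run generation.
# The repo id is a placeholder; substitute this model's actual Hugging Face id.
import transformers
import vptq  # pip install vptq

model_id = "VPTQ-community/Qwen2.5-72B-Instruct-v8-k65536-0-woft"  # placeholder

tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
model = vptq.AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer(
    "Explain vector quantization in one sentence.", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```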

**Note**:

The PPL test results are for reference only and were collected using the GPTQ testing script.
```json
{
  "ctx_2048": {
    "wikitext2": 4.410200119018555
  },
  "ctx_4096": {
    "wikitext2": 4.055807590484619
  },
  "ctx_8192": {
    "wikitext2": 3.8475606441497803
  }
}
```
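
As a rough reference for how such numbers are typically obtained, below is an illustrative sketch of a GPTQ-style perplexity loop over non-overlapping WikiText-2 windows at a given context length. This is not the exact script used for the figures above; the function and variable names are placeholders.

```python
# Illustrative GPTQ-style perplexity evaluation over non-overlapping windows.
# Not the exact script used for the numbers above; shown only as a sketch.
import torch
from datasets import load_dataset

@torch.no_grad()
def wikitext2_ppl(model, tokenizer, ctx_len=2048):
    test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
    enc = tokenizer("\n\n".join(test["text"]), return_tensors="pt")
    input_ids = enc.input_ids.to(model.device)

    nlls = []
    n_chunks = input_ids.size(1) // ctx_len
    for i in range(n_chunks):
        chunk = input_ids[:, i * ctx_len:(i + 1) * ctx_len]
        # With labels == inputs, the model computes the shifted next-token loss.
        loss = model(chunk, labels=chunk).loss
        nlls.append(loss.float() * ctx_len)

    # Perplexity = exp(mean negative log-likelihood per token).
    return torch.exp(torch.stack(nlls).sum() / (n_chunks * ctx_len)).item()

# Example: evaluate at the three context lengths reported above.
# for ctx in (2048, 4096, 8192):
#     print(ctx, wikitext2_ppl(model, tokenizer, ctx_len=ctx))
```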