---
license: apache-2.0
language:
- en
base_model: FuseAI/FuseChat-Qwen-2.5-7B-Instruct
base_model_relation: quantized
library_name: mlc-llm
pipeline_tag: text-generation
tags:
- chat
---
4-bit [GPTQ](https://arxiv.org/abs/2210.17323) quantized version of [FuseChat-Qwen-2.5-7B-Instruct](https://huggingface.co/FuseAI/FuseChat-Qwen-2.5-7B-Instruct) for inference with the [Private LLM](https://privatellm.app/) app.