# Chinese-Alpaca-7B-GPTQ

Chinese-Alpaca-7B-GPTQ is based on the [Chinese-LLaMA-Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca) model and was quantized with [GPTQ](https://github.com/IST-DASLab/gptq) for faster inference and a smaller memory footprint.

We used [bigscience-data/roots_zh-cn_wikipedia](https://huggingface.co/datasets/bigscience-data/roots_zh-cn_wikipedia) for calibration.
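
For reference, a 4-bit GPTQ-for-LLaMa quantization run typically looks like the sketch below. The model path here is hypothetical, and note that the stock `llama.py` script only ships calibration loaders for wikitext2/ptb/c4, so calibrating on roots_zh-cn_wikipedia as done for this model requires a custom data loader:

```bash
# Illustrative only: quantize a merged Chinese-Alpaca-7B HF checkpoint to
# 4-bit with group size 128. "./chinese-alpaca-7b-hf" is a hypothetical path,
# and "c4" stands in for the custom roots_zh-cn_wikipedia calibration loader.
python llama.py ./chinese-alpaca-7b-hf c4 \
    --wbits 4 \
    --groupsize 128 \
    --save llama7b-4bit-128g.pt
```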

## Usage

To use Chinese-Alpaca-7B-GPTQ, load the model with the [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) repository, for example via its `llama_inference.py` script:

```bash
python llama_inference.py ./chinese-alpaca-7b-gptq \
    --wbits 4 \
    --groupsize 128 \
    --load ./chinese-alpaca-7b-gptq/llama7b-4bit-128g.pt \
    --text "### Instruction: 为什么苹果支付没有在中国流行?\n\n### Response:"
```
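
Alternatively, the checkpoint can be loaded from Python. The following is a minimal sketch that assumes GPTQ-for-LLaMa's `load_quant` helper (defined in `llama_inference.py`) is importable from your working directory; its exact signature varies between revisions of that repository:

```python
# Minimal sketch: load the 4-bit checkpoint via GPTQ-for-LLaMa and generate.
# Assumes this runs from inside the GPTQ-for-LLaMa checkout so that
# load_quant (from llama_inference.py) is importable; signatures may vary.
import torch
from transformers import LlamaTokenizer
from llama_inference import load_quant

MODEL_DIR = "./chinese-alpaca-7b-gptq"  # local clone of this model repo
CHECKPOINT = f"{MODEL_DIR}/llama7b-4bit-128g.pt"

model = load_quant(MODEL_DIR, CHECKPOINT, 4, 128)  # wbits=4, groupsize=128
model.eval()
model.to("cuda")

tokenizer = LlamaTokenizer.from_pretrained(MODEL_DIR)
prompt = "### Instruction: 为什么苹果支付没有在中国流行?\n\n### Response:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")

with torch.no_grad():
    output_ids = model.generate(
        input_ids, max_new_tokens=256, do_sample=True, temperature=0.7
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```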

## Acknowledgments
We would like to thank the authors of the above-mentioned projects for their contributions to the NLP community.