
Model Details

This model is an int4 quantization of Qwen/Qwen2-7B with group_size 128, generated by intel/auto-round. If you need the AutoGPTQ format, please load the model with revision 07a117c.
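For example, a minimal sketch of loading the AutoGPTQ-format revision (this mirrors the commented-out revision argument in the inference example below):

## minimal sketch: load the AutoGPTQ-format revision of this model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Intel/Qwen2-7B-int4-inc",
    revision="07a117c",   ## AutoGPTQ format
    device_map="auto",
)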

How To Use

INT4 Inference

##pip install auto-round (CPU needs version > 0.3.1)
from auto_round import AutoRoundConfig  ## must import for auto_round format
from transformers import AutoModelForCausalLM, AutoTokenizer
quantized_model_dir = "Intel/Qwen2-7B-int4-inc"
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir)
model = AutoModelForCausalLM.from_pretrained(quantized_model_dir,
                                             device_map="auto",
                                             ## revision="07a117c" ##AutoGPTQ format
                                             )
text = "下面我来介绍一下阿里巴巴公司,"
text = "9.8和9.11哪个数字大?答案是"
text = "Once upon a time,"
text = "There is a girl who likes adventure,"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50, do_sample=False)[0]))
##下面我来介绍一下阿里巴巴公司,阿里巴巴公司是全球领先的电子商务公司,成立于1999年,总部位于中国杭州。阿里巴巴公司致力于为全球中小企业提供一个在线交易平台,帮助他们拓展业务,提高销售额。阿里巴巴公司拥有多个业务板块,包括淘宝、天猫
##
##9.8和9.11哪个数字大?答案是9.8,因为9.8比9.11大0.7。
##Once upon a time, there was a little girl named Alice who loved to read. She had a special book that she had inherited from her grandmother, and it was filled with stories of magical creatures and far-off lands. One day, Alice decided to read the book in a
##There is a girl who likes adventure, and she is always looking for new experiences. She is a bit of a thrill-seeker, and she loves to push herself to the limit. She is always up for a challenge, and she is not afraid to take risks. She is a bit
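Note that only the last `text` assignment above is actually generated; the earlier prompts are overwritten. A minimal sketch to run all four example prompts in turn:

## minimal sketch: generate from each of the example prompts
prompts = [
    "下面我来介绍一下阿里巴巴公司,",
    "9.8和9.11哪个数字大?答案是",
    "Once upon a time,",
    "There is a girl who likes adventure,",
]
for text in prompts:
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
    print(tokenizer.decode(outputs[0]))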

Intel Gaudi-2 INT4 Inference

A Docker image with the Gaudi software stack is recommended. More details can be found in the Gaudi Guide.

import habana_frameworks.torch.core as htcore
import habana_frameworks.torch.hpu as hthpu

from auto_round import AutoRoundConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

quantized_model_dir = "Intel/Qwen2-7B-int4-inc"
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir)
model = AutoModelForCausalLM.from_pretrained(quantized_model_dir).to('hpu').to(torch.bfloat16)
text = "下面我来介绍一下阿里巴巴公司,"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50, do_sample=False)[0]))

Evaluate the model

pip3 install lm-eval==0.4.4 auto-round

auto-round  --model "Intel/Qwen2-7B-int4-inc"  --eval --eval_bs 16  --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu,gsm8k,cmmlu,ceval-valid
| Metric | BF16 | INT4 |
| --- | --- | --- |
| Avg | 0.6659 | 0.6604 |
| mmlu | 0.6697 | 0.6646 |
| cmmlu | 0.8254 | 0.8118 |
| ceval-valid | 0.8339 | 0.8053 |
| lambada_openai | 0.7182 | 0.7136 |
| hellaswag | 0.5823 | 0.5752 |
| winogrande | 0.7222 | 0.7277 |
| piqa | 0.7911 | 0.7933 |
| truthfulqa_mc1 | 0.3647 | 0.3476 |
| openbookqa | 0.3520 | 0.3440 |
| boolq | 0.8183 | 0.8223 |
| arc_easy | 0.7660 | 0.7635 |
| arc_challenge | 0.4505 | 0.4633 |
| gsm8k (5 shots, strict match) | 0.7619 | 0.7528 |
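If you prefer to drive the evaluation from Python rather than the auto-round CLI, here is a rough sketch using lm-eval's simple_evaluate API (the task list is abbreviated; this Python interface is an assumption based on lm-eval 0.4.x, not a command from this card):

## rough sketch: evaluate the quantized model with lm-eval's Python API
from auto_round import AutoRoundConfig  ## needed so transformers can load the auto_round format
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Intel/Qwen2-7B-int4-inc",
    tasks=["lambada_openai", "hellaswag", "piqa", "winogrande"],
    batch_size=16,
)
print(results["results"])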

Generate the model

Here is a sample command to reproduce the model. We observed a larger accuracy drop on Chinese tasks and recommend using a high-quality Chinese dataset for calibration; however, we did not achieve better accuracy with some public datasets.

auto-round \
--model_name  Qwen/Qwen2-7B \
--device 0 \
--group_size 128 \
--nsamples 512 \
--bits 4 \
--iter 1000 \
--disable_eval \
--model_dtype "float16" \
--format 'auto_round' \
--output_dir "./tmp_autoround" 
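The same recipe can also be expressed through auto-round's Python API. A rough sketch, under the assumption that the AutoRound class accepts these keyword arguments as in the project's README:

## rough sketch: reproduce the quantization recipe via auto-round's Python API
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "Qwen/Qwen2-7B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="float16")
tokenizer = AutoTokenizer.from_pretrained(model_name)

autoround = AutoRound(model, tokenizer, bits=4, group_size=128,
                      nsamples=512, iters=1000)
autoround.quantize()
autoround.save_quantized("./tmp_autoround", format="auto_round")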

Ethical Considerations and Limitations

The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

Therefore, before deploying any applications of the model, developers should perform safety testing.

Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Here are a couple of useful links to learn more about Intel's AI software:

  • Intel Neural Compressor
  • Intel Extension for Transformers

Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

Cite

@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}

