---
language:
  - ar
  - en
model_name: AceGPT 7B Chat
model_type: AceGPT
quantized_by: FreedomIntelligence
---

Updates

Add the argument `use_safetensors=False` to `from_quantized()`; in earlier versions of AutoGPTQ this argument defaulted to False. If you have problems loading the model directly via Hugging Face, try git clone instead. (Dec 15, 2023)
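A minimal loading sketch along these lines (the repo id and device are assumptions; substitute your own path):

```python
# Sketch: loading this GPTQ checkpoint via AutoGPTQ's from_quantized(),
# with use_safetensors=False passed explicitly as described above.
# The repo id and device below are assumptions; adjust for your setup.
model_path = "FreedomIntelligence/AceGPT-7b-chat-GPTQ"  # assumed repo id

load_kwargs = {
    "use_safetensors": False,  # newer AutoGPTQ defaults this to True
    "device": "cuda:0",
}

def load_model(path: str = model_path, **overrides):
    """Load the quantized model; needs auto-gptq installed and a CUDA GPU."""
    from auto_gptq import AutoGPTQForCausalLM  # deferred import: heavy dependency
    return AutoGPTQForCausalLM.from_quantized(path, **{**load_kwargs, **overrides})
```

After `model = load_model()`, generation works through the usual `transformers` tokenizer/`generate` flow.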

Description

This repo contains the int4 (GPTQ) quantized model for AceGPT-7B-Chat.

The int4 version shows some performance degradation compared with fp16. For a better user experience, please use the fp16 version; for details, see AceGPT-7B-Chat and AceGPT-13B-Chat.
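As rough intuition for the trade-off, weight-only storage for a 7B-parameter model shrinks about 4x going from fp16 to int4 (a back-of-the-envelope estimate that ignores activations, the KV cache, and GPTQ metadata such as scales and zero points):

```python
# Back-of-the-envelope weight memory for a 7B-parameter model (weights only).
params = 7e9
fp16_gib = params * 2.0 / 1024**3   # 2 bytes per weight
int4_gib = params * 0.5 / 1024**3   # 4 bits = 0.5 bytes per weight
print(f"fp16 ~{fp16_gib:.1f} GiB, int4 ~{int4_gib:.1f} GiB")
# -> fp16 ~13.0 GiB, int4 ~3.3 GiB
```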

How to use this GPTQ model from Python code

Install the necessary packages

Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

pip3 install "transformers>=4.32.0" "optimum>=1.12.0"  # See requirements.py for verified versions.
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/  # Use cu117 if on CUDA 11.7
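If loading fails, it can help to confirm that the installed packages actually meet these minimums. A small check along these lines (the helper names here are my own, not part of any of these libraries):

```python
# Sanity-check installed package versions against the stated minimums.
from importlib import metadata

MINIMUMS = {"transformers": (4, 32, 0), "optimum": (1, 12, 0), "auto-gptq": (0, 4, 2)}

def version_tuple(version: str) -> tuple:
    """Parse the leading numeric parts of a version string, e.g. '4.32.0.dev0' -> (4, 32, 0)."""
    parts = []
    for piece in version.split(".")[:3]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def check_versions(minimums=MINIMUMS):
    for name, minimum in minimums.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            print(f"{name}: not installed")
            continue
        status = "OK" if version_tuple(installed) >= minimum else "too old"
        print(f"{name} {installed}: {status}")
```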

You can then launch a simple Gradio web demo with web_quant.py:

python web_quant.py --model-name ${model-path}
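The actual script lives in the AceGPT repository linked below; for orientation, its command-line interface presumably boils down to something like this hypothetical argparse sketch (only --model-name is taken from the command shown above):

```python
import argparse

def parse_args(argv=None):
    # Hypothetical reconstruction of web_quant.py's CLI.
    parser = argparse.ArgumentParser(description="Gradio web demo for the GPTQ model")
    parser.add_argument("--model-name", required=True,
                        help="local path or Hugging Face repo id of the quantized model")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    print(f"Serving model from: {args.model_name}")
```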

You can find more details at https://github.com/FreedomIntelligence/AceGPT/tree/main