---
license: apache-2.0
---

# TigerBot

A cutting-edge foundation for your very own LLM.

🌐 TigerBot • 🤗 Hugging Face

This is a 4-bit GPTQ version of [Tigerbot 7B sft](https://huggingface.co/TigerResearch/tigerbot-7b-sft). It was quantized to 4-bit using [GPTQ](https://github.com/TigerResearch/TigerBot/tree/main/gptq).

## How to download and use this model in [TigerBot](https://github.com/TigerResearch/TigerBot)

The file `tigerbot-7b-4bit-128g.pt` can be loaded with [GPTQ in TigerBot](https://github.com/TigerResearch/TigerBot/tree/main/gptq).

Clone TigerBot and install its dependencies:

```
conda create --name tigerbot python=3.8
conda activate tigerbot
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
git clone https://github.com/TigerResearch/TigerBot
cd TigerBot
pip install -r requirements.txt
```

Run inference with the command-line interface:

```
cd TigerBot/gptq
CUDA_VISIBLE_DEVICES=0 python tigerbot_infer.py TigerResearch/tigerbot-7b-sft-4bit-128g --wbits 4 --groupsize 128 --load tigerbot-7b-4bit-128g.pt
```
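For intuition about what the `--wbits 4 --groupsize 128` flags mean, the sketch below shows plain group-wise 4-bit round-to-nearest quantization: each group of 128 weights gets its own scale and zero-point, and each weight is stored as an integer in 0..15. This is an illustrative simplification, not the actual GPTQ algorithm (GPTQ additionally applies Hessian-based error correction when choosing the quantized values); all function names here are hypothetical.

```python
# Illustrative group-wise 4-bit quantization (round-to-nearest).
# Not the real GPTQ implementation; GPTQ adds second-order error
# correction on top of a scheme like this.

def quantize_group(weights, bits=4):
    """Quantize one group of weights with a shared scale and zero-point."""
    qmax = (1 << bits) - 1                      # 15 for 4-bit
    wmin, wmax = min(weights), max(weights)
    scale = (wmax - wmin) / qmax or 1.0         # avoid zero scale
    zero = round(-wmin / scale)                 # integer zero-point
    q = [max(0, min(qmax, round(w / scale) + zero)) for w in weights]
    return q, scale, zero

def dequantize_group(q, scale, zero):
    """Recover approximate float weights from the stored integers."""
    return [(qi - zero) * scale for qi in q]

def quantize(weights, groupsize=128, bits=4):
    """Split a weight row into groups of `groupsize`; each group keeps
    its own (scale, zero) pair, which is why smaller groups cost more
    storage but quantize more accurately."""
    out = []
    for i in range(0, len(weights), groupsize):
        group = weights[i:i + groupsize]
        out.append(quantize_group(group, bits))
    return out
```

With `groupsize=128`, a row of 256 weights is stored as two groups of 4-bit integers plus two (scale, zero) pairs; dequantizing each group reconstructs the weights to within about half a quantization step.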