---
license: apache-2.0
---
<div style="width: 100%;">
    <img src="https://github.com/TigerResearch/TigerBot/blob/main/image/logo_core.png" alt="TigerBot" style="width: 20%; display: block; margin: auto;">
</div>
<p align="center">
<font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font>
</p>
<p align="center">
   🌐 <a href="https://tigerbot.com/" target="_blank">TigerBot</a> • 🤗 <a href="https://huggingface.co/TigerResearch" target="_blank">Hugging Face</a>
</p>



This is a 4-bit GPTQ-quantized version of [TigerBot-7B-sft](https://huggingface.co/TigerResearch/tigerbot-7b-sft).

It was quantized to 4-bit (group size 128) using the GPTQ code at https://github.com/TigerResearch/TigerBot/tree/main/gptq.
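For reference, a GPTQ quantization run of this kind takes the full-precision SFT checkpoint plus a bit width and group size. The sketch below is illustrative only: the script name and flags are placeholders rather than the actual entry point in the TigerBot `gptq` directory (see the link above); the settings mirror this checkpoint (4-bit weights, group size 128).

```
# Illustrative placeholder only -- see TigerBot/gptq for the real quantization script.
# 4-bit weights with group size 128, matching this checkpoint.
CUDA_VISIBLE_DEVICES=0 python quantize.py TigerResearch/tigerbot-7b-sft \
    --wbits 4 --groupsize 128 --save tigerbot-7b-4bit-128g.pt
```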

## How to download and use this model (GitHub: https://github.com/TigerResearch/TigerBot)

Here are the commands to clone the TigerBot repository and install its dependencies:

```
# create and activate a Python 3.8 environment
conda create --name tigerbot python=3.8
conda activate tigerbot
# install PyTorch with CUDA 11.7 support
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

# clone the repo and install the remaining dependencies
git clone https://github.com/TigerResearch/TigerBot
cd TigerBot
pip install -r requirements.txt
```
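The inference command below loads the quantized checkpoint from a local path relative to `TigerBot/gptq`, so the weights need to be on disk first. One way to fetch them (a sketch assuming `git-lfs` is installed; the target directory simply mirrors the `--load` path used below):

```
cd TigerBot/gptq
# pull the quantized weights so the local --load path in the next step resolves
git lfs install
mkdir -p TigerResearch
git clone https://huggingface.co/TigerResearch/tigerbot-7b-sft-4bit-128g TigerResearch/tigerbot-7b-sft-4bit-128g
```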

Inference with the command-line interface:

```
cd TigerBot/gptq
CUDA_VISIBLE_DEVICES=0 python tigerbot_infer.py TigerResearch/tigerbot-7b-sft-4bit-128g \
    --wbits 4 --groupsize 128 \
    --load TigerResearch/tigerbot-7b-sft-4bit-128g/tigerbot-7b-4bit-128g.pt
```