Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Mistral-7B-v0.1-4bit-32rank - GGUF
- Model creator: https://huggingface.co/LoftQ/
- Original model: https://huggingface.co/LoftQ/Mistral-7B-v0.1-4bit-32rank/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mistral-7B-v0.1-4bit-32rank.Q2_K.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.Q2_K.gguf) | Q2_K | 2.53GB |
| [Mistral-7B-v0.1-4bit-32rank.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Mistral-7B-v0.1-4bit-32rank.IQ3_S.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Mistral-7B-v0.1-4bit-32rank.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Mistral-7B-v0.1-4bit-32rank.IQ3_M.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Mistral-7B-v0.1-4bit-32rank.Q3_K.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.Q3_K.gguf) | Q3_K | 3.28GB |
| [Mistral-7B-v0.1-4bit-32rank.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Mistral-7B-v0.1-4bit-32rank.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Mistral-7B-v0.1-4bit-32rank.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Mistral-7B-v0.1-4bit-32rank.Q4_0.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Mistral-7B-v0.1-4bit-32rank.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Mistral-7B-v0.1-4bit-32rank.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Mistral-7B-v0.1-4bit-32rank.Q4_K.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.Q4_K.gguf) | Q4_K | 4.07GB |
| [Mistral-7B-v0.1-4bit-32rank.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Mistral-7B-v0.1-4bit-32rank.Q4_1.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Mistral-7B-v0.1-4bit-32rank.Q5_0.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Mistral-7B-v0.1-4bit-32rank.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Mistral-7B-v0.1-4bit-32rank.Q5_K.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.Q5_K.gguf) | Q5_K | 4.78GB |
| [Mistral-7B-v0.1-4bit-32rank.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Mistral-7B-v0.1-4bit-32rank.Q5_1.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Mistral-7B-v0.1-4bit-32rank.Q6_K.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.Q6_K.gguf) | Q6_K | 5.53GB |
| [Mistral-7B-v0.1-4bit-32rank.Q8_0.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf/blob/main/Mistral-7B-v0.1-4bit-32rank.Q8_0.gguf) | Q8_0 | 7.17GB |
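
These files run in any llama.cpp-compatible runtime. As a minimal sketch, here is one way to fetch and run one of the quants with `llama-cpp-python` (the choice of file and the generation settings below are illustrative assumptions, not part of this card):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo (Q4_K_M chosen as an example).
path = hf_hub_download(
    repo_id="RichardErkhov/LoftQ_-_Mistral-7B-v0.1-4bit-32rank-gguf",
    filename="Mistral-7B-v0.1-4bit-32rank.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Mistral is", max_tokens=64)
print(out["choices"][0]["text"])
```
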
Original model description:
---
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- quantization
- lora
---
# LoftQ Initialization

| [Paper](https://arxiv.org/abs/2310.08659) | [Code](https://github.com/yxli2123/LoftQ) | [PEFT Example](https://github.com/huggingface/peft/tree/main/examples/loftq_finetuning) |

LoftQ (LoRA-fine-tuning-aware Quantization) provides a quantized backbone Q and LoRA adapters A and B, given a full-precision pre-trained weight W, such that Q + AB approximates W.
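
To make the idea concrete, here is a minimal NumPy sketch of the alternating initialization from the paper (illustrative only: the real code at https://github.com/yxli2123/LoftQ uses NF4 quantization, while the uniform quantizer below is an assumption for brevity):

```python
import numpy as np

def fake_quantize(w, num_bits=4):
    # Stand-in uniform quantizer; LoftQ actually uses NF4.
    scale = np.abs(w).max() / (2 ** (num_bits - 1) - 1)
    return np.round(w / scale) * scale

def loftq_init(W, rank=32, num_iters=5):
    # Alternate: quantize the part the adapters miss, then refit a rank-r
    # correction to the quantization residual via truncated SVD, so that
    # Q + A @ B approximates W.
    A = np.zeros((W.shape[0], rank))
    B = np.zeros((rank, W.shape[1]))
    for _ in range(num_iters):
        Q = fake_quantize(W - A @ B)
        U, S, Vt = np.linalg.svd(W - Q, full_matrices=False)
        A = U[:, :rank] * S[:rank]
        B = Vt[:rank]
    return Q, A, B

W = np.random.randn(256, 256)
Q, A, B = loftq_init(W)
# The rank-32 correction shrinks the error vs. plain quantization.
print(np.linalg.norm(W - (Q + A @ B)), np.linalg.norm(W - fake_quantize(W)))
```
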
This model, `Mistral-7B-v0.1-4bit-32rank`, is obtained from [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
The quantized backbone is stored in `LoftQ/Mistral-7B-v0.1-4bit-32rank`, and the LoRA adapters are stored in the `loftq_init` subfolder.

## Model Info
### Backbone
- Stored format: `torch.bfloat16`
- Size: ~14 GiB
- Loaded format: bitsandbytes nf4
- Size loaded on GPU: ~3.5 GiB

### LoRA adapters
- rank: 32
- lora_alpha: 16
- target_modules: ["down_proj", "up_proj", "q_proj", "k_proj", "v_proj", "o_proj", "gate_proj"] (see the `LoraConfig` sketch below)
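
A rough `peft` equivalent of those adapter settings (a sketch; `task_type` is an assumption based on the text-generation pipeline tag, and the authoritative values live in the `adapter_config.json` inside `loftq_init`):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,           # LoRA rank, matching the card above
    lora_alpha=16,
    target_modules=["down_proj", "up_proj", "q_proj", "k_proj",
                    "v_proj", "o_proj", "gate_proj"],
    task_type="CAUSAL_LM",  # assumption, not stated in the card
)
```
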
## Usage

**Training.** Here's an example of loading this model and preparing it for LoRA fine-tuning.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

MODEL_ID = "LoftQ/Mistral-7B-v0.1-4bit-32rank"

# Load the quantized backbone in 4-bit NF4, as it was prepared by LoftQ.
base_model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # other models may need a different dtype
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,  # bfloat16 is recommended
        bnb_4bit_use_double_quant=False,
        bnb_4bit_quant_type='nf4',
    ),
)
# Attach the LoftQ-initialized LoRA adapters and mark them trainable.
peft_model = PeftModel.from_pretrained(
    base_model,
    MODEL_ID,
    subfolder="loftq_init",
    is_trainable=True,
)

# Do training with peft_model ...
```
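
Once loaded, only the adapters are trainable; you can verify this and run a quick generation pass. A minimal sketch (the tokenizer location and generation settings are assumptions, not part of the original card):

```python
from transformers import AutoTokenizer

peft_model.print_trainable_parameters()  # only the rank-32 LoRA weights train

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
inputs = tokenizer("Mistral is", return_tensors="pt").to(peft_model.device)
with torch.no_grad():
    out = peft_model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
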
See the full code at our [GitHub repo](https://github.com/yxli2123/LoftQ).

## Citation

```bibtex
@article{li2023loftq,
  title={LoftQ: LoRA-fine-tuning-aware quantization for large language models},
  author={Li, Yixiao and Yu, Yifan and Liang, Chen and He, Pengcheng and Karampatziakis, Nikos and Chen, Weizhu and Zhao, Tuo},
  journal={arXiv preprint arXiv:2310.08659},
  year={2023}
}
```