legraphista committed
Commit • 52aa2f4 • 1 Parent(s): f3eac58
Upload README.md with huggingface_hub

README.md ADDED
@@ -0,0 +1,94 @@
---
base_model: microsoft/Phi-3-mini-4k-instruct
inference: false
language:
- en
library_name: gguf
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
---

# Phi-3-mini-4k-instruct-GGUF
_Llama.cpp imatrix quantization of microsoft/Phi-3-mini-4k-instruct_

Original Model: [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b2989](https://github.com/ggerganov/llama.cpp/releases/tag/b2989)
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw)

## Files

### IMatrix
Status: ⏳ Processing
Link: [here](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-GGUF/blob/main/imatrix.dat)

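For context, an imatrix file like the one above is typically produced with llama.cpp's `imatrix` tool over a calibration dataset, then fed to `quantize`. A minimal sketch, assuming the BF16 GGUF and the calibration file linked above are present locally (tool names as shipped in llama.cpp release b2989):

```
# compute the importance matrix from the calibration data
./imatrix -m Phi-3-mini-4k-instruct.BF16.gguf -f imatrix.calibration.medium.raw -o imatrix.dat
# apply it while quantizing to one of the imatrix-enabled quants
./quantize --imatrix imatrix.dat Phi-3-mini-4k-instruct.BF16.gguf Phi-3-mini-4k-instruct.Q4_K.gguf Q4_K
```
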
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Phi-3-mini-4k-instruct.Q8_0 | Q8_0 | - | ⏳ Processing | No | - |
| Phi-3-mini-4k-instruct.Q6_K | Q6_K | - | ⏳ Processing | No | - |
| Phi-3-mini-4k-instruct.Q4_K | Q4_K | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.Q3_K | Q3_K | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.Q2_K | Q2_K | - | ⏳ Processing | Yes | - |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Phi-3-mini-4k-instruct.FP16 | F16 | - | ⏳ Processing | No | - |
| Phi-3-mini-4k-instruct.BF16 | BF16 | - | ⏳ Processing | No | - |
| Phi-3-mini-4k-instruct.Q5_K | Q5_K | - | ⏳ Processing | No | - |
| Phi-3-mini-4k-instruct.Q5_K_S | Q5_K_S | - | ⏳ Processing | No | - |
| Phi-3-mini-4k-instruct.Q4_K_S | Q4_K_S | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.Q3_K_L | Q3_K_L | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.Q3_K_S | Q3_K_S | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.Q2_K_S | Q2_K_S | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ4_NL | IQ4_NL | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ4_XS | IQ4_XS | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ3_M | IQ3_M | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ3_S | IQ3_S | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ3_XS | IQ3_XS | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ2_M | IQ2_M | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ2_S | IQ2_S | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ2_XS | IQ2_XS | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ1_M | IQ1_M | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ1_S | IQ1_S | - | ⏳ Processing | Yes | - |

## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download legraphista/Phi-3-mini-4k-instruct-GGUF --include "Phi-3-mini-4k-instruct.Q8_0.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/Phi-3-mini-4k-instruct-GGUF --include "Phi-3-mini-4k-instruct.Q8_0/*" --local-dir Phi-3-mini-4k-instruct.Q8_0
# see FAQ for merging GGUFs
```
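Once a quant is downloaded (and merged, if needed), it can be loaded by any llama.cpp-based runtime. A minimal smoke-test sketch using the `main` binary from the same release as above (binary name, flags, and the Phi-3 chat template are assumptions, not part of this card):

```
# -m: downloaded GGUF, -e: process \n escapes in the prompt, -n: max tokens to generate
./main -m Phi-3-mini-4k-instruct.Q8_0.gguf -e -p "<|user|>\nWhat is an importance matrix?<|end|>\n<|assistant|>" -n 128
```
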

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), only the lower quantizations appear to benefit from the imatrix input (as measured by hellaswag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
   - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
   - Download the appropriate zip for your system from the latest release
   - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Phi-3-mini-4k-instruct.Q8_0`)
3. Run `gguf-split --merge Phi-3-mini-4k-instruct.Q8_0/Phi-3-mini-4k-instruct.Q8_0-00001-of-XXXXX.gguf Phi-3-mini-4k-instruct.Q8_0.gguf`
   - Make sure to point `gguf-split` to the first chunk of the split (see the end-to-end sketch below)
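
Putting the download and merge steps together, a minimal end-to-end sketch (`XXXXX` is a placeholder for the actual chunk count of the split, which depends on the file):

```
# download all chunks of the split quant into a local folder
huggingface-cli download legraphista/Phi-3-mini-4k-instruct-GGUF --include "Phi-3-mini-4k-instruct.Q8_0/*" --local-dir Phi-3-mini-4k-instruct.Q8_0
# merge, pointing gguf-split at the FIRST chunk of the split
gguf-split --merge Phi-3-mini-4k-instruct.Q8_0/Phi-3-mini-4k-instruct.Q8_0-00001-of-XXXXX.gguf Phi-3-mini-4k-instruct.Q8_0.gguf
```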