|
--- |
|
base_model: microsoft/Phi-3-mini-4k-instruct |
|
inference: false |
|
language: |
|
- en |
|
library_name: gguf |
|
license: mit |
|
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE |
|
pipeline_tag: text-generation |
|
quantized_by: legraphista |
|
tags: |
|
- quantized |
|
- GGUF |
|
- imatrix |
|
- quantization |
|
--- |
|
|
|
# Phi-3-mini-4k-instruct-GGUF |
|
_Llama.cpp imatrix quantization of microsoft/Phi-3-mini-4k-instruct_
|
|
|
Original Model: [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) |
|
Original dtype: `BF16` (`bfloat16`) |
|
Quantized with: llama.cpp [b2989](https://github.com/ggerganov/llama.cpp/releases/tag/b2989)
|
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw) |
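
For orientation, the quants in this repo follow llama.cpp's standard imatrix workflow. The sketch below is illustrative rather than the exact commands used for this repo: the binary and script names are those shipped with llama.cpp around b2989 (newer releases prefix them with `llama-`), and the local paths and calibration filename are assumptions.

```
# 1. Convert the original HF checkpoint to a full-precision GGUF
#    (convert-hf-to-gguf.py ships with llama.cpp; use --outtype f16 if your build lacks bf16)
python convert-hf-to-gguf.py ./Phi-3-mini-4k-instruct --outtype bf16 \
  --outfile Phi-3-mini-4k-instruct.BF16.gguf

# 2. Compute the importance matrix against the calibration dataset linked above
./imatrix -m Phi-3-mini-4k-instruct.BF16.gguf \
  -f imatrix.calibration.medium.raw -o imatrix.dat

# 3. Quantize, passing the imatrix for the quant types that use it (see the tables below)
./quantize --imatrix imatrix.dat \
  Phi-3-mini-4k-instruct.BF16.gguf Phi-3-mini-4k-instruct.Q4_K.gguf Q4_K
```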
|
|
|
## Files |
|
|
|
### IMatrix |
|
Status: ⏳ Processing
|
Link: [here](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-GGUF/blob/main/imatrix.dat) |
|
|
|
### Common Quants |
|
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Phi-3-mini-4k-instruct.Q8_0 | Q8_0 | - | ⏳ Processing | No | - |
| [Phi-3-mini-4k-instruct.Q6_K.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-GGUF/blob/main/Phi-3-mini-4k-instruct.Q6_K.gguf) | Q6_K | 3.14GB | ✅ Available | No | 📦 No |
| Phi-3-mini-4k-instruct.Q4_K | Q4_K | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.Q3_K | Q3_K | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.Q2_K | Q2_K | - | ⏳ Processing | Yes | - |
|
|
|
|
|
### All Quants |
|
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Phi-3-mini-4k-instruct.FP16 | F16 | - | ⏳ Processing | No | - |
| Phi-3-mini-4k-instruct.BF16 | BF16 | - | ⏳ Processing | No | - |
| [Phi-3-mini-4k-instruct.Q5_K.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-GGUF/blob/main/Phi-3-mini-4k-instruct.Q5_K.gguf) | Q5_K | 2.82GB | ✅ Available | No | 📦 No |
| Phi-3-mini-4k-instruct.Q5_K_S | Q5_K_S | - | ⏳ Processing | No | - |
| Phi-3-mini-4k-instruct.Q4_K_S | Q4_K_S | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.Q3_K_L | Q3_K_L | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.Q3_K_S | Q3_K_S | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.Q2_K_S | Q2_K_S | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ4_NL | IQ4_NL | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ4_XS | IQ4_XS | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ3_M | IQ3_M | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ3_S | IQ3_S | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ3_XS | IQ3_XS | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ2_M | IQ2_M | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ2_S | IQ2_S | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ2_XS | IQ2_XS | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ1_M | IQ1_M | - | ⏳ Processing | Yes | - |
| Phi-3-mini-4k-instruct.IQ1_S | IQ1_S | - | ⏳ Processing | Yes | - |
|
|
|
|
|
## Downloading using huggingface-cli |
|
First, make sure you have huggingface-cli installed:
|
``` |
|
pip install -U "huggingface_hub[cli]" |
|
``` |
|
Then, you can target the specific file you want: |
|
``` |
|
huggingface-cli download legraphista/Phi-3-mini-4k-instruct-GGUF --include "Phi-3-mini-4k-instruct.Q8_0.gguf" --local-dir ./ |
|
``` |
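
The `--include` option takes glob patterns and accepts more than one, so several quants can be fetched in a single call; the quant choices below are just an example:

```
huggingface-cli download legraphista/Phi-3-mini-4k-instruct-GGUF \
  --include "Phi-3-mini-4k-instruct.Q6_K.gguf" "Phi-3-mini-4k-instruct.Q5_K.gguf" \
  --local-dir ./
```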
|
If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:
|
``` |
|
huggingface-cli download legraphista/Phi-3-mini-4k-instruct-GGUF --include "Phi-3-mini-4k-instruct.Q8_0/*" --local-dir Phi-3-mini-4k-instruct.Q8_0 |
|
# see FAQ for merging GGUFs
|
``` |
|
|
|
## FAQ |
|
|
|
### Why is the IMatrix not applied everywhere? |
|
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that only the lower quantizations benefit from the imatrix input (as per HellaSwag results).
|
|
|
### How do I merge a split GGUF? |
|
1. Make sure you have `gguf-split` available |
|
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases |
|
- Download the appropriate zip for your system from the latest release |
|
- Unzip the archive and you should be able to find `gguf-split` |
|
2. Locate your GGUF chunks folder (ex: `Phi-3-mini-4k-instruct.Q8_0`) |
|
3. Run `gguf-split --merge Phi-3-mini-4k-instruct.Q8_0/Phi-3-mini-4k-instruct.Q8_0-00001-of-XXXXX.gguf Phi-3-mini-4k-instruct.Q8_0.gguf` |
|
- Make sure to point `gguf-split` to the first chunk of the split. |
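
Once merged (or for any of the single-file quants above), the GGUF can be loaded with llama.cpp. A minimal sketch, assuming the `main` example binary from the same b2989 release (renamed `llama-cli` in newer releases) and the standard Phi-3 chat template; the sampling settings are illustrative only:

```
# -c sets the context size, -n the number of tokens to generate,
# -e makes \n escapes in the prompt be interpreted as newlines
./main -m Phi-3-mini-4k-instruct.Q8_0.gguf -c 4096 -n 256 -e \
  -p "<|user|>\nWhat is GGUF?<|end|>\n<|assistant|>"
```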