---
base_model: NTQAI/Nxcode-CQ-7B-orpo
inference: false
library_name: gguf
license: other
license_link: https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE
license_name: tongyi-qianwen-research
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- code
- quantized
- GGUF
- imatrix
- quantization
- imat
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---
# Nxcode-CQ-7B-orpo-IMat-GGUF
_Llama.cpp imatrix quantization of NTQAI/Nxcode-CQ-7B-orpo_
Original Model: [NTQAI/Nxcode-CQ-7B-orpo](https://huggingface.co/NTQAI/Nxcode-CQ-7B-orpo)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3067](https://github.com/ggerganov/llama.cpp/releases/tag/b3067)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)
- [Files](#files)
    - [IMatrix](#imatrix)
    - [Common Quants](#common-quants)
    - [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
    - [Simple chat template](#simple-chat-template)
    - [Chat template with system prompt](#chat-template-with-system-prompt)
    - [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
    - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
    - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Nxcode-CQ-7B-orpo.Q8_0.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.Q8_0.gguf) | Q8_0 | 7.71GB | ✅ Available | ⚪ Static | 📦 No
| [Nxcode-CQ-7B-orpo.Q6_K.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.Q6_K.gguf) | Q6_K | 6.38GB | ✅ Available | ⚪ Static | 📦 No
| [Nxcode-CQ-7B-orpo.Q4_K.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.Q4_K.gguf) | Q4_K | 4.74GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Nxcode-CQ-7B-orpo.Q3_K.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.Q3_K.gguf) | Q3_K | 3.81GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Nxcode-CQ-7B-orpo.Q2_K.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.Q2_K.gguf) | Q2_K | 3.05GB | ✅ Available | 🟢 IMatrix | 📦 No
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Nxcode-CQ-7B-orpo.BF16.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.BF16.gguf) | BF16 | 14.50GB | ✅ Available | ⚪ Static | 📦 No
| [Nxcode-CQ-7B-orpo.FP16.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.FP16.gguf) | F16 | 14.50GB | ✅ Available | ⚪ Static | 📦 No
| [Nxcode-CQ-7B-orpo.Q8_0.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.Q8_0.gguf) | Q8_0 | 7.71GB | ✅ Available | ⚪ Static | 📦 No
| [Nxcode-CQ-7B-orpo.Q6_K.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.Q6_K.gguf) | Q6_K | 6.38GB | ✅ Available | ⚪ Static | 📦 No
| [Nxcode-CQ-7B-orpo.Q5_K.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.Q5_K.gguf) | Q5_K | 5.43GB | ✅ Available | ⚪ Static | 📦 No
| [Nxcode-CQ-7B-orpo.Q5_K_S.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.Q5_K_S.gguf) | Q5_K_S | 5.15GB | ✅ Available | ⚪ Static | 📦 No
| [Nxcode-CQ-7B-orpo.Q4_K.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.Q4_K.gguf) | Q4_K | 4.74GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Nxcode-CQ-7B-orpo.Q4_K_S.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.Q4_K_S.gguf) | Q4_K_S | 4.41GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Nxcode-CQ-7B-orpo.IQ4_NL.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.IQ4_NL.gguf) | IQ4_NL | 4.19GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Nxcode-CQ-7B-orpo.IQ4_XS.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.IQ4_XS.gguf) | IQ4_XS | 4.03GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Nxcode-CQ-7B-orpo.Q3_K.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.Q3_K.gguf) | Q3_K | 3.81GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Nxcode-CQ-7B-orpo.Q3_K_L.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.Q3_K_L.gguf) | Q3_K_L | 3.99GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Nxcode-CQ-7B-orpo.Q3_K_S.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.Q3_K_S.gguf) | Q3_K_S | 3.50GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Nxcode-CQ-7B-orpo.IQ3_M.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.IQ3_M.gguf) | IQ3_M | 3.61GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Nxcode-CQ-7B-orpo.IQ3_S.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.IQ3_S.gguf) | IQ3_S | 3.51GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Nxcode-CQ-7B-orpo.IQ3_XS.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.IQ3_XS.gguf) | IQ3_XS | 3.36GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Nxcode-CQ-7B-orpo.IQ3_XXS.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.IQ3_XXS.gguf) | IQ3_XXS | 3.23GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Nxcode-CQ-7B-orpo.Q2_K.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.Q2_K.gguf) | Q2_K | 3.05GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Nxcode-CQ-7B-orpo.Q2_K_S.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.Q2_K_S.gguf) | Q2_K_S | 3.03GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Nxcode-CQ-7B-orpo.IQ2_M.gguf](https://huggingface.co/legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF/blob/main/Nxcode-CQ-7B-orpo.IQ2_M.gguf) | IQ2_M | 3.01GB | ✅ Available | 🟢 IMatrix | 📦 No
| Nxcode-CQ-7B-orpo.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 IMatrix | -
| Nxcode-CQ-7B-orpo.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 IMatrix | -
| Nxcode-CQ-7B-orpo.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 IMatrix | -
| Nxcode-CQ-7B-orpo.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 IMatrix | -
| Nxcode-CQ-7B-orpo.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 IMatrix | -
## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF --include "Nxcode-CQ-7B-orpo.Q8_0.gguf" --local-dir ./
```
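You can also pass a glob pattern to `--include` to fetch several related quants in one call (the `Q4_K*` pattern below is just an example):
```
huggingface-cli download legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF --include "Nxcode-CQ-7B-orpo.Q4_K*.gguf" --local-dir ./
```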
If the model file is large, it has been split into multiple parts. To download them all into a local folder, run:
```
huggingface-cli download legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF --include "Nxcode-CQ-7B-orpo.Q8_0/*" --local-dir ./
# see FAQ for merging GGUF's
```
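Downloads can optionally be sped up with the `hf_transfer` backend; a minimal sketch, assuming you are okay installing the extra package:
```
pip install -U hf_transfer
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download legraphista/Nxcode-CQ-7B-orpo-IMat-GGUF --include "Nxcode-CQ-7B-orpo.Q8_0.gguf" --local-dir ./
```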
---
## Inference
### Simple chat template
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>
```
### Chat template with system prompt
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>
```
### Llama.cpp
```
llama.cpp/main -m Nxcode-CQ-7B-orpo.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
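For a single-turn completion, the chat template above can be filled in and passed directly to `-p`; a minimal sketch (the user question is only a placeholder):
```
llama.cpp/main -m Nxcode-CQ-7B-orpo.Q8_0.gguf --color -n 256 -p "<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Write a Python function that reverses a string.<|im_end|>
<|im_start|>assistant
"
```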
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that only the lower quantizations benefit from the imatrix input (as per the HellaSwag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
    - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
    - Download the appropriate zip for your system from the latest release
    - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Nxcode-CQ-7B-orpo.Q8_0`)
3. Run `gguf-split --merge Nxcode-CQ-7B-orpo.Q8_0/Nxcode-CQ-7B-orpo.Q8_0-00001-of-XXXXX.gguf Nxcode-CQ-7B-orpo.Q8_0.gguf`
    - Make sure to point `gguf-split` to the first chunk of the split.
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! |