
NOTE: You will need a recent build of llama.cpp to run these quants (i.e. at least commit 494c870).

GGUF importance matrix (imatrix) quants for https://huggingface.co/TechxGenus/starcoder2-15b-instruct

Fine-tuned from starcoder2-15b on an additional 0.7 billion high-quality, code-related tokens for 3 epochs, using DeepSpeed ZeRO 3 and Flash Attention 2 to accelerate training. It achieves 77.4 pass@1 on HumanEval-Python. The model uses the Alpaca instruction format (without a system prompt).

| Layers | Context |
|:------:|:-------:|
| 40     | 16384   |

Template:

```
### Instruction
{instruction}
### Response
{response}
```
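The Alpaca-style template above can be filled in programmatically before sending the text to the model. A minimal sketch (the function name is illustrative, not part of any API):

```python
def build_prompt(instruction: str) -> str:
    """Format a request using the model's Alpaca-style template
    (no system prompt), matching the template shown above."""
    return (
        "### Instruction\n"
        f"{instruction}\n"
        "### Response\n"
    )

# Example: the model is expected to complete the text after "### Response".
print(build_prompt("Write a Python function that reverses a string."))
```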
- Format: GGUF
- Model size: 16B params
- Architecture: starcoder2

Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit.
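As a rough guide, the on-disk size of each quantization scales with its bits per weight. A back-of-the-envelope estimate for a 16B-parameter model (actual GGUF file sizes differ, since metadata is stored and some tensors are kept at higher precision):

```python
PARAMS = 16e9  # approximate parameter count for this model

def approx_size_gb(bits_per_weight: float) -> float:
    """Estimate quantized file size in GB as params * bits / 8,
    ignoring metadata and mixed-precision tensors."""
    return PARAMS * bits_per_weight / 8 / 1e9

for bpw in (4, 5, 6, 8):
    print(f"{bpw}-bit: ~{approx_size_gb(bpw):.0f} GB")
```

Note that k-quants use fractional effective bits per weight (e.g. around 4.8 for a typical 4-bit mix), so real files land somewhat above these figures.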

