---
library_name: transformers
pipeline_tag: text-generation
tags:
- text-generation-inference
- llama
- llama-2
- code
inference: false
base_model: codellama/CodeLlama-7b-hf
---
|
|
|
|
|
# CodeLlama-7b-hf-GGUF
|
- Quantized version of [CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf)
- Created using [llama.cpp](https://github.com/ggerganov/llama.cpp)
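Once a GGUF file is downloaded, it can be run directly with llama.cpp. A minimal sketch; the file name below is illustrative, so substitute whichever quant you downloaded (older llama.cpp builds ship the binary as `main` instead of `llama-cli`):

```shell
# Generate up to 128 tokens from a code prompt.
# The .gguf path is a placeholder for the quant file you chose.
./llama-cli \
  -m ./codellama-7b.Q4_K_M.gguf \
  -p "def fibonacci(n):" \
  -n 128
```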
|
|
|
## Available Quants
|
|
|
* Q2_K
* Q3_K_L
* Q3_K_M
* Q3_K_S
* Q4_0
* Q4_K_M
* Q4_K_S
* Q5_0
* Q5_K_M
* Q5_K_S
* Q6_K
* Q8_0
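A single quant file can be fetched without cloning the whole repository by using `huggingface-cli`. A sketch, assuming one quant per `.gguf` file; the repo id and filename below are placeholders, so replace them with this repository's actual id and the quant you want:

```shell
# Download only one quant file into the current directory.
# <repo-id> and the .gguf filename are placeholders.
huggingface-cli download <repo-id> codellama-7b.Q4_K_M.gguf --local-dir .
```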
|
|
|
README format inspired by [mlabonne](https://huggingface.co/mlabonne)