---
language:
- en
- vi
license: mit
library_name: transformers
tags:
- ghost
- TensorBlock
- GGUF
pipeline_tag: text-generation
widget:
- text: How many helicopters can a human eat in one sitting
output:
text: Ahoy, me matey! A human can eat approximately one helicopter in one sitting,
but only if they're a giant sea monster with a stomach the size of a small country.
🤢🤢 So, it's not advisable to try this, pirate! 🏰🛢️
base_model: ghost-x/ghost-7b-v0.9.1
model-index:
- name: ghost-7b-v0.9.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 55.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 77.03
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 54.78
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 43.96
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.91
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.1
name: Open LLM Leaderboard
---
## ghost-x/ghost-7b-v0.9.1 - GGUF
This repo contains GGUF format model files for [ghost-x/ghost-7b-v0.9.1](https://huggingface.co/ghost-x/ghost-7b-v0.9.1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
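If you need a llama.cpp build matching that commit, a typical checkout-and-build looks like the following (a sketch, assuming git, CMake, and a C++ toolchain are installed; binaries land in `build/bin/`):
```shell
# Sketch: build llama.cpp at the commit referenced above
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout a6744e43e80f4be6398fc7733a01642c846dce1d
cmake -B build
cmake --build build --config Release
```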
## Prompt template
```
<|system|>
{system_prompt}
<|user|>
{prompt}
<|assistant|>
```
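As an illustrative smoke test (not an official invocation), you can fill in this template and pass it to llama.cpp's `llama-cli` binary. The model filename below assumes you have downloaded the Q4_K_M file from the table in the next section; `-e` expands the `\n` escapes in the prompt:
```shell
# Sketch: run the chat template above with llama-cli
./build/bin/llama-cli -m ghost-7b-v0.9.1-Q4_K_M.gguf \
  -e -p "<|system|>\nYou are a helpful assistant.\n<|user|>\nHello, who are you?\n<|assistant|>\n" \
  -n 256
```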
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [ghost-7b-v0.9.1-Q2_K.gguf](https://huggingface.co/tensorblock/ghost-7b-v0.9.1-GGUF/blob/main/ghost-7b-v0.9.1-Q2_K.gguf) | Q2_K | 2.532 GB | smallest, significant quality loss - not recommended for most purposes |
| [ghost-7b-v0.9.1-Q3_K_S.gguf](https://huggingface.co/tensorblock/ghost-7b-v0.9.1-GGUF/blob/main/ghost-7b-v0.9.1-Q3_K_S.gguf) | Q3_K_S | 2.947 GB | very small, high quality loss |
| [ghost-7b-v0.9.1-Q3_K_M.gguf](https://huggingface.co/tensorblock/ghost-7b-v0.9.1-GGUF/blob/main/ghost-7b-v0.9.1-Q3_K_M.gguf) | Q3_K_M | 3.277 GB | very small, high quality loss |
| [ghost-7b-v0.9.1-Q3_K_L.gguf](https://huggingface.co/tensorblock/ghost-7b-v0.9.1-GGUF/blob/main/ghost-7b-v0.9.1-Q3_K_L.gguf) | Q3_K_L | 3.560 GB | small, substantial quality loss |
| [ghost-7b-v0.9.1-Q4_0.gguf](https://huggingface.co/tensorblock/ghost-7b-v0.9.1-GGUF/blob/main/ghost-7b-v0.9.1-Q4_0.gguf) | Q4_0 | 3.827 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [ghost-7b-v0.9.1-Q4_K_S.gguf](https://huggingface.co/tensorblock/ghost-7b-v0.9.1-GGUF/blob/main/ghost-7b-v0.9.1-Q4_K_S.gguf) | Q4_K_S | 3.856 GB | small, greater quality loss |
| [ghost-7b-v0.9.1-Q4_K_M.gguf](https://huggingface.co/tensorblock/ghost-7b-v0.9.1-GGUF/blob/main/ghost-7b-v0.9.1-Q4_K_M.gguf) | Q4_K_M | 4.068 GB | medium, balanced quality - recommended |
| [ghost-7b-v0.9.1-Q5_0.gguf](https://huggingface.co/tensorblock/ghost-7b-v0.9.1-GGUF/blob/main/ghost-7b-v0.9.1-Q5_0.gguf) | Q5_0 | 4.654 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [ghost-7b-v0.9.1-Q5_K_S.gguf](https://huggingface.co/tensorblock/ghost-7b-v0.9.1-GGUF/blob/main/ghost-7b-v0.9.1-Q5_K_S.gguf) | Q5_K_S | 4.654 GB | large, low quality loss - recommended |
| [ghost-7b-v0.9.1-Q5_K_M.gguf](https://huggingface.co/tensorblock/ghost-7b-v0.9.1-GGUF/blob/main/ghost-7b-v0.9.1-Q5_K_M.gguf) | Q5_K_M | 4.779 GB | large, very low quality loss - recommended |
| [ghost-7b-v0.9.1-Q6_K.gguf](https://huggingface.co/tensorblock/ghost-7b-v0.9.1-GGUF/blob/main/ghost-7b-v0.9.1-Q6_K.gguf) | Q6_K | 5.534 GB | very large, extremely low quality loss |
| [ghost-7b-v0.9.1-Q8_0.gguf](https://huggingface.co/tensorblock/ghost-7b-v0.9.1-GGUF/blob/main/ghost-7b-v0.9.1-Q8_0.gguf) | Q8_0 | 7.167 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/ghost-7b-v0.9.1-GGUF --include "ghost-7b-v0.9.1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can run:
```shell
huggingface-cli download tensorblock/ghost-7b-v0.9.1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```