|
--- |
|
base_model: Aryanne/Astrea-RP-v1-3B |
|
inference: false |
|
language: |
|
- en |
|
library_name: transformers |
|
license: other |
|
model_creator: Aryanne |
|
model_name: Astrea-RP-v1-3B |
|
pipeline_tag: text-generation |
|
quantized_by: afrideva |
|
tags: |
|
- gpt |
|
- llm |
|
- large language model |
|
- gguf |
|
- ggml |
|
- quantized |
|
- q2_k |
|
- q3_k_m |
|
- q4_k_m |
|
- q5_k_m |
|
- q6_k |
|
- q8_0 |
|
--- |
|
# afrideva/Astrea-RP-v1-3B-GGUF
|
|
|
Quantized GGUF model files for [Astrea-RP-v1-3B](https://huggingface.co/Aryanne/Astrea-RP-v1-3B) by [Aryanne](https://huggingface.co/Aryanne).
|
|
|
|
|
| Name | Quant method | Size | |
|
| ---- | ---- | ---- | |
|
| [astrea-rp-v1-3b.fp16.gguf](https://huggingface.co/afrideva/Astrea-RP-v1-3B-GGUF/resolve/main/astrea-rp-v1-3b.fp16.gguf) | fp16 | 5.59 GB | |
|
| [astrea-rp-v1-3b.q2_k.gguf](https://huggingface.co/afrideva/Astrea-RP-v1-3B-GGUF/resolve/main/astrea-rp-v1-3b.q2_k.gguf) | q2_k | 1.20 GB | |
|
| [astrea-rp-v1-3b.q3_k_m.gguf](https://huggingface.co/afrideva/Astrea-RP-v1-3B-GGUF/resolve/main/astrea-rp-v1-3b.q3_k_m.gguf) | q3_k_m | 1.39 GB | |
|
| [astrea-rp-v1-3b.q4_k_m.gguf](https://huggingface.co/afrideva/Astrea-RP-v1-3B-GGUF/resolve/main/astrea-rp-v1-3b.q4_k_m.gguf) | q4_k_m | 1.71 GB | |
|
| [astrea-rp-v1-3b.q5_k_m.gguf](https://huggingface.co/afrideva/Astrea-RP-v1-3B-GGUF/resolve/main/astrea-rp-v1-3b.q5_k_m.gguf) | q5_k_m | 1.99 GB | |
|
| [astrea-rp-v1-3b.q6_k.gguf](https://huggingface.co/afrideva/Astrea-RP-v1-3B-GGUF/resolve/main/astrea-rp-v1-3b.q6_k.gguf) | q6_k | 2.30 GB | |
|
| [astrea-rp-v1-3b.q8_0.gguf](https://huggingface.co/afrideva/Astrea-RP-v1-3B-GGUF/resolve/main/astrea-rp-v1-3b.q8_0.gguf) | q8_0 | 2.97 GB | |
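
These are plain GGUF files, so any llama.cpp-compatible runtime can load them. Below is a minimal sketch using `huggingface_hub` and `llama-cpp-python`, which are not part of this card; the package choice, context size, and sampling settings are assumptions, not something the card specifies.

```python
# Sketch: download a quant from this repo and run it with llama-cpp-python.
# The quant chosen (q4_k_m), context size, and generation settings are
# illustrative assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="afrideva/Astrea-RP-v1-3B-GGUF",
    filename="astrea-rp-v1-3b.q4_k_m.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)

# Vicuna-style prompt, as recommended further down in the original card.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: Write a short greeting in character. ASSISTANT:"
)

out = llm(prompt, max_tokens=128, stop=["USER:"])
print(out["choices"][0]["text"])
```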
|
|
|
|
|
|
|
## Original Model Card: |
|
This model is a merge of [euclaise/Echo-3B](https://huggingface.co/euclaise/Echo-3B), [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b), and [Aryanne/Astridboros-3B](https://huggingface.co/Aryanne/Astridboros-3B), applied on top of euclaise/Ferret-3B as the base model, using task_arithmetic (see astrea-rp-v1-3b.yml or the config below).
|
|
|
|
|
```yaml
merge_method: task_arithmetic
base_model: euclaise/Ferret-3B
models:
  - model: euclaise/Ferret-3B
  - model: stabilityai/stablelm-zephyr-3b
    parameters:
      weight: 0.33
  - model: euclaise/Echo-3B
    parameters:
      weight: 0.66
  - model: Aryanne/Astridboros-3B
    parameters:
      weight: 0.16
dtype: float16
```
|
I recommend using the Vicuna prompt format, but feel free to experiment and see what works best for you.
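
For reference, a commonly used Vicuna-style template looks like the following; the system sentence is the usual v1.1 wording and is shown here only as a starting point, not something specified by the model author:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: {your message}
ASSISTANT:
```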
|
|
|
I think the Zephyr license applies to this merge, which restricts it to non-commercial use.