---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- mlabonne/NeuralDaredevil-7B
- text-generation-inference
- text-generation
---
---
**This is a repository of GGUF Quants for DareBeagel-2x7B**
---
Original model available here: https://huggingface.co/shadowml/DareBeagel-2x7B
**Available Quants**
* Q8_0
* Q6_K
* Q5_K_M
* Q5_K_S
* Q4_K_M
* Q4_K_S
More quants coming soon; upload speed is slow.
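
To fetch one of these files programmatically, the `huggingface_hub` client can be used. A minimal sketch (the repo id and filename below are placeholders; check this repository's file list for the exact GGUF names):

```python
# Minimal sketch: download a single GGUF quant with huggingface_hub.
# NOTE: repo_id and filename are hypothetical placeholders; substitute
# the actual values from this repository's "Files and versions" tab.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="your-username/DareBeagel-2x7B-GGUF",  # placeholder repo id
    filename="darebeagel-2x7b.Q4_K_M.gguf",        # placeholder file name
)
print(f"Downloaded to: {path}")
```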
# DareBeagel-2x7B
DareBeagel-2x7B is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B)
## 🧩 Configuration
```yaml
base_model: mlabonne/NeuralBeagle14-7B
gate_mode: random
experts:
- source_model: mlabonne/NeuralBeagle14-7B
positive_prompts: [""]
- source_model: mlabonne/NeuralDaredevil-7B
positive_prompts: [""]
```
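
This configuration produces the full-precision merge linked at the top of this card. If you want to run that original model rather than these GGUF quants, a minimal `transformers` sketch (assuming `transformers` and `accelerate` are installed, with enough memory for the FP16 weights):

```python
# Minimal sketch: load the original FP16 merge with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shadowml/DareBeagel-2x7B"  # the original model linked above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the `accelerate` package
)

# Alpaca-style prompt, which the author found to work well.
prompt = "### Instruction:\nExplain what a Mixture of Experts model is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```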
## 💻 Usage
Load the GGUF file in Kobold.cpp or any other llama.cpp-based frontend. I found that Alpaca (and Alpaca-style) prompts worked well.

Settings that worked well for me:

* Min P: 0.1
* Dynamic Temperature: min 0, max 3
* Repetition Penalty: 1.03
* Repetition Penalty Range: 1000
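
For scripted use, a minimal sketch with `llama-cpp-python` under the same sampler settings (the model path below is a placeholder; dynamic temperature and the repetition-penalty range are Kobold.cpp sampler settings and are not set here):

```python
# Minimal sketch: run a GGUF quant with llama-cpp-python
# (assumes `pip install llama-cpp-python` and a downloaded quant file).
from llama_cpp import Llama

llm = Llama(
    model_path="./darebeagel-2x7b.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

# Alpaca-style prompt, which the author found to work well.
prompt = (
    "### Instruction:\n"
    "Explain the difference between Q4_K_M and Q8_0 quantization.\n\n"
    "### Response:\n"
)

out = llm(
    prompt,
    max_tokens=256,
    min_p=0.1,            # Min P from the settings above
    repeat_penalty=1.03,  # Rep Pen from the settings above
    # Dynamic temperature (min 0, max 3) and rep-pen range 1000 are
    # Kobold.cpp sampler settings; a plain temperature is used instead.
    temperature=1.0,
)
print(out["choices"][0]["text"])
```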