---
base_model:
- Dogge/llama-3-8B-instruct-Bluemoon-Freedom-RP
- nbeerbower/llama-3-dragonmaid-8B
- kuotient/Meta-Llama-3-8B-Instruct
- Locutusque/llama-3-neural-chat-v1-8b
- Undi95/Llama-3-Unholy-8B-e4
- openlynn/Llama-3-Soliloquy-8B
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
- ERP
- RP
- Roleplay
- uncensored
license: other
language:
- en
---
# merged
This is the unquantized first version of SnowyRP's Llama 3 model. Its quality is largely untested, as I have not yet had the chance to evaluate it in depth.
## Merge Details
For this model I used the most uncensored and capable finetunes of Llama 3 I could find. Since Llama 3 was trained on the largest amount of tokens of any base model I know of, I expect it to have more than enough of a knowledge base.
## Model Use
Here is a modified Alpaca prompt I use with this model:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: Take the role of {{char}} in a play that leaves a lasting impression on {{user}}. Write {{char}}'s next reply. Never skip or gloss over {{char}}’s actions. Progress the scene at a naturally slow pace. Avoid repetition, don't loop. Develop the plot slowly, always stay in character. Describe all actions in full, elaborate, explicit, graphic, and vivid detail. Mention all relevant sensory perceptions.
```
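As an illustration, the prompt above can be assembled programmatically before being passed to the model. This is a minimal sketch, not part of the original card: the `build_prompt` helper is my own, and the trailing `### Response:` line is an assumption based on the standard Alpaca format (the `{{char}}`/`{{user}}` placeholders are substituted with concrete names, as a frontend like SillyTavern would do).

```python
# Sketch: fill in the modified Alpaca template with concrete names.
# The "### Response:" suffix is assumed (standard Alpaca convention).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction: Take the role of {char} in a play that leaves a lasting "
    "impression on {user}. Write {char}'s next reply. Never skip or gloss over "
    "{char}'s actions. Progress the scene at a naturally slow pace. Avoid "
    "repetition, don't loop. Develop the plot slowly, always stay in character. "
    "Describe all actions in full, elaborate, explicit, graphic, and vivid "
    "detail. Mention all relevant sensory perceptions.\n"
    "### Response:"
)

def build_prompt(char: str, user: str) -> str:
    """Substitute character and user names into the roleplay template."""
    return ALPACA_TEMPLATE.format(char=char, user=user)

print(build_prompt("Alice", "Bob"))
```

The resulting string is what you would send as the model's input (or as the system portion of a longer chat transcript).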
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [kuotient/Meta-Llama-3-8B-Instruct](https://huggingface.co/kuotient/Meta-Llama-3-8B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [Dogge/llama-3-8B-instruct-Bluemoon-Freedom-RP](https://huggingface.co/Dogge/llama-3-8B-instruct-Bluemoon-Freedom-RP)
* [nbeerbower/llama-3-dragonmaid-8B](https://huggingface.co/nbeerbower/llama-3-dragonmaid-8B)
* [Locutusque/llama-3-neural-chat-v1-8b](https://huggingface.co/Locutusque/llama-3-neural-chat-v1-8b)
* [Undi95/Llama-3-Unholy-8B-e4](https://huggingface.co/Undi95/Llama-3-Unholy-8B-e4)
* [openlynn/Llama-3-Soliloquy-8B](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: kuotient/Meta-Llama-3-8B-Instruct
dtype: bfloat16
merge_method: model_stock
slices:
- sources:
- layer_range: [0, 32]
model: Undi95/Llama-3-Unholy-8B-e4
- layer_range: [0, 32]
model: nbeerbower/llama-3-dragonmaid-8B
- layer_range: [0, 32]
model: openlynn/Llama-3-Soliloquy-8B
- layer_range: [0, 32]
model: Locutusque/llama-3-neural-chat-v1-8b
- layer_range: [0, 32]
model: Dogge/llama-3-8B-instruct-Bluemoon-Freedom-RP
- layer_range: [0, 32]
model: kuotient/Meta-Llama-3-8B-Instruct
```
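To reproduce the merge, mergekit can consume the configuration above directly. This is a hedged sketch of a typical invocation, not the author's exact command: the config filename and output directory are arbitrary, and available flags may vary between mergekit versions.

```shell
# Install mergekit, then run the merge described by the YAML above.
# Assumes the config was saved as config.yaml; the output path is arbitrary.
pip install mergekit
mergekit-yaml config.yaml ./merged-output
```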