---
license: apache-2.0
datasets:
- Setiaku/Stheno-v3.2
- Squish42/bluemoon-fandom-1-1-rp-cleaned
- openerotica/freedom-rp
- MinervaAI/Aesir-Preview
- jeiku/JeikuL3v2
- ResplendentAI/Sissification_Hypno_1k
- ResplendentAI/Synthetic_Soul_1k
- ResplendentAI/theory_of_mind_fixed_output
language:
- en
base_model: ResplendentAI/Nymph_8B
tags:
- not-for-all-audiences
pipeline_tag: text-generation
---
|
|
|
# QuantFactory/Nymph_8B-GGUF |
|
This is a quantized version of [ResplendentAI/Nymph_8B](https://huggingface.co/ResplendentAI/Nymph_8B?not-for-all-audiences=true), created using llama.cpp.
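If you prefer Python over the llama.cpp CLI, the sketch below shows one way to load one of these GGUF quants with the `llama-cpp-python` bindings. The filename, context size, and generation settings are placeholders, not recommendations from the original author; substitute the quant file you actually downloaded from this repo.

```python
# Minimal inference sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF filename below is a placeholder; point it at the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Nymph_8B.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU when one is available
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a roleplay assistant."},
        {"role": "user", "content": "Introduce your character."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Setting `n_gpu_layers=-1` offloads every layer to the GPU; drop that argument for CPU-only inference.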
|
|
|
# Model Description |
|
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/9U_eJCDzLJ8nxb6qfuICc.jpeg) |
|
|
|
Nymph is the culmination of everything I have learned with the T-series project. This model aims to be a unique and full-featured RP juggernaut.
|
|
|
The finetune incorporates 1.6 million tokens of RP data sourced from Bluemoon, FreedomRP, Aesir-Preview, and Claude Opus logs. I made sure to use the multi-turn ShareGPT datasets this time instead of Alpaca conversions. I have also included three of my personal datasets. The final touch is an ORPO pass based on OpenHermes roleplay preferences.
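For readers unfamiliar with ORPO, the following is a minimal sketch of what such a preference-optimization pass can look like with Hugging Face TRL's `ORPOTrainer`. It is an illustration only, not the recipe used for Nymph; the dataset name and hyperparameters are assumptions, and the preference data must provide `prompt`/`chosen`/`rejected` columns.

```python
# Illustrative ORPO sketch with TRL (pip install trl transformers datasets).
# Not the author's actual training code; dataset name and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "ResplendentAI/Nymph_8B"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder: any preference dataset with "prompt", "chosen", "rejected" columns.
prefs = load_dataset("your-org/roleplay-preferences", split="train")

args = ORPOConfig(
    output_dir="nymph-orpo",
    beta=0.1,                       # weight of the odds-ratio preference term
    max_length=2048,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=prefs,
    processing_class=tokenizer,     # older TRL releases take tokenizer= instead
)
trainer.train()
```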