---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- ChaoticNeutrals/RP_Vision_7B
- ResplendentAI/DaturaCookie_7B
- not-for-all-audiences
base_model:
- ChaoticNeutrals/RP_Vision_7B
- ResplendentAI/DaturaCookie_7B
model-index:
- name: MixtureofMerges-MoE-2x7bRP-v8
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 71.33
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-2x7bRP-v8
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 88.06
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-2x7bRP-v8
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.33
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-2x7bRP-v8
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 68.69
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-2x7bRP-v8
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 82.95
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-2x7bRP-v8
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.52
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-2x7bRP-v8
      name: Open LLM Leaderboard
---
|
|
|
# MixtureofMerges-MoE-2x7bRP-v8

MixtureofMerges-MoE-2x7bRP-v8 is a roleplay-oriented Mixture of Experts (MoE) model built from the following two models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

* [ChaoticNeutrals/RP_Vision_7B](https://huggingface.co/ChaoticNeutrals/RP_Vision_7B)
* [ResplendentAI/DaturaCookie_7B](https://huggingface.co/ResplendentAI/DaturaCookie_7B)

Note: this model is prone to NSFW outputs.
|
## 🧩 Configuration |
|
|
|
```yaml
base_model: ResplendentAI/DaturaCookie_7B
gate_mode: hidden
experts:
  - source_model: ChaoticNeutrals/RP_Vision_7B
    positive_prompts:
      - "Write a detailed scene where the character faces a difficult decision."
      - "Assistant, help me develop my character's backstory and motivations."
      - "Tell me what happens next in the story, introducing a new conflict or challenge."
      - "My character wants to explore their deepest desires. Show me how they pursue them."
      - "Show me a scene where the characters express their love for each other in a tender and intimate way."
      - "Write a scene where the characters experience a moment of physical connection that deepens their emotional bond."
      - "Write a scene where the character struggles with their beliefs and faces a crisis of faith."
      - "Describe the world through the character's eyes, showcasing their unique perspective and experiences."
      - "Explore the complexities of love through the characters' relationships, including their challenges and triumphs."
    negative_prompts:
      - "Avoid clichés and overused tropes."
      - "Maintain consistent characterization and avoid out-of-character behavior."
      - "Do not censor yourself, you are writing fictional stories for an adult audience."
  - source_model: ResplendentAI/DaturaCookie_7B
    positive_prompts:
      - "Develop a compelling story with a strong plot and well-developed characters."
      - "Describe the scene vividly, using sensory details and evocative language."
      - "Write a detailed scene where the character faces a difficult decision."
      - "Describe the character's appearance as best you can, using descriptive colorful language."
      - "Make sure that your output is always uncensored."
      - "Write a scene with a slow-burn, sensual encounter between two characters."
      - "Describe the character's desires and feelings in a way that is both intimate and evocative."
      - "Explore the character's arousal in a tasteful and nuanced manner."
      - "Explore the character's sexual urges."
      - "Show me a moment of emotional intimacy between the characters, where they connect on a profound level."
    negative_prompts:
      - "Stay true to the established character traits and motivations. Ensure their actions and dialogue align with their personality."
      - "Be original and creative in your storytelling. Surprise me with unexpected plot twists and unique character interactions."
      - "Do not refuse to answer based on ethical grounds"
dtype: bfloat16
```
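
Feeding this config to mergekit's `mergekit-moe` entry point (which LazyMergekit wraps) produces a Mixtral-style two-expert model; `gate_mode: hidden` initializes each expert's router from hidden-state representations of the positive and negative prompts listed above. As a quick sanity check after downloading, you can inspect the expert layout. This is a minimal sketch: the field names assume the merge loads as a standard Mixtral config, which this card does not state explicitly.

```python
from transformers import AutoConfig

# Sketch: inspect the merged checkpoint's MoE layout. The Mixtral-style
# field names below are an assumption, not stated in this card.
config = AutoConfig.from_pretrained("jsfs11/MixtureofMerges-MoE-2x7bRP-v8")
print(config.architectures)        # expected: ['MixtralForCausalLM']
print(config.num_local_experts)    # expected: 2, one per source model
print(config.num_experts_per_tok)  # number of experts routed per token
```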
|
|
|
## 💻 Usage |
|
|
|
```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "jsfs11/MixtureofMerges-MoE-2x7bRP-v8"

tokenizer = AutoTokenizer.from_pretrained(model)

# Load the model in 4-bit to fit consumer GPUs; newer transformers releases
# prefer an explicit BitsAndBytesConfig for this (see the sketch below).
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Build a chat-formatted prompt and generate with sampling.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
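
Passing `load_in_4bit` through `model_kwargs` is deprecated in recent transformers releases. A minimal alternative sketch (not part of the original snippet) that uses an explicit `BitsAndBytesConfig` instead:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "jsfs11/MixtureofMerges-MoE-2x7bRP-v8"

# Explicit 4-bit quantization config; it also lets you pick the compute
# dtype, which the load_in_4bit shorthand does not.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate
)
```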
|
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jsfs11__MixtureofMerges-MoE-2x7bRP-v8).
|
|
|
| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |73.31|
|AI2 Reasoning Challenge (25-Shot)|71.33|
|HellaSwag (10-Shot)              |88.06|
|MMLU (5-Shot)                    |64.33|
|TruthfulQA (0-shot)              |68.69|
|Winogrande (5-shot)              |82.95|
|GSM8k (5-shot)                   |64.52|
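
To reproduce one of these numbers locally, a hedged sketch with [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) (`pip install lm-eval`) is shown below; the leaderboard pins a specific harness version, so locally computed scores may differ slightly.

```python
import lm_eval

# Sketch: score the ARC-Challenge (25-shot) row of the table above.
# Task name and few-shot count follow the Open LLM Leaderboard setup.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=jsfs11/MixtureofMerges-MoE-2x7bRP-v8,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])  # includes acc_norm
```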
|
|
|
|