
Wiedervereinigung-7b-dpo


This is a DPO-aligned merge of our favourite German models, scoring an average of 7.11 on mt-bench-de. Since the original models are all based on Mistral - three of them on the brilliant German LeoLM/leo-mistral-hessianai-7b - they are reunited in this merged model. Hence the name; no nationalist ideas involved :-).

To improve result quality, the merge was then DPO-trained on a German translation of the SlimOrca DPO dataset, using hermeo-7B for the rejected responses.
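
For illustration, a single record in such a preference dataset pairs a prompt with a preferred and a rejected answer. The example below is hypothetical and uses the common DPO field names, not necessarily the exact format used in training:

# Hypothetical DPO preference record: "chosen" is the preferred answer,
# "rejected" is the weaker completion (here e.g. produced by hermeo-7B).
preference_pair = {
    "prompt": "Erkläre kurz, was ein Large Language Model ist.",
    "chosen": "Ein Large Language Model ist ein neuronales Netz, das auf großen Textmengen trainiert wurde ...",
    "rejected": "Ein Modell ist ein Programm, das Texte schreibt.",
}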

If you are GPU-poor like me, you can now use LLaMA-Factory to train with German datasets.

Kudos to the authors of the original models at DiscoResearch and VAGOsolutions, Malte Ostendorff and Matthias Uhlig. We are your fan club.

This model was brought to you, and the NVIDIA bill was paid, by Mayflower GmbH.

Benchmark results: mt-bench-de

Is the merged model alone already good? Well, of course. But it is even better with the help of some DPO tuning.

{
    "first_turn": 7.3,
    "second_turn": 6.925,
    "categories": {
        "writing": 8.425,
        "roleplay": 8.6,
        "reasoning": 5.4,
        "math": 4.35,
        "coding": 4.3,
        "extraction": 7.975,
        "stem": 8.5,
        "humanities": 9.35
    },
    "average": 7.1125
}
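
As a quick sanity check (my own arithmetic, not part of the benchmark output), the reported average is simply the mean of the eight category scores, which also equals the mean of the two turn scores:

# Recompute the mt-bench-de average from the numbers above.
categories = [8.425, 8.6, 5.4, 4.35, 4.3, 7.975, 8.5, 9.35]
print(sum(categories) / len(categories))  # 7.1125
print((7.3 + 6.925) / 2)                  # 7.1125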

Other Versions

A big thank you to LoneStriker for the quantized models.

Wiedervereinigung-7b is a LazyMergekit merge of:

- DiscoResearch/DiscoLM_German_7b_v1
- DRXD1000/Phoenix
- VAGOsolutions/SauerkrautLM-7b-v1-mistral
- malteos/hermeo-7b

🧩 Configuration

models:
  - model: LeoLM/leo-mistral-hessianai-7b
    # No parameters necessary for base model
  - model: DiscoResearch/DiscoLM_German_7b_v1
    parameters:
      density: 0.6
      weight: 0.25
  - model: DRXD1000/Phoenix
    parameters:
      density: 0.6
      weight: 0.25
  - model: VAGOsolutions/SauerkrautLM-7b-v1-mistral
    parameters:
      density: 0.6
      weight: 0.25
  - model: malteos/hermeo-7b
    parameters:
      density: 0.6
      weight: 0.25
merge_method: dare_ties
base_model: LeoLM/leo-mistral-hessianai-7b
parameters:
  int8_mask: true
dtype: bfloat16
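
To reproduce a merge like this yourself, the configuration above can be passed to mergekit. A minimal sketch (the config file name and output path are just examples):

pip install mergekit
mergekit-yaml config.yaml ./Wiedervereinigung-7b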

πŸ’» Usage

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mayflowergmbh/Wiedervereinigung-7b-dpo"
messages = [{"role": "user", "content": "Was ist ein deutsches Large Language Model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])