
An 11.4B-parameter frankenmerge (FP16) of teknium/OpenHermes-2.5-Mistral-7B and Intel/neural-chat-7b-v3-1.

GGUF: https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF

Merged with the following mergekit configuration:

```yaml
slices:
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [0, 8]
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [4, 12]
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [9, 16]
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [13, 20]
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [17, 24]
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [21, 28]
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [25, 32]
merge_method: passthrough
dtype: float16
```
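As a quick sanity check on the resulting depth, summing the slice widths gives the merged layer count (a sketch, assuming mergekit's half-open `layer_range` convention, where `[0, 8]` means layers 0 through 7):

```python
# Slice definitions from the passthrough merge above.
layer_ranges = [
    (0, 8), (4, 12), (9, 16), (13, 20),
    (17, 24), (21, 28), (25, 32),
]

# Each range contributes (end - start) layers; overlapping ranges are
# duplicated in a passthrough merge, which is how the model grows past
# the 32 layers of a single Mistral-7B parent.
total_layers = sum(end - start for start, end in layer_ranges)
print(total_layers)  # → 51
```

51 transformer layers versus the parents' 32 is consistent with the jump from ~7B to ~11.4B parameters.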

Benchmarks are coming soon...
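The card does not specify a chat template. A minimal prompt-building sketch, assuming the merge inherits the ChatML format of its OpenHermes-2.5 parent (an assumption worth verifying against the tokenizer's chat template, since the neural-chat parent uses a different format):

```python
def chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML prompt (OpenHermes-2.5 style; assumed to apply
    to this merge -- verify against the repo's tokenizer config)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Hello!"))
```

The trailing `<|im_start|>assistant\n` leaves the prompt open for the model to generate the assistant turn.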

