## Description

This repo contains the bf16 files of Nyxene-v2-11B. With these newer models, the extra 1% blend used in the previous version no longer seems necessary. And yes, it's a new version. Again.
## Models used
- berkeley-nest/Starling-LM-7B-alpha
- openaccess-ai-collective/DPOpenHermes-7B
- fblgit/una-cybertron-7b-v2
- chargoddard/loyal-piano-m7-cdpo
## Prompt template

After further testing, this is the one that works best:
```
<|system|>
Below is an instruction that describes a task. Write a response that appropriately completes the request.
<|user|>
{prompt}
<|assistant|>
```
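For convenience, here is a minimal Python sketch (my own illustration, not from the original card) that wraps a user question in this template before generation:

```python
# Hypothetical helper: builds a prompt string in the template above.
def build_prompt(user_prompt: str) -> str:
    return (
        "<|system|>\n"
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n"
        "<|user|>\n"
        f"{user_prompt}\n"
        "<|assistant|>\n"
    )

if __name__ == "__main__":
    print(build_prompt("Summarize what a passthrough merge does."))
```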
## The secret sauce
loyal-piano-cybertron-11B:

```yaml
slices:
  - sources:
      - model: fblgit/una-cybertron-7b-v2
        layer_range: [0, 24]
  - sources:
      - model: chargoddard/loyal-piano-m7-cdpo
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
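This is a passthrough (layer-stacking) merge: mergekit simply concatenates the selected layer ranges. A quick sanity check of the layer arithmetic (assuming both sources are standard 32-layer Mistral-7B derivatives):

```python
# Layer count for the stacked model (assumption: 32-layer Mistral bases).
first = 24 - 0    # layers 0-23 from una-cybertron-7b-v2
second = 32 - 8   # layers 8-31 from loyal-piano-m7-cdpo
print(first + second)  # 48 layers, which is what puts the result at ~11B
```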
Starling-DPOHermes-11B:

```yaml
slices:
  - sources:
      - model: berkeley-nest/Starling-LM-7B-alpha
        layer_range: [0, 24]
  - sources:
      - model: openaccess-ai-collective/DPOpenHermes-7B
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Nyxene-11B:

```yaml
slices:
  - sources:
      - model: loyal-piano-cybertron-11B
        layer_range: [0, 48]
      - model: Starling-DPOHermes-11B
        layer_range: [0, 48]
merge_method: slerp
base_model: dolphin-juanako-11B
parameters:
  t:
    - filter: lm_head
      value: [0.75]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.75]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
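For intuition: in a slerp merge, `t` is the interpolation factor between the two models' tensors (0 keeps the `base_model`, 1 the other model), and a two-element list such as `[0.75, 0.25]` is spread as a gradient of `t` across layer depth. Here is a rough sketch of the underlying operation on flattened weight tensors (illustrative only, not mergekit's exact code):

```python
import numpy as np

# Illustrative spherical linear interpolation between two flattened
# weight tensors; an assumption-laden sketch, not mergekit's source.
def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    dot = float(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    omega = np.arccos(dot)           # angle between the two tensors
    if omega < eps:                  # nearly parallel: plain lerp is fine
        return (1 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b

# t=0 returns the first tensor, t=1 the second, t=0.5 the arc midpoint.
x, y = np.random.randn(16), np.random.randn(16)
assert np.allclose(slerp(0.0, x, y), x)
```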
I used mergekit for all the merges described here.

Thanks to Undi95 for the original 11B Mistral merge recipe.
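To reproduce a merge, each block above can be saved as a YAML file and passed to mergekit's CLI, e.g. something like `mergekit-yaml config.yml ./output-model` (add `--cuda` to merge on GPU); check mergekit's README for the current flags.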