Mistral-NeMo-12B-Lyra-v4, a variation of Lyra-v4a1, layered over Lyra-v3, which was built on top of Lyra-v2a2, which itself was built upon Lyra-v2a1.
Automatic Approval
If you agree, please place future merges / derivatives under the cc-by-nc-4.0 license. Thank you.
Model Versioning
[See Previous Models]
|
Lyra-v4a1
|
------------> Lyra-v4 [Separate RL step targeting Instruct and Coherency over base Nemo instead of SFT first; the result is merged with Lyra-v4a1, which fixes most quant-based issues. Somehow.]
This model uses ChatML, or any of its variants that were included in previous versions.
<|im_start|>system
This is the system prompt.<|im_end|>
<|im_start|>user
Instructions placed here.<|im_end|>
<|im_start|>assistant
The model's response will be here.<|im_end|>
--------------------------------------------------
[INST]system
This is another system prompt.[/INST]
[INST]user
Your instructions placed here.[/INST]
[INST]assistant
The model's response will be here.[/INST]
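For reference, here is a minimal sketch of building a ChatML prompt through transformers' chat-template support. The repo id is an assumption (point it at wherever you pulled the weights from), and it assumes the tokenizer config on the Hub ships the ChatML template shown above:

```python
# Minimal sketch: formatting a ChatML conversation with transformers.
# "Sao10K/MN-12B-Lyra-v4" is an assumed repo id -- swap in your local path if needed.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Sao10K/MN-12B-Lyra-v4")

messages = [
    {"role": "system", "content": "This is the system prompt."},
    {"role": "user", "content": "Instructions placed here."},
]

# add_generation_prompt appends the opening <|im_start|>assistant tag
# so the model continues with its own reply.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```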
Recommended Samplers:
Temperature: 0.6 - 1 # Make sure min_p is set before Temperature in Sampler Orders
min_p: 0.1 - 0.2 # Crucial for NeMo
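If you're driving the model with transformers directly rather than a frontend, those samplers map onto generate() roughly like this. Treat it as a sketch: the repo id is an assumption, and min_p needs a reasonably recent transformers release. Backends like koboldcpp or text-generation-webui expose the same two knobs in their sampler settings, where the order note above applies.

```python
# Rough sketch of the recommended samplers via transformers.generate().
# Assumes a recent transformers release that supports min_p; repo id is assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Sao10K/MN-12B-Lyra-v4"  # assumption -- adjust to your copy
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

# ChatML prompt, matching the template above.
prompt = "<|im_start|>user\nInstructions placed here.<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

out = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,   # anywhere in the 0.6 - 1 range
    min_p=0.1,         # crucial for NeMo
    max_new_tokens=256,
)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```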
Recommended Stopping Strings:
<|im_end|>
</s>
[/INST]
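If you're talking to an OpenAI-compatible backend (koboldcpp, the llama.cpp server, TabbyAPI, and friends) rather than typing stops into a frontend, these strings just go in the request's stop list. A rough sketch; the URL, port, and min_p support are assumptions that depend on your backend:

```python
# Rough sketch: recommended samplers + stopping strings against an
# OpenAI-compatible completions endpoint. URL/port are assumptions.
import requests

payload = {
    "model": "Lyra-v4",  # some backends ignore this, others require it
    "prompt": "<|im_start|>user\nInstructions placed here.<|im_end|>\n<|im_start|>assistant\n",
    "temperature": 0.8,
    "min_p": 0.1,
    "max_tokens": 256,
    "stop": ["<|im_end|>", "</s>", "[/INST]"],
}
resp = requests.post("http://127.0.0.1:5001/v1/completions", json=payload, timeout=120)
print(resp.json()["choices"][0]["text"])
```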
Notes
- I think I fixed the extra-token issue some users have been facing, while retaining everything else? It's some error alright.
- If you're using XML tags, you may see weird malformed stopping strings. Just add them to your current list and move on.
- It's pretty nice, imo. I've been messing around with it a lot.
- Make sure the ChatML template is correct; I think there are some issues with the one used in SillyTavern which might cause improper replies.
Reup due to issues I faced on my page, which crashed every time I went in here. It works fine now, so yay.
Changes: the weights are the same, literally. Go and check the checksums.
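If you want to check that yourself, here's a quick sketch for hashing the local shards; the file glob is an assumption, so adjust it to however your download is named. The Hub file listing also shows a SHA256 for each LFS file you can compare against.

```python
# Quick sketch: SHA256 the local safetensors shards to confirm the weights
# are unchanged. The glob pattern is an assumption about your local file names.
import hashlib
from pathlib import Path

def sha256(path: Path, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

for shard in sorted(Path(".").glob("model-*.safetensors")):
    print(shard.name, sha256(shard))
```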
Credit to ArliAI for the tokenizer configs and JSON I yoinked to fix the tokenizer mess I made in v3, which is why config.json had their model name.
The same stuff as before still applies.