SanjiWatsuki committed
Commit
8e844ee
1 Parent(s): 91cb942

Update README.md

Files changed (1)
  1. README.md +5 -3
README.md CHANGED
@@ -47,6 +47,8 @@ OpenChat-3.5 uses an abomination of a prompt format with "GPT4 Correct User/Assi
 
 Most model mergers like [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) just slam them together and toss the extra ChatML tokens, resulting in a half-Alpaca-like, half-ChatML-like Frankenstein's monster. For the most part, using Alpaca as the lingua franca just kinda works, but [there are exceptions that can make a generation go off the rails](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3/discussions/6). I found this to be a bit of an issue in certain SillyTavern test cases.
 
+Regardless, the strong Chat Arena performances from 7B models continue to lead me to believe they're the strongest base for an all-purpose model.
+
 ### The sauce (All You Need is DARE)
 
 **tl;dr: It's an OpenChat/NeuralChat merger with as much RP as possible stuffed in using the DARE TIES merger method.**
@@ -88,13 +90,13 @@ There's a lot to unpack here. I went with DARE TIES because it appeared to be a
 
 First, there are two high-density, high-weight models:
 
-chargoddard/loyal-piano-m7 is the easy primary model choice. It's an Alpaca-prompt-format model that scores highly, is very creative for a 7B, and is primarily trained on RP data.
+[chargoddard/loyal-piano-m7](https://huggingface.co/chargoddard/loyal-piano-m7) is the easy primary model choice. It's an Alpaca-prompt-format model that scores highly, is very creative for a 7B, and is primarily trained on RP data.
 
-Toten5/Marcoroni-neural-chat-7B-v2 is the unintuitive second model pick. It is a merger of mergers that chains back to an OpenChat/NeuralChat merge that was SLERPed back into NeuralChat a second time. Despite SLERPing NeuralChat in multiple times, it retains its high benchmark scores. I opted to pick this model as my base because I believed it was the well-benchmarking OpenChat/NeuralChat model closest to the O.G. NeuralChat, which has the most Alpaca-like prompt.
+[Toten5/Marcoroni-neural-chat-7B-v2](https://huggingface.co/Toten5/Marcoroni-neural-chat-7B-v2) is the unintuitive second model pick. It is a merger of mergers that chains back to an OpenChat/NeuralChat merge that was SLERPed back into NeuralChat a second time. Despite SLERPing NeuralChat in multiple times, it retains its high benchmark scores. I opted to pick this model as my base because I believed it was the well-benchmarking OpenChat/NeuralChat model closest to the O.G. NeuralChat, which has the most Alpaca-like prompt.
 
 By picking a density of 0.8, these models have a 96% chance of showing up for any given parameter in the TIES merger. This should ensure that there is a solid "base" of deltas from the base Mistral model that captures most of what makes these models good. High density with 0.3-0.4 weights has been shown to work well in mergers like [jan-hq/supermario-v2](https://huggingface.co/jan-hq/supermario-v2).
 
-Next, there are 3 RP models merged in with medium density: Undi95/Toppy-M-7B, NeverSleep/Noromaid-7b-v0.2, and athirdpath/NSFW_DPO_vmgb-7b. Toppy-M-7B is an easy pick for being a well-regarded 7B RP model, although it is a merger of many mergers, which might dilute its effectiveness as a lower-density merge. NeverSleep/Noromaid-7b-v0.2 pulls in the unique private Noromaid RP dataset. Finally, athirdpath/NSFW_DPO_vmgb-7b is another Frankenstein OpenChat/NeuralChat merger that happens to be DPOed on athirdpath's NSFW Alpaca pairs, which seemed like another good RP addition to the model (plus, maybe it tilts it toward being more Alpaca-flavored, idk).
+Next, there are 3 RP models merged in with medium density: [Undi95/Toppy-M-7B](https://huggingface.co/Undi95/Toppy-M-7B), [NeverSleep/Noromaid-7b-v0.2](https://huggingface.co/NeverSleep/Noromaid-7b-v0.2), and [athirdpath/NSFW_DPO_vmgb-7b](https://huggingface.co/athirdpath/NSFW_DPO_vmgb-7b). Toppy-M-7B is an easy pick for being a well-regarded 7B RP model, although it is a merger of many mergers, which might dilute its effectiveness as a lower-density merge. NeverSleep/Noromaid-7b-v0.2 pulls in the unique private Noromaid RP dataset. Finally, athirdpath/NSFW_DPO_vmgb-7b is another Frankenstein OpenChat/NeuralChat merger that happens to be DPOed on athirdpath's NSFW Alpaca pairs, which seemed like another good RP addition to the model (plus, maybe it tilts it toward being more Alpaca-flavored, idk).
 
 By picking a density of 0.4, these models should *largely* impart some of their flavor onto the merger. I suspect the density could go even lower and the models could be used even more like a LoRA-like merger on top.

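For readers unfamiliar with the method named in "The sauce": DARE TIES operates on the deltas between each fine-tuned model and the shared Mistral base. DARE randomly drops a fraction (1 - density) of each delta and rescales what survives, and TIES then resolves sign disagreements between models before combining. The snippet below is a minimal NumPy sketch of that idea for a single weight tensor; the helper names, toy shapes, and exact weighting are illustrative assumptions, not mergekit's actual implementation or API.

```python
# Minimal sketch of DARE + TIES for one weight tensor (illustrative only;
# this is not mergekit's code, and the helper names here are made up).
import numpy as np

rng = np.random.default_rng(0)

def dare(delta: np.ndarray, density: float) -> np.ndarray:
    """DARE: drop each delta entry with probability (1 - density), rescale the rest."""
    keep = rng.random(delta.shape) < density
    return np.where(keep, delta / density, 0.0)

def dare_ties(base, finetuned, densities, weights):
    """Sparsify each model's delta with DARE, elect a per-parameter sign, and
    average the surviving entries that agree with that sign (TIES-style)."""
    deltas = [w * dare(ft - base, d) for ft, d, w in zip(finetuned, densities, weights)]
    stacked = np.stack(deltas)                # (n_models, *tensor_shape)
    sign = np.sign(stacked.sum(axis=0))       # elected sign per parameter
    agree = (np.sign(stacked) == sign) & (stacked != 0)
    summed = np.where(agree, stacked, 0.0).sum(axis=0)
    count = np.maximum(agree.sum(axis=0), 1)  # avoid divide-by-zero
    return base + summed / count

# Toy tensors standing in for Mistral-7B weights and two fine-tunes.
base = rng.normal(size=(4, 4))
models = [base + rng.normal(scale=0.1, size=(4, 4)) for _ in range(2)]
print(dare_ties(base, models, densities=[0.8, 0.8], weights=[0.5, 0.3]))
```

Even as a toy, it shows why density matters: at density 0.8 most of a model's deltas survive the drop step, while at 0.4 the surviving deltas behave more like a sparse, LoRA-ish patch on top of the base.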
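As a sanity check on the "96% chance" figure in the diff above: if each model independently keeps any given parameter's delta with probability equal to its density (my reading of the drop step, not a statement from the mergekit docs), then with two models at density 0.8 the chance both drop the same parameter is 0.2 × 0.2 = 0.04, so it survives in at least one of them 96% of the time. The same arithmetic puts the three density-0.4 RP models at roughly 78%, which fits the intent that they flavor the merge rather than form its base.

```python
# Quick check of the survival probabilities implied by the chosen densities,
# assuming independent per-parameter drops (an assumption, not a mergekit guarantee).
def p_any_survives(densities):
    """Probability that at least one model keeps a given parameter's delta."""
    p_all_dropped = 1.0
    for d in densities:
        p_all_dropped *= (1.0 - d)
    return 1.0 - p_all_dropped

print(p_any_survives([0.8, 0.8]))       # 0.96   -> the two high-density models
print(p_any_survives([0.4, 0.4, 0.4]))  # ~0.784 -> the three medium-density RP models
```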