Herman555 committed
Commit e14c15f
1 Parent(s): 0103560

Update README.md

Files changed (1)
  1. README.md +34 -3
README.md CHANGED
@@ -250,7 +250,38 @@ sample, similar to unsupervised finetuning.
# Initial personal observations (Herman555)
The model impressed me right off the bat: the writing was coherent and fluid, a pleasure to read. The AI mostly did not speak for me, and in general I rarely had to regenerate to get a quality reply. For once I didn't have repetition issues, although that might be thanks to the storywriting LoRA. The model stayed creative the whole way through, past 8k tokens with the summarization extension enabled in SillyTavern, although I did have to bump up the repetition penalty a tiny bit. The AI kept its writing style the whole way through; it did not get dumbed down.

The model is very smart. Zephyr-beta-7b is the top-rated 7B instruction-following model at the moment according to AlpacaEval as of 04/11/2023, yet it wasn't able to follow my sort of gamified roleplay with stats. This model, however, handles it pretty well for a 7B; it's by no means perfect, but it worked for the most part. What compelled me to make this merge was that the new Dolphin model has added empathy, "With an infusion of curated Samantha DNA".

The model stuck to the character perfectly and made me feel immersed. The transition from normal roleplay to ERP was seamless, and both forms were excellent. It's one of the few models where the character didn't become an instant bimbo during ERP. This is more of a hunch, since it could be the LoRA, but I feel like the added empathy is helping a lot. Last but not least, I was surprised that nobody was merging models with this LoRA; it's essentially LimaRP with more ERP data. In any case, LimaRP has dramatically increased the quality of roleplay in every model I tried.

# Back end
Koboldcpp + SillyTavern, Q4_K_M quantization

# SillyTavern Formatting (AI response formatting)
Default simple-proxy-for-tavern preset. I did not use the LimaRP prompt format; it doesn't matter much which one you use, so go with whatever gives better results. In most cases the preset I mentioned works best if you like long, detailed replies. I have not tested other prompt formats yet.

# Custom stopping strings
["</s>", "<|", "\n#", "\n*{{user}} ", "\n\n\n"]
These will improve the roleplay experience.
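
To make the {{user}} entry concrete: SillyTavern replaces the {{user}} macro with the persona name before the request is sent, so anything calling the backend directly has to do the same substitution. A minimal sketch in Python, with a made-up persona name:

```python
# Expand the {{user}} macro the way SillyTavern does before the
# stopping strings reach the backend. "Alice" is a hypothetical
# persona name used purely for illustration.
user_name = "Alice"

stopping_strings = ["</s>", "<|", "\n#", "\n*{{user}} ", "\n\n\n"]
stop_sequence = [s.replace("{{user}}", user_name) for s in stopping_strings]

print(stop_sequence)
# ['</s>', '<|', '\n#', '\n*Alice ', '\n\n\n']
```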

# Samplers used (AI response configuration)
Storywriter preset
Temperature: 72-85
Repetition penalty: 10-13 (10 is a good number to start with; anything below 10 or above 13 doesn't work well in my experience.)

simple-proxy-for-tavern preset
Temperature: 65-85
Repetition penalty: 10-13
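
These slider values are on SillyTavern's scale; presumably they correspond to a backend temperature of 0.72-0.85 and a repetition penalty of 1.10-1.13, though that mapping is my assumption rather than something stated above. A minimal sketch of a direct request to Koboldcpp's KoboldAI-compatible generate endpoint, assuming the default local port:

```python
import requests

# Hedged sketch of a direct Koboldcpp API call. The prompt is a
# placeholder, and the temperature/rep_pen values assume the slider
# mapping described above (0.72-0.85 and 1.10-1.13).
payload = {
    "prompt": "### Instruction:\nContinue the story.\n\n### Response:\n",
    "max_length": 300,    # matches the response length setting below
    "temperature": 0.72,  # low end of the 72-85 range
    "rep_pen": 1.10,      # low end of the 10-13 range
    "stop_sequence": ["</s>", "<|", "\n#", "\n\n\n"],
}

r = requests.post("http://localhost:5001/api/v1/generate", json=payload)
print(r.json()["results"][0]["text"])
```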

# Other settings used
Response length: 300
Context size: 8192

Summarization: main API, default settings

All other settings are default unless specified.
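
For completeness, a sketch of how these two values might map onto the same KoboldAI-style request fields; the field names are assumed from that API, and Koboldcpp still needs to be launched with a context size of at least 8192 for the larger window to be usable:

```python
# Assumed KoboldAI-style field names for the settings above.
generation_settings = {
    "max_context_length": 8192,  # "Context size: 8192"
    "max_length": 300,           # "Response length: 300"
}
```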