Great model!
This is one of the best models I've ever used for RP/ERP (other than it spamming im_end in every reply). I do wish there were more detail on what the difference is between your 1.0 and 1.1 models when you release them.
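In case it helps anyone else hitting the same leak, here's a minimal client-side sketch that just trims the reply at the first leaked stop marker. It's an assumption that your frontend surfaces the raw text this way, and the exact marker string ("<|im_end|>" vs a bare "im_end") may differ in your setup:

```python
# Minimal sketch: trim a leaked "im_end" marker from a model reply.
# Assumes the raw completion text is in `reply`; marker strings are assumptions
# and may need adjusting to whatever your frontend actually shows.

STOP_MARKERS = ["<|im_end|>", "im_end"]

def clean_reply(reply: str) -> str:
    # Cut the reply at the earliest leaked marker, if any, then tidy whitespace.
    for marker in STOP_MARKERS:
        idx = reply.find(marker)
        if idx != -1:
            reply = reply[:idx]
    return reply.rstrip()

print(clean_reply("Sure, let's continue the scene.<|im_end|>"))
```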
This model easily sits right up there next to Big Tiger / Gemmasutra as far as refusals and censoring go. I usually run it with 32k context but have used 64k as well. Works great.
When I get the time later this week I'll look into making exl2 quants of a couple of your models. The ones that already have exl2 quants are great, but a lot of them are too big to fit on a 2080 Ti with high context; the bpw needs to be lower on the larger models (rough sizing sketch below).
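For anyone wondering how low the bpw needs to go, here's some back-of-the-envelope arithmetic. It's only a rough sketch: the KV-cache budget and the parameter counts below are illustrative assumptions, and it ignores activation buffers and other overhead.

```python
# Rough VRAM arithmetic for picking an exl2 bpw target on an 11 GB card (2080 Ti).
# The kv_cache_gb budget and the model sizes below are assumptions, not measurements.

def weight_vram_gb(n_params_billion: float, bpw: float) -> float:
    """Approximate weight footprint: params (billions) * bits-per-weight / 8 bits-per-byte."""
    return n_params_billion * bpw / 8.0

vram_gb = 11.0       # 2080 Ti
kv_cache_gb = 3.0    # assumed budget for a long-context cache plus misc overhead

for size_b in (8, 12, 22):
    for bpw in (3.0, 4.0, 5.0):
        weights = weight_vram_gb(size_b, bpw)
        fits = weights + kv_cache_gb <= vram_gb
        print(f"{size_b}B @ {bpw} bpw -> ~{weights:.1f} GB weights, fits with cache: {fits}")
```

By this estimate a ~12B model is comfortable around 4.0 bpw on 11 GB, while the larger models need noticeably lower bpw before a high-context cache fits alongside the weights.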
Agree. This model is so damn good! It constantly belts out creative responses, and chatting with it feels the most human I've had, with consistent quality. I primarily use it for chatting and the context/prompt following is great: it doesn't lecture you, doesn't sound like ChatGPT, and in effect you feel like "it gets you". Thank you @TheDrummer
+1. Great general model for everyday use! Thanx!