Feedback on using Etheria for RP

#2
by OrangeApples - opened

First off, thanks @Steelskull for your work. I tried this model (TheBloke's Q4_K_M GGUF), but it seems too experimental for actual RP at the moment. Using your recommended sampler settings and the ChatML format in SillyTavern, I got misspelled words, and the model would speak for the user even though I instructed it not to. Someone on Reddit also mentioned experiencing repetition with Etheria, but I didn't use it long enough to run into that myself.
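For reference, this is the standard ChatML turn structure SillyTavern was applying (the system prompt below is just a placeholder using SillyTavern's `{{char}}`/`{{user}}` macros, not my exact prompt):

```
<|im_start|>system
You are {{char}}. Stay in character and never write actions or dialogue for {{user}}.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
```

Even with the "never write for the user" instruction in the system turn, the model kept speaking for me.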

Currently, I see no reason to use this model over a good 34B like Nous-Capybara-limarpv3, which is smaller yet generates higher-quality and more consistent responses for RP. However, given how smart the 34B Yi models are, I'm looking forward to future versions of Etheria that will hopefully reach the heights Goliath did for Llama 2 models.

Thanks for the review! I appreciate you taking the time to let me know.

I believe there are issues with the cloned layers, as this was an extreme test of what such a model can do, but I'm planning to finetune this model on my Aether dataset (once I work out the bugs). I'll first test the dataset on Aurora to see how well it functions. The dataset will focus on general knowledge and have a pretty heavy RP/ERP push.
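To illustrate what I mean by cloned layers: frankenmerges like this are typically built with mergekit's passthrough method, stacking overlapping layer ranges from the base model so some layers appear twice. A simplified sketch (the model name and ranges below are illustrative, not Etheria's exact recipe):

```yaml
# Passthrough "frankenmerge" sketch: overlapping slices from the same
# base are stacked end to end, so layers 20-39 end up duplicated.
slices:
  - sources:
      - model: yi-34b-base   # placeholder model name
        layer_range: [0, 40]
  - sources:
      - model: yi-34b-base
        layer_range: [20, 60]  # overlaps the slice above: these are the cloned layers
merge_method: passthrough
dtype: bfloat16
```

The cloned layers are never trained to work in their new positions, which is likely where the misspellings and instruction-following problems come from; the finetune is meant to smooth that over.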

I'm not knowledgeable about the technical aspects of merging models, but even I can tell this is a pretty extreme test, given the lack of 55B Yi frankenmerges on Hugging Face. Good luck with your testing! Hope it works out.
