Trappu committed
Commit 1554ad1 (1 parent: 5b82ca0)

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -109,7 +109,7 @@ model-index:
 
  This model is a merge between [Trappu/Nemo-Picaro-fixed](https://huggingface.co/Trappu/Nemo-Picaro-fixed), a model trained on my own little dataset free of synthetic data, which focuses solely on storywriting and scenario prompting (Example: `[ Scenario: bla bla bla; Tags: bla bla bla ]`), and [anthracite-org/magnum-v2-12b](https://huggingface.co/anthracite-org/magnum-v2-12b).
 
- The reason why I decided to merge it with Magnum (and don't recommend Picaro alone) is because that model, aside from its obvious flaws (rampant impersonation, stupid, etc...) is a one-trick pony and will really rough for the average LLM user to handle. The idea was to have Magnum work as some sort of stabilizer to fix the issues that emerge from the lack of multiturn/smart data. It worked, I think. I enjoy the outputs and it's smart enough to work with.
+ The reason why I decided to merge it with Magnum (and don't recommend Picaro alone) is because that model, aside from its obvious flaws (rampant impersonation, stupid, etc...), is a one-trick pony and will be really rough for the average LLM user to handle. The idea was to have Magnum work as some sort of stabilizer to fix the issues that emerge from the lack of multiturn/smart data in Picaro's dataset. It worked, I think. I enjoy the outputs and it's smart enough to work with.
 
  But yeah the goal of this merge was to make a model that's both good at storytelling/narration but also fine when it comes to other forms of creative writing such as RP or chatting. I don't think it's quite there yet but it's something for sure.
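
For context on the scenario-prompting format mentioned in the hunk above, here is a minimal, hypothetical sketch of querying the merged model with a `[ Scenario: ...; Tags: ... ]` style prompt through Hugging Face `transformers`. The repo id, prompt text, and sampling settings below are placeholders, not values taken from this commit or the model card.

```python
# Hypothetical usage sketch -- repo id, prompt, and sampling settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Trappu/your-merged-model"  # placeholder: substitute the actual merged repo

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Scenario-style prompt in the `[ Scenario: ...; Tags: ... ]` format described above.
prompt = "[ Scenario: A retired detective takes one last case; Tags: mystery, slow burn, first person ]\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```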