---
license: apache-2.0
datasets:
  - openerotica/gorgon-lima-v0.1
language:
  - en
tags:
  - Erotica
  - Porn
  - NSFW
  - Summarization
  - Ecommerce
  - SEO
---

This is an experimental LIMA-style model trained on a small subset of freedom-rp and erotica-analysis-16k. Because the dataset is much smaller (about 1,000 samples from each original dataset), it was much easier to edit and clean thoroughly. I also used a slightly lower learning rate of 0.00015.

The prompt format is ChatML.
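
Below is a minimal sketch of how a ChatML prompt for this model could be assembled and run with the transformers library. The repository id, system prompt, and sampling parameters are placeholders for illustration, not tested or recommended settings.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; substitute the actual Hugging Face repository name.
model_id = "openerotica/Gorgon-7b-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# ChatML wraps each turn in <|im_start|>role ... <|im_end|> tags,
# ending with an open assistant turn for the model to complete.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful writing assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a short product description for a scented candle.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```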

I have not tested this model yet, but I hope to use it to help create more training data for specific genres.

Please consider subscribing to my Patreon or buying a giant candle dick on my Etsy to show your support.

https://www.patreon.com/openerotica

http://openerotica.etsy.com/