RolePlay LLM
Collection
A collection of models and datasets for fine-tuning LLMs to role-play.
This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.2-GPTQ on the hieunguyenminh/roleplay dataset.
This model can adapt to any type of character and provide answers personalized to that character.
It is trained with supervised fine-tuning; DPO training is planned for the future.
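For supervised fine-tuning on role-play data, each example is typically rendered into the base model's instruction template, with the character persona as the system context. A minimal sketch of that formatting step, using Mistral's `[INST] ... [/INST]` template (the field names and example persona below are hypothetical; the actual hieunguyenminh/roleplay schema may differ):

```python
# Sketch: wrap a character persona and a user turn in Mistral's instruct
# template, as one might do when preparing role-play SFT examples.
# The persona text and message here are invented for illustration.

def build_roleplay_prompt(character_description: str, user_message: str) -> str:
    """Render a persona-grounded turn in the Mistral [INST] format."""
    system = f"You are {character_description} Stay in character at all times."
    return f"<s>[INST] {system}\n\n{user_message} [/INST]"

prompt = build_roleplay_prompt(
    "Captain Ahab, a brooding whaling captain.",
    "What drives you to chase the white whale?",
)
print(prompt)
```

Keeping the persona inside the instruction block (rather than as a separate turn) matches how single-turn instruct models like Mistral-7B-Instruct expect system-style context to be supplied.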
Training loss after 400 steps: 0.73
Base model: mistralai/Mistral-7B-Instruct-v0.2