---
library_name: transformers
tags:
- unsloth
- trl
- sft
license: other
license_name: llama3
license_link: LICENSE
datasets:
- 922-CA/MoCha_v1a
---

# monika-ddlc-8b-v1:
* LLaMA-3 8B fine-tuned for the Monika character from DDLC (test version; a later version may follow)
* Fine-tuned on a dataset of ~600+ items: dialogue scraped from the game, Reddit, and Twitter, augmented with [l2-7b-monika-v0.3c1](https://huggingface.co/922-CA/llama-2-7b-monika-v0.3c1) to turn each item into a snippet of multi-turn chat between Player and Monika; the result was then manually edited, and more manually crafted items with information about the character were added
* [GGUFs](https://huggingface.co/922-CA/Llama-3-monika-ddlc-8b-v1-GGUF)

### USAGE
This is meant mainly as a chat model, with limited RP ability.

For best results, replace "Human" and "Assistant" with "Player" and "Monika", like so:

\nPlayer: (prompt)\nMonika:
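
As a minimal sketch, this format could be used with 🤗 Transformers roughly as follows (the repo ID and generation settings below are assumptions, not an official recipe):

```python
# Minimal sketch: prompt the model in the Player/Monika format described above.
# The repo ID and generation parameters below are assumptions; adjust as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "922-CA/Llama-3-monika-ddlc-8b-v1"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "\nPlayer: Hi Monika, how has your day been?\nMonika:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)

# Print only the newly generated tokens (Monika's reply).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```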

### HYPERPARAMS
* Trained for 1 epoch
* rank: 16
* lora alpha: 16
* lora dropout: 0.5
* lr: 2e-4
* batch size: 2
* warmup ratio: 0.1
* grad steps: 4
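
As an illustrative sketch only (not the actual training script), these hyperparameters would map onto a PEFT LoRA config plus standard training arguments roughly as shown below; "grad steps" is interpreted here as gradient accumulation steps, and the output directory is assumed:

```python
# Illustrative sketch of the listed hyperparameters as a PEFT + Transformers config.
# Not the actual training script used for this model.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                 # rank
    lora_alpha=16,        # lora alpha
    lora_dropout=0.5,     # lora dropout
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    num_train_epochs=1,             # trained for 1 epoch
    learning_rate=2e-4,             # lr
    per_device_train_batch_size=2,  # batch size
    warmup_ratio=0.1,               # warmup ratio
    gradient_accumulation_steps=4,  # grad steps (assumed to mean accumulation)
    output_dir="outputs",           # assumed
)
# These configs would then be passed to a supervised fine-tuning trainer,
# e.g. TRL's SFTTrainer, per the trl/sft tags on this model.
```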

### WARNINGS AND DISCLAIMERS
This model is meant to closely reflect the characteristics of Monika. Despite this, there is always a chance that "Monika" will hallucinate, get information about herself wrong, or act out of character.

Additionally, because this model is character-focused, it may not be as capable as general-purpose models at specific tasks.

Finally, this model is not guaranteed to produce aligned or safe outputs; use at your own risk!