Base Model: GPT4-x-Alpaca full fine-tune by Chavinlo -> https://huggingface.co/chavinlo/gpt4-x-alpaca

LoRA fine-tune using the Roleplay Instruct from GPT4 generated dataset -> https://github.com/teknium1/GPTeacher/tree/main/Roleplay

The LoRA was merged into the model.

Prompt it the same way as Alpaca / GPT4-x-Alpaca:

```
### Instruction:

### Response:
```

or

```
### Instruction:

### Input:

### Response:
```
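As a sketch of how the Alpaca-style template above can be assembled in practice, here is a small hypothetical helper (the function name and its parameters are illustrative, not part of this model's tooling):

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Format a prompt in the Alpaca / GPT4-x-Alpaca style described above."""
    if input_text:
        # Three-section variant: instruction, optional input, then response header.
        return (
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            f"### Response:\n"
        )
    # Two-section variant: instruction followed directly by the response header.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_prompt("Roleplay as a pirate greeting a traveler.")
print(prompt)
```

The model generates its reply after the final `### Response:` header, so the prompt should always end there.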