Cakrawala-123B
Where Worlds Converge and Adventures Begin!
What's Special About This Model?
Cakrawala-123B is a fine-tuned variant of Mistral-Large-Instruct-2411, optimised for roleplaying conversations and character interactions. It is trained to produce detailed, contextually appropriate character dialogue with vivid descriptions of physical actions, expressions, and emotional states, while maintaining consistent character voices and perspectives throughout extended interactions.
The Secret Sauce
Training Diet:
- Fed with NarrativAI/CakrawalaRP dataset
- Conversation pairs with detailed interactions
- Focused on maintaining character consistency and rich descriptions
Tech Wizardry:
- Base Model: Mistral-Large-Instruct-2411
- Fine-tuned using QLoRA
- Trained over 2 epochs
Training Parameters
- Gradient Accumulation Steps: 1
- Micro Batch Size: 4
- Learning Rate: 0.000015
- Optimizer: AdamW
- Scheduler: Cosine
- Mixed Precision: BF16 & FP16 with TF32 support
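The card lists a peak learning rate of 1.5e-5 with a cosine scheduler. As a minimal sketch of what that decay curve looks like (warmup steps and minimum rate are not stated on the card, so they are assumptions here):

```python
import math

def cosine_lr(step, total_steps, peak_lr=1.5e-5, min_lr=0.0):
    """Cosine-decay schedule: starts at peak_lr and decays to min_lr
    over total_steps (warmup omitted for brevity)."""
    progress = step / total_steps
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

print(cosine_lr(0, 1000))    # 1.5e-05 — the configured peak
print(cosine_lr(500, 1000))  # ~7.5e-06 — half the peak at the midpoint
```

In practice a framework scheduler (e.g. a cosine schedule from the training library) would handle this, but the shape of the decay is the same.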
Under the Hood
- LoRA Configuration:
  - Rank (r): 32
  - Alpha: 64
  - Dropout: 0.1
- Sequence Length: 2048
- Gradient Checkpointing: Enabled
- Flash Attention: Enabled
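With rank 32 and alpha 64, each adapted weight matrix gets a low-rank update W' = W + (alpha/r) · B·A while the frozen base weight stays untouched. A quick back-of-envelope on what that costs (the 12288 × 12288 projection shape below is a hypothetical example, not taken from the actual architecture):

```python
def lora_trainable_params(d_in, d_out, r=32):
    """Trainable parameters added by one LoRA adapter pair:
    A is (r x d_in) and B is (d_out x r)."""
    return r * d_in + d_out * r

def lora_scaling(alpha=64, r=32):
    """Scaling factor applied to the low-rank update B @ A."""
    return alpha / r

adapter = lora_trainable_params(12288, 12288)  # 786,432 adapter params
full = 12288 * 12288                           # ~151M params in the full matrix
print(adapter / full)   # ~0.005 — roughly half a percent of the matrix is trained
print(lora_scaling())   # 2.0 (alpha 64 / rank 32)
```

This is why QLoRA fits a 123B fine-tune into modest memory: only the small A/B pairs are trained, over a quantized frozen base.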
License & Credits
- Licensed under MIT
- Based on mistralai/Mistral-Large-Instruct-2411
Built with ❤️ for roleplayers, by roleplayers
Model tree for FluffyKaeloky/Cakrawala-123B-exl2-3.5bpw
- Base model: mistralai/Mistral-Large-Instruct-2411
- Fine-tuned model: NarrativAI/Cakrawala-123B
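This repository is a 3.5 bits-per-weight EXL2 quantization of the 123B fine-tune. A rough estimate of the weight footprint that implies (ignoring KV cache, activations, and per-group quantization metadata, which all add overhead):

```python
def quantized_weight_gib(n_params, bits_per_weight):
    """Approximate weight storage for a quantized model in GiB,
    ignoring activation memory, KV cache, and quantization metadata."""
    return n_params * bits_per_weight / 8 / 2**30

print(quantized_weight_gib(123e9, 3.5))   # ~50 GiB at 3.5 bpw
print(quantized_weight_gib(123e9, 16.0))  # ~229 GiB for the unquantized BF16 weights
```

The roughly 4.5× reduction versus BF16 is what makes a 123B model loadable on a multi-GPU consumer setup.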