Daybreak-Mixtral-8x7b v24.02-7

An experimental model trained on a (currently) private ERP dataset of highly curated niche content (crestfall/daybreak as of 2024-02-10).

Not suitable for any audience.

The model was fine-tuned on top of mistralai/Mixtral-8x7B-Instruct-v0.1 and follows that model's instruction format.

Prompt format:

The model uses the Mixtral-8x7b-instruct format (see the base model), but users have reported that the Alpaca format gives better results. Try both and use whichever works for you.
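As a rough illustration, the two prompt styles mentioned above can be built as plain strings. This is a minimal sketch: the exact handling of system prompts and multi-turn conversations is an assumption here, so consult the base model's chat template for specifics.

```python
# Hypothetical helpers illustrating the two prompt styles users report
# trying with this model; turn handling is simplified to a single message.

def mixtral_instruct_prompt(user_message: str) -> str:
    """Mixtral-8x7B-Instruct style: wrap the turn in [INST] ... [/INST]."""
    return f"<s>[INST] {user_message} [/INST]"

def alpaca_prompt(instruction: str) -> str:
    """Alpaca style: labeled Instruction/Response sections."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

if __name__ == "__main__":
    print(mixtral_instruct_prompt("Describe the scene."))
    print(alpaca_prompt("Describe the scene."))
```

Generation continues from the end of either prompt, so leave the response area empty and let the model complete it.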

Training details:

The model was trained for 1.83 epochs (stopping at the eval-loss minimum, evaluated on 1% of the dataset) using Axolotl.

See axolotl.yml for details.

Quantization:

GGUF (46.7B params, llama architecture): 3-bit, 4-bit, 6-bit