---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
- not-for-all-audiences
base_model: unsloth/mistral-7b-v0.2-bnb-4bit
datasets:
- mpasila/PIPPA-ShareGPT-formatted-named
- KaraKaraWitch/PIPPA-ShareGPT-formatted
---
This is a merge of [mpasila/PIPPA-Named-LoRA-7B](https://huggingface.co/mpasila/PIPPA-Named-LoRA-7B/).

The LoRA was trained in 4-bit with 8k context for 1 epoch, using [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf/) as the base model. The dataset used is [a modified](https://huggingface.co/datasets/mpasila/PIPPA-ShareGPT-formatted-named) version of [KaraKaraWitch/PIPPA-ShareGPT-formatted](https://huggingface.co/datasets/KaraKaraWitch/PIPPA-ShareGPT-formatted).

### Prompt format: ChatML

# Uploaded model

- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.2-bnb-4bit

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
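Since the card specifies ChatML as the prompt format, here is a minimal sketch of how a prompt for this model might be assembled. The markers below follow the standard ChatML convention (`<|im_start|>` / `<|im_end|>`); in practice, the tokenizer's `apply_chat_template` method would handle this, so the helper function here is purely illustrative.

```python
def build_chatml_prompt(messages):
    """Assemble a ChatML-style prompt from a list of role/content dicts.

    Illustrative only -- the tokenizer's chat template is authoritative.
    """
    parts = []
    for msg in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # End with an opened assistant turn to cue the model's reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


messages = [
    {"role": "system", "content": "You are a helpful roleplay assistant."},
    {"role": "user", "content": "Hello!"},
]
prompt = build_chatml_prompt(messages)
print(prompt)
```

With a loaded tokenizer, `tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)` should produce an equivalent string.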