---
language:
- en
license: llama3
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- not-for-all-audiences
base_model: unsloth/llama-3-8b-bnb-4bit
datasets:
- mpasila/LimaRP-PIPPA-Mix-8K-Context
- grimulkan/LimaRP-augmented
- KaraKaraWitch/PIPPA-ShareGPT-formatted
---
This is a merge of [mpasila/Llama-3-LiPPA-LoRA-8B](https://huggingface.co/mpasila/Llama-3-LiPPA-LoRA-8B).

The LoRA was trained in 4-bit with 8k context for 1 epoch, using [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B/) as the base model. The dataset used is [mpasila/LimaRP-PIPPA-Mix-8K-Context](https://huggingface.co/datasets/mpasila/LimaRP-PIPPA-Mix-8K-Context), which was built from [grimulkan/LimaRP-augmented](https://huggingface.co/datasets/grimulkan/LimaRP-augmented) and [KaraKaraWitch/PIPPA-ShareGPT-formatted](https://huggingface.co/datasets/KaraKaraWitch/PIPPA-ShareGPT-formatted).

This was trained on the base model, not the instruct model. The model trained on the instruct model with the same dataset is here: [mpasila/Llama-3-Instruct-LiPPA-8B](https://huggingface.co/mpasila/Llama-3-Instruct-LiPPA-8B)

From quick testing it appears to work fairly well for chatting.

### Prompt format: Llama 3 Instruct

Note that Unsloth changed the `assistant` role to `gpt` and the `user` role to `human`.

# Uploaded model

- **Developed by:** mpasila
- **License:** Llama 3 Community License
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
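
### Example usage

Below is a minimal, untested sketch of loading the merged model with the standard 🤗 Transformers API and building a prompt in the Llama 3 Instruct format described above. The repository id and the `human`/`gpt` role names are assumptions taken from this card; verify both against the tokenizer's chat template before relying on them.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for this merged model; replace with this repository's actual id.
model_name = "mpasila/Llama-3-LiPPA-8B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

# Llama 3 Instruct-style prompt, using the role names this card mentions
# (Unsloth's "human"/"gpt" instead of "user"/"assistant").
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "You are a helpful roleplay partner.<|eot_id|>"
    "<|start_header_id|>human<|end_header_id|>\n\n"
    "Hello! Who are you?<|eot_id|>"
    "<|start_header_id|>gpt<|end_header_id|>\n\n"
)

# add_special_tokens=False because <|begin_of_text|> is already in the prompt string.
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```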