# LoRA Adapter Layers
## Uploaded model
- Developed by: student-abdullah
- Finetuned from model: meta-llama/Meta-Llama-3.1-8B
- Created on: 27th September, 2024
- Full model: student-abdullah/Llama3.1_Medicine_Hinglish_Fine-Tuned_27-09_8bit_gguf
## Model Description
This LoRA adapter is fine-tuned from the meta-llama/Meta-Llama-3.1-8B base model to specialise in generic medications available under the PMBJP (Pradhan Mantri Bhartiya Janaushadhi Pariyojana) scheme. The fine-tuning used the following hyperparameters (a hedged configuration sketch follows the list):
- Fine Tuning Template: Llama 3.1 Q&A
- Max Tokens: 512
- LoRA Alpha: 32
- LoRA Rank (r): 128
- Learning rate: 2e-4
- Gradient Accumulation Steps: 2
- Batch Size: 12
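
The training code is not included in this card; below is a minimal sketch of how the listed hyperparameters could map onto a PEFT `LoraConfig` and `TrainingArguments`, assuming the Hugging Face `transformers`/`peft` stack. The target modules, output directory, and tokenisation step are assumptions for illustration, not taken from this repository.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Base model named in this card
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B")

# LoRA hyperparameters from the list above; target_modules is an assumption
lora_config = LoraConfig(
    r=128,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

# Optimisation settings from the list above; output_dir is hypothetical
training_args = TrainingArguments(
    output_dir="llama3.1-medicine-hinglish-lora",
    learning_rate=2e-4,
    per_device_train_batch_size=12,
    gradient_accumulation_steps=2,
)

# The 512-token limit would be applied when tokenising the Q&A pairs,
# e.g. tokenizer(..., truncation=True, max_length=512).
```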
## Model Quantitative Performance
- Training loss: 0.1368 (at the final, 300th epoch)
## Limitations
- This repository contains only the LoRA adapter layers, not a merged standalone model; the adapter must be loaded on top of the base model (see the loading sketch below)
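
Because only adapter weights are published here, a minimal loading sketch with the Hugging Face `peft` library (an assumption; any PEFT-compatible loader works) would attach the adapter to the base model at inference time:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model named in this card, then attach the LoRA adapter layers
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
model = PeftModel.from_pretrained(
    base,
    "student-abdullah/Llama3.1_Medicine_Hinglish_Fine-Tuned_27-09_LoRA_Layers",
)
model.eval()
```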
## Model tree
- Adapter repository: student-abdullah/Llama3.1_Medicine_Hinglish_Fine-Tuned_27-09_LoRA_Layers
- Base model: meta-llama/Llama-3.1-8B