---
base_model: meta-llama/Meta-Llama-3.1-8B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
datasets:
- student-abdullah/BigPharma-Local_Meds_Dataset
---
|
|
|
|
|
# Uploaded model

- **Developed by:** student-abdullah
- **License:** apache-2.0
- **Fine-tuned from model:** meta-llama/Meta-Llama-3.1-8B
|
|
|
---

# Acknowledgement

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
|
|
---

# Model Use Case

Inflation has significantly driven up the cost of essential goods in India, compounded by high profit margins in the pharmaceutical industry. Vital medications have become increasingly expensive, burdening individuals financially and making treatment harder to access. To address this, the Government of India launched the Pradhan Mantri Bhartiya Janaushadhi Pariyojana (PMBJP) in 2008, which provides generic alternatives of comparable quality and effectiveness at affordable prices through Janaushadhi Kendras. As of 30 June 2024, more than 12,616 Kendras offer about 2,047 drugs and 300 surgical items. Despite this, low public awareness of these alternatives and of Kendra locations limits the initiative's effectiveness.
|
|
|
---

# Model Description

This model is fine-tuned from the meta-llama/Meta-Llama-3.1-8B base model to enhance its ability to generate relevant and accurate responses about generic medications available under the PMBJP scheme. The fine-tuning process used the following hyperparameters:

- Max Tokens: 512
- LoRA Alpha: 32
- LoRA Rank (r): 128
- Gradient Accumulation Steps: 32
- Batch Size: 4
- Quantization: 16-bit
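As a rough illustration of how these hyperparameters interact: the effective batch size per optimizer step is the per-device batch size times the gradient accumulation steps, and the LoRA update is scaled by alpha / r. The sketch below uses illustrative variable names, not the exact training script:

```python
# Hyperparameters reported for this fine-tune (names are illustrative,
# taken from the list above, not from the actual training code).
hyperparams = {
    "max_seq_length": 512,              # Max Tokens
    "lora_alpha": 32,                   # LoRA Alpha
    "lora_rank": 128,                   # LoRA Rank (r)
    "gradient_accumulation_steps": 32,  # Gradient Accumulation Steps
    "per_device_train_batch_size": 4,   # Batch Size
    "quantization_bits": 16,            # Quantization
}

# Effective batch size per optimizer step = micro-batch size x accumulation steps.
effective_batch_size = (
    hyperparams["per_device_train_batch_size"]
    * hyperparams["gradient_accumulation_steps"]
)

# Scaling factor applied to the LoRA adapter update: alpha / r.
lora_scaling = hyperparams["lora_alpha"] / hyperparams["lora_rank"]

print(effective_batch_size)  # 128
print(lora_scaling)          # 0.25
```

With accumulation, gradients from 32 micro-batches of 4 sequences are summed before each weight update, so each optimizer step effectively sees 128 sequences.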
|
|
|
---

# Model Quantitative Performance

- Training Loss: 0.168 (at the final epoch, epoch 60)
|
|
|
---

# Limitations

- Token Limit: With a maximum of 512 tokens, the model may not handle very long queries or contexts effectively.
- Training Data: The model's performance depends on the quality and coverage of the fine-tuning dataset, which may limit its generalizability to contexts or medications not covered in the dataset.
- Potential Biases: As with any model fine-tuned on specific data, the model may reflect biases present in the training dataset.
|
|