MedGPT-Llama3.1-8B-v.1
- This model is a fine-tuned version of unsloth/Meta-Llama-3.1-8B on a dataset created by Valerio Job together with GPs, based on real medical data.
- Version 1 (v.1) is the first release of MedGPT; its training dataset was deliberately kept small and simple, with only 60 examples.
- This repo includes the model in 16-bit format as well as its LoRA adapters (a loading sketch follows this list). A separate repo, valeriojob/MedGPT-Llama3.1-8B-BA-v.1-GGUF, contains quantized versions of this model in GGUF format.
- This model was trained 2x faster with Unsloth and Hugging Face's TRL library.
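As a minimal sketch of how the 16-bit weights could be loaded for inference with the transformers library (the repo id is taken from this card; the prompt, dtype, and device placement are illustrative and may need adjusting for your hardware):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id as listed on this model card.
model_id = "valeriojob/MedGPT-Llama3.1-8B-BA-v.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # fall back to torch.float16 on GPUs without bf16
    device_map="auto",
)

prompt = "Summarize the patient's symptoms:"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```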
Model description
This model acts as a supplementary assistant to GPs, helping them with medical and administrative tasks.
Intended uses & limitations
The fine-tuned model should not be used in production! This model was created as an initial prototype in the context of a bachelor thesis.
Training and evaluation data
The dataset (train and test) used for fine-tuning this model can be found here: datasets/valeriojob/BA-v.1
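As a minimal sketch, the dataset could be inspected with the datasets library (the Hub id follows the link above; the split names are an assumption based on the usual train/test convention):

```python
from datasets import load_dataset

# Assumed Hub id from the link above; split names are assumptions.
dataset = load_dataset("valeriojob/BA-v.1")
print(dataset)               # inspect the available splits
print(dataset["train"][0])   # peek at one training example
```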
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a trainer sketch reproducing them follows the list):
- per_device_train_batch_size = 2
- gradient_accumulation_steps = 4
- warmup_steps = 5
- max_steps = 60
- learning_rate = 2e-4
- fp16 = not is_bfloat16_supported()
- bf16 = is_bfloat16_supported()
- logging_steps = 1
- optim = "adamw_8bit"
- weight_decay = 0.01
- lr_scheduler_type = "linear"
- seed = 3407
- output_dir = "outputs"
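As a hedged sketch of how these hyperparameters map onto an Unsloth + TRL training run (the base model id and the hyperparameters come from this card; the LoRA rank, target modules, sequence length, and dataset field name are assumptions modeled on standard Unsloth examples):

```python
from unsloth import FastLanguageModel, is_bfloat16_supported
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit for memory-efficient LoRA fine-tuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=2048,  # assumption; not stated on this card
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are assumptions.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("valeriojob/BA-v.1", split="train")  # assumed split name

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed field name
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        warmup_steps=5,
        max_steps=60,
        learning_rate=2e-4,
        fp16=not is_bfloat16_supported(),
        bf16=is_bfloat16_supported(),
        logging_steps=1,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        seed=3407,
        output_dir="outputs",
    ),
)
trainer.train()
```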
Training results
| Training Loss | Step |
|--------------:|-----:|
| 1.793200 | 1 |
| 1.635900 | 2 |
| 1.493000 | 3 |
| 1.227600 | 5 |
| 0.640500 | 10 |
| 0.438300 | 15 |
| 0.370200 | 20 |
| 0.205100 | 30 |
| 0.094900 | 40 |
| 0.068500 | 50 |
| 0.059400 | 60 |
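For a quick visual check of the loss trajectory, the logged values above could be plotted with matplotlib (values transcribed from the table; this snippet is purely illustrative):

```python
import matplotlib.pyplot as plt

# Loss values transcribed from the table above.
steps = [1, 2, 3, 5, 10, 15, 20, 30, 40, 50, 60]
losses = [1.7932, 1.6359, 1.4930, 1.2276, 0.6405,
          0.4383, 0.3702, 0.2051, 0.0949, 0.0685, 0.0594]

plt.plot(steps, losses, marker="o")
plt.xlabel("Step")
plt.ylabel("Training loss")
plt.title("MedGPT-Llama3.1-8B-v.1 training loss")
plt.show()
```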
License
- apache-2.0