---
base_model: microsoft/phi-1_5
library_name: peft
license: mit
tags:
  - generated_from_trainer
model-index:
  - name: Phi-Medical-QA-LoRA
    results: []
---

# Phi-Medical-QA-LoRA

This model is a LoRA fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5); the training dataset is not specified in this card. It achieves the following results on the evaluation set:

- Loss: 1.7010
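
Because this is a PEFT adapter rather than a full model, inference loads the base model first and attaches the LoRA weights on top. Below is a minimal sketch; the adapter repo id `aryaadhi/Phi-Medical-QA-LoRA` and the prompt format are illustrative assumptions, not confirmed by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the frozen base model and its tokenizer.
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

# Attach the LoRA adapter (repo id is an assumption based on the card title).
model = PeftModel.from_pretrained(base, "aryaadhi/Phi-Medical-QA-LoRA")
model.eval()

# Prompt format is a guess; the card does not document one.
prompt = "Question: What are common symptoms of anemia?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```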

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 7
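
A minimal sketch of a training setup matching these hyperparameters is shown below. The LoRA settings (`r`, `lora_alpha`, `target_modules`), the dataset, and the output path are assumptions for illustration; the card does not specify them. Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the `transformers` default, so it needs no explicit argument.

```python
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")

# LoRA configuration: values below are assumed, not stated in the card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj"],  # phi attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Arguments mirroring the hyperparameter list above; eval_steps=100
# matches the evaluation cadence in the results table.
args = TrainingArguments(
    output_dir="Phi-Medical-QA-LoRA",  # assumed output path
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=7,
    eval_strategy="steps",
    eval_steps=100,
    logging_steps=100,
)

# Training would then proceed with a Trainer, supplying your own datasets:
# Trainer(model=model, args=args, train_dataset=..., eval_dataset=...).train()
```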

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1348        | 0.1684 | 100  | 2.7621          |
| 2.6302        | 0.3367 | 200  | 2.5110          |
| 2.4441        | 0.5051 | 300  | 2.3619          |
| 2.3325        | 0.6734 | 400  | 2.2485          |
| 2.2262        | 0.8418 | 500  | 2.1819          |
| 2.1809        | 1.0101 | 600  | 2.1318          |
| 2.0646        | 1.1785 | 700  | 2.0802          |
| 2.0665        | 1.3468 | 800  | 2.0541          |
| 2.0072        | 1.5152 | 900  | 2.0057          |
| 1.9954        | 1.6835 | 1000 | 1.9762          |
| 1.9577        | 1.8519 | 1100 | 1.9554          |
| 1.9263        | 2.0202 | 1200 | 1.9256          |
| 1.8635        | 2.1886 | 1300 | 1.9027          |
| 1.855         | 2.3569 | 1400 | 1.8871          |
| 1.8258        | 2.5253 | 1500 | 1.8750          |
| 1.8269        | 2.6936 | 1600 | 1.8555          |
| 1.8194        | 2.8620 | 1700 | 1.8415          |
| 1.775         | 3.0303 | 1800 | 1.8257          |
| 1.7379        | 3.1987 | 1900 | 1.8175          |
| 1.7384        | 3.3670 | 2000 | 1.8052          |
| 1.74          | 3.5354 | 2100 | 1.7943          |
| 1.7275        | 3.7037 | 2200 | 1.7778          |
| 1.6903        | 3.8721 | 2300 | 1.7680          |
| 1.6908        | 4.0404 | 2400 | 1.7594          |
| 1.6663        | 4.2088 | 2500 | 1.7559          |
| 1.6312        | 4.3771 | 2600 | 1.7457          |
| 1.6412        | 4.5455 | 2700 | 1.7395          |
| 1.6392        | 4.7138 | 2800 | 1.7327          |
| 1.6237        | 4.8822 | 2900 | 1.7260          |
| 1.6138        | 5.0505 | 3000 | 1.7244          |
| 1.5858        | 5.2189 | 3100 | 1.7205          |
| 1.6005        | 5.3872 | 3200 | 1.7163          |
| 1.5662        | 5.5556 | 3300 | 1.7120          |
| 1.5888        | 5.7239 | 3400 | 1.7075          |
| 1.5802        | 5.8923 | 3500 | 1.7068          |
| 1.5659        | 6.0606 | 3600 | 1.7038          |
| 1.5526        | 6.2290 | 3700 | 1.7039          |
| 1.54          | 6.3973 | 3800 | 1.7024          |
| 1.5653        | 6.5657 | 3900 | 1.7018          |
| 1.545         | 6.7340 | 4000 | 1.7012          |
| 1.5455        | 6.9024 | 4100 | 1.7010          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.42.4
- Pytorch 1.13.1+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1
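
A quick sanity check (sketch) to compare a local environment against the versions listed above:

```python
import datasets
import peft
import tokenizers
import torch
import transformers

# Expected versions come from the card's "Framework versions" list.
for name, module, expected in [
    ("peft", peft, "0.11.1"),
    ("transformers", transformers, "4.42.4"),
    ("torch", torch, "1.13.1+cu117"),
    ("datasets", datasets, "2.19.2"),
    ("tokenizers", tokenizers, "0.19.1"),
]:
    print(f"{name}: installed {module.__version__}, card lists {expected}")
```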