---
license: gemma
library_name: peft
tags:
  - trl
  - sft
  - generated_from_trainer
datasets:
  - generator
base_model: google/gemma-2b
model-index:
  - name: gemma-2b-dolly-qa
    results: []
---

# gemma-2b-dolly-qa

This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset. It achieves the following results on the evaluation set:

- Loss: 2.0215
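Because this repository holds a PEFT (LoRA) adapter rather than a full checkpoint, it can be loaded with peft's `AutoPeftModelForCausalLM`, which fetches the base model and attaches the adapter in one call. The sketch below is not part of the original card: the adapter repo id and the prompt format are assumptions.

```python
# Minimal inference sketch (not from the original card).
# Versions pinned per "Framework versions" below, e.g.:
#   pip install peft==0.10.0 transformers==4.39.3
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "eduardo-alvarez/dollygem-2b-LoRA"  # assumed repo id; use a local path if needed

# Loads google/gemma-2b and attaches the LoRA adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

# The card does not document the prompt template; this format is a guess.
prompt = "Instruction: What is the capital of France?\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```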

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 1480
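The optimizer settings above are the AdamW defaults in transformers. The following is a hedged reproduction sketch, not the author's original script: it maps the listed hyperparameters onto trl's `SFTTrainer`. The LoRA settings, the dataset (databricks-dolly-15k, inferred from the model name), the train/eval split, and the prompt template are all assumptions; only the values taken from the list above come from the card.

```python
# Hedged reproduction sketch; hyperparameter values come from the card,
# everything else (LoRA config, dataset, split, prompt template) is assumed.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_id = "google/gemma-2b"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Assumed source of the auto-named "generator" dataset; split is also assumed.
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")
dataset = dataset.train_test_split(test_size=0.1, seed=42)

# Assumed LoRA settings; the card does not record rank or alpha.
peft_config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16)

def formatting_func(batch):
    # Assumed prompt template; the card does not document the original one.
    return [
        f"Instruction: {ins}\nResponse: {res}"
        for ins, res in zip(batch["instruction"], batch["response"])
    ]

args = TrainingArguments(
    output_dir="gemma-2b-dolly-qa",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # effective train batch size 2 * 8 = 16
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    max_steps=1480,
    seed=42,
    # Default AdamW already uses betas=(0.9, 0.999) and epsilon=1e-08.
    evaluation_strategy="steps",     # eval every 100 steps, per the table below
    eval_steps=100,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    peft_config=peft_config,
    formatting_func=formatting_func,
)
trainer.train()
```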

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9198        | 1.64  | 100  | 2.5675          |
| 2.437         | 3.28  | 200  | 2.2818          |
| 2.2514        | 4.92  | 300  | 2.1677          |
| 2.1587        | 6.56  | 400  | 2.1038          |
| 2.116         | 8.2   | 500  | 2.0741          |
| 2.0794        | 9.84  | 600  | 2.0576          |
| 2.0663        | 11.48 | 700  | 2.0467          |
| 2.0494        | 13.11 | 800  | 2.0394          |
| 2.0449        | 14.75 | 900  | 2.0336          |
| 2.0336        | 16.39 | 1000 | 2.0293          |
| 2.0281        | 18.03 | 1100 | 2.0262          |
| 2.0172        | 19.67 | 1200 | 2.0240          |
| 2.0227        | 21.31 | 1300 | 2.0227          |
| 2.0128        | 22.95 | 1400 | 2.0215          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.0.1a0+cxx11.abi
- Datasets 2.18.0
- Tokenizers 0.15.2