---
language:
  - en
license: apache-2.0
library_name: peft
tags:
  - mistral
  - generated_from_trainer
  - Transformers
  - text-generation-inference
datasets:
  - robinsmits/ChatAlpaca-20K
inference: false
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
  - name: Mistral-Instruct-7B-v0.2-ChatAlpaca
    results: []
pipeline_tag: text-generation
---

# Mistral-Instruct-7B-v0.2-ChatAlpaca

This model is a PEFT fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the English [robinsmits/ChatAlpaca-20K](https://huggingface.co/datasets/robinsmits/ChatAlpaca-20K) dataset.

It achieves the following results on the evaluation set:

- Loss: 0.8584
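
Assuming this is the standard mean per-token cross-entropy, that loss corresponds to an evaluation perplexity of roughly exp(0.8584) ≈ 2.36.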

## Model description

More information needed

## Intended uses & limitations

More information needed
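
Pending fuller documentation, the sketch below shows one plausible way to run the adapter for chat-style generation with `peft` and `transformers`. The repo id, dtype, and generation settings are assumptions, not part of the original card.

```python
# Minimal inference sketch (not from the original card). Assumes the adapter
# repo id below and that the repo ships a tokenizer; if it does not, load the
# tokenizer from the base model "mistralai/Mistral-7B-Instruct-v0.2" instead.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "robinsmits/Mistral-Instruct-7B-v0.2-ChatAlpaca"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(adapter_id)
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Mistral-Instruct expects the [INST] ... [/INST] chat format; the tokenizer's
# chat template takes care of that.
messages = [{"role": "user", "content": "Give me three tips for writing a good README."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```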

## Training and evaluation data

More information needed
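
Until the data are described here, the snippet below simply loads the dataset referenced in the card metadata for inspection; its split and column layout are not documented in this card.

```python
from datasets import load_dataset

# Load the dataset named in the card metadata and inspect its structure.
dataset = load_dataset("robinsmits/ChatAlpaca-20K")
print(dataset)
```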

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2
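
For reference, here is a hedged reconstruction of these settings as `transformers.TrainingArguments`. This is not the author's actual training script; `output_dir` is a placeholder and the PEFT/LoRA adapter configuration is not shown.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Mistral-Instruct-7B-v0.2-ChatAlpaca",  # placeholder, assumed
    learning_rate=4e-05,
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    seed=42,
    gradient_accumulation_steps=32,  # total train batch size: 1 * 32 = 32
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=2,
    # transformers' default optimizer already uses betas=(0.9, 0.999)
    # and epsilon=1e-08, matching the settings listed above.
)
```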

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.99          | 0.2   | 120  | 0.9355          |
| 0.8793        | 0.39  | 240  | 0.8848          |
| 0.8671        | 0.59  | 360  | 0.8737          |
| 0.8662        | 0.78  | 480  | 0.8679          |
| 0.8627        | 0.98  | 600  | 0.8639          |
| 0.8426        | 1.18  | 720  | 0.8615          |
| 0.8574        | 1.37  | 840  | 0.8598          |
| 0.8473        | 1.57  | 960  | 0.8589          |
| 0.8528        | 1.76  | 1080 | 0.8585          |
| 0.852         | 1.96  | 1200 | 0.8584          |

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0