---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: tiiuae/falcon-7b
model-index:
- name: saqr-7b-instruct
  results: []
datasets:
- HuggingFaceH4/ultrachat_200k
- openbmb/UltraFeedback
- gsm8k
---
# saqr-7b-instruct
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the ultrachat_200k, UltraFeedback, and gsm8k datasets.
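As a quick orientation, the three datasets can be pulled from the Hub with the `datasets` library. This is a minimal sketch; the split and config names follow the public Hub versions of each dataset and are not taken from this card:

```python
from datasets import load_dataset

# Illustrative only: load the three datasets listed above.
ultrachat = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
ultrafeedback = load_dataset("openbmb/UltraFeedback", split="train")
gsm8k = load_dataset("gsm8k", "main", split="train")  # gsm8k requires a config name

print(len(ultrachat), len(ultrafeedback), len(gsm8k))
```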
## Model description

This model was produced by supervised fine-tuning (SFT) of tiiuae/falcon-7b on nearly the same datasets as Zephyr-7B-beta.
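Since this repository contains a PEFT adapter on top of tiiuae/falcon-7b, it can be loaded with `peft`'s `AutoPeftModelForCausalLM`. A minimal sketch; the repo id placeholder, dtype, and generation settings are illustrative, not from the card:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base falcon-7b weights and applies the saqr-7b-instruct adapter.
# Replace the placeholder with the actual Hub repo id of this model.
model = AutoPeftModelForCausalLM.from_pretrained(
    "<namespace>/saqr-7b-instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

inputs = tokenizer("Explain gradient accumulation in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```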
## Training and evaluation data

Evaluation during training can be found here.

Benchmark evaluation results can be found on the Hugging Face Open LLM Leaderboard here.
## Training procedure

The training procedure can be found here.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 7
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 14
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 5000
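For reference, here is a minimal sketch of how these hyperparameters map onto a TRL `SFTTrainer` run. The LoRA configuration, sequence length, dataset choice, and text field are illustrative assumptions; the card does not specify them:

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# LoRA settings below are assumptions for illustration only.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    task_type="CAUSAL_LM",
)

# Mirrors the hyperparameters listed above; Adam betas (0.9, 0.999)
# and epsilon 1e-8 are the TrainingArguments defaults.
args = TrainingArguments(
    output_dir="saqr-7b-instruct",
    learning_rate=2e-4,
    per_device_train_batch_size=7,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # total train batch size 14
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    max_steps=5000,
    seed=42,
)

# Assumes one of the SFT datasets, flattened to a single "text" column.
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

trainer = SFTTrainer(
    model="tiiuae/falcon-7b",
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=1024,  # illustrative
)
trainer.train()
```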
### Training results

### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1