# lmind_nq_train6000_eval6489_v1_docidx_v3_meta-llama_Llama-2-7b-hf_5e-5_lora2
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the [tyzhu/lmind_nq_train6000_eval6489_v1_docidx_v3](https://huggingface.co/datasets/tyzhu/lmind_nq_train6000_eval6489_v1_docidx_v3) dataset. It achieves the following results on the evaluation set:

- Loss: 4.4400
- Accuracy: 0.4387
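
The `lora2` suffix suggests this repository hosts a LoRA adapter rather than full model weights, so it should be loadable on top of the base model with `peft`. A minimal loading sketch, assuming the adapter is published under this repo id; the prompt format is illustrative, not taken from the dataset:

```python
# Minimal sketch: attach the LoRA adapter to the Llama-2 base model.
# The repo ids are taken from this card; everything else is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "tyzhu/lmind_nq_train6000_eval6489_v1_docidx_v3_meta-llama_Llama-2-7b-hf_5e-5_lora2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"  # device_map requires `accelerate`
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA weights

prompt = "Question: who wrote the Declaration of Independence?\nAnswer:"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```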
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 20.0
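
For reproducibility, these values map directly onto `transformers.TrainingArguments`. The sketch below is illustrative only: the hyperparameter values come from the list above, while the output directory and the LoRA configuration (rank, alpha, target modules) are not stated anywhere in this card and are placeholders.

```python
# Sketch: the card's hyperparameters expressed as transformers.TrainingArguments.
# Only the numeric values are from this card; output_dir and the LoraConfig
# fields are assumptions for illustration.
from transformers import TrainingArguments
from peft import LoraConfig

training_args = TrainingArguments(
    output_dir="outputs",               # assumed, not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=2,      # train_batch_size
    per_device_eval_batch_size=2,       # eval_batch_size
    seed=42,
    gradient_accumulation_steps=4,      # 2 per device x 4 GPUs x 4 steps = 32 effective
    lr_scheduler_type="constant",
    warmup_ratio=0.05,
    num_train_epochs=20.0,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 match the defaults, so they
    # need no explicit arguments here.
)

# Hypothetical LoRA setup; rank, alpha, and target modules are NOT given in this card.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
```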
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3957        | 1.0   | 341  | 3.3998          | 0.4541   |
| 1.3756        | 2.0   | 683  | 3.4568          | 0.4546   |
| 1.3109        | 3.0   | 1024 | 3.5541          | 0.4578   |
| 1.2488        | 4.0   | 1366 | 3.6057          | 0.4573   |
| 1.1856        | 5.0   | 1707 | 3.7215          | 0.4557   |
| 1.1284        | 6.0   | 2049 | 3.7284          | 0.4545   |
| 1.0567        | 7.0   | 2390 | 3.8020          | 0.4533   |
| 0.978         | 8.0   | 2732 | 3.8535          | 0.4524   |
| 0.9007        | 9.0   | 3073 | 3.9364          | 0.4516   |
| 0.833         | 10.0  | 3415 | 3.9463          | 0.4499   |
| 0.7455        | 11.0  | 3756 | 4.0375          | 0.4488   |
| 0.6909        | 12.0  | 4098 | 4.1021          | 0.4471   |
| 0.6243        | 13.0  | 4439 | 4.1491          | 0.4457   |
| 0.5672        | 14.0  | 4781 | 4.2086          | 0.4441   |
| 0.5096        | 15.0  | 5122 | 4.2696          | 0.4443   |
| 0.4532        | 16.0  | 5464 | 4.2835          | 0.4422   |
| 0.4201        | 17.0  | 5805 | 4.3720          | 0.4411   |
| 0.3642        | 18.0  | 6147 | 4.3791          | 0.4412   |
| 0.3222        | 19.0  | 6488 | 4.4365          | 0.4393   |
| 0.2966        | 19.97 | 6820 | 4.4400          | 0.4387   |
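
Training loss falls monotonically while validation loss rises from the first epoch on, so the epoch-1 checkpoint has the best validation loss in this run. A short sketch to visualize that divergence, with values copied from the table above (matplotlib assumed available):

```python
# Plot train vs. validation loss per epoch; values copied from the results table.
# The final row (epoch 19.97) is treated as epoch 20 for plotting.
import matplotlib.pyplot as plt

epochs = range(1, 21)
train_loss = [1.3957, 1.3756, 1.3109, 1.2488, 1.1856, 1.1284, 1.0567, 0.978,
              0.9007, 0.833, 0.7455, 0.6909, 0.6243, 0.5672, 0.5096, 0.4532,
              0.4201, 0.3642, 0.3222, 0.2966]
val_loss = [3.3998, 3.4568, 3.5541, 3.6057, 3.7215, 3.7284, 3.8020, 3.8535,
            3.9364, 3.9463, 4.0375, 4.1021, 4.1491, 4.2086, 4.2696, 4.2835,
            4.3720, 4.3791, 4.4365, 4.4400]

plt.plot(epochs, train_loss, label="training loss")
plt.plot(epochs, val_loss, label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.title("Training vs. validation loss")
plt.show()
```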
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1