|
--- |
|
tags: |
|
- generated_from_trainer |
|
model-index: |
|
- name: full-lstm-1 |
|
  results: []
|
--- |
|
|
|
|
|
|
# full-lstm-1 |
|
|
|
This model was trained on an unspecified dataset; no base checkpoint or dataset name was recorded by the Trainer.
|
It achieves the following results on the evaluation set: |
|
- Loss: 3.9726 |
|
|
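The card reports only the loss. Assuming it is the mean cross-entropy in nats per token (the quantity the Hugging Face `Trainer` typically logs for language-modelling objectives), the corresponding perplexity follows directly, as in this small sketch:

```python
import math

# Assumption: the reported eval loss is mean cross-entropy in nats per token.
# Under that assumption, perplexity is simply exp(loss).
eval_loss = 3.9726
print(f"perplexity ~ {math.exp(eval_loss):.1f}")  # ~ 53.1
```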
|
## Model description |
|
|
|
More information needed |
|
|
|
## Intended uses & limitations |
|
|
|
More information needed |
|
|
|
## Training and evaluation data |
|
|
|
More information needed |
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training: |
|
- learning_rate: 5e-05 |
|
- train_batch_size: 32 |
|
- eval_batch_size: 32 |
|
- seed: 1 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- training_steps: 3052726 |
|
|
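The original training script is not included with the card. The following is only a minimal sketch of how the hyperparameters above could be expressed as `transformers.TrainingArguments`; the `output_dir` is a placeholder, and the real run may have set additional options not listed here.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported settings, not the actual script.
training_args = TrainingArguments(
    output_dir="full-lstm-1",        # placeholder, not taken from the run
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    max_steps=3052726,
)
```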
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | |
|
|:-------------:|:-----:|:-------:|:---------------:| |
|
| 4.8036 | 0.03 | 76319 | 4.7692 | |
|
| 4.5156 | 0.03 | 152638 | 4.4851 | |
|
| 4.3728 | 1.03 | 228957 | 4.3496 | |
|
| 4.2801 | 0.03 | 305276 | 4.2669 | |
|
| 4.216 | 1.03 | 381595 | 4.2097 | |
|
| 4.1643 | 0.03 | 457914 | 4.1683 | |
|
| 4.1289 | 0.03 | 534233 | 4.1370 | |
|
| 4.1016 | 0.03 | 610552 | 4.1116 | |
|
| 4.0704 | 1.03 | 686871 | 4.0915 | |
|
| 4.0474 | 0.03 | 763190 | 4.0756 | |
|
| 4.0289 | 1.03 | 839509 | 4.0626 | |
|
| 4.0114 | 0.03 | 915828 | 4.0512 | |
|
| 3.9911 | 1.03 | 992147 | 4.0417 | |
|
| 3.9793 | 0.03 | 1068467 | 4.0332 | |
|
| 3.9657 | 1.03 | 1144787 | 4.0265 | |
|
| 3.961 | 0.03 | 1221107 | 4.0183 | |
|
| 3.942 | 1.03 | 1297427 | 4.0134 | |
|
| 3.9346 | 0.03 | 1373747 | 4.0081 | |
|
| 3.9222 | 1.03 | 1450067 | 4.0045 | |
|
| 3.9166 | 0.03 | 1526387 | 4.0005 | |
|
| 3.9154 | 1.03 | 1602707 | 3.9975 | |
|
| 3.9098 | 0.03 | 1679027 | 3.9947 | |
|
| 3.909 | 1.03 | 1755347 | 3.9925 | |
|
| 3.9044 | 0.03 | 1831667 | 3.9899 | |
|
| 3.8977 | 1.03 | 1907987 | 3.9880 | |
|
| 3.8925 | 0.03 | 1984307 | 3.9862 | |
|
| 3.8863 | 1.03 | 2060627 | 3.9841 | |
|
| 3.8797 | 0.03 | 2136947 | 3.9827 | |
|
| 3.8795 | 0.03 | 2213267 | 3.9815 | |
|
| 3.8729 | 0.03 | 2289587 | 3.9806 | |
|
| 3.8659 | 1.03 | 2365907 | 3.9793 | |
|
| 3.862 | 0.03 | 2442227 | 3.9783 | |
|
| 3.8541 | 1.03 | 2518547 | 3.9774 | |
|
| 3.8528 | 0.03 | 2594867 | 3.9767 | |
|
| 3.846 | 1.03 | 2671187 | 3.9760 | |
|
| 3.8445 | 0.03 | 2747507 | 3.9748 | |
|
| 3.848 | 0.03 | 2823827 | 3.9741 | |
|
| 3.8471 | 0.03 | 2900147 | 3.9735 | |
|
| 3.8508 | 1.03 | 2976467 | 3.9729 | |
|
| 3.8512 | 0.02 | 3052726 | 3.9726 | |
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.33.3 |
|
- Pytorch 2.0.1 |
|
- Datasets 2.12.0 |
|
- Tokenizers 0.13.3 |
|
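Matching these versions can help when trying to reproduce the evaluation numbers. A small check, assuming the packages are installed under their usual distribution names:

```python
from importlib.metadata import version

# Versions reported on this card.
expected = {
    "transformers": "4.33.3",
    "torch": "2.0.1",
    "datasets": "2.12.0",
    "tokenizers": "0.13.3",
}
for pkg, want in expected.items():
    print(f"{pkg}: installed {version(pkg)}, card reports {want}")
```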
|