---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- aslg_pc12
metrics:
- bertscore
- bleu
- comet
- rouge
base_model: t5-small
pipeline_tag: translation
model-index:
- name: t5_small_gloss_merged_dataset
  results:
  - task:
      type: translation
      name: Translation
    dataset:
      name: aslg_pc12
      type: aslg_pc12
      config: default
      split: train
    metrics:
    - type: bleu
      value: 68.9182
      name: BLEU
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWZiYmFkMDNmMTFhMmU2MzAxYTEzMWQ3NmRiNGRjNzI2OTg2NjMxNTcxYjRkOTg0M2E4MzkzNDU4MjZiNTI3OSIsInZlcnNpb24iOjF9.GHJA10A5JW8Y4nCy9w46YQZGuh6BXnHLEWC-_Y5Vb1EfHcXBt7aQr2gArDcfrW-epJSXpiDk-A8DpNnG0HSSAQ
    - type: loss
      value: 0.33368241786956787
      name: loss
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZThjOTg3ZmRmZjVhMGM2ZjY0ZGRhYjc4ZGQ1NTFlZWU4YTkzZjJiMGYwMGEzYjY3ZTVhYTNmMzczZmFhYjIyZCIsInZlcnNpb24iOjF9.Da4BqQhCXMhubGfPVbqPZzZU3Y-FByXA6mgy0u31u_SsqKSnGqS-C0TIF81wdpVUBYciu3BboqpefDtC5HYrBg
    - type: gen_len
      value: 15.6225
      name: gen_len
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTBkZWU4MzdhMjY0N2ViZTkwMjM5NmZlYTJkYzk4YTk3ODk3ODRlOTE0NjdmMmQzMjBhYmVjODU3N2E5YTNiYyIsInZlcnNpb24iOjF9.jVgLJiZJR66wWio2V3aCKp-L_LkOF14VV1XxCLb79GWU3CJZucMJorA6mmofP9rOSqh92ZfkaFUJ_ScqjNHwCg
---
# t5_small_gloss_merged_dataset
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the [aslg_pc12](https://huggingface.co/datasets/aslg_pc12) dataset. It achieves the following verified results on the train split:

- Loss: 0.3337
- BLEU: 68.9182
- Gen Len: 15.6225
## Model description

t5_small_gloss_merged_dataset is a T5-small checkpoint fine-tuned for translation. Judging from the model name and the ASLG-PC12 evaluation data, it translates American Sign Language (ASL) gloss sequences into written English; the "merged_dataset" suffix suggests the training corpus combined more than one gloss dataset, though this is not documented.
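A minimal inference sketch, assuming the model follows the standard T5 sequence-to-sequence interface. The repository id and the example gloss input below are placeholders, not values taken from this training run.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder repo id; replace with the actual Hub path of this model.
model_id = "t5_small_gloss_merged_dataset"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical ASL gloss input in the ASLG-PC12 style.
gloss = "X-I LIKE LEARN SIGN LANGUAGE"
inputs = tokenizer(gloss, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```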
## Intended uses & limitations

The model is intended for translating ASL gloss sequences into written English, as reflected by its `translation` pipeline tag and its evaluation on ASLG-PC12. Its limitations are not documented; note that ASLG-PC12 glosses were produced by rule-based transformation of English text, so performance on naturally produced glosses may differ.
## Training and evaluation data

The verified metrics above were computed on the train split of [aslg_pc12](https://huggingface.co/datasets/aslg_pc12), a parallel corpus of ASL gloss and English sentence pairs. The exact composition of the training data (the "merged dataset" referenced in the model name) is not documented.
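A sketch of loading the evaluation corpus with the `datasets` library; the column names shown are those of the public `aslg_pc12` dataset.

```python
from datasets import load_dataset

# Recent versions of datasets require trust_remote_code=True for
# script-based datasets such as aslg_pc12.
ds = load_dataset("aslg_pc12", split="train", trust_remote_code=True)
print(ds[0])  # each example pairs an ASL gloss ("gloss") with English text ("text")
```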
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
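The hyperparameters above, restated as `Seq2SeqTrainingArguments` for anyone reproducing the run. This is a sketch assuming the standard `Trainer` setup; `output_dir` is a placeholder.

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="t5_small_gloss_merged_dataset",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    # The Adam settings below are the Trainer defaults, matching the list above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```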
### Training results

Per-epoch training logs were not recorded in this card. The final verified metrics on the aslg_pc12 train split are:

| Metric  |   Value |
|:--------|--------:|
| Loss    |  0.3337 |
| BLEU    | 68.9182 |
| Gen Len | 15.6225 |
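The card metadata lists BERTScore, BLEU, COMET, and ROUGE as evaluation metrics. Below is a sketch of scoring generated translations with the `evaluate` library; the predictions and references are toy placeholders, not outputs of this model.

```python
import evaluate

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

predictions = ["i like to learn sign language ."]
references = [["i like to learn sign language ."]]  # one reference list per prediction

print(bleu.compute(predictions=predictions, references=references))
print(rouge.compute(predictions=predictions, references=[r[0] for r in references]))
# BERTScore and COMET can be loaded the same way via evaluate.load("bertscore")
# and evaluate.load("comet"), but both download sizeable scoring models.
```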
### Framework versions

- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1