---
tags:
- generated_from_trainer
model-index:
- name: wavlm-large-timit-punctuation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-large-timit-punctuation
This model is a fine-tuned version of [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) on the TIMIT dataset, with punctuation retained in the transcripts (as the model name indicates).
It achieves the following results on the evaluation set:
- Loss: 0.3368
- WER: 0.2601
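
WER is the word error rate: edit operations (substitutions, insertions, deletions) divided by the number of reference words. A minimal sketch of computing it with the `datasets` metric API from the release listed under "Framework versions" (the toy strings are illustrative; the `wer` metric requires `jiwer`):

```python
from datasets import load_metric  # datasets 1.x API; the "wer" metric needs jiwer installed

wer_metric = load_metric("wer")

# Toy example: one deleted word against a four-word reference -> WER = 1/4.
wer = wer_metric.compute(
    predictions=["the cat sat"],
    references=["the cat sat down"],
)
print(wer)  # 0.25
```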
## Model description
More information needed
## Intended uses & limitations
More information needed
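
A minimal inference sketch, assuming the checkpoint is published on the Hub with a CTC head and a bundled processor; the repo path and audio file below are illustrative placeholders:

```python
import soundfile as sf
import torch
from transformers import AutoProcessor, WavLMForCTC

model_id = "wavlm-large-timit-punctuation"  # illustrative repo path

processor = AutoProcessor.from_pretrained(model_id)
model = WavLMForCTC.from_pretrained(model_id)

# WavLM expects 16 kHz mono audio.
speech, sampling_rate = sf.read("sample.wav")
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```

Greedy `argmax` decoding is the simplest option; since the training labels apparently included punctuation, it should appear directly in the decoded text.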
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
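
A hedged reconstruction of these settings as `transformers.TrainingArguments`; the `output_dir` and the mapping of "Native AMP" to `fp16=True` are assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wavlm-large-timit-punctuation",  # assumed, not stated above
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed precision
)
```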
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.2379 | 1.0 | 500 | 3.1228 | 1.0 |
| 2.5847 | 2.01 | 1000 | 1.1550 | 0.9147 |
| 1.0034 | 3.01 | 1500 | 0.5856 | 0.5180 |
| 0.5868 | 4.02 | 2000 | 0.4238 | 0.4229 |
| 0.3892 | 5.02 | 2500 | 0.3356 | 0.3665 |
| 0.2926 | 6.02 | 3000 | 0.3196 | 0.3360 |
| 0.2294 | 7.03 | 3500 | 0.3046 | 0.3170 |
| 0.1976 | 8.03 | 4000 | 0.3032 | 0.3111 |
| 0.1644 | 9.04 | 4500 | 0.2946 | 0.2954 |
| 0.1574 | 10.04 | 5000 | 0.3211 | 0.2998 |
| 0.1391 | 11.04 | 5500 | 0.2986 | 0.2922 |
| 0.1124 | 12.05 | 6000 | 0.2948 | 0.2837 |
| 0.1003 | 13.05 | 6500 | 0.2928 | 0.2788 |
| 0.1031 | 14.06 | 7000 | 0.3230 | 0.2805 |
| 0.0901 | 15.06 | 7500 | 0.3081 | 0.2749 |
| 0.0842 | 16.06 | 8000 | 0.3075 | 0.2726 |
| 0.0809 | 17.07 | 8500 | 0.3215 | 0.2717 |
| 0.0747 | 18.07 | 9000 | 0.3272 | 0.2721 |
| 0.0735 | 19.08 | 9500 | 0.3242 | 0.2684 |
| 0.0631 | 20.08 | 10000 | 0.3216 | 0.2640 |
| 0.0632 | 21.08 | 10500 | 0.3149 | 0.2646 |
| 0.0625 | 22.09 | 11000 | 0.3196 | 0.2630 |
| 0.0611 | 23.09 | 11500 | 0.3244 | 0.2638 |
| 0.0532 | 24.10 | 12000 | 0.3271 | 0.2641 |
| 0.0503 | 25.10 | 12500 | 0.3368 | 0.2636 |
| 0.0534 | 26.10 | 13000 | 0.3393 | 0.2627 |
| 0.0490 | 27.11 | 13500 | 0.3389 | 0.2626 |
| 0.0441 | 28.11 | 14000 | 0.3375 | 0.2605 |
| 0.0522 | 29.12 | 14500 | 0.3368 | 0.2601 |
### Framework versions
- Transformers 4.19.2
- PyTorch 1.8.2+cu111
- Datasets 1.17.0
- Tokenizers 0.11.6