|
--- |
|
base_model: distil-whisper/distil-large-v3 |
|
datasets: |
|
- audiofolder |
|
library_name: peft |
|
license: mit |
|
metrics: |
|
- wer |
|
tags: |
|
- generated_from_trainer |
|
model-index: |
|
- name: distil_whisper-v3-LoRA-en_students_test_2 |
|
results: [] |
|
--- |
|
|
|
|
|
|
# distil_whisper-v3-LoRA-en_students_test_2 |
|
|
|
This model is a LoRA adapter for [distil-whisper/distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3), fine-tuned with the PEFT library on a custom audio dataset loaded via the Datasets library's `audiofolder` loader.

It achieves the following results on the evaluation set:

- Loss: 0.6839

- WER: 18.4361 (word error rate, as a percentage)
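
The WER above is on the 0–100 percentage scale used by the `wer` metric of the `evaluate` library (the metric listed in this card's metadata). A minimal sketch of how such a score is computed; the prediction and reference strings below are illustrative placeholders, not data from the evaluation set:

```python
import evaluate

wer_metric = evaluate.load("wer")

# Illustrative strings only; in practice, predictions are the model's
# transcriptions and references are the ground-truth transcripts.
predictions = ["the students read the passage aloud"]
references = ["the students read the passage out loud"]

# `evaluate` returns WER as a fraction; multiply by 100 for the scale above.
print(100 * wer_metric.compute(predictions=predictions, references=references))
```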
|
|
|
## Model description |
|
|
|
This repository contains a LoRA adapter, trained with the [PEFT](https://github.com/huggingface/peft) library, for [distil-whisper/distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3), an English distilled Whisper model for automatic speech recognition. Only the adapter weights are stored here; the base model is loaded alongside them at inference time.
|
|
|
## Intended uses & limitations |
|
|
|
The adapter is intended for transcribing English speech, in particular audio resembling the fine-tuning data (student speech, judging by the model name). Performance on other domains, accents, or recording conditions has not been evaluated, and the limitations of the base model apply.
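
A minimal inference sketch, assuming this repository holds only the LoRA adapter weights; the adapter id below is a hypothetical placeholder for wherever this card is hosted:

```python
import numpy as np
import torch
from peft import PeftModel
from transformers import AutoProcessor, WhisperForConditionalGeneration

base_id = "distil-whisper/distil-large-v3"
adapter_id = "<user>/distil_whisper-v3-LoRA-en_students_test_2"  # hypothetical repo id

processor = AutoProcessor.from_pretrained(base_id)
base_model = WhisperForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Placeholder: 1 second of silence. Replace with a real 16 kHz mono waveform.
audio = np.zeros(16_000, dtype=np.float32)

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    generated_ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```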
|
|
|
## Training and evaluation data |
|
|
|
The model was fine-tuned and evaluated on a custom dataset loaded with the Datasets library's `audiofolder` loader. Further details about the data are not provided.
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training: |
|
- learning_rate: 1e-05 |
|
- train_batch_size: 28 |
|
- eval_batch_size: 28 |
|
- seed: 42 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- lr_scheduler_warmup_steps: 50 |
|
- training_steps: 100000 |
|
- mixed_precision_training: Native AMP |
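
These settings map onto `Seq2SeqTrainingArguments` roughly as follows; the output directory, the 500-step evaluation cadence (taken from the results table below), and `predict_with_generate` are assumptions of this sketch rather than values recorded by the trainer:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="distil_whisper-v3-LoRA-en_students_test_2",
    learning_rate=1e-5,
    per_device_train_batch_size=28,
    per_device_eval_batch_size=28,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=100_000,
    fp16=True,                   # "Native AMP" mixed-precision training
    eval_strategy="steps",       # the results table logs an eval every 500 steps
    eval_steps=500,
    logging_steps=500,
    predict_with_generate=True,  # assumption: needed to compute WER at eval time
)
# The default AdamW optimizer already uses betas=(0.9, 0.999) and eps=1e-8,
# matching the optimizer settings listed above.
```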
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | WER (%) |
|
|:-------------:|:------:|:----:|:---------------:|:-------:| |
|
| 1.5189 | 0.4444 | 500 | 1.1913 | 25.9108 | |
|
| 1.1727 | 0.8889 | 1000 | 0.9531 | 24.5396 | |
|
| 1.1341 | 1.3333 | 1500 | 0.8688 | 22.2761 | |
|
| 1.0152 | 1.7778 | 2000 | 0.8174 | 20.8792 | |
|
| 1.0589 | 2.2222 | 2500 | 0.7855 | 20.7595 | |
|
| 0.9793 | 2.6667 | 3000 | 0.7611 | 22.2846 | |
|
| 0.9594 | 3.1111 | 3500 | 0.7442 | 20.3860 | |
|
| 1.0031 | 3.5556 | 4000 | 0.7303 | 18.5045 | |
|
| 0.9525 | 4.0 | 4500 | 0.7199 | 18.1054 | |
|
| 0.8729 | 4.4444 | 5000 | 0.7105 | 19.3170 | |
|
| 1.0031 | 4.8889 | 5500 | 0.7028 | 19.7446 | |
|
| 0.9273 | 5.3333 | 6000 | 0.6966 | 19.7189 | |
|
| 0.9174 | 5.7778 | 6500 | 0.6896 | 18.4475 | |
|
| 0.8842 | 6.2222 | 7000 | 0.6839 | 18.4361 |

Although `training_steps` was set to 100,000, the log above ends at step 7,000; the evaluation results reported at the top of this card correspond to that final logged checkpoint.
|
|
|
|
|
### Framework versions |
|
|
|
- PEFT 0.11.1 |
|
- Transformers 4.42.4 |
|
- Pytorch 2.1.0+cu118 |
|
- Datasets 2.20.0 |
|
- Tokenizers 0.19.1 |
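
To recreate this environment, the pinned versions can be installed directly; the second command assumes the CUDA 11.8 PyTorch build listed above:

```bash
pip install peft==0.11.1 transformers==4.42.4 datasets==2.20.0 tokenizers==0.19.1
pip install torch==2.1.0 --index-url https://download.pytorch.org/whl/cu118
```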