---
license: apache-2.0
tags:
- generated_from_trainer
base_model: facebook/wav2vec2-large-xlsr-53
datasets:
- xtreme_s
metrics:
- wer
model-index:
- name: wav2vec2-XLS-R-Fleurs-demo-google-colab-Ezra_William_Prod7
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: xtreme_s
      type: xtreme_s
      config: fleurs.id_id
      split: test
      args: fleurs.id_id
    metrics:
    - type: wer
      value: 0.5133213590779824
      name: Wer
---

# wav2vec2-XLS-R-Fleurs-demo-google-colab-Ezra_William_Prod7

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the xtreme_s dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1411
- Wer: 0.5133
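
To try the checkpoint, it can be loaded through the `transformers` automatic-speech-recognition pipeline. The sketch below is illustrative: the Hub repo id is a placeholder for wherever this model is hosted, and the audio path is hypothetical.

```python
# Minimal inference sketch using the transformers ASR pipeline.
# The repo id is a placeholder: substitute the actual Hub path of this
# checkpoint. The audio file name is hypothetical.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="<hub-username>/wav2vec2-XLS-R-Fleurs-demo-google-colab-Ezra_William_Prod7",
)
print(asr("indonesian_sample.wav")["text"])
```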

## Model description

This model is [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) fine-tuned for Indonesian automatic speech recognition on the FLEURS Indonesian (`fleurs.id_id`) subset of the XTREME-S benchmark.

## Intended uses & limitations

The model is intended for transcribing Indonesian speech. With a test-set WER of roughly 0.51, about half of the output words differ from the reference transcripts, so it is better suited to demonstration and experimentation than to production use.

## Training and evaluation data

Training and evaluation used the `fleurs.id_id` configuration of the `xtreme_s` dataset; the reported WER is computed on its test split.
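
For reference, the split can be loaded with the `datasets` library. The Hub id `google/xtreme_s` below is an assumption; the metadata above records the dataset only as `xtreme_s`.

```python
# Sketch of loading the FLEURS Indonesian configuration used here.
# "google/xtreme_s" is an assumed Hub id; the card metadata lists the
# dataset simply as "xtreme_s".
from datasets import load_dataset

fleurs_id = load_dataset("google/xtreme_s", "fleurs.id_id")
print(fleurs_id)             # DatasetDict with the available splits
print(fleurs_id["test"][0])  # one example, including the audio and its transcription
```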

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 180
- mixed_precision_training: Native AMP
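
Expressed as `transformers.TrainingArguments`, the settings above would look roughly like this. The `output_dir` is an assumed placeholder, and Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default, so it needs no explicit arguments.

```python
# Sketch of TrainingArguments matching the hyperparameters above.
# output_dir is an assumed placeholder; Adam(betas=(0.9, 0.999),
# eps=1e-08) is the default optimizer, so it is not set explicitly.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-fleurs-id",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 16 x 2 = total train batch size 32
    lr_scheduler_type="linear",
    warmup_steps=600,
    num_train_epochs=180,
    fp16=True,  # Native AMP mixed precision
)
```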

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 4.9813        | 18.18  | 300  | 2.8480          | 1.0    |
| 1.5729        | 36.36  | 600  | 0.8808          | 0.7159 |
| 0.219         | 54.55  | 900  | 0.9209          | 0.5983 |
| 0.1213        | 72.73  | 1200 | 0.9869          | 0.6005 |
| 0.0898        | 90.91  | 1500 | 1.0485          | 0.5840 |
| 0.0668        | 109.09 | 1800 | 1.0746          | 0.5514 |
| 0.0499        | 127.27 | 2100 | 1.0648          | 0.5341 |
| 0.0372        | 145.45 | 2400 | 1.1656          | 0.5280 |
| 0.0292        | 163.64 | 2700 | 1.1411          | 0.5133 |
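
The WER values above can be reproduced with the `evaluate` library; the strings in this sketch are purely illustrative.

```python
# Sketch of the WER metric reported in the tables above.
# The prediction/reference strings are hypothetical examples.
import evaluate

wer = evaluate.load("wer")
score = wer.compute(
    predictions=["halo dunia"],     # hypothetical model output
    references=["halo dunia ini"],  # hypothetical reference transcript
)
print(score)  # 1 error over 3 reference words -> ~0.333
```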


### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1