---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xtreme_s
metrics:
- wer
base_model: facebook/wav2vec2-large-xlsr-53
model-index:
- name: wav2vec2-XLS-R-Fleurs-demo-google-colab-Ezra_William_Prod6
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: xtreme_s
      type: xtreme_s
      config: fleurs.id_id
      split: test
      args: fleurs.id_id
    metrics:
    - type: wer
      value: 0.50321808112558
      name: Wer
---

# wav2vec2-XLS-R-Fleurs-demo-google-colab-Ezra_William_Prod6

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the xtreme_s dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8186
- Wer: 0.5032

## Model description

This is [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53), a wav2vec 2.0 XLSR checkpoint, fine-tuned for automatic speech recognition on Indonesian speech from the `fleurs.id_id` configuration of the XTREME-S benchmark (see the metadata above).

## Intended uses & limitations

The model is intended for transcribing Indonesian speech sampled at 16 kHz, matching the FLEURS data it was fine-tuned on. With a test-set WER of roughly 0.50, about half of the predicted words differ from the references, so transcripts should be treated as rough drafts rather than production-quality output.
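
As a usage sketch, the checkpoint can be run through the `transformers` ASR pipeline. The repository id and audio path below are assumptions reconstructed from the model name in this card; substitute the real Hub id or a local path:

```python
from transformers import pipeline

# Hypothetical repository id; replace with the actual Hub id or a local path.
asr = pipeline(
    "automatic-speech-recognition",
    model="EzraWilliam/wav2vec2-XLS-R-Fleurs-demo-google-colab-Ezra_William_Prod6",
)

# Accepts a path to an audio file; the pipeline decodes and resamples to 16 kHz.
print(asr("indonesian_sample.wav")["text"])
```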

## Training and evaluation data

The model was fine-tuned on the `fleurs.id_id` (Indonesian FLEURS) configuration of the `xtreme_s` dataset, and the reported loss and WER were computed on its `test` split, as declared in the metadata above.
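
As a hedged loading sketch (assuming the dataset lives under the Hub id `google/xtreme_s`; the card itself only names `xtreme_s`):

```python
from datasets import load_dataset

# Hub id and column name are assumptions; the card only names "xtreme_s".
test_set = load_dataset(
    "google/xtreme_s",
    "fleurs.id_id",          # Indonesian FLEURS configuration
    split="test",
    trust_remote_code=True,  # xtreme_s ships a Python loading script
)

print(test_set[0]["transcription"])  # reference text for the first utterance
```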

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reconstructed `TrainingArguments` sketch follows the list):
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 60
- mixed_precision_training: Native AMP
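
As referenced above, these settings map onto `transformers.TrainingArguments` roughly as follows. This is a reconstruction from the list, not the original training script; `output_dir` and the surrounding model/data setup are placeholders:

```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameter list above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="wav2vec2-XLS-R-Fleurs-demo-google-colab-Ezra_William_Prod6",
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    lr_scheduler_type="linear",
    warmup_steps=600,
    num_train_epochs=60,
    fp16=True,                      # "Native AMP" mixed-precision training
)
```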

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9714        | 18.18 | 300  | 2.8507          | 1.0    |
| 1.2966        | 36.36 | 600  | 0.8132          | 0.6056 |
| 0.1563        | 54.55 | 900  | 0.8186          | 0.5032 |
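
For reference, the WER column can be reproduced with the `evaluate` library; a minimal sketch with hypothetical transcripts:

```python
import evaluate

wer_metric = evaluate.load("wer")

# Hypothetical strings; in practice, predictions come from decoding the model's
# output and references from the test-split transcriptions.
predictions = ["saya pergi ke pasar", "dia membaca buku itu"]
references  = ["saya pergi ke pasar pagi ini", "dia membaca buku itu"]

print(wer_metric.compute(predictions=predictions, references=references))
```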


### Framework versions

- Transformers 4.37.2
- PyTorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1