---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
model-index:
- name: bambara_mms_20_hour_jeli_asr_dataset
  results: []
---


[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/asr-africa-research-team/ASR%20Africa/runs/wq6zc6lt)
# bambara_mms_20_hour_jeli_asr_dataset

This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) for Bambara automatic speech recognition, trained on a 20-hour Jeli ASR dataset (as the model name indicates; the Trainer did not record the dataset identifier).

## Model description

[facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) is the 1-billion-parameter, Wav2Vec2-based checkpoint from Meta's Massively Multilingual Speech (MMS) project, which supports speech recognition in over 1,000 languages. This checkpoint further fine-tunes it for Bambara ASR using the hyperparameters listed below.

## Intended uses & limitations

The model is intended for transcribing Bambara speech sampled at 16 kHz. No evaluation results are reported in this card, so robustness to out-of-domain audio, other dialects, or noisy recordings is unknown. The CC-BY-NC-4.0 license restricts use to non-commercial applications.
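
A minimal inference sketch using the standard Wav2Vec2 CTC API in `transformers`. The Hub repo id below is a placeholder assumption; substitute the actual path where this checkpoint is hosted.

```python
import torch
import librosa
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "asr-africa/bambara_mms_20_hour_jeli_asr_dataset"  # placeholder repo id, not confirmed

processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# MMS models expect 16 kHz mono audio.
speech, _ = librosa.load("bambara_sample.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: take the most likely token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```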

## Training and evaluation data

As the model name indicates, training used approximately 20 hours of Bambara speech from a Jeli ASR dataset. The evaluation split was not recorded by the Trainer.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
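
For reference, these settings map onto `transformers` `TrainingArguments` as sketched below. This is a reconstruction from the list above, not the original training script; the Adam betas and epsilon listed are the library defaults.

```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameter list above (a sketch, not the original script).
training_args = TrainingArguments(
    output_dir="bambara_mms_20_hour_jeli_asr_dataset",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=50,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers defaults.
)
```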

### Framework versions

- Transformers 4.45.1
- Pytorch 2.1.0+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3