ad019el committed
Commit
5d68611
1 Parent(s): 1610eb5

End of training

Files changed (2)
  1. README.md +13 -11
  2. pytorch_model.bin +1 -1
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+base_model: ad019el/tamasheq-99-2
 tags:
 - generated_from_trainer
 metrics:
@@ -13,10 +14,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # tamasheq-99-2
 
-This model was trained from scratch on the None dataset.
+This model is a fine-tuned version of [ad019el/tamasheq-99-2](https://huggingface.co/ad019el/tamasheq-99-2) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.3969
-- Wer: 0.8971
+- Loss: 1.2993
+- Wer: 0.9
 
 ## Model description
 
@@ -36,11 +37,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 3e-05
-- train_batch_size: 16
+- train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 2
-- total_train_batch_size: 32
+- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
@@ -50,16 +51,17 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |
 |:-------------:|:-----:|:----:|:---------------:|:------:|
-| 3.4977 | 15.79 | 300 | 1.5596 | 1.0941 |
-| 0.8333 | 31.58 | 600 | 1.2255 | 0.9147 |
-| 0.4626 | 47.37 | 900 | 1.2934 | 0.8941 |
-| 0.409 | 63.16 | 1200 | 1.3398 | 0.8765 |
-| 0.3349 | 78.95 | 1500 | 1.3969 | 0.8971 |
+| 4.1781 | 4.62 | 300 | 1.7684 | 1.0853 |
+| 1.1518 | 9.23 | 600 | 1.1398 | 0.8588 |
+| 0.789 | 13.85 | 900 | 1.0551 | 0.8647 |
+| 0.659 | 18.46 | 1200 | 1.1470 | 0.8735 |
+| 0.5883 | 23.08 | 1500 | 1.2403 | 0.9 |
+| 0.5239 | 27.69 | 1800 | 1.2993 | 0.9 |
 
 
 ### Framework versions
 
-- Transformers 4.32.1
+- Transformers 4.33.0
 - Pytorch 2.0.1+cu118
 - Datasets 2.14.4
 - Tokenizers 0.13.3
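The updated hyperparameter block above maps directly onto `transformers.TrainingArguments`. Below is a minimal sketch of the equivalent configuration; the `output_dir` name is an assumption (the card only pins the values listed in the diff), and the Adam betas/epsilon stated in the card match the `Trainer` defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# Minimal sketch reconstructing the card's updated hyperparameters.
# Only the values listed in the diff are taken from the card; anything
# else (output_dir, epoch count) is left unset or clearly assumed.
training_args = TrainingArguments(
    output_dir="tamasheq-99-2",      # hypothetical output directory
    learning_rate=3e-5,
    per_device_train_batch_size=8,   # train_batch_size: 8
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,
    gradient_accumulation_steps=2,   # total_train_batch_size: 8 * 2 = 16
    lr_scheduler_type="linear",
    warmup_steps=500,                # lr_scheduler_warmup_steps: 500
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
)
```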
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7de39d6aed875a661ab66f83678f494d3ad2fe108c1dc2abe378acec23fde976
+oid sha256:61ee5bcdf95940075f5a235bd573cb2fd6663de82f3d58099364b3ea3000a769
 size 1262082221
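For context on the `pytorch_model.bin` hunk: the weights are stored with Git LFS, so the diff only swaps the pointer file's sha256 oid while the stated size stays 1262082221 bytes; the actual ~1.26 GB binary lives in LFS storage. A minimal sketch of fetching the real file with `huggingface_hub`, assuming the repo id from the card's link:

```python
from huggingface_hub import hf_hub_download

# Download the LFS-backed checkpoint rather than the pointer file.
weights_path = hf_hub_download(
    repo_id="ad019el/tamasheq-99-2",  # repo id taken from the card's link
    filename="pytorch_model.bin",
    revision="5d68611",               # pin to this commit; omit for latest
)
print(weights_path)  # local cache path of the downloaded checkpoint
```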