Elalimy committed
Commit
36aed80
1 Parent(s): 2e2cd72

End of training

README.md ADDED
@@ -0,0 +1,84 @@
+---
+license: apache-2.0
+library_name: peft
+tags:
+- generated_from_trainer
+metrics:
+- bleu
+base_model: Helsinki-NLP/opus-mt-en-ar
+model-index:
+- name: finetuned_helsinki_peft_model__en_to_ar
+  results: []
+---
+
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
+
+# finetuned_helsinki_peft_model__en_to_ar
+
+This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 1.7655
+- Bleu: 26.7751
+- Gen Len: 13.524
+
+## Model description
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- learning_rate: 2e-05
+- train_batch_size: 24
+- eval_batch_size: 24
+- seed: 42
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: linear
+- lr_scheduler_warmup_steps: 500
+- training_steps: 10000
+- mixed_precision_training: Native AMP
+
+### Training results
+
+| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Gen Len |
+|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
+| 1.9431        | 0.2   | 500   | 1.7705          | 26.9288 | 13.456  |
+| 1.9149        | 0.4   | 1000  | 1.7700          | 27.5825 | 13.516  |
+| 1.9356        | 0.6   | 1500  | 1.7689          | 26.9314 | 13.5445 |
+| 1.9242        | 0.8   | 2000  | 1.7681          | 26.9265 | 13.4585 |
+| 1.9542        | 1.0   | 2500  | 1.7686          | 26.9745 | 13.5045 |
+| 1.9417        | 1.2   | 3000  | 1.7676          | 27.1966 | 13.545  |
+| 1.9233        | 1.4   | 3500  | 1.7675          | 26.7973 | 13.5225 |
+| 1.929         | 1.6   | 4000  | 1.7670          | 26.8808 | 13.5455 |
+| 1.9504        | 1.8   | 4500  | 1.7669          | 27.0192 | 13.484  |
+| 1.9148        | 2.0   | 5000  | 1.7673          | 27.0003 | 13.5225 |
+| 1.9052        | 2.2   | 5500  | 1.7667          | 26.9055 | 13.5455 |
+| 1.9443        | 2.4   | 6000  | 1.7665          | 26.8744 | 13.492  |
+| 1.9212        | 2.6   | 6500  | 1.7667          | 26.7736 | 13.51   |
+| 1.9306        | 2.8   | 7000  | 1.7665          | 26.8395 | 13.491  |
+| 1.9552        | 3.0   | 7500  | 1.7659          | 26.853  | 13.4695 |
+| 1.9396        | 3.2   | 8000  | 1.7653          | 26.7499 | 13.4975 |
+| 1.9306        | 3.4   | 8500  | 1.7654          | 26.7146 | 13.53   |
+| 1.917         | 3.6   | 9000  | 1.7656          | 26.8062 | 13.527  |
+| 1.9285        | 3.8   | 9500  | 1.7655          | 26.7167 | 13.5315 |
+| 1.9109        | 4.0   | 10000 | 1.7655          | 26.7751 | 13.524  |
+
+
+### Framework versions
+
+- PEFT 0.9.1.dev0
+- Transformers 4.38.2
+- Pytorch 2.1.0+cu121
+- Datasets 2.18.0
+- Tokenizers 0.15.2
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:76f28798147341da79a7771190cce8746dbba9b686fcdce2da404725369
+oid sha256:b28bb8e35929a6210295ca62464197e6f9b0b2e1cae62951b0ae6289f8830036
 size 2369344
runs/Mar08_22-05-04_cdda3d6febef/events.out.tfevents.1709935506.cdda3d6febef.1069.3 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5e96474542beb73fd12a39b42c085c9297197f38178b55aba36ec26c7c10e65b
-size 16568
+oid sha256:c0aa2d65d6fdf9ad6b2982793f539fe022fbc2e9d060333386a055722c59bc12
+size 17503