Minata committed on
Commit 2ec62ea
1 Parent(s): 543739d

End of training

Files changed (1)
  1. README.md +69 -0
README.md ADDED
@@ -0,0 +1,69 @@
+ ---
+ license: apache-2.0
+ library_name: peft
+ tags:
+ - generated_from_trainer
+ base_model: mistralai/Mistral-7B-v0.1
+ model-index:
+ - name: 512_block_src_fm_fc_ms_ff_method2testcases_method2test-mistral-7B_v0
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # 512_block_src_fm_fc_ms_ff_method2testcases_method2test-mistral-7B_v0
+
+ This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.1088
+
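+ A minimal sketch of loading this PEFT (LoRA) adapter on top of the base model is shown below. The adapter repository id is an assumption (committer namespace `Minata/` plus the model name above) and should be adjusted to the actual Hub path; the task and expected input format are not documented in this card.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ base_id = "mistralai/Mistral-7B-v0.1"
+ # Assumed adapter repo id; replace with the real namespace/name on the Hub.
+ adapter_id = "Minata/512_block_src_fm_fc_ms_ff_method2testcases_method2test-mistral-7B_v0"
+
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+ base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
+ # Attach the fine-tuned LoRA adapter weights to the frozen base model.
+ model = PeftModel.from_pretrained(base_model, adapter_id)
+
+ prompt = "..."  # placeholder input; the training data / task is not documented in this card
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+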
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2.5e-05
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 100
+ - training_steps: 1000
+
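+ As a rough sketch, the hyperparameters above map onto a `transformers`/`peft` configuration as follows. The Adam betas and epsilon listed are the library defaults; the LoRA settings (rank, alpha, dropout, target modules) are not recorded in this card and are placeholders only, and the evaluation/logging cadence of 100 steps is inferred from the results table below.
+
+ ```python
+ from transformers import TrainingArguments
+ from peft import LoraConfig
+
+ # Training arguments taken from the list above.
+ training_args = TrainingArguments(
+     output_dir="512_block_src_fm_fc_ms_ff_method2testcases_method2test-mistral-7B_v0",
+     learning_rate=2.5e-5,
+     per_device_train_batch_size=4,
+     per_device_eval_batch_size=4,
+     seed=42,
+     lr_scheduler_type="linear",
+     warmup_steps=100,
+     max_steps=1000,
+     evaluation_strategy="steps",  # inferred: validation loss reported every 100 steps
+     eval_steps=100,
+     logging_steps=100,
+ )
+
+ # Hypothetical LoRA configuration: these values are NOT documented in this card.
+ peft_config = LoraConfig(
+     r=16,
+     lora_alpha=32,
+     lora_dropout=0.05,
+     target_modules=["q_proj", "v_proj"],
+     task_type="CAUSAL_LM",
+ )
+ ```
+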
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 0.9983 | 0.0 | 100 | 1.1595 |
+ | 0.9435 | 0.0 | 200 | 1.1294 |
+ | 0.906 | 0.0 | 300 | 1.1200 |
+ | 0.8821 | 0.0 | 400 | 1.1183 |
+ | 0.8983 | 0.0 | 500 | 1.1131 |
+ | 0.8696 | 0.01 | 600 | 1.1222 |
+ | 0.8919 | 0.01 | 700 | 1.1163 |
+ | 0.8876 | 0.01 | 800 | 1.1043 |
+ | 0.8582 | 0.01 | 900 | 1.1071 |
+ | 0.8854 | 0.01 | 1000 | 1.1088 |
+
+
+ ### Framework versions
+
+ - PEFT 0.8.2
+ - Transformers 4.37.2
+ - Pytorch 2.2.0+cu121
+ - Datasets 2.17.0
+ - Tokenizers 0.15.2