RishuD7 committed
Commit
282e746
1 Parent(s): 2a513f8

Model save

README.md ADDED
@@ -0,0 +1,81 @@
+ ---
+ license: llama3.1
+ base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
+ tags:
+ - trl
+ - sft
+ - generated_from_trainer
+ model-index:
+ - name: Llams_3.1_8B_instruct_behaviour_cloning_extra_things_updated_grouped
+   results: []
+ library_name: peft
+ ---
+ 
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+ 
+ # Llams_3.1_8B_instruct_behaviour_cloning_extra_things_updated_grouped
+ 
+ This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1928
+ - Model Preparation Time: 0.0065
+ 
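Since this is a `peft` adapter (see `library_name` in the metadata), it would typically be loaded on top of the base model. A minimal, hypothetical usage sketch — the adapter repo id is inferred from the commit author and model name and may differ:

```python
# Hypothetical usage sketch; the adapter repo id below is an assumption.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
# Attach the fine-tuned LoRA/PEFT weights on top of the base model.
model = PeftModel.from_pretrained(
    base,
    "RishuD7/Llams_3.1_8B_instruct_behaviour_cloning_extra_things_updated_grouped",  # assumed repo id
)
```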
+ ## Model description
+ 
+ More information needed
+ 
+ ## Intended uses & limitations
+ 
+ More information needed
+ 
+ ## Training and evaluation data
+ 
+ More information needed
+ 
+ ## Training procedure
+ 
+ 
+ The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch follows the list):
+ - quant_method: bitsandbytes
+ - _load_in_8bit: False
+ - _load_in_4bit: True
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: nf4
+ - bnb_4bit_use_double_quant: True
+ - bnb_4bit_compute_dtype: bfloat16
+ - bnb_4bit_quant_storage: uint8
+ - load_in_4bit: True
+ - load_in_8bit: False
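A minimal sketch of this 4-bit load in `transformers` (assuming `bitsandbytes` is installed; parameter values are copied from the list above, everything else is illustrative, not the authors' actual training code):

```python
# Sketch only: rebuilds the 4-bit quantization settings listed above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.uint8,
    llm_int8_threshold=6.0,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",  # illustrative; the original device placement is unknown
)
```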
+ ### Training hyperparameters
+ 
+ The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
+ - learning_rate: 5e-05
+ - train_batch_size: 16
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 3
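Mapped onto `transformers.TrainingArguments`, these would look roughly like the following hypothetical reconstruction (`output_dir` and any unlisted settings are assumptions):

```python
# Hypothetical reconstruction of the listed hyperparameters. The Adam
# betas/epsilon above are the TrainingArguments defaults, so they are not
# set explicitly. Effective batch size: 16 per device x 2 accumulation
# steps = 32, matching total_train_batch_size.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llams_3.1_8B_instruct_behaviour_cloning_extra_things_updated_grouped",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    seed=42,
)
```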
+ 
+ ### Training results
+ 
+ | Training Loss | Epoch  | Step | Validation Loss | Model Preparation Time |
+ |:-------------:|:------:|:----:|:---------------:|:----------------------:|
+ | 0.0929        | 0.9996 | 1236 | 0.1423          | 0.0065                 |
+ | 0.0689        | 2.0    | 2473 | 0.1724          | 0.0065                 |
+ | 0.0588        | 2.9988 | 3708 | 0.1928          | 0.0065                 |
+ 
+ 
+ ### Framework versions
+ 
+ - PEFT 0.4.0
+ - Transformers 4.44.0
+ - PyTorch 2.3.1+cu121
+ - Datasets 2.13.0
+ - Tokenizers 0.19.1
runs/Aug08_13-53-01_a5d94f4c57f7/events.out.tfevents.1723125739.a5d94f4c57f7.9749.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:331c86550a60aa59e3dbe3698dac3f130924d7aa9725a81edb562b6f358d0589
- size 85234
+ oid sha256:a7994b8bd62ee564894f57ff256b7f18f912550580716e6b5312d510da850c1f
+ size 85925