Zintoulou committed
Commit a26922b
1 Parent(s): a43790f

Model save

README.md ADDED
@@ -0,0 +1,72 @@
+ ---
+ license: llama2
+ library_name: peft
+ tags:
+ - generated_from_trainer
+ base_model: codellama/CodeLlama-7b-Instruct-hf
+ model-index:
+ - name: finetuningqv1
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # finetuningqv1
+
+ This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.0262
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.001
+ - train_batch_size: 20
+ - eval_batch_size: 20
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 8
+
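As a rough illustration, the hyperparameters above would map onto the standard `transformers` `TrainingArguments` roughly as sketched below. The `output_dir`, the per-device batch-size mapping, and the `evaluation_strategy` value are assumptions not taken from this commit.

```python
# Minimal sketch, assuming the standard Hugging Face Trainer API.
# output_dir is a placeholder; datasets are not part of this commit.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuningqv1",        # hypothetical output path
    learning_rate=1e-3,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
    seed=42,
    num_train_epochs=8,
    lr_scheduler_type="linear",        # the Adam betas/epsilon above are the optimizer defaults
    evaluation_strategy="epoch",       # assumption: matches the per-epoch evaluation table below
)
```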
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 2.688         | 1.0   | 1    | 2.7935          |
+ | 2.3134        | 2.0   | 2    | 2.3100          |
+ | 1.8897        | 3.0   | 3    | 2.0244          |
+ | 1.5882        | 4.0   | 4    | 1.7493          |
+ | 1.3105        | 5.0   | 5    | 1.4693          |
+ | 1.0045        | 6.0   | 6    | 1.2401          |
+ | 0.7371        | 7.0   | 7    | 1.0937          |
+ | 0.5443        | 8.0   | 8    | 1.0262          |
+
+ ### Framework versions
+
+ - Transformers 4.36.0
+ - Pytorch 2.0.1
+ - Datasets 2.16.1
+ - Tokenizers 0.15.1
+ - PEFT 0.6.0
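For reference, a minimal sketch of loading this adapter on top of the base model for inference follows. The repository id `Zintoulou/finetuningqv1` is inferred from the committer and model name and may differ.

```python
# Minimal sketch, assuming the adapter is published under the inferred repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-Instruct-hf")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-Instruct-hf")

# Attach the fine-tuned LoRA weights from this repository (repo id assumed).
model = PeftModel.from_pretrained(base_model, "Zintoulou/finetuningqv1")
model.eval()
```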
adapter_config.json CHANGED
@@ -16,8 +16,8 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "v_proj",
-    "q_proj"
+    "q_proj",
+    "v_proj"
   ],
   "task_type": "CAUSAL_LM"
 }
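The only change in this hunk is the ordering of `target_modules`; the adapter still targets the `q_proj` and `v_proj` attention projections. As a hedged sketch of the corresponding PEFT setup (rank, alpha, and dropout are not shown in this diff, so the values below are assumptions):

```python
# Sketch of a LoRA config matching the target_modules in adapter_config.json.
# r, lora_alpha, and lora_dropout are assumed values, not taken from this diff.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    target_modules=["q_proj", "v_proj"],  # attention query/value projections
    task_type="CAUSAL_LM",
    r=8,                                  # assumption
    lora_alpha=16,                        # assumption
    lora_dropout=0.05,                    # assumption
)

base_model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-Instruct-hf")
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()
```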
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:126382595b9bb370471653cd76f87057a3974e014861d8c9ff97e2dc82cc7a38
+ oid sha256:5b559129f3d281969fe17ebb7feaead9515261fb1fdf5ff41178aa9b47b4ba63
  size 33571624
runs/Feb01_23-38-01_notebook-rt231gpu1aeb26e4e66634fdf88cdc063fc8a1afd-6b98dc554pkj/events.out.tfevents.1706830687.notebook-rt231gpu1aeb26e4e66634fdf88cdc063fc8a1afd-6b98dc554pkj.126.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:98583ce3909b5ef986bb38e2e7e1b2bba3864e319d963140f9c111d0fe2d3e7f
+ size 8168
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e877e1d28ee695e70ee294fdc075a0c7a34471a21c0c0989881cdcbe8ee6a19e
+ oid sha256:3832c3dc5925f6eacd42e2a32130c35a546f61a5e266f315b98ba54de9de4539
  size 4283