flozi00 committed
Commit 44c5c3a
Parent: 0070f30

Upload model

Files changed (2):
  1. README.md +127 -6
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -1,7 +1,128 @@
  ---
- datasets:
- - flozi00/conversations
- language:
- - de
- - en
- ---
  ---
+ library_name: peft
+ ---
+ ## Training procedure
+
+ The following `bitsandbytes` quantization config was used during training:
+ - load_in_8bit: False
+ - load_in_4bit: True
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: fp4
+ - bnb_4bit_use_double_quant: True
+ - bnb_4bit_compute_dtype: float16
+
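For readers of the card, the settings listed above map one-to-one onto keyword arguments of `transformers`' `BitsAndBytesConfig`. A minimal sketch (the actual `BitsAndBytesConfig` call is shown only in a comment, since this card ships no code and the sketch should stay dependency-free):

```python
# The bitsandbytes quantization settings from the card, as plain kwargs.
# Key names match BitsAndBytesConfig's parameters in transformers.
quant_kwargs = {
    "load_in_8bit": False,
    "load_in_4bit": True,                      # 4-bit NF4/FP4 weight loading
    "llm_int8_threshold": 6.0,                 # outlier threshold (int8 path only)
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "fp4",              # FP4 rather than NF4 quantization
    "bnb_4bit_use_double_quant": True,         # quantize the quantization constants too
    "bnb_4bit_compute_dtype": "float16",       # matmuls run in fp16
}

# Sanity check: 8-bit and 4-bit loading are mutually exclusive.
assert not (quant_kwargs["load_in_8bit"] and quant_kwargs["load_in_4bit"])

# With transformers installed, the dict can be passed straight through:
#   from transformers import BitsAndBytesConfig
#   bnb_config = BitsAndBytesConfig(**quant_kwargs)
```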
+ ### Framework versions
+
+ - PEFT 0.4.0.dev0
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:456b9f48ce91afcb0c5c23298a9f855c42097f40ffba52f957d566c9ac0d606a
+ oid sha256:ecefc7305e4b1d147c34f68d0f1e701abee437c1f0a6bd2f16789d09fb8043bb
  size 261189453