flozi00 committed on
Commit
763272c
1 Parent(s): 44c5c3a

Upload model

Files changed (2)
  1. README.md +60 -0
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -103,6 +103,61 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_use_double_quant: True
 - bnb_4bit_compute_dtype: float16
 
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
 The following `bitsandbytes` quantization config was used during training:
 - load_in_8bit: False
 - load_in_4bit: True
@@ -124,5 +179,10 @@ The following `bitsandbytes` quantization config was used during training:
 - PEFT 0.4.0.dev0
 - PEFT 0.4.0.dev0
 - PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
 
 - PEFT 0.4.0.dev0
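The added blocks above all repeat one and the same set of `bitsandbytes` settings. As a minimal sketch, those settings can be written as a plain Python dict whose keys mirror the README's own field names (the variable name `quant_config` is chosen here for illustration; these keys also happen to match the constructor arguments of `transformers.BitsAndBytesConfig`):

```python
# The quantization settings recorded in the README diff, as a plain dict.
# Values are copied verbatim from the config block; "float16" is kept as a
# string here rather than a torch dtype to stay dependency-free.
quant_config = {
    "load_in_8bit": False,                      # 8-bit loading disabled
    "load_in_4bit": True,                       # 4-bit (QLoRA-style) loading enabled
    "llm_int8_threshold": 6.0,                  # outlier threshold for int8 path
    "llm_int8_skip_modules": None,              # no modules excluded from quantization
    "llm_int8_enable_fp32_cpu_offload": False,  # no fp32 CPU offload
    "llm_int8_has_fp16_weight": False,          # weights not kept in fp16 for int8
    "bnb_4bit_quant_type": "fp4",               # 4-bit quantization format
    "bnb_4bit_use_double_quant": True,          # nested (double) quantization on
    "bnb_4bit_compute_dtype": "float16",        # compute dtype during forward pass
}
```

Since every repeated block is identical, deduplicating the README to a single copy of this block would lose no information.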
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ecefc7305e4b1d147c34f68d0f1e701abee437c1f0a6bd2f16789d09fb8043bb
+oid sha256:159af2dc65890d5014a561f4d59216da7d85980077f8ac6cd63cdb84858443eb
 size 261189453