flozi00 committed
Commit 1877f67
1 Parent(s): 763272c

Upload model

Files changed (2)
  1. README.md +60 -0
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -158,6 +158,61 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_use_double_quant: True
 - bnb_4bit_compute_dtype: float16
 
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
 The following `bitsandbytes` quantization config was used during training:
 - load_in_8bit: False
 - load_in_4bit: True
@@ -184,5 +239,10 @@ The following `bitsandbytes` quantization config was used during training:
 - PEFT 0.4.0.dev0
 - PEFT 0.4.0.dev0
 - PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
 
 - PEFT 0.4.0.dev0
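
The repeated blocks in this diff all describe the same 4-bit setup. As a minimal sketch, the listed values map onto a `transformers.BitsAndBytesConfig` roughly as follows; the base model id, the adapter repo id, and the causal-LM task are placeholders, since the commit names none of them:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Values copied from the quantization config listed in the model card.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# Placeholders: the commit does not name the base model or the adapter repo.
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",  # hypothetical
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "flozi00/adapter-repo")  # hypothetical
```

With PEFT 0.4.0.dev0, `PeftModel.from_pretrained` fetches `adapter_model.bin` from the adapter repo and attaches the adapter weights on top of the quantized base model.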
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:159af2dc65890d5014a561f4d59216da7d85980077f8ac6cd63cdb84858443eb
+oid sha256:ca6a9830d69d0cd5e5685bde81233f9ed1664d8f658f46b5be00444913d2428e
 size 261189453
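
Since only the git-lfs pointer changed, a quick way to confirm which adapter you actually downloaded is to hash the local file against the pointer's oid and size; a sketch, with the local path as a placeholder:

```python
import hashlib

# Expected oid and size come from the new LFS pointer in this commit.
EXPECTED_OID = "ca6a9830d69d0cd5e5685bde81233f9ed1664d8f658f46b5be00444913d2428e"
EXPECTED_SIZE = 261189453

path = "adapter_model.bin"  # placeholder: local path to the downloaded file

h = hashlib.sha256()
size = 0
with open(path, "rb") as f:
    # Hash in 1 MiB chunks to avoid loading the 261 MB file into memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
        size += len(chunk)

assert size == EXPECTED_SIZE, f"size mismatch: got {size}"
assert h.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("adapter_model.bin matches this commit's LFS pointer")
```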