/home/floriadmin/miniforge3/envs/mlc/bin/python -m mlc_llm gen_config /tmp/tmpo8_dluj_/repo --quantization q4f32_1 --conv-template chatml --output /tmp/tmpx3x6wnfw
[2024-03-19 01:27:45] INFO auto_config.py:115: Found model configuration: /tmp/tmpo8_dluj_/repo/config.json
[2024-03-19 01:27:45] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override.
[2024-03-19 01:27:45] INFO llama_model.py:52: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-03-19 01:27:45] INFO llama_model.py:72: prefill_chunk_size defaults to context_window_size (4096)
[2024-03-19 01:27:45] INFO config.py:106: Overriding max_batch_size from 1 to 80
[2024-03-19 01:27:45] INFO gen_config.py:133: [generation_config.json] Setting bos_token_id: 1
[2024-03-19 01:27:45] INFO gen_config.py:133: [generation_config.json] Setting eos_token_id: 2
[2024-03-19 01:27:45] INFO gen_config.py:133: [generation_config.json] Setting pad_token_id: 0
[2024-03-19 01:27:45] INFO gen_config.py:145: Found tokenizer config: /tmp/tmpo8_dluj_/repo/tokenizer.model. Copying to /tmp/tmpx3x6wnfw/tokenizer.model
[2024-03-19 01:27:45] INFO gen_config.py:145: Found tokenizer config: /tmp/tmpo8_dluj_/repo/tokenizer.json. Copying to /tmp/tmpx3x6wnfw/tokenizer.json
[2024-03-19 01:27:45] INFO gen_config.py:147: Not found tokenizer config: /tmp/tmpo8_dluj_/repo/vocab.json
[2024-03-19 01:27:45] INFO gen_config.py:147: Not found tokenizer config: /tmp/tmpo8_dluj_/repo/merges.txt
[2024-03-19 01:27:45] INFO gen_config.py:145: Found tokenizer config: /tmp/tmpo8_dluj_/repo/added_tokens.json. Copying to /tmp/tmpx3x6wnfw/added_tokens.json
[2024-03-19 01:27:45] INFO gen_config.py:145: Found tokenizer config: /tmp/tmpo8_dluj_/repo/tokenizer_config.json. Copying to /tmp/tmpx3x6wnfw/tokenizer_config.json
[2024-03-19 01:27:45] INFO gen_config.py:75: [System default] Setting temperature: 0.7
[2024-03-19 01:27:45] INFO gen_config.py:75: [System default] Setting presence_penalty: 0.0
[2024-03-19 01:27:45] INFO gen_config.py:75: [System default] Setting frequency_penalty: 0.0
[2024-03-19 01:27:45] INFO gen_config.py:75: [System default] Setting repetition_penalty: 1.0
[2024-03-19 01:27:45] INFO gen_config.py:75: [System default] Setting top_p: 0.95
[2024-03-19 01:27:45] INFO gen_config.py:75: [System default] Setting mean_gen_len: 128
[2024-03-19 01:27:45] INFO gen_config.py:75: [System default] Setting max_gen_len: 512
[2024-03-19 01:27:45] INFO gen_config.py:75: [System default] Setting shift_fill_factor: 0.3
[2024-03-19 01:27:45] INFO gen_config.py:198: Dumping configuration file to: /tmp/tmpx3x6wnfw/mlc-chat-config.json
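
The settings logged above are what gen_config writes into mlc-chat-config.json alongside the model and tokenizer metadata. The sketch below is hand-assembled from the log alone; the real file contains additional fields and the exact schema depends on the mlc_llm version:

    # Fields of /tmp/tmpx3x6wnfw/mlc-chat-config.json implied by the log (illustrative sketch only)
    chat_config = {
        "model_type": "llama",
        "quantization": "q4f32_1",
        "conv_template": "chatml",
        "context_window_size": 4096,   # fell back to max_position_embeddings
        "prefill_chunk_size": 4096,    # defaults to context_window_size
        "max_batch_size": 80,
        "bos_token_id": 1,
        "eos_token_id": 2,
        "pad_token_id": 0,
        "temperature": 0.7,
        "presence_penalty": 0.0,
        "frequency_penalty": 0.0,
        "repetition_penalty": 1.0,
        "top_p": 0.95,
        "mean_gen_len": 128,
        "max_gen_len": 512,
        "shift_fill_factor": 0.3,
    }
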
/home/floriadmin/miniforge3/envs/mlc/bin/python -m mlc_llm convert_weight /tmp/tmpo8_dluj_/repo --quantization q4f32_1 --source-format auto --output /tmp/tmpx3x6wnfw
[2024-03-19 01:27:46] INFO auto_config.py:115: Found model configuration: /tmp/tmpo8_dluj_/repo/config.json
[2024-03-19 01:27:47] INFO auto_device.py:76: Found device: cuda:0
[2024-03-19 01:27:47] INFO auto_device.py:76: Found device: cuda:1
[2024-03-19 01:27:47] INFO auto_device.py:76: Found device: cuda:2
[2024-03-19 01:27:47] INFO auto_device.py:76: Found device: cuda:3
[2024-03-19 01:27:47] INFO auto_device.py:76: Found device: cuda:4
[2024-03-19 01:27:47] INFO auto_device.py:76: Found device: cuda:5
[2024-03-19 01:27:47] INFO auto_device.py:76: Found device: cuda:6
[2024-03-19 01:27:47] INFO auto_device.py:76: Found device: cuda:7
[2024-03-19 01:27:47] INFO auto_device.py:76: Found device: cuda:8
[2024-03-19 01:27:47] INFO auto_device.py:76: Found device: cuda:9
[2024-03-19 01:27:48] INFO auto_device.py:85: Not found device: rocm:0
[2024-03-19 01:27:49] INFO auto_device.py:85: Not found device: metal:0
[2024-03-19 01:27:52] INFO auto_device.py:76: Found device: vulkan:0
[2024-03-19 01:27:52] INFO auto_device.py:76: Found device: vulkan:1
[2024-03-19 01:27:52] INFO auto_device.py:76: Found device: vulkan:2
[2024-03-19 01:27:52] INFO auto_device.py:76: Found device: vulkan:3
[2024-03-19 01:27:52] INFO auto_device.py:76: Found device: vulkan:4
[2024-03-19 01:27:52] INFO auto_device.py:76: Found device: vulkan:5
[2024-03-19 01:27:52] INFO auto_device.py:76: Found device: vulkan:6
[2024-03-19 01:27:52] INFO auto_device.py:76: Found device: vulkan:7
[2024-03-19 01:27:52] INFO auto_device.py:76: Found device: vulkan:8
[2024-03-19 01:27:52] INFO auto_device.py:76: Found device: vulkan:9
[2024-03-19 01:27:52] INFO auto_device.py:76: Found device: vulkan:10
[2024-03-19 01:27:53] INFO auto_device.py:85: Not found device: opencl:0
[2024-03-19 01:27:53] INFO auto_device.py:33: Using device: cuda:0
[2024-03-19 01:27:53] INFO auto_weight.py:70: Finding weights in: /tmp/tmpo8_dluj_/repo
[2024-03-19 01:27:53] INFO auto_weight.py:136: Not found Huggingface PyTorch
[2024-03-19 01:27:54] INFO auto_weight.py:160: Found source weight format: huggingface-safetensor. Source configuration: /tmp/tmpo8_dluj_/repo/model.safetensors.index.json
[2024-03-19 01:27:54] INFO auto_weight.py:106: Using source weight configuration: /tmp/tmpo8_dluj_/repo/model.safetensors.index.json. Use `--source` to override.
[2024-03-19 01:27:54] INFO auto_weight.py:110: Using source weight format: huggingface-safetensor. Use `--source-format` to override.
[2024-03-19 01:27:54] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override.
[2024-03-19 01:27:54] INFO llama_model.py:52: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-03-19 01:27:54] INFO llama_model.py:72: prefill_chunk_size defaults to context_window_size (4096)
Weight conversion with arguments:
  --config          /tmp/tmpo8_dluj_/repo/config.json
  --quantization    GroupQuantize(name='q4f32_1', kind='group-quant', group_size=32, quantize_dtype='int4', storage_dtype='uint32', model_dtype='float32', linear_weight_layout='NK', quantize_embedding=True, quantize_final_fc=True, num_elem_per_storage=8, num_storage_per_group=4, max_int_value=7)
  --model-type      llama
  --device          cuda:0
  --source          /tmp/tmpo8_dluj_/repo/model.safetensors.index.json
  --source-format   huggingface-safetensor
  --output          /tmp/tmpx3x6wnfw
Start storing to cache /tmp/tmpx3x6wnfw
[2024-03-19 01:27:55] INFO huggingface_loader.py:182: Loading HF parameters from: /tmp/tmpo8_dluj_/repo/model.safetensors
[2024-03-19 01:28:07] INFO group_quantization.py:232: Compiling quantize function for key: ((32002, 2048), float32, cuda, axis=1, output_transpose=False)
[2024-03-19 01:28:08] INFO huggingface_loader.py:164: [Quantized] Parameter: "lm_head.q_weight", shape: (32002, 256), dtype: uint32
[2024-03-19 01:28:09] INFO huggingface_loader.py:164: [Quantized] Parameter: "lm_head.q_scale", shape: (32002, 64), dtype: float32
/home/floriadmin/miniforge3/envs/mlc/lib/python3.11/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/home/floriadmin/miniforge3/envs/mlc/lib/python3.11/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  return self._float_to_str(self.smallest_subnormal)
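
The GroupQuantize arguments determine every quantized shape reported in this log: with group_size=32, eight 4-bit values per uint32 storage word (num_elem_per_storage=8), and quantization along axis=1, an (n, k) float32 matrix becomes an (n, k/8) uint32 q_weight plus an (n, k/32) float32 q_scale. That is exactly how the (32002, 2048) lm_head maps to (32002, 256) and (32002, 64) above. A minimal numpy sketch of the scheme (illustrative only; the actual conversion runs a compiled TVM kernel on cuda:0 and its exact encoding may differ):

    import numpy as np

    GROUP_SIZE = 32       # group_size: elements sharing one float32 scale
    ELEMS_PER_WORD = 8    # num_elem_per_storage: 4-bit codes packed per uint32
    MAX_INT = 7           # max_int_value for int4

    def quantize_q4f32_1(w):
        """Group-quantize a float32 matrix along axis=1 (illustrative sketch)."""
        n, k = w.shape
        groups = w.reshape(n, k // GROUP_SIZE, GROUP_SIZE)
        q_scale = np.abs(groups).max(axis=2) / MAX_INT        # one scale per group
        q = np.clip(np.round(groups / q_scale[..., None]) + MAX_INT, 0, 2 * MAX_INT)
        # Pack eight 4-bit codes into each uint32 storage word.
        q = q.astype(np.uint32).reshape(n, k // ELEMS_PER_WORD, ELEMS_PER_WORD)
        shifts = np.arange(ELEMS_PER_WORD, dtype=np.uint32) * 4
        q_weight = np.bitwise_or.reduce(q << shifts, axis=2)
        return q_weight, q_scale.astype(np.float32)

    w = np.random.randn(4, 2048).astype(np.float32)
    q_weight, q_scale = quantize_q4f32_1(w)
    assert q_weight.shape == (4, 256) and q_scale.shape == (4, 64)  # same k/8, k/32 ratios as lm_head
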
[2024-03-19 01:28:10] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.embed_tokens.q_weight", shape: (32002, 256), dtype: uint32
[2024-03-19 01:28:10] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.embed_tokens.q_scale", shape: (32002, 64), dtype: float32
[2024-03-19 01:28:10] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.0.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:10] INFO group_quantization.py:232: Compiling quantize function for key: ((2048, 5632), float32, cuda, axis=1, output_transpose=False)
[2024-03-19 01:28:11] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.0.mlp.down_proj.q_weight", shape: (2048, 704), dtype: uint32
[2024-03-19 01:28:11] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.0.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:11] INFO group_quantization.py:232: Compiling quantize function for key: ((11264, 2048), float32, cuda, axis=1, output_transpose=False)
[2024-03-19 01:28:12] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.0.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:12] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.0.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:12] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.0.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:12] INFO group_quantization.py:232: Compiling quantize function for key: ((2560, 2048), float32, cuda, axis=1, output_transpose=False)
[2024-03-19 01:28:12] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.0.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
[2024-03-19 01:28:12] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.0.self_attn.qkv_proj.q_scale", shape: (2560, 64), dtype: float32
[2024-03-19 01:28:12] INFO group_quantization.py:232: Compiling quantize function for key: ((2048, 2048), float32, cuda, axis=1, output_transpose=False)
[2024-03-19 01:28:13] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.0.self_attn.o_proj.q_weight", shape: (2048, 256), dtype: uint32
[2024-03-19 01:28:13] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.0.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
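
The quantize-function keys for layer 0 also pin down the architecture: down_proj is (2048, 5632) before quantization, so the MLP intermediate size is 5632 and gate_up_proj stacks gate and up into (11264, 2048); qkv_proj is (2560, 2048), consistent with grouped-query attention where 2560 rows split into query plus two smaller key/value blocks. A quick arithmetic check (the head counts are an inference from these shapes, not stated anywhere in the log):

    # Sanity-check the fused layer-0 shapes logged above.
    hidden_size = 2048
    intermediate_size = 5632                           # from the (2048, 5632) down_proj key
    num_q_heads, num_kv_heads, head_dim = 32, 4, 64    # assumed split; 32 * 64 == hidden_size

    assert 2 * intermediate_size == 11264                        # gate_up_proj rows
    assert (num_q_heads + 2 * num_kv_heads) * head_dim == 2560   # qkv_proj rows
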
"[1mmodel.layers.1.input_layernorm.weight[0m", shape: (2048,), dtype: float32 6%|βββββ | 8/135 [00:17<02:13, 1.05s/it] [2024-03-19 01:28:13] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.1.mlp.down_proj.q_weight[0m", shape: (2048, 704), dtype: uint32 6%|βββββ | 8/135 [00:17<02:13, 1.05s/it] [2024-03-19 01:28:13] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.1.mlp.down_proj.q_scale[0m", shape: (2048, 176), dtype: float32 6%|βββββ | 8/135 [00:17<02:13, 1.05s/it] 7%|ββββββ | 10/135 [00:17<01:22, 1.51it/s] [2024-03-19 01:28:13] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.1.mlp.gate_up_proj.q_weight[0m", shape: (11264, 256), dtype: uint32 7%|ββββββ | 10/135 [00:18<01:22, 1.51it/s] [2024-03-19 01:28:13] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.1.mlp.gate_up_proj.q_scale[0m", shape: (11264, 64), dtype: float32 7%|ββββββ | 10/135 [00:18<01:22, 1.51it/s] 8%|βββββββ | 11/135 [00:18<01:13, 1.69it/s] [2024-03-19 01:28:14] INFO huggingface_loader.py:172: [Not quantized] Parameter: "[1mmodel.layers.1.post_attention_layernorm.weight[0m", shape: (2048,), dtype: float32 8%|βββββββ | 11/135 [00:18<01:13, 1.69it/s] [2024-03-19 01:28:14] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.1.self_attn.qkv_proj.q_weight[0m", shape: (2560, 256), dtype: uint32 8%|βββββββ | 11/135 [00:18<01:13, 1.69it/s] [2024-03-19 01:28:14] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.1.self_attn.qkv_proj.q_scale[0m", shape: (2560, 64), dtype: float32 8%|βββββββ | 11/135 [00:18<01:13, 1.69it/s] [2024-03-19 01:28:14] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.1.self_attn.o_proj.q_weight[0m", shape: (2048, 256), dtype: uint32 8%|βββββββ | 11/135 [00:18<01:13, 1.69it/s] [2024-03-19 01:28:14] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.1.self_attn.o_proj.q_scale[0m", shape: (2048, 64), dtype: float32 8%|βββββββ | 11/135 [00:18<01:13, 1.69it/s] 10%|βββββββββ | 14/135 [00:18<00:38, 3.12it/s] [2024-03-19 01:28:14] INFO huggingface_loader.py:172: [Not quantized] Parameter: "[1mmodel.layers.10.input_layernorm.weight[0m", shape: (2048,), dtype: float32 10%|βββββββββ | 14/135 [00:18<00:38, 3.12it/s] [2024-03-19 01:28:14] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.10.mlp.down_proj.q_weight[0m", shape: (2048, 704), dtype: uint32 10%|βββββββββ | 14/135 [00:18<00:38, 3.12it/s] [2024-03-19 01:28:14] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.10.mlp.down_proj.q_scale[0m", shape: (2048, 176), dtype: float32 10%|βββββββββ | 14/135 [00:18<00:38, 3.12it/s] 12%|ββββββββββ | 16/135 [00:18<00:28, 4.12it/s] [2024-03-19 01:28:14] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.10.mlp.gate_up_proj.q_weight[0m", shape: (11264, 256), dtype: uint32 12%|ββββββββββ | 16/135 [00:18<00:28, 4.12it/s] [2024-03-19 01:28:14] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.10.mlp.gate_up_proj.q_scale[0m", shape: (11264, 64), dtype: float32 12%|ββββββββββ | 16/135 [00:18<00:28, 4.12it/s] [2024-03-19 01:28:14] INFO huggingface_loader.py:172: [Not quantized] Parameter: "[1mmodel.layers.10.post_attention_layernorm.weight[0m", shape: (2048,), dtype: float32 12%|ββββββββββ | 16/135 [00:18<00:28, 4.12it/s] 13%|βββββββββββ | 18/135 [00:18<00:27, 4.23it/s] [2024-03-19 01:28:14] INFO huggingface_loader.py:164: [Quantized] Parameter: 
"[1mmodel.layers.10.self_attn.qkv_proj.q_weight[0m", shape: (2560, 256), dtype: uint32 13%|βββββββββββ | 18/135 [00:19<00:27, 4.23it/s] [2024-03-19 01:28:14] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.10.self_attn.qkv_proj.q_scale[0m", shape: (2560, 64), dtype: float32 13%|βββββββββββ | 18/135 [00:19<00:27, 4.23it/s] [2024-03-19 01:28:14] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.10.self_attn.o_proj.q_weight[0m", shape: (2048, 256), dtype: uint32 13%|βββββββββββ | 18/135 [00:19<00:27, 4.23it/s] [2024-03-19 01:28:14] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.10.self_attn.o_proj.q_scale[0m", shape: (2048, 64), dtype: float32 13%|βββββββββββ | 18/135 [00:19<00:27, 4.23it/s] 15%|ββββββββββββ | 20/135 [00:19<00:21, 5.40it/s] [2024-03-19 01:28:14] INFO huggingface_loader.py:172: [Not quantized] Parameter: "[1mmodel.layers.11.input_layernorm.weight[0m", shape: (2048,), dtype: float32 15%|ββββββββββββ | 20/135 [00:19<00:21, 5.40it/s] [2024-03-19 01:28:15] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.11.mlp.down_proj.q_weight[0m", shape: (2048, 704), dtype: uint32 15%|ββββββββββββ | 20/135 [00:19<00:21, 5.40it/s] [2024-03-19 01:28:15] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.11.mlp.down_proj.q_scale[0m", shape: (2048, 176), dtype: float32 15%|ββββββββββββ | 20/135 [00:19<00:21, 5.40it/s] 16%|βββββββββββββ | 22/135 [00:19<00:17, 6.55it/s] [2024-03-19 01:28:15] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.11.mlp.gate_up_proj.q_weight[0m", shape: (11264, 256), dtype: uint32 16%|βββββββββββββ | 22/135 [00:19<00:17, 6.55it/s] [2024-03-19 01:28:15] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.11.mlp.gate_up_proj.q_scale[0m", shape: (11264, 64), dtype: float32 16%|βββββββββββββ | 22/135 [00:19<00:17, 6.55it/s] [2024-03-19 01:28:15] INFO huggingface_loader.py:172: [Not quantized] Parameter: "[1mmodel.layers.11.post_attention_layernorm.weight[0m", shape: (2048,), dtype: float32 16%|βββββββββββββ | 22/135 [00:19<00:17, 6.55it/s] 18%|ββββββββββββββ | 24/135 [00:19<00:21, 5.26it/s] [2024-03-19 01:28:15] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.11.self_attn.qkv_proj.q_weight[0m", shape: (2560, 256), dtype: uint32 18%|ββββββββββββββ | 24/135 [00:19<00:21, 5.26it/s] [2024-03-19 01:28:15] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.11.self_attn.qkv_proj.q_scale[0m", shape: (2560, 64), dtype: float32 18%|ββββββββββββββ | 24/135 [00:19<00:21, 5.26it/s] [2024-03-19 01:28:15] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.11.self_attn.o_proj.q_weight[0m", shape: (2048, 256), dtype: uint32 18%|ββββββββββββββ | 24/135 [00:19<00:21, 5.26it/s] [2024-03-19 01:28:15] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.11.self_attn.o_proj.q_scale[0m", shape: (2048, 64), dtype: float32 18%|ββββββββββββββ | 24/135 [00:19<00:21, 5.26it/s] 19%|ββββββββββββββββ | 26/135 [00:19<00:16, 6.56it/s] [2024-03-19 01:28:15] INFO huggingface_loader.py:172: [Not quantized] Parameter: "[1mmodel.layers.12.input_layernorm.weight[0m", shape: (2048,), dtype: float32 19%|ββββββββββββββββ | 26/135 [00:19<00:16, 6.56it/s] [2024-03-19 01:28:15] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.12.mlp.down_proj.q_weight[0m", shape: (2048, 704), dtype: uint32 19%|ββββββββββββββββ | 26/135 [00:20<00:16, 6.56it/s] [2024-03-19 
[2024-03-19 01:28:15] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.12.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:16] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.12.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:16] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.12.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:16] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.12.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:16] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.12.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
[2024-03-19 01:28:16] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.12.self_attn.qkv_proj.q_scale", shape: (2560, 64), dtype: float32
[2024-03-19 01:28:16] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.12.self_attn.o_proj.q_weight", shape: (2048, 256), dtype: uint32
[2024-03-19 01:28:16] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.12.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-19 01:28:16] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.13.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:16] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.13.mlp.down_proj.q_weight", shape: (2048, 704), dtype: uint32
[2024-03-19 01:28:16] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.13.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:17] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.13.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:17] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.13.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:17] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.13.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:17] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.13.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
[2024-03-19 01:28:17] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.13.self_attn.qkv_proj.q_scale", shape: (2560, 64), dtype: float32
[2024-03-19 01:28:17] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.13.self_attn.o_proj.q_weight", shape: (2048, 256), dtype: uint32
[2024-03-19 01:28:17] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.13.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-19 01:28:17] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.14.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:17] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.14.mlp.down_proj.q_weight", shape: (2048, 704), dtype: uint32
[2024-03-19 01:28:17] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.14.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:18] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.14.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:18] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.14.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:18] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.14.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:18] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.14.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
[2024-03-19 01:28:18] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.14.self_attn.qkv_proj.q_scale", shape: (2560, 64), dtype: float32
[2024-03-19 01:28:18] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.14.self_attn.o_proj.q_weight", shape: (2048, 256), dtype: uint32
[2024-03-19 01:28:18] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.14.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-19 01:28:18] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.15.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:18] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.15.mlp.down_proj.q_weight", shape: (2048, 704), dtype: uint32
[2024-03-19 01:28:18] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.15.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:18] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.15.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:19] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.15.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:19] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.15.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:19] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.15.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
[2024-03-19 01:28:19] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.15.self_attn.qkv_proj.q_scale", shape: (2560, 64), dtype: float32
[2024-03-19 01:28:19] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.15.self_attn.o_proj.q_weight", shape: (2048, 256), dtype: uint32
[2024-03-19 01:28:19] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.15.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-19 01:28:19] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.16.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:19] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.16.mlp.down_proj.q_weight", shape: (2048, 704), dtype: uint32
[2024-03-19 01:28:19] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.16.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:19] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.16.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:20] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.16.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:20] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.16.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:20] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.16.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
[2024-03-19 01:28:20] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.16.self_attn.qkv_proj.q_scale", shape: (2560, 64), dtype: float32
[2024-03-19 01:28:20] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.16.self_attn.o_proj.q_weight", shape: (2048, 256), dtype: uint32
[2024-03-19 01:28:20] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.16.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-19 01:28:20] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.17.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:20] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.17.mlp.down_proj.q_weight", shape: (2048, 704), dtype: uint32
[2024-03-19 01:28:20] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.17.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:20] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.17.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:20] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.17.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:20] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.17.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:20] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.17.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
[2024-03-19 01:28:20] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.17.self_attn.qkv_proj.q_scale", shape: (2560, 64), dtype: float32
[2024-03-19 01:28:20] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.17.self_attn.o_proj.q_weight", shape: (2048, 256), dtype: uint32
[2024-03-19 01:28:20] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.17.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-19 01:28:20] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.18.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:21] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.18.mlp.down_proj.q_weight", shape: (2048, 704), dtype: uint32
[2024-03-19 01:28:21] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.18.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:21] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.18.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:21] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.18.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:21] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.18.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:21] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.18.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
[2024-03-19 01:28:21] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.18.self_attn.qkv_proj.q_scale", shape: (2560, 64), dtype: float32
[2024-03-19 01:28:21] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.18.self_attn.o_proj.q_weight", shape: (2048, 256), dtype: uint32
[2024-03-19 01:28:21] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.18.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-19 01:28:21] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.19.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:21] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.19.mlp.down_proj.q_weight", shape: (2048, 704), dtype: uint32
[2024-03-19 01:28:21] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.19.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:22] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.19.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:22] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.19.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:22] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.19.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:22] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.19.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
[2024-03-19 01:28:22] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.19.self_attn.qkv_proj.q_scale", shape: (2560, 64), dtype: float32
[2024-03-19 01:28:22] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.19.self_attn.o_proj.q_weight", shape: (2048, 256), dtype: uint32
[2024-03-19 01:28:22] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.19.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-19 01:28:22] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.2.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:22] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.2.mlp.down_proj.q_weight", shape: (2048, 704), dtype: uint32
[2024-03-19 01:28:22] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.2.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:23] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.2.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:23] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.2.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:23] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.2.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:23] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.2.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
[2024-03-19 01:28:23] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.2.self_attn.qkv_proj.q_scale", shape: (2560, 64), dtype: float32
[2024-03-19 01:28:23] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.2.self_attn.o_proj.q_weight", shape: (2048, 256), dtype: uint32
[2024-03-19 01:28:23] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.2.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-19 01:28:23] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.20.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:23] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.20.mlp.down_proj.q_weight", shape: (2048, 704), dtype: uint32
[2024-03-19 01:28:23] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.20.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:24] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.20.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:24] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.20.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:24] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.20.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:24] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.20.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
[2024-03-19 01:28:24] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.20.self_attn.qkv_proj.q_scale", shape: (2560, 64), dtype: float32
[2024-03-19 01:28:24] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.20.self_attn.o_proj.q_weight", shape: (2048, 256), dtype: uint32
[2024-03-19 01:28:24] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.20.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-19 01:28:24] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.21.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:24] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.21.mlp.down_proj.q_weight", shape: (2048, 704), dtype: uint32
[2024-03-19 01:28:24] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.21.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:25] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.21.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:25] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.21.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:25] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.21.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:25] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.21.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
[2024-03-19 01:28:25] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.21.self_attn.qkv_proj.q_scale", shape: (2560, 64), dtype: float32
[2024-03-19 01:28:25] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.21.self_attn.o_proj.q_weight", shape: (2048, 256), dtype: uint32
[2024-03-19 01:28:25] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.21.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-19 01:28:25] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.3.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:25] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.3.mlp.down_proj.q_weight", shape: (2048, 704), dtype: uint32
[2024-03-19 01:28:25] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.3.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:25] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.3.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:26] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.3.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:26] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.3.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:26] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.3.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
[2024-03-19 01:28:26] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.3.self_attn.qkv_proj.q_scale", shape: (2560, 64), dtype: float32
[2024-03-19 01:28:26] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.3.self_attn.o_proj.q_weight", shape: (2048, 256), dtype: uint32
[2024-03-19 01:28:26] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.3.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-19 01:28:26] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.4.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:26] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.4.mlp.down_proj.q_weight", shape: (2048, 704), dtype: uint32
[2024-03-19 01:28:26] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.4.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:26] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.4.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:26] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.4.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:26] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.4.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:27] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.4.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
[2024-03-19 01:28:27] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.4.self_attn.qkv_proj.q_scale", shape: (2560, 64), dtype: float32
[2024-03-19 01:28:27] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.4.self_attn.o_proj.q_weight", shape: (2048, 256), dtype: uint32
[2024-03-19 01:28:27] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.4.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-19 01:28:27] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.5.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:27] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.5.mlp.down_proj.q_weight", shape: (2048, 704), dtype: uint32
[2024-03-19 01:28:27] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.5.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:27] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.5.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:27] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.5.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:27] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.5.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:27] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.5.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
80%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 108/135 [00:32<00:04, 6.09it/s] [2024-03-19 01:28:27] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.5.self_attn.qkv_proj.q_scale[0m", shape: (2560, 64), dtype: float32 80%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 108/135 [00:32<00:04, 6.09it/s] [2024-03-19 01:28:27] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.5.self_attn.o_proj.q_weight[0m", shape: (2048, 256), dtype: uint32 80%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 108/135 [00:32<00:04, 6.09it/s] [2024-03-19 01:28:27] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.5.self_attn.o_proj.q_scale[0m", shape: (2048, 64), dtype: float32 80%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 108/135 [00:32<00:04, 6.09it/s] 81%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 110/135 [00:32<00:03, 7.43it/s] [2024-03-19 01:28:27] INFO huggingface_loader.py:172: [Not quantized] Parameter: "[1mmodel.layers.6.input_layernorm.weight[0m", shape: (2048,), dtype: float32 81%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 110/135 [00:32<00:03, 7.43it/s] [2024-03-19 01:28:28] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.6.mlp.down_proj.q_weight[0m", shape: (2048, 704), dtype: uint32 81%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 110/135 [00:32<00:03, 7.43it/s] [2024-03-19 01:28:28] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.6.mlp.down_proj.q_scale[0m", shape: (2048, 176), dtype: float32 81%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 110/135 [00:32<00:03, 7.43it/s] 83%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 112/135 [00:32<00:02, 8.50it/s] [2024-03-19 01:28:28] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.6.mlp.gate_up_proj.q_weight[0m", shape: (11264, 256), dtype: uint32 83%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 112/135 [00:32<00:02, 8.50it/s] [2024-03-19 01:28:28] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.6.mlp.gate_up_proj.q_scale[0m", shape: (11264, 64), dtype: float32 83%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 112/135 [00:32<00:02, 8.50it/s] [2024-03-19 01:28:28] INFO huggingface_loader.py:172: [Not quantized] Parameter: "[1mmodel.layers.6.post_attention_layernorm.weight[0m", shape: (2048,), dtype: float32 83%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 112/135 [00:32<00:02, 8.50it/s] 84%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 114/135 [00:32<00:02, 7.09it/s] [2024-03-19 01:28:28] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.6.self_attn.qkv_proj.q_weight[0m", shape: (2560, 256), dtype: uint32 84%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 114/135 [00:32<00:02, 7.09it/s] [2024-03-19 01:28:28] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.6.self_attn.qkv_proj.q_scale[0m", shape: (2560, 64), dtype: float32 84%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 114/135 [00:32<00:02, 7.09it/s] [2024-03-19 01:28:28] INFO huggingface_loader.py:164: [Quantized] Parameter: "[1mmodel.layers.6.self_attn.o_proj.q_weight[0m", shape: (2048, 256), dtype: uint32 
[2024-03-19 01:28:28] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.6.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-19 01:28:28] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.7.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:28] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.7.mlp.down_proj.q_weight", shape: (2048, 704), dtype: uint32
[2024-03-19 01:28:28] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.7.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:29] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.7.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:29] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.7.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:29] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.7.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:29] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.7.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
[2024-03-19 01:28:29] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.7.self_attn.qkv_proj.q_scale", shape: (2560, 64), dtype: float32
[2024-03-19 01:28:29] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.7.self_attn.o_proj.q_weight", shape: (2048, 256), dtype: uint32
[2024-03-19 01:28:29] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.7.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-19 01:28:29] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.8.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:29] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.8.mlp.down_proj.q_weight", shape: (2048, 704), dtype: uint32
[2024-03-19 01:28:29] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.8.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:29] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.8.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:29] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.8.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:29] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.8.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:30] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.8.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
[2024-03-19 01:28:30] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.8.self_attn.qkv_proj.q_scale", shape: (2560, 64), dtype: float32
[2024-03-19 01:28:30] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.8.self_attn.o_proj.q_weight", shape: (2048, 256), dtype: uint32
[2024-03-19 01:28:30] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.8.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-19 01:28:30] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.9.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:30] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.9.mlp.down_proj.q_weight", shape: (2048, 704), dtype: uint32
[2024-03-19 01:28:30] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.9.mlp.down_proj.q_scale", shape: (2048, 176), dtype: float32
[2024-03-19 01:28:30] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.9.mlp.gate_up_proj.q_weight", shape: (11264, 256), dtype: uint32
[2024-03-19 01:28:30] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.9.mlp.gate_up_proj.q_scale", shape: (11264, 64), dtype: float32
[2024-03-19 01:28:30] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.layers.9.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-03-19 01:28:30] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.9.self_attn.qkv_proj.q_weight", shape: (2560, 256), dtype: uint32
[2024-03-19 01:28:30] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.9.self_attn.qkv_proj.q_scale", shape: (2560, 64), dtype: float32
[2024-03-19 01:28:30] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.9.self_attn.o_proj.q_weight", shape: (2048, 256), dtype: uint32
[2024-03-19 01:28:30] INFO huggingface_loader.py:164: [Quantized] Parameter: "model.layers.9.self_attn.o_proj.q_scale", shape: (2048, 64), dtype: float32
[2024-03-19 01:28:30] INFO huggingface_loader.py:172: [Not quantized] Parameter: "model.norm.weight", shape: (2048,), dtype: float32
100%|████████████████████████████████████████████████████████████████████████| 135/135 [00:34<00:00, 3.86it/s]
[2024-03-19 01:28:30] INFO huggingface_loader.py:194: Unloading HF weight file: /tmp/tmpo8_dluj_/repo/model.safetensors
[2024-03-19 01:28:30] INFO stats.py:76: Time usage: HF loading: 11.095 sec; Pre-quantization mapping: 5.531 sec; Quantization: 3.180 sec
[2024-03-19 01:28:30] INFO stats.py:90: RAM usage: Peak RAM: 4.098 GB. Total bytes loaded from disk: 4.098 GB
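As a sanity check on the listing above: under q4f32_1, a float weight matrix of shape (out, in) should land as a uint32 q_weight of shape (out, in/8) (eight 4-bit values packed per uint32) plus a float32 q_scale of shape (out, in/32) (one scale per 32-weight group; the group size here is inferred from the q_scale shapes in this log, not quoted from mlc_llm documentation). A minimal Python sketch with an illustrative helper name, not part of the mlc_llm API:

def q4f32_1_shapes(out_features, in_features, bits=4, group_size=32):
    """Expected q_weight/q_scale shapes for a 4-bit, group-size-32 layout."""
    nibbles_per_u32 = 32 // bits  # 8 four-bit values packed per uint32
    q_weight = (out_features, in_features // nibbles_per_u32)
    q_scale = (out_features, in_features // group_size)
    return q_weight, q_scale

# Reproduce the shapes logged above:
assert q4f32_1_shapes(2560, 2048) == ((2560, 256), (2560, 64))    # qkv_proj
assert q4f32_1_shapes(2048, 2048) == ((2048, 256), (2048, 64))    # o_proj
assert q4f32_1_shapes(11264, 2048) == ((11264, 256), (11264, 64)) # gate_up_proj
assert q4f32_1_shapes(2048, 5632) == ((2048, 704), (2048, 176))   # down_proj

All four projection shapes in the log match this layout; only the small layernorm vectors (and the final model.norm.weight) are left in float32.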
[2024-03-19 01:28:30] INFO convert_weight.py:156: Parameter size after quantization: 0.641 GB
[2024-03-19 01:28:30] INFO convert_weight.py:161: Total parameters: 1,100,056,576
[2024-03-19 01:28:30] INFO convert_weight.py:162: Bits per parameter: 5.002
[2024-03-19 01:28:30] INFO convert_weight.py:167: Saved to directory: /tmp/tmpx3x6wnfw
All finished, 24 total shards committed, record saved to /tmp/tmpx3x6wnfw/ndarray-cache.json
Also saved a bf16 record to /tmp/tmpx3x6wnfw/ndarray-cache-b16.json
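The reported 5.002 bits per parameter is consistent with that layout: each group of 32 four-bit weights carries one float32 scale, i.e. (32·4 + 32)/32 = 5 bits per quantized weight, and the unquantized float32 layernorm vectors nudge the average slightly above 5. A quick check, assuming the 0.641 GB figure means GiB (2^30 bytes) rounded to three digits:

# 4-bit payload plus one fp32 scale per 32-weight group
print((32 * 4 + 32) / 32)                  # -> 5.0 bits per quantized weight

# Cross-check against the logged totals (approximate: 0.641 GB is rounded)
print(0.641 * 2**30 * 8 / 1_100_056_576)   # -> ~5.005, vs. 5.002 reported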