/usr/bin/python3 -m mlc_llm gen_config dist/models/stablelm-2-zephyr-1_6b --quantization q0f32 --conv-template stablelm-2 --output /tmp/tmpjjd44ai4 --context-window-size 4096
[2024-05-24 18:23:54] INFO auto_config.py:115: Found model configuration: dist/models/stablelm-2-zephyr-1_6b/config.json
[2024-05-24 18:23:54] INFO auto_config.py:153: Found model type: stablelm. Use `--model-type` to override.
[2024-05-24 18:23:54] INFO stablelm_model.py:49: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-05-24 18:23:54] INFO stablelm_model.py:66: prefill_chunk_size defaults to 2048
[2024-05-24 18:23:54] INFO config.py:106: Overriding context_window_size from 4096 to 4096
[2024-05-24 18:23:54] INFO config.py:106: Overriding max_batch_size from 1 to 80
[2024-05-24 18:23:54] INFO gen_config.py:255: [generation_config.json] Setting bos_token_id: 100257
[2024-05-24 18:23:54] INFO gen_config.py:255: [generation_config.json] Setting eos_token_id: 100257
[2024-05-24 18:23:54] INFO gen_config.py:269: Not found tokenizer config: dist/models/stablelm-2-zephyr-1_6b/tokenizer.model
[2024-05-24 18:23:54] INFO gen_config.py:267: Found tokenizer config: dist/models/stablelm-2-zephyr-1_6b/tokenizer.json. Copying to /tmp/tmpjjd44ai4/tokenizer.json
[2024-05-24 18:23:54] INFO gen_config.py:267: Found tokenizer config: dist/models/stablelm-2-zephyr-1_6b/vocab.json. Copying to /tmp/tmpjjd44ai4/vocab.json
[2024-05-24 18:23:54] INFO gen_config.py:267: Found tokenizer config: dist/models/stablelm-2-zephyr-1_6b/merges.txt. Copying to /tmp/tmpjjd44ai4/merges.txt
[2024-05-24 18:23:54] INFO gen_config.py:269: Not found tokenizer config: dist/models/stablelm-2-zephyr-1_6b/added_tokens.json
[2024-05-24 18:23:54] INFO gen_config.py:267: Found tokenizer config: dist/models/stablelm-2-zephyr-1_6b/tokenizer_config.json. Copying to /tmp/tmpjjd44ai4/tokenizer_config.json
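
The Found / Not found lines above show gen_config probing a fixed list of tokenizer artifacts and copying whichever ones exist next to the checkpoint. A minimal sketch of that resolution step, with the candidate list inferred from the log rather than taken from mlc_llm's source:

import shutil
from pathlib import Path

# Candidate tokenizer files, in the order the log reports them (assumed).
CANDIDATES = ["tokenizer.model", "tokenizer.json", "vocab.json",
              "merges.txt", "added_tokens.json", "tokenizer_config.json"]

def copy_tokenizer_files(model_dir: str, out_dir: str) -> None:
    for name in CANDIDATES:
        src = Path(model_dir) / name
        if src.is_file():
            # "Found tokenizer config: ... Copying to ..."
            shutil.copy(src, Path(out_dir) / name)
        # else: "Not found tokenizer config: ..."

copy_tokenizer_files("dist/models/stablelm-2-zephyr-1_6b", "/tmp/tmpjjd44ai4")
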
[2024-05-24 18:23:54] INFO gen_config.py:80: [System default] Setting pad_token_id: 0
[2024-05-24 18:23:54] INFO gen_config.py:80: [System default] Setting temperature: 0.7
[2024-05-24 18:23:54] INFO gen_config.py:80: [System default] Setting presence_penalty: 0.0
[2024-05-24 18:23:54] INFO gen_config.py:80: [System default] Setting frequency_penalty: 0.0
[2024-05-24 18:23:54] INFO gen_config.py:80: [System default] Setting repetition_penalty: 1.0
[2024-05-24 18:23:54] INFO gen_config.py:80: [System default] Setting top_p: 0.95
[2024-05-24 18:23:54] INFO gen_config.py:80: [System default] Setting mean_gen_len: 128
[2024-05-24 18:23:54] INFO gen_config.py:80: [System default] Setting max_gen_len: 512
[2024-05-24 18:23:54] INFO gen_config.py:80: [System default] Setting shift_fill_factor: 0.3
[2024-05-24 18:23:54] INFO gen_config.py:335: Dumping configuration file to: /tmp/tmpjjd44ai4/mlc-chat-config.json
/usr/bin/python3 -m mlc_llm convert_weight dist/models/stablelm-2-zephyr-1_6b --quantization q0f32 --source-format auto --output /tmp/tmpjjd44ai4
[2024-05-24 18:23:55] INFO auto_config.py:115: Found model configuration: dist/models/stablelm-2-zephyr-1_6b/config.json
[2024-05-24 18:23:58] INFO auto_device.py:79: Found device: cuda:0
[2024-05-24 18:23:59] INFO auto_device.py:88: Not found device: rocm:0
[2024-05-24 18:24:00] INFO auto_device.py:88: Not found device: metal:0
[2024-05-24 18:24:01] INFO auto_device.py:88: Not found device: vulkan:0
[2024-05-24 18:24:01] INFO auto_device.py:88: Not found device: opencl:0
[2024-05-24 18:24:01] INFO auto_device.py:35: Using device: cuda:0
[2024-05-24 18:24:01] INFO auto_weight.py:70: Finding weights in: dist/models/stablelm-2-zephyr-1_6b
[2024-05-24 18:24:01] INFO auto_weight.py:136: Not found Huggingface PyTorch
[2024-05-24 18:24:01] INFO auto_weight.py:143: Found source weight format: huggingface-safetensor. Source configuration: dist/models/stablelm-2-zephyr-1_6b/model.safetensors.index.json
[2024-05-24 18:24:01] INFO auto_weight.py:106: Using source weight configuration: dist/models/stablelm-2-zephyr-1_6b/model.safetensors.index.json. Use `--source` to override.
[2024-05-24 18:24:01] INFO auto_weight.py:110: Using source weight format: huggingface-safetensor. Use `--source-format` to override.
[2024-05-24 18:24:01] INFO auto_config.py:153: Found model type: stablelm. Use `--model-type` to override.
[2024-05-24 18:24:01] INFO stablelm_model.py:49: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-05-24 18:24:01] INFO stablelm_model.py:66: prefill_chunk_size defaults to 2048
Weight conversion with arguments:
  --config          dist/models/stablelm-2-zephyr-1_6b/config.json
  --quantization    NoQuantize(name='q0f32', kind='no-quant', model_dtype='float32')
  --model-type      stablelm
  --device          cuda:0
  --source          dist/models/stablelm-2-zephyr-1_6b/model.safetensors.index.json
  --source-format   huggingface-safetensor
  --output          /tmp/tmpjjd44ai4
Start storing to cache /tmp/tmpjjd44ai4
  0%|          | 0/220 [00:00<?, ?it/s]
/home/tlopex/.local/lib/python3.8/site-packages/numpy/core/getlimits.py:499: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/home/tlopex/.local/lib/python3.8/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  return self._float_to_str(self.smallest_subnormal)
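
The auto_weight detection above settles on huggingface-safetensor because dist/models/stablelm-2-zephyr-1_6b/model.safetensors.index.json exists. That index is a plain JSON file whose weight_map maps each parameter name to the shard file that stores it; a short sketch (not mlc_llm's own code) of inspecting it:

import json

# Path taken from the log above.
with open("dist/models/stablelm-2-zephyr-1_6b/model.safetensors.index.json") as f:
    index = json.load(f)

weight_map = index["weight_map"]           # HF parameter name -> shard filename
shards = sorted(set(weight_map.values()))
print(f"{len(weight_map)} tensors across {len(shards)} shard(s): {shards}")
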
[2024-05-24 18:24:09] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.embed_tokens.weight", shape: (100352, 2048), dtype: float32
[2024-05-24 18:24:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.0.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.0.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.0.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.0.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.0.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.0.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.0.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:12] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.0.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.0.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.1.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.1.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.1.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.1.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.1.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.1.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.1.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
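
The qkv_proj and gate_up_proj parameters above do not exist under those names in the HuggingFace checkpoint: during the pre-quantization mapping step (timed separately in the final stats), the separate query/key/value and gate/up projections are fused into single matrices. A numpy sketch of that fusion, with shapes taken from the log; fuse() is an illustrative helper, not mlc_llm's API:

import numpy as np

def fuse(*parts: np.ndarray) -> np.ndarray:
    # Stack along the output dimension, as the fused shapes in the log imply.
    return np.concatenate(parts, axis=0)

hidden = 2048  # stablelm-2-zephyr-1_6b hidden size, per config.json
q = np.zeros((2048, hidden), dtype=np.float32)     # self_attn.q_proj.weight
k = np.zeros((2048, hidden), dtype=np.float32)     # self_attn.k_proj.weight
v = np.zeros((2048, hidden), dtype=np.float32)     # self_attn.v_proj.weight
gate = np.zeros((5632, hidden), dtype=np.float32)  # mlp.gate_proj.weight
up = np.zeros((5632, hidden), dtype=np.float32)    # mlp.up_proj.weight

assert fuse(q, k, v).shape == (6144, 2048)    # matches qkv_proj.weight above
assert fuse(gate, up).shape == (11264, 2048)  # matches gate_up_proj.weight above
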
[2024-05-24 18:24:13] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.1.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.1.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.10.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.10.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.10.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.10.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.10.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.10.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.10.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:14] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.10.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.10.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.11.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.11.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.11.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.11.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.11.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.11.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.11.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:15] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.11.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:16] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.11.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:16] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.12.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:16] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.12.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:16] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.12.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:16] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.12.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:16] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.12.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:16] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.12.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:16] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.12.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:16] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.12.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.12.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.13.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.13.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.13.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.13.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.13.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.13.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.13.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.13.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:17] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.13.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.14.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.14.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.14.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.14.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.14.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.14.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.14.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.14.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:18] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.14.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.15.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.15.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.15.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.15.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.15.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.15.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.15.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.15.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:19] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.15.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.16.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.16.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.16.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.16.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.16.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.16.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.16.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.16.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.16.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.17.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:20] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.17.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.17.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.17.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.17.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.17.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.17.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.17.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.17.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.18.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.18.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:21] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.18.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.18.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.18.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.18.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.18.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.18.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.18.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.19.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.19.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:22] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.19.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.19.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.19.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.19.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.19.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.19.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.19.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.2.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.2.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:23] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.2.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.2.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.2.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.2.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.2.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.2.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.2.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.20.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.20.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:24] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.20.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:25] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.20.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:25] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.20.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:25] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.20.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:25] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.20.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:25] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.20.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:25] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.20.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:25] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.21.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:25] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.21.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:25] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.21.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.21.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.21.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.21.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.21.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.21.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.21.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.22.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.22.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:26] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.22.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.22.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.22.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.22.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.22.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.22.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.22.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.23.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.23.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.23.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:27] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.23.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.23.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.23.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.23.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.23.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.23.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.3.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.3.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.3.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:28] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.3.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:29] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.3.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:29] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.3.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:29] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.3.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:29] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.3.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:29] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.3.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:29] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.4.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:29] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.4.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:29] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.4.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:29] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.4.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.4.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.4.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.4.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.4.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.4.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.5.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.5.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.5.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:30] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.5.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.5.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.5.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.5.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.5.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.5.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.6.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.6.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.6.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:31] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.6.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:32] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.6.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:32] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.6.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:32] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.6.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:32] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.6.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:32] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.6.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:32] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.7.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:32] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.7.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:32] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.7.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:32] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.7.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.7.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.7.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.7.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.7.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.7.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.8.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.8.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.8.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:33] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.8.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.8.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.8.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.8.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.8.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.8.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.9.input_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.9.input_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.9.mlp.down_proj.weight", shape: (2048, 5632), dtype: float32
[2024-05-24 18:24:34] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.9.mlp.gate_up_proj.weight", shape: (11264, 2048), dtype: float32
[2024-05-24 18:24:35] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.9.post_attention_layernorm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:35] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.9.post_attention_layernorm.weight", shape: (2048,), dtype: float32
[2024-05-24 18:24:35] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.9.self_attn.qkv_proj.bias", shape: (6144,), dtype: float32
[2024-05-24 18:24:35] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.9.self_attn.qkv_proj.weight", shape: (6144, 2048), dtype: float32
[2024-05-24 18:24:35] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.layers.9.self_attn.o_proj.weight", shape: (2048, 2048), dtype: float32
[2024-05-24 18:24:35] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.norm.bias", shape: (2048,), dtype: float32
[2024-05-24 18:24:35] INFO huggingface_loader.py:174: [Not quantized] Parameter: "model.norm.weight", shape: (2048,), dtype: float32
100%|██████████| 220/220 [00:31<00:00, 6.90it/s]
[2024-05-24 18:24:35] INFO huggingface_loader.py:196: Unloading HF weight file: dist/models/stablelm-2-zephyr-1_6b/model.safetensors
[2024-05-24 18:24:35] INFO stats.py:76: Time usage: HF loading: 1.017 sec; Pre-quantization mapping: 5.571 sec; Quantization: 0.000 sec
[2024-05-24 18:24:35] INFO stats.py:90: RAM usage: Peak RAM: 3.063 GB. Total bytes loaded from disk: 3.063 GB
[2024-05-24 18:24:35] INFO convert_weight.py:155: Parameter size after quantization: 6.126 GB
[2024-05-24 18:24:35] INFO convert_weight.py:160: Total parameters: 1,644,515,328
[2024-05-24 18:24:35] INFO convert_weight.py:161: Bits per parameter: 32.000
[2024-05-24 18:24:35] INFO convert_weight.py:166: Saved to directory: /tmp/tmpjjd44ai4
All finished, 75 total shards committed, record saved to /tmp/tmpjjd44ai4/ndarray-cache.json
Also saved a bf16 record to /tmp/tmpjjd44ai4/ndarray-cache-b16.json
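
As a sanity check (not part of the tool's output), the summary is self-consistent: 1,644,515,328 parameters kept in float32 by q0f32 occupy 6.126 GiB at 4 bytes each, and the 3.063 GB loaded from disk is consistent with a float16 source checkpoint at 2 bytes per parameter:

total_params = 1_644_515_328             # "Total parameters" reported above
gib = 2 ** 30                            # the log's "GB" matches GiB here

print(f"{total_params * 4 / gib:.3f}")   # 6.126 -> parameter size after q0f32 (float32)
print(f"{total_params * 2 / gib:.3f}")   # 3.063 -> bytes loaded from disk (float16 source)
print(f"{4 * 8:.3f}")                    # 32.000 -> bits per parameter
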