
/opt/conda/envs/py310/bin/python -m mlc_llm gen_config /models/Qwen2-72B-Instruct --quantization q0f16 --conv-template chatml --output /models/mlc-delivery/hf/mlc-ai/Qwen2-72B-Instruct-q0f16-MLC

[2024-06-07 00:35:55] INFO auto_config.py:116: Found model configuration: /models/Qwen2-72B-Instruct/config.json

[2024-06-07 00:35:55] INFO auto_config.py:154: Found model type: qwen2. Use `--model-type` to override.

[2024-06-07 00:35:55] INFO qwen2_model.py:49: context_window_size not found in config.json. Falling back to max_position_embeddings (32768)

[2024-06-07 00:35:55] INFO qwen2_model.py:66: prefill_chunk_size defaults to 2048

[2024-06-07 00:35:55] INFO config.py:107: Overriding max_batch_size from 1 to 80

[2024-06-07 00:35:55] INFO gen_config.py:143: [generation_config.json] Setting bos_token_id: 151643

[2024-06-07 00:35:55] INFO gen_config.py:143: [generation_config.json] Setting pad_token_id: 151643

[2024-06-07 00:35:55] INFO gen_config.py:143: [generation_config.json] Setting eos_token_id: [151645, 151643]

[2024-06-07 00:35:55] INFO gen_config.py:143: [generation_config.json] Setting repetition_penalty: 1.05

[2024-06-07 00:35:55] INFO gen_config.py:143: [generation_config.json] Setting temperature: 0.7

[2024-06-07 00:35:55] INFO gen_config.py:143: [generation_config.json] Setting top_p: 0.8

[2024-06-07 00:35:55] INFO gen_config.py:157: Not found tokenizer config: /models/Qwen2-72B-Instruct/tokenizer.model

[2024-06-07 00:35:55] INFO gen_config.py:155: Found tokenizer config: /models/Qwen2-72B-Instruct/tokenizer.json. Copying to /models/mlc-delivery/hf/mlc-ai/Qwen2-72B-Instruct-q0f16-MLC/tokenizer.json

[2024-06-07 00:35:55] INFO gen_config.py:155: Found tokenizer config: /models/Qwen2-72B-Instruct/vocab.json. Copying to /models/mlc-delivery/hf/mlc-ai/Qwen2-72B-Instruct-q0f16-MLC/vocab.json

[2024-06-07 00:35:55] INFO gen_config.py:155: Found tokenizer config: /models/Qwen2-72B-Instruct/merges.txt. Copying to /models/mlc-delivery/hf/mlc-ai/Qwen2-72B-Instruct-q0f16-MLC/merges.txt

[2024-06-07 00:35:55] INFO gen_config.py:157: Not found tokenizer config: /models/Qwen2-72B-Instruct/added_tokens.json

[2024-06-07 00:35:55] INFO gen_config.py:155: Found tokenizer config: /models/Qwen2-72B-Instruct/tokenizer_config.json. Copying to /models/mlc-delivery/hf/mlc-ai/Qwen2-72B-Instruct-q0f16-MLC/tokenizer_config.json

[2024-06-07 00:35:55] INFO gen_config.py:216: Detected tokenizer info: {'token_postproc_method': 'byte_level', 'prepend_space_in_encode': False, 'strip_space_in_decode': False}
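
The detected tokenizer info above says the copied tokenizer.json is a plain byte-level BPE tokenizer: no space is prepended on encode and none is stripped on decode. As a side check (not part of the delivery pipeline itself), the copied file can be loaded directly with the Hugging Face tokenizers package; a minimal sketch, with the path taken from the log above:

# Side check of the copied byte-level tokenizer; requires `pip install tokenizers`.
from tokenizers import Tokenizer

tok = Tokenizer.from_file(
    "/models/mlc-delivery/hf/mlc-ai/Qwen2-72B-Instruct-q0f16-MLC/tokenizer.json"
)
enc = tok.encode("Hello world")
print(enc.ids)              # token ids as the model will see them
print(enc.tokens)           # byte-level pieces; a leading 'Ġ' marks an internal space
print(tok.decode(enc.ids))  # round-trips without adding or stripping spaces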

[2024-06-07 00:35:55] INFO gen_config.py:32: [System default] Setting presence_penalty: 0.0

[2024-06-07 00:35:55] INFO gen_config.py:32: [System default] Setting frequency_penalty: 0.0

[2024-06-07 00:35:55] INFO gen_config.py:223: Dumping configuration file to: /models/mlc-delivery/hf/mlc-ai/Qwen2-72B-Instruct-q0f16-MLC/mlc-chat-config.json
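
Everything gen_config decided above (chatml conversation template, sampling defaults, token ids, 32768-token context window, prefill chunk size) lands in the dumped mlc-chat-config.json. A quick sanity check is to load the file and print the fields of interest; a minimal sketch, assuming the field names mirror the values logged above:

# Sanity-check the generated mlc-chat-config.json (field names assumed from the log above).
import json

cfg_path = "/models/mlc-delivery/hf/mlc-ai/Qwen2-72B-Instruct-q0f16-MLC/mlc-chat-config.json"
with open(cfg_path) as f:
    cfg = json.load(f)

for key in ("model_type", "quantization", "conv_template", "context_window_size",
            "prefill_chunk_size", "temperature", "top_p", "repetition_penalty"):
    print(f"{key} = {cfg.get(key)}")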

/opt/conda/envs/py310/bin/python -m mlc_llm convert_weight /models/Qwen2-72B-Instruct --quantization q0f16 --output /models/mlc-delivery/hf/mlc-ai/Qwen2-72B-Instruct-q0f16-MLC

[2024-06-07 00:35:56] INFO auto_config.py:116: Found model configuration: /models/Qwen2-72B-Instruct/config.json

[2024-06-07 00:35:58] INFO auto_device.py:79: Found device: cuda:0

[2024-06-07 00:35:59] INFO auto_device.py:88: Not found device: rocm:0

[2024-06-07 00:36:01] INFO auto_device.py:88: Not found device: metal:0

[2024-06-07 00:36:02] INFO auto_device.py:79: Found device: vulkan:0

[2024-06-07 00:36:02] INFO auto_device.py:79: Found device: vulkan:1

[2024-06-07 00:36:02] INFO auto_device.py:79: Found device: vulkan:2

[2024-06-07 00:36:02] INFO auto_device.py:79: Found device: vulkan:3

[2024-06-07 00:36:04] INFO auto_device.py:88: Not found device: opencl:0

[2024-06-07 00:36:04] INFO auto_device.py:35: Using device: cuda:0

[2024-06-07 00:36:04] INFO auto_weight.py:71: Finding weights in: /models/Qwen2-72B-Instruct

[2024-06-07 00:36:04] INFO auto_weight.py:137: Not found Huggingface PyTorch

[2024-06-07 00:36:04] INFO auto_weight.py:144: Found source weight format: huggingface-safetensor. Source configuration: /models/Qwen2-72B-Instruct/model.safetensors.index.json

[2024-06-07 00:36:04] INFO auto_weight.py:107: Using source weight configuration: /models/Qwen2-72B-Instruct/model.safetensors.index.json. Use `--source` to override.

[2024-06-07 00:36:04] INFO auto_weight.py:111: Using source weight format: huggingface-safetensor. Use `--source-format` to override.

[2024-06-07 00:36:04] INFO auto_config.py:154: Found model type: qwen2. Use `--model-type` to override.

[2024-06-07 00:36:04] INFO qwen2_model.py:49: context_window_size not found in config.json. Falling back to max_position_embeddings (32768)

[2024-06-07 00:36:04] INFO qwen2_model.py:66: prefill_chunk_size defaults to 2048

Weight conversion with arguments:
  --config          /models/Qwen2-72B-Instruct/config.json
  --quantization    NoQuantize(name='q0f16', kind='no-quant', model_dtype='float16')
  --model-type      qwen2
  --device          cuda:0
  --source          /models/Qwen2-72B-Instruct/model.safetensors.index.json
  --source-format   huggingface-safetensor
  --output          /models/mlc-delivery/hf/mlc-ai/Qwen2-72B-Instruct-q0f16-MLC
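
q0f16 is NoQuantize: every tensor is stored as float16, two bytes per parameter. The shapes printed in the loading log below (model.layers.0 through model.layers.79, plus embed_tokens and lm_head) are enough for a back-of-the-envelope size check; a minimal sketch using only those logged shapes:

# Estimate the unquantized (q0f16) footprint from the tensor shapes in the log below.
layers = 80  # model.layers.0 .. model.layers.79

per_layer = (
    10240 * 8192 + 10240   # self_attn.c_attn weight + bias
    + 8192 * 8192          # self_attn.o_proj
    + 59136 * 8192         # mlp.gate_up_proj
    + 8192 * 29568         # mlp.down_proj
    + 2 * 8192             # input_layernorm + post_attention_layernorm
)
embeddings = 2 * 152064 * 8192 + 8192  # embed_tokens + lm_head + model.norm

params = layers * per_layer + embeddings
print(f"{params / 1e9:.1f}B parameters, ~{params * 2 / 1e9:.0f} GB as float16")
# Roughly 72.7B parameters and ~145 GB of float16 weights.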

Start storing to cache /models/mlc-delivery/hf/mlc-ai/Qwen2-72B-Instruct-q0f16-MLC
|
0%| | 0/563 [00:00<?, ?it/s]
[2024-06-07 00:36:08] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00037-of-00037.safetensors |
|
0%| | 0/563 [00:00<?, ?it/s]
[2024-06-07 00:36:23] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mlm_head.weight[0m", shape: (152064, 8192), dtype: float16 |
|
0%| | 0/563 [00:15<?, ?it/s]
0%| | 1/563 [00:24<3:51:07, 24.67s/it]
[2024-06-07 00:36:33] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.79.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
0%| | 1/563 [00:24<3:51:07, 24.67s/it]
0%| | 2/563 [00:25<1:36:54, 10.36s/it]
[2024-06-07 00:36:34] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.79.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
0%| | 2/563 [00:25<1:36:54, 10.36s/it]
1%| | 3/563 [00:27<1:01:52, 6.63s/it]
[2024-06-07 00:36:35] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00036-of-00037.safetensors |
|
1%| | 3/563 [00:27<1:01:52, 6.63s/it]
[2024-06-07 00:36:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.79.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
1%| | 3/563 [00:35<1:01:52, 6.63s/it]
1%| | 4/563 [00:38<1:18:03, 8.38s/it]
[2024-06-07 00:36:46] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.79.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
1%| | 4/563 [00:38<1:18:03, 8.38s/it]
1%| | 5/563 [00:38<50:09, 5.39s/it]
[2024-06-07 00:36:46] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.norm.weight[0m", shape: (8192,), dtype: float16 |
|
1%| | 5/563 [00:38<50:09, 5.39s/it]
[2024-06-07 00:36:46] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00036-of-00037.safetensors |
|
1%| | 5/563 [00:38<50:09, 5.39s/it]
[2024-06-07 00:36:47] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00037-of-00037.safetensors |
|
1%| | 5/563 [00:38<50:09, 5.39s/it]
[2024-06-07 00:36:47] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00001-of-00037.safetensors |
|
1%| | 5/563 [00:39<50:09, 5.39s/it]
[2024-06-07 00:37:01] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.embed_tokens.weight[0m", shape: (152064, 8192), dtype: float16 |
|
1%| | 5/563 [00:52<50:09, 5.39s/it]
1%| | 7/563 [00:59<1:14:25, 8.03s/it]
[2024-06-07 00:37:13] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.0.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
1%| | 7/563 [01:05<1:14:25, 8.03s/it]
1%|β | 8/563 [01:07<1:14:56, 8.10s/it]
[2024-06-07 00:37:16] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.0.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
1%|β | 8/563 [01:07<1:14:56, 8.10s/it]
2%|β | 9/563 [01:07<54:30, 5.90s/it]
[2024-06-07 00:37:16] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.0.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
2%|β | 9/563 [01:08<54:30, 5.90s/it]
2%|β | 10/563 [01:08<41:12, 4.47s/it]
[2024-06-07 00:37:17] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.0.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
2%|β | 10/563 [01:09<41:12, 4.47s/it]
2%|β | 11/563 [01:09<30:59, 3.37s/it]
[2024-06-07 00:37:17] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00001-of-00037.safetensors |
|
2%|β | 11/563 [01:09<30:59, 3.37s/it]
[2024-06-07 00:37:18] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00002-of-00037.safetensors |
|
2%|β | 11/563 [01:09<30:59, 3.37s/it]
[2024-06-07 00:37:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.0.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
2%|β | 11/563 [01:17<30:59, 3.37s/it]
2%|β | 12/563 [01:17<44:48, 4.88s/it]
[2024-06-07 00:37:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.0.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
2%|β | 12/563 [01:18<44:48, 4.88s/it]
2%|β | 13/563 [01:20<37:19, 4.07s/it]
[2024-06-07 00:37:28] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.0.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
2%|β | 13/563 [01:20<37:19, 4.07s/it]
[2024-06-07 00:37:28] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.1.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
2%|β | 13/563 [01:20<37:19, 4.07s/it]
[2024-06-07 00:37:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.1.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
2%|β | 13/563 [01:21<37:19, 4.07s/it]
3%|β | 16/563 [01:22<20:06, 2.21s/it]
[2024-06-07 00:37:33] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.1.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
3%|β | 16/563 [01:25<20:06, 2.21s/it]
3%|β | 17/563 [01:27<26:42, 2.94s/it]
[2024-06-07 00:37:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.1.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
3%|β | 17/563 [01:27<26:42, 2.94s/it]
[2024-06-07 00:37:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.1.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
3%|β | 17/563 [01:28<26:42, 2.94s/it]
[2024-06-07 00:37:37] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.1.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
3%|β | 17/563 [01:28<26:42, 2.94s/it]
4%|β | 20/563 [01:29<15:29, 1.71s/it]
[2024-06-07 00:37:37] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.1.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
4%|β | 20/563 [01:29<15:29, 1.71s/it]
4%|β | 21/563 [01:29<13:39, 1.51s/it]
[2024-06-07 00:37:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.2.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
4%|β | 21/563 [01:29<13:39, 1.51s/it]
[2024-06-07 00:37:39] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.2.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
4%|β | 21/563 [01:30<13:39, 1.51s/it]
4%|β | 23/563 [01:31<12:09, 1.35s/it]
[2024-06-07 00:37:45] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.2.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
4%|β | 23/563 [01:36<12:09, 1.35s/it]
4%|β | 24/563 [01:39<23:02, 2.56s/it]
[2024-06-07 00:37:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.2.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
4%|β | 24/563 [01:39<23:02, 2.56s/it]
[2024-06-07 00:37:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.2.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
4%|β | 24/563 [01:39<23:02, 2.56s/it]
[2024-06-07 00:37:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.2.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
4%|β | 24/563 [01:39<23:02, 2.56s/it]
5%|β | 27/563 [01:40<13:43, 1.54s/it]
[2024-06-07 00:37:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.2.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
5%|β | 27/563 [01:40<13:43, 1.54s/it]
5%|β | 28/563 [01:40<12:15, 1.38s/it]
[2024-06-07 00:37:49] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00002-of-00037.safetensors |
|
5%|β | 28/563 [01:40<12:15, 1.38s/it]
[2024-06-07 00:37:49] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00006-of-00037.safetensors |
|
5%|β | 28/563 [01:41<12:15, 1.38s/it]
[2024-06-07 00:38:00] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.10.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
5%|β | 28/563 [01:51<12:15, 1.38s/it]
5%|β | 29/563 [01:51<29:55, 3.36s/it]
[2024-06-07 00:38:01] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.10.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
5%|β | 29/563 [01:53<29:55, 3.36s/it]
5%|β | 30/563 [01:55<29:21, 3.31s/it]
[2024-06-07 00:38:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.10.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
5%|β | 30/563 [02:01<29:21, 3.31s/it]
6%|β | 31/563 [02:04<41:46, 4.71s/it]
[2024-06-07 00:38:12] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.10.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
6%|β | 31/563 [02:04<41:46, 4.71s/it]
6%|β | 32/563 [02:04<31:10, 3.52s/it]
[2024-06-07 00:38:12] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.10.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
6%|β | 32/563 [02:04<31:10, 3.52s/it]
[2024-06-07 00:38:13] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.10.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
6%|β | 32/563 [02:04<31:10, 3.52s/it]
6%|β | 34/563 [02:05<19:22, 2.20s/it]
[2024-06-07 00:38:13] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.10.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
6%|β | 34/563 [02:05<19:22, 2.20s/it]
6%|β | 35/563 [02:05<16:08, 1.83s/it]
[2024-06-07 00:38:14] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.11.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
6%|β | 35/563 [02:05<16:08, 1.83s/it]
[2024-06-07 00:38:15] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.11.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
6%|β | 35/563 [02:06<16:08, 1.83s/it]
7%|β | 37/563 [02:07<13:28, 1.54s/it]
[2024-06-07 00:38:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.11.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
7%|β | 37/563 [02:12<13:28, 1.54s/it]
7%|β | 38/563 [02:15<25:37, 2.93s/it]
[2024-06-07 00:38:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.11.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
7%|β | 38/563 [02:15<25:37, 2.93s/it]
7%|β | 39/563 [02:15<19:39, 2.25s/it]
[2024-06-07 00:38:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.11.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
7%|β | 39/563 [02:15<19:39, 2.25s/it]
[2024-06-07 00:38:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.11.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
7%|β | 39/563 [02:16<19:39, 2.25s/it]
7%|β | 41/563 [02:16<13:04, 1.50s/it]
[2024-06-07 00:38:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.11.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
7%|β | 41/563 [02:16<13:04, 1.50s/it]
7%|β | 42/563 [02:17<11:21, 1.31s/it]
[2024-06-07 00:38:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.9.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
7%|β | 42/563 [02:17<11:21, 1.31s/it]
[2024-06-07 00:38:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.9.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
7%|β | 42/563 [02:18<11:21, 1.31s/it]
8%|β | 44/563 [02:19<10:39, 1.23s/it]
[2024-06-07 00:38:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.9.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
8%|β | 44/563 [02:19<10:39, 1.23s/it]
[2024-06-07 00:38:28] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00006-of-00037.safetensors |
|
8%|β | 44/563 [02:19<10:39, 1.23s/it]
[2024-06-07 00:38:28] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00007-of-00037.safetensors |
|
8%|β | 44/563 [02:20<10:39, 1.23s/it]
[2024-06-07 00:38:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.12.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
8%|β | 44/563 [02:27<10:39, 1.23s/it]
8%|β | 46/563 [02:27<19:30, 2.26s/it]
[2024-06-07 00:38:37] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.12.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
8%|β | 46/563 [02:29<19:30, 2.26s/it]
8%|β | 47/563 [02:31<21:21, 2.48s/it]
[2024-06-07 00:38:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.12.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
8%|β | 47/563 [02:36<21:21, 2.48s/it]
9%|β | 48/563 [02:39<32:51, 3.83s/it]
[2024-06-07 00:38:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.12.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
9%|β | 48/563 [02:39<32:51, 3.83s/it]
9%|β | 49/563 [02:39<25:01, 2.92s/it]
[2024-06-07 00:38:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.12.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
9%|β | 49/563 [02:39<25:01, 2.92s/it]
[2024-06-07 00:38:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.12.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
9%|β | 49/563 [02:39<25:01, 2.92s/it]
9%|β | 51/563 [02:40<16:05, 1.89s/it]
[2024-06-07 00:38:49] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.12.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
9%|β | 51/563 [02:40<16:05, 1.89s/it]
9%|β | 52/563 [02:41<13:38, 1.60s/it]
[2024-06-07 00:38:49] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.13.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
9%|β | 52/563 [02:41<13:38, 1.60s/it]
[2024-06-07 00:38:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.13.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
9%|β | 52/563 [02:41<13:38, 1.60s/it]
10%|β | 54/563 [02:43<11:56, 1.41s/it]
[2024-06-07 00:38:56] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.13.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
10%|β | 54/563 [02:48<11:56, 1.41s/it]
10%|β | 55/563 [02:50<23:40, 2.80s/it]
[2024-06-07 00:38:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.13.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
10%|β | 55/563 [02:50<23:40, 2.80s/it]
10%|β | 56/563 [02:51<18:11, 2.15s/it]
[2024-06-07 00:38:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.13.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
10%|β | 56/563 [02:51<18:11, 2.15s/it]
[2024-06-07 00:38:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.13.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
10%|β | 56/563 [02:51<18:11, 2.15s/it]
10%|β | 58/563 [02:51<12:08, 1.44s/it]
[2024-06-07 00:39:00] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.13.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
10%|β | 58/563 [02:52<12:08, 1.44s/it]
10%|β | 59/563 [02:52<10:35, 1.26s/it]
[2024-06-07 00:39:00] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.14.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
10%|β | 59/563 [02:52<10:35, 1.26s/it]
[2024-06-07 00:39:01] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.14.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
10%|β | 59/563 [02:52<10:35, 1.26s/it]
11%|β | 61/563 [02:53<07:45, 1.08it/s]
[2024-06-07 00:39:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.14.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
11%|β | 61/563 [02:53<07:45, 1.08it/s]
11%|β | 62/563 [02:53<07:11, 1.16it/s]
[2024-06-07 00:39:02] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00007-of-00037.safetensors |
|
11%|β | 62/563 [02:53<07:11, 1.16it/s]
[2024-06-07 00:39:02] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00008-of-00037.safetensors |
|
11%|β | 62/563 [02:54<07:11, 1.16it/s]
[2024-06-07 00:39:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.14.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
11%|β | 62/563 [03:03<07:11, 1.16it/s]
11%|β | 63/563 [03:03<24:13, 2.91s/it]
[2024-06-07 00:39:13] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.14.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
11%|β | 63/563 [03:04<24:13, 2.91s/it]
11%|ββ | 64/563 [03:06<25:06, 3.02s/it]
[2024-06-07 00:39:19] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.14.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
11%|ββ | 64/563 [03:10<25:06, 3.02s/it]
12%|ββ | 65/563 [03:13<32:29, 3.91s/it]
[2024-06-07 00:39:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.14.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
12%|ββ | 65/563 [03:13<32:29, 3.91s/it]
12%|ββ | 66/563 [03:13<23:44, 2.87s/it]
[2024-06-07 00:39:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.15.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
12%|ββ | 66/563 [03:13<23:44, 2.87s/it]
[2024-06-07 00:39:22] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.15.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
12%|ββ | 66/563 [03:14<23:44, 2.87s/it]
12%|ββ | 68/563 [03:15<17:06, 2.07s/it]
[2024-06-07 00:39:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.15.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
12%|ββ | 68/563 [03:17<17:06, 2.07s/it]
12%|ββ | 69/563 [03:20<22:19, 2.71s/it]
[2024-06-07 00:39:28] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.15.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
12%|ββ | 69/563 [03:20<22:19, 2.71s/it]
12%|ββ | 70/563 [03:20<16:50, 2.05s/it]
[2024-06-07 00:39:28] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.15.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
12%|ββ | 70/563 [03:20<16:50, 2.05s/it]
[2024-06-07 00:39:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.15.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
12%|ββ | 70/563 [03:20<16:50, 2.05s/it]
13%|ββ | 72/563 [03:21<11:07, 1.36s/it]
[2024-06-07 00:39:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.15.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
13%|ββ | 72/563 [03:21<11:07, 1.36s/it]
13%|ββ | 73/563 [03:21<09:41, 1.19s/it]
[2024-06-07 00:39:30] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00009-of-00037.safetensors |
|
13%|ββ | 73/563 [03:21<09:41, 1.19s/it]
[2024-06-07 00:39:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.16.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
13%|ββ | 73/563 [03:35<09:41, 1.19s/it]
13%|ββ | 74/563 [03:38<42:10, 5.18s/it]
[2024-06-07 00:39:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.16.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
13%|ββ | 74/563 [03:38<42:10, 5.18s/it]
13%|ββ | 75/563 [03:39<31:18, 3.85s/it]
[2024-06-07 00:39:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.16.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
13%|ββ | 75/563 [03:39<31:18, 3.85s/it]
13%|ββ | 76/563 [03:39<24:28, 3.01s/it]
[2024-06-07 00:39:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.16.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
13%|ββ | 76/563 [03:40<24:28, 3.01s/it]
14%|ββ | 77/563 [03:40<18:58, 2.34s/it]
[2024-06-07 00:39:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.16.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
14%|ββ | 77/563 [03:40<18:58, 2.34s/it]
[2024-06-07 00:39:49] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.16.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
14%|ββ | 77/563 [03:41<18:58, 2.34s/it]
14%|ββ | 79/563 [03:42<14:17, 1.77s/it]
[2024-06-07 00:39:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.16.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
14%|ββ | 79/563 [03:42<14:17, 1.77s/it]
[2024-06-07 00:39:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.17.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
14%|ββ | 79/563 [03:42<14:17, 1.77s/it]
[2024-06-07 00:39:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.17.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
14%|ββ | 79/563 [03:43<14:17, 1.77s/it]
15%|ββ | 82/563 [03:44<10:08, 1.27s/it]
[2024-06-07 00:39:58] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.17.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
15%|ββ | 82/563 [03:50<10:08, 1.27s/it]
15%|ββ | 83/563 [03:52<20:24, 2.55s/it]
[2024-06-07 00:40:01] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.17.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
15%|ββ | 83/563 [03:52<20:24, 2.55s/it]
15%|ββ | 84/563 [03:52<16:12, 2.03s/it]
[2024-06-07 00:40:01] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.17.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
15%|ββ | 84/563 [03:52<16:12, 2.03s/it]
[2024-06-07 00:40:01] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.17.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
15%|ββ | 84/563 [03:53<16:12, 2.03s/it]
15%|ββ | 86/563 [03:53<11:14, 1.41s/it]
[2024-06-07 00:40:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.17.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
15%|ββ | 86/563 [03:53<11:14, 1.41s/it]
15%|ββ | 87/563 [03:54<09:51, 1.24s/it]
[2024-06-07 00:40:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.18.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
15%|ββ | 87/563 [03:58<09:51, 1.24s/it]
16%|ββ | 88/563 [04:01<20:05, 2.54s/it]
[2024-06-07 00:40:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.18.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
16%|ββ | 88/563 [04:01<20:05, 2.54s/it]
[2024-06-07 00:40:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.18.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
16%|ββ | 88/563 [04:01<20:05, 2.54s/it]
16%|ββ | 90/563 [04:02<13:21, 1.70s/it]
[2024-06-07 00:40:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.18.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
16%|ββ | 90/563 [04:02<13:21, 1.70s/it]
16%|ββ | 91/563 [04:02<11:26, 1.45s/it]
[2024-06-07 00:40:11] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00008-of-00037.safetensors |
|
16%|ββ | 91/563 [04:02<11:26, 1.45s/it]
[2024-06-07 00:40:11] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00009-of-00037.safetensors |
|
16%|ββ | 91/563 [04:03<11:26, 1.45s/it]
[2024-06-07 00:40:11] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00010-of-00037.safetensors |
|
16%|ββ | 91/563 [04:03<11:26, 1.45s/it]
[2024-06-07 00:40:19] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.18.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
16%|ββ | 91/563 [04:11<11:26, 1.45s/it]
16%|ββ | 92/563 [04:11<25:12, 3.21s/it]
[2024-06-07 00:40:20] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.18.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
16%|ββ | 92/563 [04:12<25:12, 3.21s/it]
17%|ββ | 93/563 [04:13<22:47, 2.91s/it]
[2024-06-07 00:40:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.18.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
17%|ββ | 93/563 [04:13<22:47, 2.91s/it]
[2024-06-07 00:40:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.19.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
17%|ββ | 93/563 [04:13<22:47, 2.91s/it]
[2024-06-07 00:40:22] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.19.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
17%|ββ | 93/563 [04:14<22:47, 2.91s/it]
17%|ββ | 96/563 [04:15<13:35, 1.75s/it]
[2024-06-07 00:40:28] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.19.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
17%|ββ | 96/563 [04:20<13:35, 1.75s/it]
17%|ββ | 97/563 [04:22<21:57, 2.83s/it]
[2024-06-07 00:40:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.19.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
17%|ββ | 97/563 [04:22<21:57, 2.83s/it]
[2024-06-07 00:40:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.19.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
17%|ββ | 97/563 [04:22<21:57, 2.83s/it]
[2024-06-07 00:40:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.19.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
17%|ββ | 97/563 [04:23<21:57, 2.83s/it]
18%|ββ | 100/563 [04:23<12:47, 1.66s/it]
[2024-06-07 00:40:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.19.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
18%|ββ | 100/563 [04:23<12:47, 1.66s/it]
18%|ββ | 101/563 [04:24<11:17, 1.47s/it]
[2024-06-07 00:40:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.20.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
18%|ββ | 101/563 [04:24<11:17, 1.47s/it]
[2024-06-07 00:40:33] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.20.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
18%|ββ | 101/563 [04:24<11:17, 1.47s/it]
18%|ββ | 103/563 [04:26<10:06, 1.32s/it]
[2024-06-07 00:40:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.20.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
18%|ββ | 103/563 [04:31<10:06, 1.32s/it]
18%|ββ | 104/563 [04:34<20:21, 2.66s/it]
[2024-06-07 00:40:42] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.20.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
18%|ββ | 104/563 [04:34<20:21, 2.66s/it]
19%|ββ | 105/563 [04:34<16:03, 2.10s/it]
[2024-06-07 00:40:42] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.20.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
19%|ββ | 105/563 [04:34<16:03, 2.10s/it]
[2024-06-07 00:40:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.20.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
19%|ββ | 105/563 [04:34<16:03, 2.10s/it]
19%|ββ | 107/563 [04:35<10:58, 1.44s/it]
[2024-06-07 00:40:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.20.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
19%|ββ | 107/563 [04:35<10:58, 1.44s/it]
19%|ββ | 108/563 [04:35<09:36, 1.27s/it]
[2024-06-07 00:40:44] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00010-of-00037.safetensors |
|
19%|ββ | 108/563 [04:35<09:36, 1.27s/it]
[2024-06-07 00:40:44] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00011-of-00037.safetensors |
|
19%|ββ | 108/563 [04:36<09:36, 1.27s/it]
[2024-06-07 00:40:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.21.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
19%|ββ | 108/563 [04:45<09:36, 1.27s/it]
19%|ββ | 109/563 [04:45<24:04, 3.18s/it]
[2024-06-07 00:40:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.21.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
19%|ββ | 109/563 [04:46<24:04, 3.18s/it]
20%|ββ | 110/563 [04:47<22:12, 2.94s/it]
[2024-06-07 00:40:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.21.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
20%|ββ | 110/563 [04:50<22:12, 2.94s/it]
20%|ββ | 111/563 [04:53<27:39, 3.67s/it]
[2024-06-07 00:41:01] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.21.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
20%|ββ | 111/563 [04:53<27:39, 3.67s/it]
20%|ββ | 112/563 [04:53<20:15, 2.69s/it]
[2024-06-07 00:41:01] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.21.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
20%|ββ | 112/563 [04:53<20:15, 2.69s/it]
[2024-06-07 00:41:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.21.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
20%|ββ | 112/563 [04:53<20:15, 2.69s/it]
20%|ββ | 114/563 [04:54<12:36, 1.68s/it]
[2024-06-07 00:41:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.21.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
20%|ββ | 114/563 [04:54<12:36, 1.68s/it]
20%|ββ | 115/563 [04:54<10:40, 1.43s/it]
[2024-06-07 00:41:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.22.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
20%|ββ | 115/563 [04:54<10:40, 1.43s/it]
[2024-06-07 00:41:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.22.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
20%|ββ | 115/563 [04:55<10:40, 1.43s/it]
21%|ββ | 117/563 [04:56<09:32, 1.28s/it]
[2024-06-07 00:41:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.22.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
21%|ββ | 117/563 [04:58<09:32, 1.28s/it]
21%|ββ | 118/563 [05:01<14:53, 2.01s/it]
[2024-06-07 00:41:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.22.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
21%|ββ | 118/563 [05:01<14:53, 2.01s/it]
[2024-06-07 00:41:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.22.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
21%|ββ | 118/563 [05:01<14:53, 2.01s/it]
[2024-06-07 00:41:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.22.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
21%|ββ | 118/563 [05:01<14:53, 2.01s/it]
21%|βββ | 121/563 [05:02<08:35, 1.17s/it]
[2024-06-07 00:41:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.22.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
21%|βββ | 121/563 [05:02<08:35, 1.17s/it]
22%|βββ | 122/563 [05:02<07:47, 1.06s/it]
[2024-06-07 00:41:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.23.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
22%|βββ | 122/563 [05:02<07:47, 1.06s/it]
[2024-06-07 00:41:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.23.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
22%|βββ | 122/563 [05:03<07:47, 1.06s/it]
22%|βββ | 124/563 [05:03<06:02, 1.21it/s]
[2024-06-07 00:41:12] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.23.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
22%|βββ | 124/563 [05:03<06:02, 1.21it/s]
22%|βββ | 125/563 [05:04<05:42, 1.28it/s]
[2024-06-07 00:41:12] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00011-of-00037.safetensors |
|
22%|βββ | 125/563 [05:04<05:42, 1.28it/s]
[2024-06-07 00:41:13] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00012-of-00037.safetensors |
|
22%|βββ | 125/563 [05:04<05:42, 1.28it/s]
[2024-06-07 00:41:23] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.23.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
22%|βββ | 125/563 [05:14<05:42, 1.28it/s]
22%|βββ | 126/563 [05:14<21:40, 2.98s/it]
[2024-06-07 00:41:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.23.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
22%|βββ | 126/563 [05:16<21:40, 2.98s/it]
23%|βββ | 127/563 [05:18<22:04, 3.04s/it]
[2024-06-07 00:41:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.23.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
23%|βββ | 127/563 [05:23<22:04, 3.04s/it]
23%|βββ | 128/563 [05:26<31:18, 4.32s/it]
[2024-06-07 00:41:34] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.23.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
23%|βββ | 128/563 [05:26<31:18, 4.32s/it]
23%|βββ | 129/563 [05:26<23:04, 3.19s/it]
[2024-06-07 00:41:34] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.24.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
23%|βββ | 129/563 [05:26<23:04, 3.19s/it]
[2024-06-07 00:41:35] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.24.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
23%|βββ | 129/563 [05:27<23:04, 3.19s/it]
23%|βββ | 131/563 [05:28<16:20, 2.27s/it]
[2024-06-07 00:41:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.24.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
23%|βββ | 131/563 [05:32<16:20, 2.27s/it]
23%|βββ | 132/563 [05:35<23:45, 3.31s/it]
[2024-06-07 00:41:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.24.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
23%|βββ | 132/563 [05:35<23:45, 3.31s/it]
[2024-06-07 00:41:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.24.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
23%|βββ | 132/563 [05:35<23:45, 3.31s/it]
[2024-06-07 00:41:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.24.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
23%|βββ | 132/563 [05:35<23:45, 3.31s/it]
24%|βββ | 135/563 [05:35<12:46, 1.79s/it]
[2024-06-07 00:41:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.24.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
24%|βββ | 135/563 [05:36<12:46, 1.79s/it]
24%|βββ | 136/563 [05:36<11:07, 1.56s/it]
[2024-06-07 00:41:44] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00013-of-00037.safetensors |
|
24%|βββ | 136/563 [05:36<11:07, 1.56s/it]
[2024-06-07 00:42:00] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.25.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
24%|βββ | 136/563 [05:52<11:07, 1.56s/it]
24%|βββ | 137/563 [05:55<37:32, 5.29s/it]
[2024-06-07 00:42:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.25.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
24%|βββ | 137/563 [05:55<37:32, 5.29s/it]
25%|βββ | 138/563 [05:55<28:47, 4.06s/it]
[2024-06-07 00:42:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.25.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
25%|βββ | 138/563 [05:55<28:47, 4.06s/it]
25%|βββ | 139/563 [05:56<22:53, 3.24s/it]
[2024-06-07 00:42:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.25.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
25%|βββ | 139/563 [05:56<22:53, 3.24s/it]
25%|βββ | 140/563 [05:56<17:56, 2.55s/it]
[2024-06-07 00:42:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.25.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
25%|βββ | 140/563 [05:56<17:56, 2.55s/it]
[2024-06-07 00:42:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.25.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
25%|βββ | 140/563 [05:57<17:56, 2.55s/it]
25%|βββ | 142/563 [05:59<13:29, 1.92s/it]
[2024-06-07 00:42:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.25.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
25%|βββ | 142/563 [05:59<13:29, 1.92s/it]
[2024-06-07 00:42:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.26.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
25%|βββ | 142/563 [05:59<13:29, 1.92s/it]
[2024-06-07 00:42:08] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.26.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
25%|βββ | 142/563 [05:59<13:29, 1.92s/it]
26%|βββ | 145/563 [06:01<09:28, 1.36s/it]
[2024-06-07 00:42:15] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.26.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
26%|βββ | 145/563 [06:07<09:28, 1.36s/it]
26%|βββ | 146/563 [06:10<19:04, 2.74s/it]
[2024-06-07 00:42:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.26.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
26%|βββ | 146/563 [06:10<19:04, 2.74s/it]
26%|βββ | 147/563 [06:10<15:11, 2.19s/it]
[2024-06-07 00:42:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.26.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
26%|βββ | 147/563 [06:10<15:11, 2.19s/it]
[2024-06-07 00:42:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.26.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
26%|βββ | 147/563 [06:10<15:11, 2.19s/it]
26%|βββ | 149/563 [06:10<10:27, 1.51s/it]
[2024-06-07 00:42:19] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.26.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
26%|βββ | 149/563 [06:11<10:27, 1.51s/it]
27%|βββ | 150/563 [06:11<09:06, 1.32s/it]
[2024-06-07 00:42:23] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.27.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
27%|βββ | 150/563 [06:14<09:06, 1.32s/it]
27%|βββ | 151/563 [06:17<15:47, 2.30s/it]
[2024-06-07 00:42:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.27.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
27%|βββ | 151/563 [06:17<15:47, 2.30s/it]
[2024-06-07 00:42:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.27.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
27%|βββ | 151/563 [06:17<15:47, 2.30s/it]
27%|βββ | 153/563 [06:17<10:37, 1.55s/it]
[2024-06-07 00:42:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.27.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
27%|βββ | 153/563 [06:18<10:37, 1.55s/it]
27%|βββ | 154/563 [06:18<09:10, 1.35s/it]
[2024-06-07 00:42:27] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00013-of-00037.safetensors |
|
27%|βββ | 154/563 [06:18<09:10, 1.35s/it]
[2024-06-07 00:42:27] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00012-of-00037.safetensors |
|
27%|βββ | 154/563 [06:19<09:10, 1.35s/it]
[2024-06-07 00:42:27] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00014-of-00037.safetensors |
|
27%|βββ | 154/563 [06:19<09:10, 1.35s/it]
[2024-06-07 00:42:35] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.27.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
27%|βββ | 154/563 [06:27<09:10, 1.35s/it]
28%|βββ | 155/563 [06:27<21:20, 3.14s/it]
[2024-06-07 00:42:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.27.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
28%|βββ | 155/563 [06:28<21:20, 3.14s/it]
28%|βββ | 156/563 [06:29<19:29, 2.87s/it]
[2024-06-07 00:42:37] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.27.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
28%|βββ | 156/563 [06:29<19:29, 2.87s/it]
[2024-06-07 00:42:37] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.28.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
28%|βββ | 156/563 [06:29<19:29, 2.87s/it]
[2024-06-07 00:42:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.28.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
28%|βββ | 156/563 [06:30<19:29, 2.87s/it]
28%|βββ | 159/563 [06:32<12:27, 1.85s/it]
[2024-06-07 00:42:49] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.28.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
28%|βββ | 159/563 [06:40<12:27, 1.85s/it]
28%|βββ | 160/563 [06:44<25:53, 3.86s/it]
[2024-06-07 00:42:52] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.28.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
28%|βββ | 160/563 [06:44<25:53, 3.86s/it]
29%|βββ | 161/563 [06:44<20:16, 3.03s/it]
[2024-06-07 00:42:52] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.28.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
29%|βββ | 161/563 [06:44<20:16, 3.03s/it]
[2024-06-07 00:42:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.28.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
29%|βββ | 161/563 [06:44<20:16, 3.03s/it]
29%|βββ | 163/563 [06:45<13:27, 2.02s/it]
[2024-06-07 00:42:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.28.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
29%|βββ | 163/563 [06:45<13:27, 2.02s/it]
29%|βββ | 164/563 [06:45<11:31, 1.73s/it]
[2024-06-07 00:42:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.29.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
29%|βββ | 164/563 [06:45<11:31, 1.73s/it]
[2024-06-07 00:42:55] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.29.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
29%|βββ | 164/563 [06:46<11:31, 1.73s/it]
29%|βββ | 166/563 [06:48<09:54, 1.50s/it]
[2024-06-07 00:43:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.29.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
29%|βββ | 166/563 [06:54<09:54, 1.50s/it]
30%|βββ | 167/563 [06:58<22:49, 3.46s/it]
[2024-06-07 00:43:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.29.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
30%|βββ | 167/563 [06:58<22:49, 3.46s/it]
30%|βββ | 168/563 [06:58<17:39, 2.68s/it]
[2024-06-07 00:43:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.29.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
30%|βββ | 168/563 [06:58<17:39, 2.68s/it]
[2024-06-07 00:43:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.29.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
30%|βββ | 168/563 [06:59<17:39, 2.68s/it]
30%|βββ | 170/563 [06:59<11:45, 1.79s/it]
[2024-06-07 00:43:08] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.29.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
30%|βββ | 170/563 [07:00<11:45, 1.79s/it]
30%|βββ | 171/563 [07:00<10:07, 1.55s/it]
[2024-06-07 00:43:08] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00014-of-00037.safetensors |
|
30%|βββ | 171/563 [07:00<10:07, 1.55s/it]
[2024-06-07 00:43:09] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00003-of-00037.safetensors |
|
30%|βββ | 171/563 [07:01<10:07, 1.55s/it]
[2024-06-07 00:43:19] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.3.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
30%|βββ | 171/563 [07:10<10:07, 1.55s/it]
31%|βββ | 172/563 [07:10<23:41, 3.64s/it]
[2024-06-07 00:43:20] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.3.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
31%|βββ | 172/563 [07:12<23:41, 3.64s/it]
31%|βββ | 173/563 [07:14<24:20, 3.75s/it]
[2024-06-07 00:43:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.3.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
31%|βββ | 173/563 [07:23<24:20, 3.75s/it]
31%|βββ | 174/563 [07:26<39:01, 6.02s/it]
[2024-06-07 00:43:35] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.3.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
31%|βββ | 174/563 [07:26<39:01, 6.02s/it]
31%|βββ | 175/563 [07:27<28:24, 4.39s/it]
[2024-06-07 00:43:35] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.3.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
31%|βββ | 175/563 [07:27<28:24, 4.39s/it]
[2024-06-07 00:43:35] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.3.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
31%|βββ | 175/563 [07:27<28:24, 4.39s/it]
31%|ββββ | 177/563 [07:28<17:09, 2.67s/it]
[2024-06-07 00:43:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.3.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
31%|ββββ | 177/563 [07:28<17:09, 2.67s/it]
32%|ββββ | 178/563 [07:28<14:08, 2.20s/it]
[2024-06-07 00:43:37] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.4.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
32%|ββββ | 178/563 [07:28<14:08, 2.20s/it]
[2024-06-07 00:43:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.4.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
32%|ββββ | 178/563 [07:29<14:08, 2.20s/it]
32%|ββββ | 180/563 [07:31<11:31, 1.81s/it]
[2024-06-07 00:43:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.4.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
32%|ββββ | 180/563 [07:39<11:31, 1.81s/it]
32%|ββββ | 181/563 [07:42<24:38, 3.87s/it]
[2024-06-07 00:43:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.4.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
32%|ββββ | 181/563 [07:42<24:38, 3.87s/it]
32%|ββββ | 182/563 [07:42<18:47, 2.96s/it]
[2024-06-07 00:43:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.4.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
32%|ββββ | 182/563 [07:42<18:47, 2.96s/it]
[2024-06-07 00:43:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.4.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
32%|ββββ | 182/563 [07:42<18:47, 2.96s/it]
33%|ββββ | 184/563 [07:43<12:08, 1.92s/it]
[2024-06-07 00:43:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.4.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
33%|ββββ | 184/563 [07:43<12:08, 1.92s/it]
33%|ββββ | 185/563 [07:43<10:18, 1.64s/it]
[2024-06-07 00:43:52] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.5.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
33%|ββββ | 185/563 [07:43<10:18, 1.64s/it]
[2024-06-07 00:43:52] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.5.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
33%|ββββ | 185/563 [07:44<10:18, 1.64s/it]
33%|ββββ | 187/563 [07:44<07:20, 1.17s/it]
[2024-06-07 00:43:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.5.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
33%|ββββ | 187/563 [07:45<07:20, 1.17s/it]
33%|ββββ | 188/563 [07:45<06:39, 1.06s/it]
[2024-06-07 00:43:53] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00003-of-00037.safetensors |
|
33%|ββββ | 188/563 [07:45<06:39, 1.06s/it]
[2024-06-07 00:43:54] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00015-of-00037.safetensors |
|
33%|ββββ | 188/563 [07:46<06:39, 1.06s/it]
[2024-06-07 00:44:01] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.30.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
33%|ββββ | 188/563 [07:53<06:39, 1.06s/it]
34%|ββββ | 189/563 [07:53<16:43, 2.68s/it]
[2024-06-07 00:44:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.30.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
34%|ββββ | 189/563 [07:54<16:43, 2.68s/it]
34%|ββββ | 190/563 [07:55<16:07, 2.59s/it]
[2024-06-07 00:44:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.30.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
34%|ββββ | 190/563 [08:01<16:07, 2.59s/it]
34%|ββββ | 191/563 [08:04<26:10, 4.22s/it]
[2024-06-07 00:44:12] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.30.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
34%|ββββ | 191/563 [08:04<26:10, 4.22s/it]
34%|ββββ | 192/563 [08:04<19:06, 3.09s/it]
[2024-06-07 00:44:12] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.30.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
34%|████ | 192/563 [08:04<19:06, 3.09s/it]
[2024-06-07 00:44:13] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.30.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
34%|████ | 192/563 [08:04<19:06, 3.09s/it]
34%|████ | 194/563 [08:05<11:44, 1.91s/it]
[2024-06-07 00:44:14] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.30.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
34%|████ | 194/563 [08:05<11:44, 1.91s/it]
35%|████ | 195/563 [08:05<09:51, 1.61s/it]
[2024-06-07 00:44:14] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.31.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
35%|████ | 195/563 [08:05<09:51, 1.61s/it]
[2024-06-07 00:44:15] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.31.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
35%|████ | 195/563 [08:06<09:51, 1.61s/it]
35%|████ | 197/563 [08:08<08:33, 1.40s/it]
[2024-06-07 00:44:19] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.31.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
35%|████ | 197/563 [08:10<08:33, 1.40s/it]
35%|████ | 198/563 [08:13<13:42, 2.25s/it]
[2024-06-07 00:44:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.31.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
35%|████ | 198/563 [08:13<13:42, 2.25s/it]
[2024-06-07 00:44:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.31.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
35%|████ | 198/563 [08:13<13:42, 2.25s/it]
[2024-06-07 00:44:22] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.31.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
35%|████ | 198/563 [08:13<13:42, 2.25s/it]
36%|████ | 201/563 [08:14<07:48, 1.29s/it]
[2024-06-07 00:44:22] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.31.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
36%|████ | 201/563 [08:14<07:48, 1.29s/it]
36%|████ | 202/563 [08:14<07:01, 1.17s/it]
[2024-06-07 00:44:23] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.32.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
36%|████ | 202/563 [08:14<07:01, 1.17s/it]
[2024-06-07 00:44:23] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.32.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
36%|████ | 202/563 [08:15<07:01, 1.17s/it]
36%|████ | 204/563 [08:15<05:23, 1.11it/s]
[2024-06-07 00:44:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.32.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
36%|████ | 204/563 [08:15<05:23, 1.11it/s]
36%|████ | 205/563 [08:16<05:03, 1.18it/s]
[2024-06-07 00:44:24] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00015-of-00037.safetensors |
|
36%|████ | 205/563 [08:16<05:03, 1.18it/s]
[2024-06-07 00:44:25] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00016-of-00037.safetensors |
|
36%|████ | 205/563 [08:16<05:03, 1.18it/s]
[2024-06-07 00:44:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.32.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
36%|ββββ | 205/563 [08:24<05:03, 1.18it/s]
37%|ββββ | 206/563 [08:24<14:21, 2.41s/it]
[2024-06-07 00:44:33] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.32.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
37%|ββββ | 206/563 [08:25<14:21, 2.41s/it]
37%|ββββ | 207/563 [08:27<15:19, 2.58s/it]
[2024-06-07 00:44:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.32.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
37%|ββββ | 207/563 [08:32<15:19, 2.58s/it]
37%|ββββ | 208/563 [08:35<23:43, 4.01s/it]
[2024-06-07 00:44:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.32.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
37%|ββββ | 208/563 [08:35<23:43, 4.01s/it]
[2024-06-07 00:44:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.33.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
37%|ββββ | 208/563 [08:35<23:43, 4.01s/it]
[2024-06-07 00:44:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.33.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
37%|ββββ | 208/563 [08:36<23:43, 4.01s/it]
37%|ββββ | 211/563 [08:37<13:28, 2.30s/it]
[2024-06-07 00:44:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.33.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
37%|ββββ | 211/563 [08:42<13:28, 2.30s/it]
38%|ββββ | 212/563 [08:45<19:29, 3.33s/it]
[2024-06-07 00:44:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.33.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
38%|ββββ | 212/563 [08:45<19:29, 3.33s/it]
38%|ββββ | 213/563 [08:45<15:15, 2.61s/it]
[2024-06-07 00:44:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.33.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
38%|ββββ | 213/563 [08:45<15:15, 2.61s/it]
[2024-06-07 00:44:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.33.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
38%|ββββ | 213/563 [08:45<15:15, 2.61s/it]
38%|ββββ | 215/563 [08:46<10:09, 1.75s/it]
[2024-06-07 00:44:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.33.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
38%|ββββ | 215/563 [08:46<10:09, 1.75s/it]
38%|ββββ | 216/563 [08:46<08:42, 1.50s/it]
[2024-06-07 00:44:55] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00017-of-00037.safetensors |
|
38%|ββββ | 216/563 [08:46<08:42, 1.50s/it]
[2024-06-07 00:45:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.34.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
38%|ββββ | 216/563 [08:59<08:42, 1.50s/it]
39%|ββββ | 217/563 [09:01<27:10, 4.71s/it]
[2024-06-07 00:45:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.34.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
39%|ββββ | 217/563 [09:01<27:10, 4.71s/it]
39%|ββββ | 218/563 [09:01<20:23, 3.55s/it]
[2024-06-07 00:45:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.34.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
39%|ββββ | 218/563 [09:02<20:23, 3.55s/it]
39%|ββββ | 219/563 [09:02<16:08, 2.82s/it]
[2024-06-07 00:45:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.34.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
39%|ββββ | 219/563 [09:02<16:08, 2.82s/it]
39%|ββββ | 220/563 [09:03<12:39, 2.21s/it]
[2024-06-07 00:45:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.34.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
39%|ββββ | 220/563 [09:03<12:39, 2.21s/it]
[2024-06-07 00:45:12] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.34.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
39%|ββββ | 220/563 [09:04<12:39, 2.21s/it]
39%|ββββ | 222/563 [09:05<09:40, 1.70s/it]
[2024-06-07 00:45:13] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.34.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
39%|ββββ | 222/563 [09:05<09:40, 1.70s/it]
[2024-06-07 00:45:13] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.35.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
39%|ββββ | 222/563 [09:05<09:40, 1.70s/it]
[2024-06-07 00:45:14] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.35.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
39%|ββββ | 222/563 [09:06<09:40, 1.70s/it]
40%|ββββ | 225/563 [09:07<06:52, 1.22s/it]
[2024-06-07 00:45:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.35.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
40%|ββββ | 225/563 [09:12<06:52, 1.22s/it]
40%|ββββ | 226/563 [09:15<14:19, 2.55s/it]
[2024-06-07 00:45:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.35.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
40%|ββββ | 226/563 [09:15<14:19, 2.55s/it]
40%|ββββ | 227/563 [09:15<11:23, 2.03s/it]
[2024-06-07 00:45:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.35.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
40%|ββββ | 227/563 [09:15<11:23, 2.03s/it]
[2024-06-07 00:45:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.35.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
40%|ββββ | 227/563 [09:16<11:23, 2.03s/it]
41%|ββββ | 229/563 [09:16<07:52, 1.42s/it]
[2024-06-07 00:45:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.35.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
41%|ββββ | 229/563 [09:16<07:52, 1.42s/it]
41%|ββββ | 230/563 [09:17<06:54, 1.25s/it]
[2024-06-07 00:45:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.36.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
41%|ββββ | 230/563 [09:21<06:54, 1.25s/it]
41%|ββββ | 231/563 [09:24<14:05, 2.55s/it]
[2024-06-07 00:45:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.36.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
41%|ββββ | 231/563 [09:24<14:05, 2.55s/it]
[2024-06-07 00:45:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.36.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
41%|ββββ | 231/563 [09:24<14:05, 2.55s/it]
41%|βββββ | 233/563 [09:24<09:22, 1.70s/it]
[2024-06-07 00:45:33] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.36.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
41%|βββββ | 233/563 [09:25<09:22, 1.70s/it]
42%|βββββ | 234/563 [09:25<08:02, 1.47s/it]
[2024-06-07 00:45:34] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00017-of-00037.safetensors |
|
42%|βββββ | 234/563 [09:25<08:02, 1.47s/it]
[2024-06-07 00:45:34] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00016-of-00037.safetensors |
|
42%|βββββ | 234/563 [09:26<08:02, 1.47s/it]
[2024-06-07 00:45:34] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00018-of-00037.safetensors |
|
42%|βββββ | 234/563 [09:26<08:02, 1.47s/it]
[2024-06-07 00:45:42] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.36.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
42%|βββββ | 234/563 [09:34<08:02, 1.47s/it]
42%|βββββ | 235/563 [09:34<17:43, 3.24s/it]
[2024-06-07 00:45:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.36.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
42%|βββββ | 235/563 [09:35<17:43, 3.24s/it]
42%|βββββ | 236/563 [09:36<16:16, 2.99s/it]
[2024-06-07 00:45:45] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.36.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
42%|βββββ | 236/563 [09:36<16:16, 2.99s/it]
[2024-06-07 00:45:45] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.37.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
42%|βββββ | 236/563 [09:36<16:16, 2.99s/it]
[2024-06-07 00:45:46] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.37.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
42%|βββββ | 236/563 [09:37<16:16, 2.99s/it]
42%|βββββ | 239/563 [09:38<09:50, 1.82s/it]
[2024-06-07 00:45:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.37.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
42%|βββββ | 239/563 [09:46<09:50, 1.82s/it]
43%|βββββ | 240/563 [09:48<18:34, 3.45s/it]
[2024-06-07 00:45:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.37.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
43%|βββββ | 240/563 [09:48<18:34, 3.45s/it]
[2024-06-07 00:45:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.37.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
43%|βββββ | 240/563 [09:49<18:34, 3.45s/it]
[2024-06-07 00:45:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.37.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
43%|βββββ | 240/563 [09:49<18:34, 3.45s/it]
43%|βββββ | 243/563 [09:49<10:41, 2.00s/it]
[2024-06-07 00:45:58] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.37.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
43%|βββββ | 243/563 [09:50<10:41, 2.00s/it]
43%|βββββ | 244/563 [09:50<09:21, 1.76s/it]
[2024-06-07 00:45:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.38.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
43%|βββββ | 244/563 [09:50<09:21, 1.76s/it]
[2024-06-07 00:45:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.38.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
43%|βββββ | 244/563 [09:51<09:21, 1.76s/it]
44%|βββββ | 246/563 [09:52<08:08, 1.54s/it]
[2024-06-07 00:46:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.38.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
44%|βββββ | 246/563 [09:57<08:08, 1.54s/it]
44%|βββββ | 247/563 [10:00<14:33, 2.76s/it]
[2024-06-07 00:46:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.38.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
44%|βββββ | 247/563 [10:00<14:33, 2.76s/it]
44%|βββββ | 248/563 [10:00<11:28, 2.19s/it]
[2024-06-07 00:46:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.38.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
44%|βββββ | 248/563 [10:00<11:28, 2.19s/it]
[2024-06-07 00:46:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.38.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
44%|βββββ | 248/563 [10:01<11:28, 2.19s/it]
44%|βββββ | 250/563 [10:01<07:53, 1.51s/it]
[2024-06-07 00:46:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.38.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
44%|βββββ | 250/563 [10:01<07:53, 1.51s/it]
45%|βββββ | 251/563 [10:02<06:53, 1.33s/it]
[2024-06-07 00:46:10] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00018-of-00037.safetensors |
|
45%|βββββ | 251/563 [10:02<06:53, 1.33s/it]
[2024-06-07 00:46:11] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00019-of-00037.safetensors |
|
45%|βββββ | 251/563 [10:02<06:53, 1.33s/it]
[2024-06-07 00:46:20] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.39.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
45%|βββββ | 251/563 [10:12<06:53, 1.33s/it]
45%|βββββ | 252/563 [10:12<17:30, 3.38s/it]
[2024-06-07 00:46:22] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.39.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
45%|βββββ | 252/563 [10:14<17:30, 3.38s/it]
45%|βββββ | 253/563 [10:16<18:09, 3.51s/it]
[2024-06-07 00:46:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.39.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
45%|βββββ | 253/563 [10:23<18:09, 3.51s/it]
45%|βββββ | 254/563 [10:27<28:39, 5.56s/it]
[2024-06-07 00:46:35] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.39.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
45%|βββββ | 254/563 [10:27<28:39, 5.56s/it]
45%|βββββ | 255/563 [10:27<20:57, 4.08s/it]
[2024-06-07 00:46:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.39.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
45%|βββββ | 255/563 [10:27<20:57, 4.08s/it]
[2024-06-07 00:46:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.39.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
45%|βββββ | 255/563 [10:28<20:57, 4.08s/it]
46%|βββββ | 257/563 [10:28<12:42, 2.49s/it]
[2024-06-07 00:46:37] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.39.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
46%|βββββ | 257/563 [10:28<12:42, 2.49s/it]
46%|βββββ | 258/563 [10:29<10:30, 2.07s/it]
[2024-06-07 00:46:37] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.40.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
46%|βββββ | 258/563 [10:29<10:30, 2.07s/it]
[2024-06-07 00:46:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.40.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
46%|βββββ | 258/563 [10:30<10:30, 2.07s/it]
46%|βββββ | 260/563 [10:31<08:40, 1.72s/it]
[2024-06-07 00:46:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.40.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
46%|βββββ | 260/563 [10:39<08:40, 1.72s/it]
46%|βββββ | 261/563 [10:42<18:55, 3.76s/it]
[2024-06-07 00:46:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.40.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
46%|βββββ | 261/563 [10:42<18:55, 3.76s/it]
47%|βββββ | 262/563 [10:42<14:26, 2.88s/it]
[2024-06-07 00:46:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.40.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
47%|βββββ | 262/563 [10:42<14:26, 2.88s/it]
[2024-06-07 00:46:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.40.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
47%|βββββ | 262/563 [10:42<14:26, 2.88s/it]
47%|βββββ | 264/563 [10:43<09:20, 1.88s/it]
[2024-06-07 00:46:52] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.40.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
47%|βββββ | 264/563 [10:43<09:20, 1.88s/it]
47%|βββββ | 265/563 [10:44<07:56, 1.60s/it]
[2024-06-07 00:46:52] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.41.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
47%|βββββ | 265/563 [10:44<07:56, 1.60s/it]
[2024-06-07 00:46:52] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.41.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
47%|βββββ | 265/563 [10:44<07:56, 1.60s/it]
47%|βββββ | 267/563 [10:44<05:37, 1.14s/it]
[2024-06-07 00:46:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.41.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
47%|βββββ | 267/563 [10:45<05:37, 1.14s/it]
48%|βββββ | 268/563 [10:45<05:05, 1.03s/it]
[2024-06-07 00:46:54] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00019-of-00037.safetensors |
|
48%|βββββ | 268/563 [10:45<05:05, 1.03s/it]
[2024-06-07 00:46:54] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00020-of-00037.safetensors |
|
48%|βββββ | 268/563 [10:46<05:05, 1.03s/it]
[2024-06-07 00:47:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.41.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
48%|βββββ | 268/563 [10:53<05:05, 1.03s/it]
48%|βββββ | 269/563 [10:54<13:39, 2.79s/it]
[2024-06-07 00:47:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.41.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
48%|βββββ | 269/563 [10:54<13:39, 2.79s/it]
48%|βββββ | 270/563 [10:56<12:57, 2.65s/it]
[2024-06-07 00:47:08] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.41.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
48%|βββββ | 270/563 [11:00<12:57, 2.65s/it]
48%|βββββ | 271/563 [11:02<17:46, 3.65s/it]
[2024-06-07 00:47:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.41.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
48%|βββββ | 271/563 [11:02<17:46, 3.65s/it]
48%|βββββ | 272/563 [11:02<13:04, 2.70s/it]
[2024-06-07 00:47:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.42.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
48%|βββββ | 272/563 [11:02<13:04, 2.70s/it]
[2024-06-07 00:47:12] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.42.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
48%|βββββ | 272/563 [11:03<13:04, 2.70s/it]
49%|βββββ | 274/563 [11:05<09:34, 1.99s/it]
[2024-06-07 00:47:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.42.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
49%|βββββ | 274/563 [11:09<09:34, 1.99s/it]
49%|βββββ | 275/563 [11:12<15:49, 3.30s/it]
[2024-06-07 00:47:20] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.42.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
49%|βββββ | 275/563 [11:12<15:49, 3.30s/it]
49%|βββββ | 276/563 [11:12<11:53, 2.49s/it]
[2024-06-07 00:47:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.42.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
49%|βββββ | 276/563 [11:12<11:53, 2.49s/it]
[2024-06-07 00:47:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.42.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
49%|βββββ | 276/563 [11:12<11:53, 2.49s/it]
49%|βββββ | 278/563 [11:13<07:38, 1.61s/it]
[2024-06-07 00:47:22] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.42.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
49%|βββββ | 278/563 [11:13<07:38, 1.61s/it]
50%|βββββ | 279/563 [11:14<06:32, 1.38s/it]
[2024-06-07 00:47:22] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00021-of-00037.safetensors |
|
50%|βββββ | 279/563 [11:14<06:32, 1.38s/it]
[2024-06-07 00:47:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.43.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
50%|βββββ | 279/563 [11:29<06:32, 1.38s/it]
50%|βββββ | 280/563 [11:32<26:32, 5.63s/it]
[2024-06-07 00:47:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.43.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
50%|βββββ | 280/563 [11:32<26:32, 5.63s/it]
50%|βββββ | 281/563 [11:32<19:39, 4.18s/it]
[2024-06-07 00:47:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.43.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
50%|βββββ | 281/563 [11:32<19:39, 4.18s/it]
50%|βββββ | 282/563 [11:33<15:17, 3.26s/it]
[2024-06-07 00:47:42] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.43.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
50%|βββββ | 282/563 [11:33<15:17, 3.26s/it]
50%|βββββ | 283/563 [11:34<11:47, 2.53s/it]
[2024-06-07 00:47:42] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.43.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
50%|βββββ | 283/563 [11:34<11:47, 2.53s/it]
[2024-06-07 00:47:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.43.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
50%|βββββ | 283/563 [11:34<11:47, 2.53s/it]
51%|βββββ | 285/563 [11:36<08:44, 1.89s/it]
[2024-06-07 00:47:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.43.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
51%|βββββ | 285/563 [11:36<08:44, 1.89s/it]
[2024-06-07 00:47:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.44.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
51%|βββββ | 285/563 [11:36<08:44, 1.89s/it]
[2024-06-07 00:47:45] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.44.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
51%|βββββ | 285/563 [11:37<08:44, 1.89s/it]
51%|βββββ | 288/563 [11:38<06:05, 1.33s/it]
[2024-06-07 00:47:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.44.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
51%|βββββ | 288/563 [11:41<06:05, 1.33s/it]
51%|ββββββ | 289/563 [11:44<10:04, 2.21s/it]
[2024-06-07 00:47:52] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.44.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
51%|ββββββ | 289/563 [11:44<10:04, 2.21s/it]
[2024-06-07 00:47:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.44.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
51%|ββββββ | 289/563 [11:44<10:04, 2.21s/it]
[2024-06-07 00:47:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.44.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
51%|ββββββ | 289/563 [11:44<10:04, 2.21s/it]
52%|ββββββ | 292/563 [11:45<06:06, 1.35s/it]
[2024-06-07 00:47:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.44.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
52%|ββββββ | 292/563 [11:45<06:06, 1.35s/it]
52%|ββββββ | 293/563 [11:45<05:30, 1.22s/it]
[2024-06-07 00:47:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.45.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
52%|ββββββ | 293/563 [11:48<05:30, 1.22s/it]
52%|ββββββ | 294/563 [11:51<09:10, 2.05s/it]
[2024-06-07 00:47:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.45.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
52%|ββββββ | 294/563 [11:51<09:10, 2.05s/it]
[2024-06-07 00:48:00] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.45.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
52%|ββββββ | 294/563 [11:51<09:10, 2.05s/it]
53%|ββββββ | 296/563 [11:52<06:27, 1.45s/it]
[2024-06-07 00:48:00] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.45.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
53%|ββββββ | 296/563 [11:52<06:27, 1.45s/it]
53%|ββββββ | 297/563 [11:52<05:39, 1.28s/it]
[2024-06-07 00:48:01] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00021-of-00037.safetensors |
|
53%|ββββββ | 297/563 [11:52<05:39, 1.28s/it]
[2024-06-07 00:48:01] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00020-of-00037.safetensors |
|
53%|ββββββ | 297/563 [11:53<05:39, 1.28s/it]
[2024-06-07 00:48:02] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00022-of-00037.safetensors |
|
53%|ββββββ | 297/563 [11:53<05:39, 1.28s/it]
[2024-06-07 00:48:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.45.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
53%|ββββββ | 297/563 [12:02<05:39, 1.28s/it]
53%|ββββββ | 298/563 [12:02<14:10, 3.21s/it]
[2024-06-07 00:48:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.45.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
53%|ββββββ | 298/563 [12:03<14:10, 3.21s/it]
53%|ββββββ | 299/563 [12:04<12:53, 2.93s/it]
[2024-06-07 00:48:13] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.45.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
53%|ββββββ | 299/563 [12:04<12:53, 2.93s/it]
[2024-06-07 00:48:13] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.46.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
53%|ββββββ | 299/563 [12:04<12:53, 2.93s/it]
[2024-06-07 00:48:13] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.46.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
53%|ββββββ | 299/563 [12:05<12:53, 2.93s/it]
54%|ββββββ | 302/563 [12:06<07:49, 1.80s/it]
[2024-06-07 00:48:19] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.46.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
54%|ββββββ | 302/563 [12:10<07:49, 1.80s/it]
54%|ββββββ | 303/563 [12:13<11:53, 2.75s/it]
[2024-06-07 00:48:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.46.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
54%|ββββββ | 303/563 [12:13<11:53, 2.75s/it]
54%|ββββββ | 304/563 [12:13<09:21, 2.17s/it]
[2024-06-07 00:48:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.46.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
54%|ββββββ | 304/563 [12:13<09:21, 2.17s/it]
[2024-06-07 00:48:22] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.46.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
54%|ββββββ | 304/563 [12:13<09:21, 2.17s/it]
54%|ββββββ | 306/563 [12:14<06:22, 1.49s/it]
[2024-06-07 00:48:22] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.46.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
54%|ββββββ | 306/563 [12:14<06:22, 1.49s/it]
55%|ββββββ | 307/563 [12:14<05:33, 1.30s/it]
[2024-06-07 00:48:23] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.47.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
55%|ββββββ | 307/563 [12:14<05:33, 1.30s/it]
[2024-06-07 00:48:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.47.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
55%|ββββββ | 307/563 [12:15<05:33, 1.30s/it]
55%|ββββββ | 309/563 [12:17<05:10, 1.22s/it]
[2024-06-07 00:48:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.47.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
55%|ββββββ | 309/563 [12:19<05:10, 1.22s/it]
55%|ββββββ | 310/563 [12:21<08:25, 2.00s/it]
[2024-06-07 00:48:30] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.47.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
55%|ββββββ | 310/563 [12:21<08:25, 2.00s/it]
[2024-06-07 00:48:30] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.47.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
55%|ββββββ | 310/563 [12:22<08:25, 2.00s/it]
[2024-06-07 00:48:30] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.47.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
55%|ββββββ | 310/563 [12:22<08:25, 2.00s/it]
56%|ββββββ | 313/563 [12:22<04:55, 1.18s/it]
[2024-06-07 00:48:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.47.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
56%|ββββββ | 313/563 [12:23<04:55, 1.18s/it]
56%|ββββββ | 314/563 [12:23<04:28, 1.08s/it]
[2024-06-07 00:48:31] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00022-of-00037.safetensors |
|
56%|ββββββ | 314/563 [12:23<04:28, 1.08s/it]
[2024-06-07 00:48:32] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00023-of-00037.safetensors |
|
56%|ββββββ | 314/563 [12:23<04:28, 1.08s/it]
[2024-06-07 00:48:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.48.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
56%|ββββββ | 314/563 [12:33<04:28, 1.08s/it]
56%|ββββββ | 315/563 [12:33<12:00, 2.91s/it]
[2024-06-07 00:48:42] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.48.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
56%|ββββββ | 315/563 [12:34<12:00, 2.91s/it]
56%|ββββββ | 316/563 [12:35<11:13, 2.72s/it]
[2024-06-07 00:48:45] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.48.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
56%|ββββββ | 316/563 [12:37<11:13, 2.72s/it]
56%|ββββββ | 317/563 [12:39<12:56, 3.16s/it]
[2024-06-07 00:48:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.48.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
56%|ββββββ | 317/563 [12:39<12:56, 3.16s/it]
[2024-06-07 00:48:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.48.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
56%|ββββββ | 317/563 [12:39<12:56, 3.16s/it]
[2024-06-07 00:48:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.48.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
56%|ββββββ | 317/563 [12:40<12:56, 3.16s/it]
57%|ββββββ | 320/563 [12:40<06:47, 1.68s/it]
[2024-06-07 00:48:49] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.48.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
57%|ββββββ | 320/563 [12:40<06:47, 1.68s/it]
57%|ββββββ | 321/563 [12:41<05:55, 1.47s/it]
[2024-06-07 00:48:49] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.49.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
57%|ββββββ | 321/563 [12:41<05:55, 1.47s/it]
[2024-06-07 00:48:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.49.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
57%|ββββββ | 321/563 [12:42<05:55, 1.47s/it]
57%|ββββββ | 323/563 [12:43<05:15, 1.32s/it]
[2024-06-07 00:48:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.49.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
57%|ββββββ | 323/563 [12:46<05:15, 1.32s/it]
58%|ββββββ | 324/563 [12:48<08:28, 2.13s/it]
[2024-06-07 00:48:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.49.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
58%|ββββββ | 324/563 [12:48<08:28, 2.13s/it]
[2024-06-07 00:48:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.49.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
58%|ββββββ | 324/563 [12:48<08:28, 2.13s/it]
[2024-06-07 00:48:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.49.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
58%|ββββββ | 324/563 [12:49<08:28, 2.13s/it]
58%|ββββββ | 327/563 [12:49<04:59, 1.27s/it]
[2024-06-07 00:48:58] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.49.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
58%|ββββββ | 327/563 [12:49<04:59, 1.27s/it]
58%|ββββββ | 328/563 [12:50<04:30, 1.15s/it]
[2024-06-07 00:48:58] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.50.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
58%|ββββββ | 328/563 [12:50<04:30, 1.15s/it]
[2024-06-07 00:48:58] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.50.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
58%|ββββββ | 328/563 [12:50<04:30, 1.15s/it]
59%|ββββββ | 330/563 [12:50<03:28, 1.12it/s]
[2024-06-07 00:48:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.50.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
59%|ββββββ | 330/563 [12:51<03:28, 1.12it/s]
59%|ββββββ | 331/563 [12:51<03:14, 1.19it/s]
[2024-06-07 00:49:00] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00023-of-00037.safetensors |
|
59%|ββββββ | 331/563 [12:51<03:14, 1.19it/s]
[2024-06-07 00:49:00] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00004-of-00037.safetensors |
|
59%|ββββββ | 331/563 [12:52<03:14, 1.19it/s]
[2024-06-07 00:49:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.5.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
59%|ββββββ | 331/563 [13:01<03:14, 1.19it/s]
59%|ββββββ | 332/563 [13:01<10:58, 2.85s/it]
[2024-06-07 00:49:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.5.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
59%|ββββββ | 332/563 [13:02<10:58, 2.85s/it]
59%|ββββββ | 333/563 [13:04<11:16, 2.94s/it]
[2024-06-07 00:49:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.5.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
59%|ββββββ | 333/563 [13:10<11:16, 2.94s/it]
59%|ββββββ | 334/563 [13:13<17:15, 4.52s/it]
[2024-06-07 00:49:22] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.5.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
59%|ββββββ | 334/563 [13:13<17:15, 4.52s/it]
60%|ββββββ | 335/563 [13:14<12:43, 3.35s/it]
[2024-06-07 00:49:22] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.6.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
60%|ββββββ | 335/563 [13:14<12:43, 3.35s/it]
[2024-06-07 00:49:23] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.6.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
60%|ββββββ | 335/563 [13:14<12:43, 3.35s/it]
60%|ββββββ | 337/563 [13:16<08:53, 2.36s/it]
[2024-06-07 00:49:28] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.6.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
60%|ββββββ | 337/563 [13:19<08:53, 2.36s/it]
60%|ββββββ | 338/563 [13:22<12:20, 3.29s/it]
[2024-06-07 00:49:30] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.6.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
60%|ββββββ | 338/563 [13:22<12:20, 3.29s/it]
[2024-06-07 00:49:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.6.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
60%|ββββββ | 338/563 [13:22<12:20, 3.29s/it]
[2024-06-07 00:49:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.6.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
60%|ββββββ | 338/563 [13:22<12:20, 3.29s/it]
61%|ββββββ | 341/563 [13:23<06:35, 1.78s/it]
[2024-06-07 00:49:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.6.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
61%|ββββββ | 341/563 [13:23<06:35, 1.78s/it]
61%|ββββββ | 342/563 [13:23<05:43, 1.56s/it]
[2024-06-07 00:49:32] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00005-of-00037.safetensors |
|
61%|ββββββ | 342/563 [13:23<05:43, 1.56s/it]
[2024-06-07 00:49:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.7.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
61%|ββββββ | 342/563 [13:31<05:43, 1.56s/it]
61%|ββββββ | 343/563 [13:34<12:55, 3.53s/it]
[2024-06-07 00:49:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.7.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
61%|ββββββ | 343/563 [13:34<12:55, 3.53s/it]
61%|ββββββ | 344/563 [13:34<09:55, 2.72s/it]
[2024-06-07 00:49:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.7.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
61%|ββββββ | 344/563 [13:35<09:55, 2.72s/it]
61%|βββββββ | 345/563 [13:35<08:06, 2.23s/it]
[2024-06-07 00:49:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.7.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
61%|βββββββ | 345/563 [13:35<08:06, 2.23s/it]
61%|βββββββ | 346/563 [13:36<06:31, 1.80s/it]
[2024-06-07 00:49:44] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00004-of-00037.safetensors |
|
61%|βββββββ | 346/563 [13:36<06:31, 1.80s/it]
[2024-06-07 00:49:45] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00005-of-00037.safetensors |
|
61%|βββββββ | 346/563 [13:36<06:31, 1.80s/it]
[2024-06-07 00:49:45] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00024-of-00037.safetensors |
|
61%|βββββββ | 346/563 [13:36<06:31, 1.80s/it]
[2024-06-07 00:49:55] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.50.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
61%|βββββββ | 346/563 [13:46<06:31, 1.80s/it]
62%|βββββββ | 347/563 [13:46<15:17, 4.25s/it]
[2024-06-07 00:49:56] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.50.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
62%|βββββββ | 347/563 [13:47<15:17, 4.25s/it]
62%|βββββββ | 348/563 [13:48<13:03, 3.64s/it]
[2024-06-07 00:49:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.50.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
62%|βββββββ | 348/563 [13:51<13:03, 3.64s/it]
62%|βββββββ | 349/563 [13:53<13:52, 3.89s/it]
[2024-06-07 00:50:01] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.50.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
62%|βββββββ | 349/563 [13:53<13:52, 3.89s/it]
[2024-06-07 00:50:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.51.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
62%|βββββββ | 349/563 [13:53<13:52, 3.89s/it]
[2024-06-07 00:50:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.51.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
62%|βββββββ | 349/563 [13:54<13:52, 3.89s/it]
63%|βββββββ | 352/563 [13:55<07:30, 2.13s/it]
[2024-06-07 00:50:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.51.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
63%|βββββββ | 352/563 [13:57<07:30, 2.13s/it]
63%|βββββββ | 353/563 [14:00<09:13, 2.63s/it]
[2024-06-07 00:50:08] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.51.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
63%|βββββββ | 353/563 [14:00<09:13, 2.63s/it]
[2024-06-07 00:50:08] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.51.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
63%|βββββββ | 353/563 [14:00<09:13, 2.63s/it]
[2024-06-07 00:50:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.51.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
63%|βββββββ | 353/563 [14:00<09:13, 2.63s/it]
63%|βββββββ | 356/563 [14:00<05:15, 1.52s/it]
[2024-06-07 00:50:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.51.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
63%|βββββββ | 356/563 [14:01<05:15, 1.52s/it]
63%|βββββββ | 357/563 [14:01<04:38, 1.35s/it]
[2024-06-07 00:50:10] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00025-of-00037.safetensors |
|
63%|βββββββ | 357/563 [14:01<04:38, 1.35s/it]
[2024-06-07 00:50:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.52.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
63%|βββββββ | 357/563 [14:16<04:38, 1.35s/it]
64%|βββββββ | 358/563 [14:18<15:59, 4.68s/it]
[2024-06-07 00:50:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.52.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
64%|βββββββ | 358/563 [14:18<15:59, 4.68s/it]
[2024-06-07 00:50:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.52.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
64%|βββββββ | 358/563 [14:19<15:59, 4.68s/it]
64%|βββββββ | 360/563 [14:19<10:29, 3.10s/it]
[2024-06-07 00:50:28] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.52.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
64%|βββββββ | 360/563 [14:20<10:29, 3.10s/it]
64%|βββββββ | 361/563 [14:20<08:39, 2.57s/it]
[2024-06-07 00:50:28] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.52.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
64%|βββββββ | 361/563 [14:20<08:39, 2.57s/it]
[2024-06-07 00:50:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.52.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
64%|βββββββ | 361/563 [14:21<08:39, 2.57s/it]
64%|βββββββ | 363/563 [14:22<06:39, 2.00s/it]
[2024-06-07 00:50:30] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.52.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
64%|βββββββ | 363/563 [14:22<06:39, 2.00s/it]
[2024-06-07 00:50:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.53.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
64%|βββββββ | 363/563 [14:22<06:39, 2.00s/it]
[2024-06-07 00:50:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.53.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
64%|βββββββ | 363/563 [14:23<06:39, 2.00s/it]
65%|βββββββ | 366/563 [14:24<04:40, 1.42s/it]
[2024-06-07 00:50:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.53.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
65%|βββββββ | 366/563 [14:29<04:40, 1.42s/it]
65%|βββββββ | 367/563 [14:32<08:09, 2.50s/it]
[2024-06-07 00:50:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.53.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
65%|βββββββ | 367/563 [14:32<08:09, 2.50s/it]
65%|βββββββ | 368/563 [14:32<06:33, 2.02s/it]
[2024-06-07 00:50:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.53.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
65%|βββββββ | 368/563 [14:32<06:33, 2.02s/it]
[2024-06-07 00:50:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.53.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
65%|βββββββ | 368/563 [14:32<06:33, 2.02s/it]
66%|βββββββ | 370/563 [14:33<04:35, 1.42s/it]
[2024-06-07 00:50:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.53.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
66%|βββββββ | 370/563 [14:33<04:35, 1.42s/it]
66%|βββββββ | 371/563 [14:33<04:01, 1.26s/it]
[2024-06-07 00:50:45] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.54.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
66%|βββββββ | 371/563 [14:37<04:01, 1.26s/it]
66%|βββββββ | 372/563 [14:39<07:22, 2.31s/it]
[2024-06-07 00:50:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.54.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
66%|βββββββ | 372/563 [14:39<07:22, 2.31s/it]
[2024-06-07 00:50:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.54.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
66%|βββββββ | 372/563 [14:39<07:22, 2.31s/it]
66%|βββββββ | 374/563 [14:40<04:56, 1.57s/it]
[2024-06-07 00:50:49] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.54.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
66%|βββββββ | 374/563 [14:40<04:56, 1.57s/it]
67%|βββββββ | 375/563 [14:41<04:14, 1.36s/it]
[2024-06-07 00:50:49] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00025-of-00037.safetensors |
|
67%|βββββββ | 375/563 [14:41<04:14, 1.36s/it]
[2024-06-07 00:50:49] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00024-of-00037.safetensors |
|
67%|βββββββ | 375/563 [14:41<04:14, 1.36s/it]
[2024-06-07 00:50:50] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00026-of-00037.safetensors |
|
67%|βββββββ | 375/563 [14:41<04:14, 1.36s/it]
[2024-06-07 00:50:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.54.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
67%|βββββββ | 375/563 [14:49<04:14, 1.36s/it]
67%|βββββββ | 376/563 [14:49<09:23, 3.01s/it]
[2024-06-07 00:50:58] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.54.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
67%|βββββββ | 376/563 [14:50<09:23, 3.01s/it]
67%|βββββββ | 377/563 [14:51<08:34, 2.77s/it]
[2024-06-07 00:50:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.54.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
67%|βββββββ | 377/563 [14:51<08:34, 2.77s/it]
[2024-06-07 00:50:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.55.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
67%|βββββββ | 377/563 [14:51<08:34, 2.77s/it]
[2024-06-07 00:51:00] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.55.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
67%|βββββββ | 377/563 [14:52<08:34, 2.77s/it]
67%|βββββββ | 380/563 [14:53<05:08, 1.68s/it]
[2024-06-07 00:51:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.55.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
67%|βββββββ | 380/563 [14:57<05:08, 1.68s/it]
68%|βββββββ | 381/563 [14:59<07:51, 2.59s/it]
[2024-06-07 00:51:08] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.55.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
68%|βββββββ | 381/563 [14:59<07:51, 2.59s/it]
[2024-06-07 00:51:08] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.55.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
68%|βββββββ | 381/563 [14:59<07:51, 2.59s/it]
[2024-06-07 00:51:08] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.55.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
68%|βββββββ | 381/563 [15:00<07:51, 2.59s/it]
68%|βββββββ | 384/563 [15:00<04:33, 1.53s/it]
[2024-06-07 00:51:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.55.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
68%|βββββββ | 384/563 [15:00<04:33, 1.53s/it]
68%|βββββββ | 385/563 [15:01<04:01, 1.36s/it]
[2024-06-07 00:51:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.56.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
68%|βββββββ | 385/563 [15:01<04:01, 1.36s/it]
[2024-06-07 00:51:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.56.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
68%|βββββββ | 385/563 [15:01<04:01, 1.36s/it]
69%|βββββββ | 387/563 [15:03<03:39, 1.24s/it]
[2024-06-07 00:51:14] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.56.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
69%|βββββββ | 387/563 [15:06<03:39, 1.24s/it]
69%|βββββββ | 388/563 [15:08<06:13, 2.13s/it]
[2024-06-07 00:51:17] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.56.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
69%|βββββββ | 388/563 [15:08<06:13, 2.13s/it]
[2024-06-07 00:51:17] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.56.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
69%|βββββββ | 388/563 [15:09<06:13, 2.13s/it]
[2024-06-07 00:51:17] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.56.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
69%|βββββββ | 388/563 [15:09<06:13, 2.13s/it]
69%|βββββββ | 391/563 [15:09<03:42, 1.29s/it]
[2024-06-07 00:51:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.56.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
69%|βββββββ | 391/563 [15:10<03:42, 1.29s/it]
70%|βββββββ | 392/563 [15:10<03:19, 1.17s/it]
[2024-06-07 00:51:18] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00026-of-00037.safetensors |
|
70%|βββββββ | 392/563 [15:10<03:19, 1.17s/it]
[2024-06-07 00:51:19] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00027-of-00037.safetensors |
|
70%|βββββββ | 392/563 [15:10<03:19, 1.17s/it]
[2024-06-07 00:51:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.57.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
70%|βββββββ | 392/563 [15:19<03:19, 1.17s/it]
70%|βββββββ | 393/563 [15:19<07:51, 2.78s/it]
[2024-06-07 00:51:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.57.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
70%|βββββββ | 393/563 [15:20<07:51, 2.78s/it]
70%|βββββββ | 394/563 [15:22<07:54, 2.81s/it]
[2024-06-07 00:51:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.57.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
70%|βββββββ | 394/563 [15:28<07:54, 2.81s/it]
70%|βββββββ | 395/563 [15:31<11:59, 4.28s/it]
[2024-06-07 00:51:39] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.57.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
70%|βββββββ | 395/563 [15:31<11:59, 4.28s/it]
70%|βββββββ | 396/563 [15:31<08:55, 3.21s/it]
[2024-06-07 00:51:39] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.57.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
70%|βββββββ | 396/563 [15:31<08:55, 3.21s/it]
[2024-06-07 00:51:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.57.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
70%|βββββββ | 396/563 [15:31<08:55, 3.21s/it]
71%|βββββββ | 398/563 [15:32<05:31, 2.01s/it]
[2024-06-07 00:51:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.57.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
71%|βββββββ | 398/563 [15:32<05:31, 2.01s/it]
71%|βββββββ | 399/563 [15:32<04:36, 1.68s/it]
[2024-06-07 00:51:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.58.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
71%|βββββββ | 399/563 [15:32<04:36, 1.68s/it]
[2024-06-07 00:51:42] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.58.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
71%|βββββββ | 399/563 [15:33<04:36, 1.68s/it]
71%|βββββββ | 401/563 [15:34<03:52, 1.44s/it]
[2024-06-07 00:51:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.58.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
71%|βββββββ | 401/563 [15:39<03:52, 1.44s/it]
71%|ββββββββ | 402/563 [15:42<07:29, 2.79s/it]
[2024-06-07 00:51:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.58.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
71%|ββββββββ | 402/563 [15:42<07:29, 2.79s/it]
72%|ββββββββ | 403/563 [15:42<05:42, 2.14s/it]
[2024-06-07 00:51:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.58.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
72%|ββββββββ | 403/563 [15:42<05:42, 2.14s/it]
[2024-06-07 00:51:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.58.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
72%|ββββββββ | 403/563 [15:42<05:42, 2.14s/it]
72%|ββββββββ | 405/563 [15:43<03:45, 1.43s/it]
[2024-06-07 00:51:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.58.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
72%|ββββββββ | 405/563 [15:43<03:45, 1.43s/it]
72%|ββββββββ | 406/563 [15:43<03:14, 1.24s/it]
[2024-06-07 00:51:52] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.59.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
72%|ββββββββ | 406/563 [15:43<03:14, 1.24s/it]
[2024-06-07 00:51:52] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.59.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
72%|ββββββββ | 406/563 [15:44<03:14, 1.24s/it]
72%|ββββββββ | 408/563 [15:44<02:20, 1.10it/s]
[2024-06-07 00:51:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.59.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
72%|ββββββββ | 408/563 [15:44<02:20, 1.10it/s]
73%|ββββββββ | 409/563 [15:45<02:10, 1.18it/s]
[2024-06-07 00:51:53] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00027-of-00037.safetensors |
|
73%|ββββββββ | 409/563 [15:45<02:10, 1.18it/s]
[2024-06-07 00:51:54] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00028-of-00037.safetensors |
|
73%|ββββββββ | 409/563 [15:45<02:10, 1.18it/s]
[2024-06-07 00:52:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.59.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
73%|ββββββββ | 409/563 [15:56<02:10, 1.18it/s]
73%|ββββββββ | 410/563 [15:56<08:10, 3.21s/it]
[2024-06-07 00:52:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.59.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
73%|ββββββββ | 410/563 [15:57<08:10, 3.21s/it]
73%|ββββββββ | 411/563 [15:58<07:52, 3.11s/it]
[2024-06-07 00:52:12] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.59.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
73%|ββββββββ | 411/563 [16:04<07:52, 3.11s/it]
73%|ββββββββ | 412/563 [16:07<11:16, 4.48s/it]
[2024-06-07 00:52:15] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.59.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
73%|ββββββββ | 412/563 [16:07<11:16, 4.48s/it]
73%|ββββββββ | 413/563 [16:07<08:11, 3.28s/it]
[2024-06-07 00:52:15] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.60.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
73%|ββββββββ | 413/563 [16:07<08:11, 3.28s/it]
[2024-06-07 00:52:16] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.60.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
73%|ββββββββ | 413/563 [16:08<08:11, 3.28s/it]
74%|ββββββββ | 415/563 [16:09<05:39, 2.30s/it]
[2024-06-07 00:52:22] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.60.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
74%|ββββββββ | 415/563 [16:14<05:39, 2.30s/it]
74%|ββββββββ | 416/563 [16:16<08:33, 3.50s/it]
[2024-06-07 00:52:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.60.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
74%|ββββββββ | 416/563 [16:16<08:33, 3.50s/it]
74%|ββββββββ | 417/563 [16:16<06:24, 2.63s/it]
[2024-06-07 00:52:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.60.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
74%|ββββββββ | 417/563 [16:16<06:24, 2.63s/it]
[2024-06-07 00:52:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.60.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
74%|ββββββββ | 417/563 [16:17<06:24, 2.63s/it]
74%|ββββββββ | 419/563 [16:17<04:02, 1.69s/it]
[2024-06-07 00:52:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.60.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
74%|ββββββββ | 419/563 [16:17<04:02, 1.69s/it]
75%|ββββββββ | 420/563 [16:18<03:25, 1.44s/it]
[2024-06-07 00:52:26] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00029-of-00037.safetensors |
|
75%|ββββββββ | 420/563 [16:18<03:25, 1.44s/it]
[2024-06-07 00:52:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.61.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
75%|ββββββββ | 420/563 [16:32<03:25, 1.44s/it]
75%|ββββββββ | 421/563 [16:35<12:40, 5.35s/it]
[2024-06-07 00:52:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.61.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
75%|ββββββββ | 421/563 [16:35<12:40, 5.35s/it]
[2024-06-07 00:52:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.61.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
75%|ββββββββ | 421/563 [16:35<12:40, 5.35s/it]
75%|ββββββββ | 423/563 [16:36<07:43, 3.31s/it]
[2024-06-07 00:52:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.61.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
75%|ββββββββ | 423/563 [16:36<07:43, 3.31s/it]
75%|ββββββββ | 424/563 [16:36<06:13, 2.69s/it]
[2024-06-07 00:52:45] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.61.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
75%|ββββββββ | 424/563 [16:36<06:13, 2.69s/it]
[2024-06-07 00:52:46] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.61.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
75%|ββββββββ | 424/563 [16:37<06:13, 2.69s/it]
76%|ββββββββ | 426/563 [16:38<04:38, 2.04s/it]
[2024-06-07 00:52:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.61.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
76%|ββββββββ | 426/563 [16:38<04:38, 2.04s/it]
[2024-06-07 00:52:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.62.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
76%|ββββββββ | 426/563 [16:38<04:38, 2.04s/it]
[2024-06-07 00:52:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.62.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
76%|ββββββββ | 426/563 [16:39<04:38, 2.04s/it]
76%|ββββββββ | 429/563 [16:40<03:10, 1.42s/it]
[2024-06-07 00:52:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.62.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
76%|ββββββββ | 429/563 [16:45<03:10, 1.42s/it]
76%|ββββββββ | 430/563 [16:47<05:20, 2.41s/it]
[2024-06-07 00:52:56] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.62.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
76%|ββββββββ | 430/563 [16:47<05:20, 2.41s/it]
77%|ββββββββ | 431/563 [16:48<04:15, 1.94s/it]
[2024-06-07 00:52:56] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.62.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
77%|ββββββββ | 431/563 [16:48<04:15, 1.94s/it]
[2024-06-07 00:52:56] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.62.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
77%|ββββββββ | 431/563 [16:48<04:15, 1.94s/it]
77%|ββββββββ | 433/563 [16:48<02:57, 1.37s/it]
[2024-06-07 00:52:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.62.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
77%|ββββββββ | 433/563 [16:49<02:57, 1.37s/it]
77%|ββββββββ | 434/563 [16:49<02:35, 1.21s/it]
[2024-06-07 00:53:00] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.63.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
77%|ββββββββ | 434/563 [16:52<02:35, 1.21s/it]
77%|ββββββββ | 435/563 [16:54<04:38, 2.18s/it]
[2024-06-07 00:53:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.63.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
77%|ββββββββ | 435/563 [16:54<04:38, 2.18s/it]
[2024-06-07 00:53:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.63.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
77%|ββββββββ | 435/563 [16:55<04:38, 2.18s/it]
78%|ββββββββ | 437/563 [16:55<03:06, 1.48s/it]
[2024-06-07 00:53:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.63.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
78%|ββββββββ | 437/563 [16:55<03:06, 1.48s/it]
78%|ββββββββ | 438/563 [16:56<02:41, 1.29s/it]
[2024-06-07 00:53:04] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00028-of-00037.safetensors |
|
78%|ββββββββ | 438/563 [16:56<02:41, 1.29s/it]
[2024-06-07 00:53:05] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00029-of-00037.safetensors |
|
78%|ββββββββ | 438/563 [16:56<02:41, 1.29s/it]
[2024-06-07 00:53:05] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00030-of-00037.safetensors |
|
78%|ββββββββ | 438/563 [16:57<02:41, 1.29s/it]
[2024-06-07 00:53:13] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.63.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
78%|ββββββββ | 438/563 [17:04<02:41, 1.29s/it]
78%|ββββββββ | 439/563 [17:04<06:11, 3.00s/it]
[2024-06-07 00:53:14] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.63.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
78%|ββββββββ | 439/563 [17:05<06:11, 3.00s/it]
78%|ββββββββ | 440/563 [17:06<05:39, 2.76s/it]
[2024-06-07 00:53:15] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.63.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
78%|ββββββββ | 440/563 [17:06<05:39, 2.76s/it]
[2024-06-07 00:53:15] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.64.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
78%|ββββββββ | 440/563 [17:06<05:39, 2.76s/it]
[2024-06-07 00:53:16] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.64.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
78%|ββββββββ | 440/563 [17:07<05:39, 2.76s/it]
79%|ββββββββ | 443/563 [17:08<03:21, 1.68s/it]
[2024-06-07 00:53:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.64.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
79%|ββββββββ | 443/563 [17:12<03:21, 1.68s/it]
79%|ββββββββ | 444/563 [17:15<05:10, 2.61s/it]
[2024-06-07 00:53:23] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.64.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
79%|ββββββββ | 444/563 [17:15<05:10, 2.61s/it]
[2024-06-07 00:53:23] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.64.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
79%|ββββββββ | 444/563 [17:15<05:10, 2.61s/it]
[2024-06-07 00:53:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.64.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
79%|ββββββββ | 444/563 [17:15<05:10, 2.61s/it]
79%|ββββββββ | 447/563 [17:16<02:58, 1.54s/it]
[2024-06-07 00:53:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.64.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
79%|ββββββββ | 447/563 [17:16<02:58, 1.54s/it]
80%|ββββββββ | 448/563 [17:16<02:37, 1.37s/it]
[2024-06-07 00:53:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.65.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
80%|ββββββββ | 448/563 [17:16<02:37, 1.37s/it]
[2024-06-07 00:53:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.65.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
80%|ββββββββ | 448/563 [17:17<02:37, 1.37s/it]
80%|ββββββββ | 450/563 [17:18<02:21, 1.25s/it]
[2024-06-07 00:53:30] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.65.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
80%|ββββββββ | 450/563 [17:21<02:21, 1.25s/it]
80%|ββββββββ | 451/563 [17:24<03:58, 2.13s/it]
[2024-06-07 00:53:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.65.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
80%|ββββββββ | 451/563 [17:24<03:58, 2.13s/it]
[2024-06-07 00:53:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.65.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
80%|ββββββββ | 451/563 [17:24<03:58, 2.13s/it]
[2024-06-07 00:53:33] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.65.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
80%|ββββββββ | 451/563 [17:24<03:58, 2.13s/it]
81%|ββββββββ | 454/563 [17:25<02:20, 1.29s/it]
[2024-06-07 00:53:33] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.65.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
81%|ββββββββ | 454/563 [17:25<02:20, 1.29s/it]
81%|ββββββββ | 455/563 [17:25<02:06, 1.17s/it]
[2024-06-07 00:53:34] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00030-of-00037.safetensors |
|
81%|ββββββββ | 455/563 [17:25<02:06, 1.17s/it]
[2024-06-07 00:53:34] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00031-of-00037.safetensors |
|
81%|ββββββββ | 455/563 [17:26<02:06, 1.17s/it]
[2024-06-07 00:53:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.66.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
81%|ββββββββ | 455/563 [17:35<02:06, 1.17s/it]
81%|ββββββββ | 456/563 [17:35<05:01, 2.82s/it]
[2024-06-07 00:53:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.66.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
81%|ββββββββ | 456/563 [17:36<05:01, 2.82s/it]
81%|ββββββββ | 457/563 [17:38<04:59, 2.83s/it]
[2024-06-07 00:53:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.66.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
81%|ββββββββ | 457/563 [17:43<04:59, 2.83s/it]
81%|βββββββββ | 458/563 [17:46<07:13, 4.13s/it]
[2024-06-07 00:53:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.66.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
81%|βββββββββ | 458/563 [17:46<07:13, 4.13s/it]
[2024-06-07 00:53:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.66.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
81%|βββββββββ | 458/563 [17:46<07:13, 4.13s/it]
[2024-06-07 00:53:55] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.66.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
81%|βββββββββ | 458/563 [17:46<07:13, 4.13s/it]
82%|βββββββββ | 461/563 [17:47<03:41, 2.17s/it]
[2024-06-07 00:53:55] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.66.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
82%|βββββββββ | 461/563 [17:47<03:41, 2.17s/it]
82%|βββββββββ | 462/563 [17:47<03:08, 1.86s/it]
[2024-06-07 00:53:56] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.67.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
82%|βββββββββ | 462/563 [17:47<03:08, 1.86s/it]
[2024-06-07 00:53:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.67.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
82%|βββββββββ | 462/563 [17:48<03:08, 1.86s/it]
82%|βββββββββ | 464/563 [17:49<02:35, 1.57s/it]
[2024-06-07 00:54:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.67.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
82%|βββββββββ | 464/563 [17:54<02:35, 1.57s/it]
83%|βββββββββ | 465/563 [17:57<04:30, 2.76s/it]
[2024-06-07 00:54:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.67.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
83%|βββββββββ | 465/563 [17:57<04:30, 2.76s/it]
[2024-06-07 00:54:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.67.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
83%|βββββββββ | 465/563 [17:57<04:30, 2.76s/it]
83%|βββββββββ | 467/563 [17:57<02:49, 1.76s/it]
[2024-06-07 00:54:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.67.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
83%|βββββββββ | 467/563 [17:57<02:49, 1.76s/it]
83%|βββββββββ | 468/563 [17:58<02:27, 1.56s/it]
[2024-06-07 00:54:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.67.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
83%|βββββββββ | 468/563 [17:58<02:27, 1.56s/it]
83%|βββββββββ | 469/563 [17:58<02:06, 1.34s/it]
[2024-06-07 00:54:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.68.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
83%|βββββββββ | 469/563 [17:58<02:06, 1.34s/it]
[2024-06-07 00:54:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.68.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
83%|βββββββββ | 469/563 [17:59<02:06, 1.34s/it]
84%|βββββββββ | 471/563 [17:59<01:29, 1.03it/s]
[2024-06-07 00:54:08] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.68.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
84%|βββββββββ | 471/563 [17:59<01:29, 1.03it/s]
84%|βββββββββ | 472/563 [18:00<01:20, 1.13it/s]
[2024-06-07 00:54:08] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00031-of-00037.safetensors |
|
84%|βββββββββ | 472/563 [18:00<01:20, 1.13it/s]
[2024-06-07 00:54:08] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00032-of-00037.safetensors |
|
84%|βββββββββ | 472/563 [18:00<01:20, 1.13it/s]
[2024-06-07 00:54:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.68.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
84%|βββββββββ | 472/563 [18:09<01:20, 1.13it/s]
84%|βββββββββ | 473/563 [18:09<04:25, 2.95s/it]
[2024-06-07 00:54:19] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.68.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
84%|βββββββββ | 473/563 [18:10<04:25, 2.95s/it]
84%|βββββββββ | 474/563 [18:12<04:15, 2.87s/it]
[2024-06-07 00:54:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.68.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
84%|βββββββββ | 474/563 [18:17<04:15, 2.87s/it]
84%|βββββββββ | 475/563 [18:20<06:21, 4.34s/it]
[2024-06-07 00:54:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.68.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
84%|βββββββββ | 475/563 [18:20<06:21, 4.34s/it]
85%|βββββββββ | 476/563 [18:20<04:35, 3.17s/it]
[2024-06-07 00:54:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.69.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
85%|βββββββββ | 476/563 [18:20<04:35, 3.17s/it]
[2024-06-07 00:54:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.69.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
85%|βββββββββ | 476/563 [18:21<04:35, 3.17s/it]
85%|βββββββββ | 478/563 [18:22<03:10, 2.24s/it]
[2024-06-07 00:54:35] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.69.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
85%|βββββββββ | 478/563 [18:27<03:10, 2.24s/it]
85%|βββββββββ | 479/563 [18:29<04:40, 3.34s/it]
[2024-06-07 00:54:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.69.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
85%|βββββββββ | 479/563 [18:29<04:40, 3.34s/it]
85%|βββββββββ | 480/563 [18:29<03:28, 2.52s/it]
[2024-06-07 00:54:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.69.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
85%|βββββββββ | 480/563 [18:29<03:28, 2.52s/it]
[2024-06-07 00:54:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.69.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
85%|βββββββββ | 480/563 [18:30<03:28, 2.52s/it]
86%|βββββββββ | 482/563 [18:30<02:11, 1.62s/it]
[2024-06-07 00:54:39] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.69.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
86%|βββββββββ | 482/563 [18:30<02:11, 1.62s/it]
86%|βββββββββ | 483/563 [18:31<01:50, 1.39s/it]
[2024-06-07 00:54:39] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00033-of-00037.safetensors |
|
86%|βββββββββ | 483/563 [18:31<01:50, 1.39s/it]
[2024-06-07 00:54:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.70.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
86%|βββββββββ | 483/563 [18:38<01:50, 1.39s/it]
86%|βββββββββ | 484/563 [18:41<04:47, 3.64s/it]
[2024-06-07 00:54:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.70.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
86%|βββββββββ | 484/563 [18:41<04:47, 3.64s/it]
[2024-06-07 00:54:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.70.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
86%|βββββββββ | 484/563 [18:42<04:47, 3.64s/it]
86%|βββββββββ | 486/563 [18:42<02:57, 2.31s/it]
[2024-06-07 00:54:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.70.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
86%|βββββββββ | 486/563 [18:42<02:57, 2.31s/it]
87%|βββββββββ | 487/563 [18:43<02:25, 1.92s/it]
[2024-06-07 00:54:51] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00033-of-00037.safetensors |
|
87%|βββββββββ | 487/563 [18:43<02:25, 1.92s/it]
[2024-06-07 00:54:51] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00032-of-00037.safetensors |
|
87%|βββββββββ | 487/563 [18:43<02:25, 1.92s/it]
[2024-06-07 00:54:52] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00005-of-00037.safetensors |
|
87%|βββββββββ | 487/563 [18:43<02:25, 1.92s/it]
[2024-06-07 00:54:55] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.7.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
87%|βββββββββ | 487/563 [18:46<02:25, 1.92s/it]
87%|βββββββββ | 488/563 [18:46<02:55, 2.34s/it]
[2024-06-07 00:54:55] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.7.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
87%|βββββββββ | 488/563 [18:47<02:55, 2.34s/it]
87%|βββββββββ | 489/563 [18:48<02:48, 2.28s/it]
[2024-06-07 00:54:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.7.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
87%|βββββββββ | 489/563 [18:48<02:48, 2.28s/it]
[2024-06-07 00:54:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.8.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
87%|βββββββββ | 489/563 [18:48<02:48, 2.28s/it]
[2024-06-07 00:54:58] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.8.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
87%|βββββββββ | 489/563 [18:49<02:48, 2.28s/it]
87%|βββββββββ | 492/563 [18:50<01:43, 1.45s/it]
[2024-06-07 00:55:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.8.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
87%|βββββββββ | 492/563 [18:54<01:43, 1.45s/it]
88%|βββββββββ | 493/563 [18:56<02:40, 2.29s/it]
[2024-06-07 00:55:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.8.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
88%|βββββββββ | 493/563 [18:56<02:40, 2.29s/it]
[2024-06-07 00:55:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.8.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
88%|βββββββββ | 493/563 [18:56<02:40, 2.29s/it]
[2024-06-07 00:55:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.8.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
88%|βββββββββ | 493/563 [18:56<02:40, 2.29s/it]
88%|βββββββββ | 496/563 [18:57<01:31, 1.36s/it]
[2024-06-07 00:55:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.8.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
88%|βββββββββ | 496/563 [18:57<01:31, 1.36s/it]
88%|βββββββββ | 497/563 [18:58<01:20, 1.23s/it]
[2024-06-07 00:55:08] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.9.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
88%|βββββββββ | 497/563 [18:59<01:20, 1.23s/it]
88%|βββββββββ | 498/563 [19:02<02:01, 1.87s/it]
[2024-06-07 00:55:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.9.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
88%|βββββββββ | 498/563 [19:02<02:01, 1.87s/it]
[2024-06-07 00:55:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.9.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
88%|βββββββββ | 498/563 [19:02<02:01, 1.87s/it]
89%|βββββββββ | 500/563 [19:03<01:23, 1.33s/it]
[2024-06-07 00:55:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.9.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
89%|βββββββββ | 500/563 [19:03<01:23, 1.33s/it]
89%|βββββββββ | 501/563 [19:03<01:13, 1.18s/it]
[2024-06-07 00:55:12] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00005-of-00037.safetensors |
|
89%|βββββββββ | 501/563 [19:03<01:13, 1.18s/it]
[2024-06-07 00:55:12] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00033-of-00037.safetensors |
|
89%|βββββββββ | 501/563 [19:04<01:13, 1.18s/it]
[2024-06-07 00:55:15] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.70.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
89%|βββββββββ | 501/563 [19:07<01:13, 1.18s/it]
89%|βββββββββ | 502/563 [19:07<01:41, 1.66s/it]
[2024-06-07 00:55:16] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.70.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
89%|βββββββββ | 502/563 [19:07<01:41, 1.66s/it]
89%|βββββββββ | 503/563 [19:09<01:45, 1.76s/it]
[2024-06-07 00:55:17] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.70.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
89%|βββββββββ | 503/563 [19:09<01:45, 1.76s/it]
[2024-06-07 00:55:17] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.71.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
89%|βββββββββ | 503/563 [19:09<01:45, 1.76s/it]
[2024-06-07 00:55:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.71.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
89%|βββββββββ | 503/563 [19:10<01:45, 1.76s/it]
90%|βββββββββ | 506/563 [19:11<01:09, 1.21s/it]
[2024-06-07 00:55:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.71.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
90%|βββββββββ | 506/563 [19:13<01:09, 1.21s/it]
90%|βββββββββ | 507/563 [19:15<01:42, 1.83s/it]
[2024-06-07 00:55:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.71.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
90%|βββββββββ | 507/563 [19:15<01:42, 1.83s/it]
[2024-06-07 00:55:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.71.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
90%|βββββββββ | 507/563 [19:15<01:42, 1.83s/it]
[2024-06-07 00:55:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.71.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
90%|βββββββββ | 507/563 [19:15<01:42, 1.83s/it]
91%|βββββββββ | 510/563 [19:16<00:59, 1.12s/it]
[2024-06-07 00:55:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.71.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
91%|βββββββββ | 510/563 [19:16<00:59, 1.12s/it]
91%|βββββββββ | 511/563 [19:16<00:53, 1.02s/it]
[2024-06-07 00:55:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.72.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
91%|βββββββββ | 511/563 [19:19<00:53, 1.02s/it]
91%|βββββββββ | 512/563 [19:21<01:27, 1.71s/it]
[2024-06-07 00:55:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.72.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
91%|βββββββββ | 512/563 [19:21<01:27, 1.71s/it]
[2024-06-07 00:55:30] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.72.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
91%|βββββββββ | 512/563 [19:21<01:27, 1.71s/it]
91%|ββββββββββ| 514/563 [19:22<01:00, 1.23s/it]
[2024-06-07 00:55:30] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.72.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
91%|ββββββββββ| 514/563 [19:22<01:00, 1.23s/it]
91%|ββββββββββ| 515/563 [19:22<00:52, 1.09s/it]
[2024-06-07 00:55:31] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00033-of-00037.safetensors |
|
91%|ββββββββββ| 515/563 [19:22<00:52, 1.09s/it]
[2024-06-07 00:55:31] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00034-of-00037.safetensors |
|
91%|ββββββββββ| 515/563 [19:23<00:52, 1.09s/it]
[2024-06-07 00:55:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.72.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
91%|ββββββββββ| 515/563 [19:32<00:52, 1.09s/it]
92%|ββββββββββ| 516/563 [19:32<02:19, 2.98s/it]
[2024-06-07 00:55:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.72.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
92%|ββββββββββ| 516/563 [19:33<02:19, 2.98s/it]
92%|ββββββββββ| 517/563 [19:35<02:17, 2.99s/it]
[2024-06-07 00:55:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.72.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
92%|ββββββββββ| 517/563 [19:35<02:17, 2.99s/it]
[2024-06-07 00:55:43] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.73.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
92%|ββββββββββ| 517/563 [19:35<02:17, 2.99s/it]
[2024-06-07 00:55:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.73.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
92%|ββββββββββ| 517/563 [19:36<02:17, 2.99s/it]
92%|ββββββββββ| 520/563 [19:37<01:19, 1.86s/it]
[2024-06-07 00:55:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.73.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
92%|ββββββββββ| 520/563 [19:42<01:19, 1.86s/it]
93%|ββββββββββ| 521/563 [19:45<02:09, 3.09s/it]
[2024-06-07 00:55:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.73.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
93%|ββββββββββ| 521/563 [19:45<02:09, 3.09s/it]
93%|ββββββββββ| 522/563 [19:45<01:39, 2.44s/it]
[2024-06-07 00:55:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.73.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
93%|ββββββββββ| 522/563 [19:45<01:39, 2.44s/it]
[2024-06-07 00:55:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.73.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
93%|ββββββββββ| 522/563 [19:46<01:39, 2.44s/it]
93%|ββββββββββ| 524/563 [19:46<01:04, 1.65s/it]
[2024-06-07 00:55:55] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.73.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
93%|ββββββββββ| 524/563 [19:46<01:04, 1.65s/it]
93%|ββββββββββ| 525/563 [19:47<00:54, 1.42s/it]
[2024-06-07 00:55:55] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.74.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
93%|ββββββββββ| 525/563 [19:47<00:54, 1.42s/it]
[2024-06-07 00:55:56] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.74.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
93%|ββββββββββ| 525/563 [19:48<00:54, 1.42s/it]
94%|ββββββββββ| 527/563 [19:49<00:46, 1.30s/it]
[2024-06-07 00:56:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.74.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
94%|ββββββββββ| 527/563 [19:54<00:46, 1.30s/it]
94%|ββββββββββ| 528/563 [19:56<01:31, 2.62s/it]
[2024-06-07 00:56:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.74.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
94%|ββββββββββ| 528/563 [19:56<01:31, 2.62s/it]
94%|ββββββββββ| 529/563 [19:56<01:08, 2.03s/it]
[2024-06-07 00:56:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.74.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
94%|ββββββββββ| 529/563 [19:56<01:08, 2.03s/it]
[2024-06-07 00:56:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.74.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
94%|ββββββββββ| 529/563 [19:57<01:08, 2.03s/it]
94%|ββββββββββ| 531/563 [19:57<00:43, 1.37s/it]
[2024-06-07 00:56:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.74.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
94%|ββββββββββ| 531/563 [19:57<00:43, 1.37s/it]
94%|ββββββββββ| 532/563 [19:58<00:37, 1.20s/it]
[2024-06-07 00:56:06] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00034-of-00037.safetensors |
|
94%|ββββββββββ| 532/563 [19:58<00:37, 1.20s/it]
[2024-06-07 00:56:07] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00035-of-00037.safetensors |
|
94%|ββββββββββ| 532/563 [19:58<00:37, 1.20s/it]
[2024-06-07 00:56:16] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.75.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
94%|ββββββββββ| 532/563 [20:08<00:37, 1.20s/it]
95%|ββββββββββ| 533/563 [20:08<01:39, 3.31s/it]
[2024-06-07 00:56:17] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.75.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
95%|ββββββββββ| 533/563 [20:09<01:39, 3.31s/it]
95%|ββββββββββ| 534/563 [20:11<01:32, 3.21s/it]
[2024-06-07 00:56:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.75.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
95%|ββββββββββ| 534/563 [20:16<01:32, 3.21s/it]
95%|ββββββββββ| 535/563 [20:19<02:07, 4.57s/it]
[2024-06-07 00:56:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.75.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
95%|ββββββββββ| 535/563 [20:19<02:07, 4.57s/it]
95%|ββββββββββ| 536/563 [20:19<01:30, 3.34s/it]
[2024-06-07 00:56:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.75.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
95%|ββββββββββ| 536/563 [20:19<01:30, 3.34s/it]
[2024-06-07 00:56:28] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.75.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
95%|ββββββββββ| 536/563 [20:19<01:30, 3.34s/it]
96%|ββββββββββ| 538/563 [20:20<00:50, 2.03s/it]
[2024-06-07 00:56:28] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.75.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
96%|ββββββββββ| 538/563 [20:20<00:50, 2.03s/it]
96%|ββββββββββ| 539/563 [20:20<00:40, 1.69s/it]
[2024-06-07 00:56:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.76.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
96%|ββββββββββ| 539/563 [20:20<00:40, 1.69s/it]
[2024-06-07 00:56:30] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.76.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
96%|ββββββββββ| 539/563 [20:21<00:40, 1.69s/it]
96%|ββββββββββ| 541/563 [20:22<00:31, 1.43s/it]
[2024-06-07 00:56:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.76.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
96%|ββββββββββ| 541/563 [20:27<00:31, 1.43s/it]
96%|ββββββββββ| 542/563 [20:30<00:57, 2.73s/it]
[2024-06-07 00:56:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.76.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
96%|ββββββββββ| 542/563 [20:30<00:57, 2.73s/it]
96%|ββββββββββ| 543/563 [20:30<00:41, 2.09s/it]
[2024-06-07 00:56:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.76.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
96%|ββββββββββ| 543/563 [20:30<00:41, 2.09s/it]
[2024-06-07 00:56:39] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.76.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
96%|ββββββββββ| 543/563 [20:30<00:41, 2.09s/it]
97%|ββββββββββ| 545/563 [20:31<00:25, 1.39s/it]
[2024-06-07 00:56:39] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.76.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
97%|ββββββββββ| 545/563 [20:31<00:25, 1.39s/it]
97%|ββββββββββ| 546/563 [20:31<00:20, 1.21s/it]
[2024-06-07 00:56:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.77.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
97%|ββββββββββ| 546/563 [20:31<00:20, 1.21s/it]
[2024-06-07 00:56:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.77.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
97%|ββββββββββ| 546/563 [20:31<00:20, 1.21s/it]
97%|ββββββββββ| 548/563 [20:32<00:13, 1.12it/s]
[2024-06-07 00:56:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.77.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
97%|ββββββββββ| 548/563 [20:32<00:13, 1.12it/s]
98%|ββββββββββ| 549/563 [20:33<00:11, 1.21it/s]
[2024-06-07 00:56:41] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00035-of-00037.safetensors |
|
98%|ββββββββββ| 549/563 [20:33<00:11, 1.21it/s]
[2024-06-07 00:56:41] INFO huggingface_loader.py:185: Loading HF parameters from: /models/Qwen2-72B-Instruct/model-00036-of-00037.safetensors |
|
98%|ββββββββββ| 549/563 [20:33<00:11, 1.21it/s]
[2024-06-07 00:56:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.77.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
98%|ββββββββββ| 549/563 [20:35<00:11, 1.21it/s]
98%|ββββββββββ| 550/563 [20:35<00:17, 1.33s/it]
[2024-06-07 00:56:45] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.77.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
98%|ββββββββββ| 550/563 [20:36<00:17, 1.33s/it]
98%|ββββββββββ| 551/563 [20:37<00:18, 1.51s/it]
[2024-06-07 00:56:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.77.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
98%|ββββββββββ| 551/563 [20:43<00:18, 1.51s/it]
98%|ββββββββββ| 552/563 [20:45<00:35, 3.19s/it]
[2024-06-07 00:56:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.77.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
98%|ββββββββββ| 552/563 [20:45<00:35, 3.19s/it]
[2024-06-07 00:56:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.78.input_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
98%|ββββββββββ| 552/563 [20:45<00:35, 3.19s/it]
[2024-06-07 00:56:55] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.78.mlp.down_proj.weight[0m", shape: (8192, 29568), dtype: float16 |
|
98%|ββββββββββ| 552/563 [20:46<00:35, 3.19s/it]
99%|ββββββββββ| 555/563 [20:47<00:14, 1.87s/it]
[2024-06-07 00:57:00] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.78.mlp.gate_up_proj.weight[0m", shape: (59136, 8192), dtype: float16 |
|
99%|ββββββββββ| 555/563 [20:52<00:14, 1.87s/it]
99%|ββββββββββ| 556/563 [20:54<00:19, 2.83s/it]
[2024-06-07 00:57:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.78.post_attention_layernorm.weight[0m", shape: (8192,), dtype: float16 |
|
99%|ββββββββββ| 556/563 [20:54<00:19, 2.83s/it]
[2024-06-07 00:57:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.78.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
99%|ββββββββββ| 556/563 [20:54<00:19, 2.83s/it]
[2024-06-07 00:57:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.78.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
99%|ββββββββββ| 556/563 [20:55<00:19, 2.83s/it]
99%|ββββββββββ| 559/563 [20:55<00:06, 1.65s/it]
[2024-06-07 00:57:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.78.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
99%|ββββββββββ| 559/563 [20:55<00:06, 1.65s/it]
99%|ββββββββββ| 560/563 [20:56<00:04, 1.46s/it]
[2024-06-07 00:57:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.79.self_attn.c_attn.bias[0m", shape: (10240,), dtype: float16 |
|
99%|ββββββββββ| 560/563 [20:56<00:04, 1.46s/it]
[2024-06-07 00:57:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.79.self_attn.c_attn.weight[0m", shape: (10240, 8192), dtype: float16 |
|
99%|ββββββββββ| 560/563 [20:56<00:04, 1.46s/it]
100%|ββββββββββ| 562/563 [20:56<00:01, 1.10s/it]
[2024-06-07 00:57:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "[1mmodel.layers.79.self_attn.o_proj.weight[0m", shape: (8192, 8192), dtype: float16 |
|
100%|ββββββββββ| 562/563 [20:57<00:01, 1.10s/it]
100%|ββββββββββ| 563/563 [20:57<00:00, 1.00s/it]
100%|ββββββββββ| 563/563 [20:57<00:00, 2.23s/it] |
|
[2024-06-07 00:57:05] INFO huggingface_loader.py:197: Unloading HF weight file: /models/Qwen2-72B-Instruct/model-00036-of-00037.safetensors |
|
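The pattern above repeats for every shard: load one safetensors file, emit the parameters it holds, unload it, and move on (shards are even revisited out of order, e.g. model-00005 and the second pass over model-00033 earlier in the run). As a rough illustration only, not mlc_llm's internal loader, one shard can be streamed with the safetensors package like this:

```python
# Minimal sketch of reading tensors from one HF shard with the safetensors
# library (illustrative; mlc_llm's huggingface_loader has its own pipeline).
from safetensors import safe_open

shard = "/models/Qwen2-72B-Instruct/model-00036-of-00037.safetensors"
with safe_open(shard, framework="pt", device="cpu") as f:
    for name in f.keys():               # e.g. "model.layers.77.mlp.down_proj.weight"
        tensor = f.get_tensor(name)     # materializes only this tensor
        print(name, tuple(tensor.shape), tensor.dtype)
```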
[2024-06-07 00:57:06] INFO stats.py:77: [92mTime usage[0m: HF loading: 330.285 sec; Pre-quantization mapping: 496.099 sec; Quantization: 0.000 sec |
|
[2024-06-07 00:57:06] INFO stats.py:91: [92mRAM usage[0m: Peak RAM: 14.883 GB. Total bytes loaded from disk: 293.177 GB |
|
[2024-06-07 00:57:06] INFO convert_weight.py:155: [92mParameter size[0m after quantization: 135.426 GB |
|
[2024-06-07 00:57:06] INFO convert_weight.py:160: [92mTotal parameters[0m: 78,698,975,232 |
|
[2024-06-07 00:57:06] INFO convert_weight.py:161: [92mBits per parameter[0m: 14.782 |
|
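The summary figures above are mutually consistent if the reported "GB" values are read as GiB (2^30 bytes); a quick hedged cross-check using only the numbers printed by the converter:

```python
# Cross-check of the converter's summary, assuming the log's "GB" means GiB (2**30 bytes).
param_size_gib = 135.426          # "Parameter size after quantization"
total_params   = 78_698_975_232   # "Total parameters"

bits_per_param = param_size_gib * 2**30 * 8 / total_params
print(round(bits_per_param, 3))   # 14.782, matching the reported "Bits per parameter"
```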
[2024-06-07 00:57:06] INFO convert_weight.py:166: Saved to directory: [1m/models/mlc-delivery/hf/mlc-ai/Qwen2-72B-Instruct-q0f16-MLC[0m |
|
|
|
All finished, 323 total shards committed, record saved to /models/mlc-delivery/hf/mlc-ai/Qwen2-72B-Instruct-q0f16-MLC/ndarray-cache.json |
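The delivery record can be inspected after the run; the sketch below assumes only that ndarray-cache.json is valid JSON and makes no claim about its exact schema:

```python
import json

# Peek at the delivery record written at the end of the run. Its schema is not
# shown in this log, so we only print the top-level structure.
path = "/models/mlc-delivery/hf/mlc-ai/Qwen2-72B-Instruct-q0f16-MLC/ndarray-cache.json"
with open(path) as f:
    record = json.load(f)

if isinstance(record, dict):
    for key, value in record.items():
        size = len(value) if isinstance(value, (list, dict)) else ""
        print(key, type(value).__name__, size)
else:
    print(type(record).__name__)
```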