06/03/2024 09:53:02 - INFO - transformers.tokenization_utils_base - loading file tokenizer.model from cache at None
06/03/2024 09:53:02 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/tokenizer.json
06/03/2024 09:53:02 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json from cache at None
06/03/2024 09:53:02 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/special_tokens_map.json
06/03/2024 09:53:02 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/tokenizer_config.json
06/03/2024 09:53:02 - INFO - llamafactory.data.loader - Loading dataset your_output_file_path_here_assay.json...
06/03/2024 09:53:03 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json
06/03/2024 09:53:03 - INFO - transformers.configuration_utils - Model config LlamaConfig {
  "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 32000,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 4096,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 48,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.41.2",
  "use_cache": false,
  "vocab_size": 40960
}
06/03/2024 09:53:03 - INFO - transformers.modeling_utils - loading weights file model.safetensors from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/model.safetensors.index.json
06/03/2024 09:56:37 - INFO - transformers.modeling_utils - Instantiating LlamaForCausalLM model under default dtype torch.float16.
06/03/2024 09:56:37 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
  "bos_token_id": 1,
  "eos_token_id": 32000,
  "use_cache": false
}
06/03/2024 09:56:42 - INFO - transformers.modeling_utils - All model checkpoint weights were used when initializing LlamaForCausalLM.
06/03/2024 09:56:42 - INFO - transformers.modeling_utils - All the weights of LlamaForCausalLM were initialized from the model checkpoint at yanolja/EEVE-Korean-Instruct-10.8B-v1.0. If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
06/03/2024 09:56:42 - INFO - transformers.modeling_utils - Generation config file not found, using a generation config created from the model config.
06/03/2024 09:56:42 - INFO - llamafactory.model.utils.checkpointing - Gradient checkpointing enabled.
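For reference, the load sequence above corresponds to a few lines of plain transformers code. A minimal sketch, not part of the log; the model ID, dtype, and use_cache value come from the config dump above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yanolja/EEVE-Korean-Instruct-10.8B-v1.0"

# Pulls tokenizer.json / special_tokens_map.json / tokenizer_config.json
# from the HF cache, as in the tokenization_utils_base lines above.
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The trainer instantiates the model under torch.float16 with use_cache
# disabled, which is required once gradient checkpointing is enabled.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, use_cache=False
)
model.gradient_checkpointing_enable()
```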
06/03/2024 09:56:42 - INFO - llamafactory.model.utils.attention - Using vanilla attention implementation.
06/03/2024 09:56:42 - INFO - llamafactory.model.adapter - Upcasting trainable params to float32.
06/03/2024 09:56:42 - INFO - llamafactory.model.adapter - Fine-tuning method: LoRA
06/03/2024 09:56:42 - INFO - llamafactory.model.utils.misc - Found linear modules: q_proj,o_proj,up_proj,down_proj,v_proj,k_proj,gate_proj
06/03/2024 09:56:42 - INFO - llamafactory.model.loader - trainable params: 31457280 || all params: 10836381696 || trainable%: 0.2903
06/03/2024 09:56:42 - INFO - transformers.trainer - Using auto half precision backend
06/03/2024 09:56:42 - INFO - transformers.trainer - ***** Running training *****
06/03/2024 09:56:42 - INFO - transformers.trainer - Num examples = 40,174
06/03/2024 09:56:42 - INFO - transformers.trainer - Num Epochs = 3
06/03/2024 09:56:42 - INFO - transformers.trainer - Instantaneous batch size per device = 2
06/03/2024 09:56:42 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16
06/03/2024 09:56:42 - INFO - transformers.trainer - Gradient Accumulation steps = 8
06/03/2024 09:56:42 - INFO - transformers.trainer - Total optimization steps = 7,530
06/03/2024 09:56:42 - INFO - transformers.trainer - Number of trainable parameters = 31,457,280
06/03/2024 09:56:48 - INFO - llamafactory.extras.callbacks - {'loss': 3.4521, 'learning_rate': 3.0000e-04, 'epoch': 0.00}
06/03/2024 09:56:54 - INFO - llamafactory.extras.callbacks - {'loss': 2.5577, 'learning_rate': 3.0000e-04, 'epoch': 0.00}
06/03/2024 09:57:00 - INFO - llamafactory.extras.callbacks - {'loss': 2.6864, 'learning_rate': 3.0000e-04, 'epoch': 0.01}
06/03/2024 09:57:07 - INFO - llamafactory.extras.callbacks - {'loss': 2.6988, 'learning_rate': 3.0000e-04, 'epoch': 0.01}
06/03/2024 09:57:13 - INFO - llamafactory.extras.callbacks - {'loss': 2.6097, 'learning_rate': 2.9999e-04, 'epoch': 0.01}
06/03/2024 09:57:19 - INFO - llamafactory.extras.callbacks - {'loss': 2.5627, 'learning_rate': 2.9999e-04, 'epoch': 0.01}
06/03/2024 09:57:25 - INFO - llamafactory.extras.callbacks - {'loss': 2.6011, 'learning_rate': 2.9999e-04, 'epoch': 0.01}
06/03/2024 09:57:31 - INFO - llamafactory.extras.callbacks - {'loss': 2.5422, 'learning_rate': 2.9998e-04, 'epoch': 0.02}
06/03/2024 09:57:37 - INFO - llamafactory.extras.callbacks - {'loss': 2.5569, 'learning_rate': 2.9998e-04, 'epoch': 0.02}
06/03/2024 09:57:42 - INFO - llamafactory.extras.callbacks - {'loss': 2.5303, 'learning_rate': 2.9997e-04, 'epoch': 0.02}
06/03/2024 09:57:48 - INFO - llamafactory.extras.callbacks - {'loss': 2.5741, 'learning_rate': 2.9996e-04, 'epoch': 0.02}
06/03/2024 09:57:54 - INFO - llamafactory.extras.callbacks - {'loss': 2.5897, 'learning_rate': 2.9996e-04, 'epoch': 0.02}
06/03/2024 09:58:00 - INFO - llamafactory.extras.callbacks - {'loss': 2.6875, 'learning_rate': 2.9995e-04, 'epoch': 0.03}
06/03/2024 09:58:06 - INFO - llamafactory.extras.callbacks - {'loss': 2.6568, 'learning_rate': 2.9994e-04, 'epoch': 0.03}
06/03/2024 09:58:13 - INFO - llamafactory.extras.callbacks - {'loss': 2.6262, 'learning_rate': 2.9993e-04, 'epoch': 0.03}
06/03/2024 09:58:19 - INFO - llamafactory.extras.callbacks - {'loss': 2.5888, 'learning_rate': 2.9992e-04, 'epoch': 0.03}
06/03/2024 09:58:25 - INFO - llamafactory.extras.callbacks - {'loss': 2.5537, 'learning_rate': 2.9991e-04, 'epoch': 0.03}
06/03/2024 09:58:31 - INFO - llamafactory.extras.callbacks - {'loss': 2.4183, 'learning_rate': 2.9990e-04, 'epoch': 0.04}
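The banner numbers are internally consistent and worth checking: 31,457,280 of 10,836,381,696 parameters is the logged 0.2903%, a per-device batch of 2 with 8 accumulation steps gives the total batch of 16, and 40,174 examples over 3 epochs at batch 16 yields the 7,530 optimization steps. The count also matches rank-8 LoRA over the seven projection modules listed above, though neither the rank nor the device count is logged. A quick arithmetic check (single-GPU run and rank 8 are inferences, not log facts):

```python
# Trainable fraction, as logged by llamafactory.model.loader.
trainable, total = 31_457_280, 10_836_381_696
print(f"trainable%: {100 * trainable / total:.4f}")   # -> 0.2903

# Effective batch and step count from the trainer banner (one GPU assumed).
per_device, grad_accum = 2, 8
total_batch = per_device * grad_accum                 # -> 16
steps = (40_174 // total_batch) * 3                   # -> 7530 (partial batch dropped)
print(total_batch, steps)

# Rank-8 LoRA on q/k/v/o/gate/up/down across 48 layers reproduces the count;
# kv = 8 kv-heads * 128 head_dim from the config dump above.
h, kv, inter, layers, r = 4096, 1024, 14336, 48, 8
per_layer = r * (2 * (h + h) + 2 * (h + kv) + 3 * (h + inter))
print(per_layer * layers)                             # -> 31457280
```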
06/03/2024 09:58:37 - INFO - llamafactory.extras.callbacks - {'loss': 2.5720, 'learning_rate': 2.9989e-04, 'epoch': 0.04}
06/03/2024 09:58:43 - INFO - llamafactory.extras.callbacks - {'loss': 2.4835, 'learning_rate': 2.9987e-04, 'epoch': 0.04}
06/03/2024 09:58:43 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-100
06/03/2024 09:58:43 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-100/tokenizer_config.json
06/03/2024 09:58:43 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-100/special_tokens_map.json
06/03/2024 09:58:49 - INFO - llamafactory.extras.callbacks - {'loss': 2.5790, 'learning_rate': 2.9986e-04, 'epoch': 0.04}
06/03/2024 09:58:55 - INFO - llamafactory.extras.callbacks - {'loss': 2.5179, 'learning_rate': 2.9985e-04, 'epoch': 0.04}
06/03/2024 09:59:01 - INFO - llamafactory.extras.callbacks - {'loss': 2.6215, 'learning_rate': 2.9983e-04, 'epoch': 0.05}
06/03/2024 09:59:07 - INFO - llamafactory.extras.callbacks - {'loss': 2.4763, 'learning_rate': 2.9982e-04, 'epoch': 0.05}
06/03/2024 09:59:13 - INFO - llamafactory.extras.callbacks - {'loss': 2.5380, 'learning_rate': 2.9980e-04, 'epoch': 0.05}
06/03/2024 09:59:19 - INFO - llamafactory.extras.callbacks - {'loss': 2.5162, 'learning_rate': 2.9979e-04, 'epoch': 0.05}
06/03/2024 09:59:25 - INFO - llamafactory.extras.callbacks - {'loss': 2.5734, 'learning_rate': 2.9977e-04, 'epoch': 0.05}
06/03/2024 09:59:31 - INFO - llamafactory.extras.callbacks - {'loss': 2.6802, 'learning_rate': 2.9975e-04, 'epoch': 0.06}
06/03/2024 09:59:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.5822, 'learning_rate': 2.9973e-04, 'epoch': 0.06}
06/03/2024 09:59:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.5107, 'learning_rate': 2.9971e-04, 'epoch': 0.06}
06/03/2024 09:59:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.6337, 'learning_rate': 2.9969e-04, 'epoch': 0.06}
06/03/2024 09:59:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.4263, 'learning_rate': 2.9967e-04, 'epoch': 0.06}
06/03/2024 10:00:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.5735, 'learning_rate': 2.9965e-04, 'epoch': 0.07}
06/03/2024 10:00:07 - INFO - llamafactory.extras.callbacks - {'loss': 2.5495, 'learning_rate': 2.9963e-04, 'epoch': 0.07}
06/03/2024 10:00:13 - INFO - llamafactory.extras.callbacks - {'loss': 2.4665, 'learning_rate': 2.9961e-04, 'epoch': 0.07}
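Each checkpoint directory above holds only the LoRA adapter plus the tokenizer files, so testing one means re-applying it to the base model. A minimal inference sketch with peft (an assumed workflow, not shown in the log):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-100"

tokenizer = AutoTokenizer.from_pretrained(ckpt)  # tokenizer files were saved with the checkpoint
base = AutoModelForCausalLM.from_pretrained(
    "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, ckpt).eval()  # applies the saved LoRA deltas
```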
06/03/2024 10:00:19 - INFO - llamafactory.extras.callbacks - {'loss': 2.5463, 'learning_rate': 2.9959e-04, 'epoch': 0.07}
06/03/2024 10:00:25 - INFO - llamafactory.extras.callbacks - {'loss': 2.5684, 'learning_rate': 2.9956e-04, 'epoch': 0.07}
06/03/2024 10:00:31 - INFO - llamafactory.extras.callbacks - {'loss': 2.5404, 'learning_rate': 2.9954e-04, 'epoch': 0.08}
06/03/2024 10:00:37 - INFO - llamafactory.extras.callbacks - {'loss': 2.4203, 'learning_rate': 2.9951e-04, 'epoch': 0.08}
06/03/2024 10:00:43 - INFO - llamafactory.extras.callbacks - {'loss': 2.5068, 'learning_rate': 2.9949e-04, 'epoch': 0.08}
06/03/2024 10:00:43 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-200
06/03/2024 10:00:44 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-200/tokenizer_config.json
06/03/2024 10:00:44 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-200/special_tokens_map.json
06/03/2024 10:00:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.5755, 'learning_rate': 2.9946e-04, 'epoch': 0.08}
06/03/2024 10:00:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.6048, 'learning_rate': 2.9944e-04, 'epoch': 0.08}
06/03/2024 10:01:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.6364, 'learning_rate': 2.9941e-04, 'epoch': 0.09}
06/03/2024 10:01:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.6253, 'learning_rate': 2.9938e-04, 'epoch': 0.09}
06/03/2024 10:01:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.4876, 'learning_rate': 2.9935e-04, 'epoch': 0.09}
06/03/2024 10:01:20 - INFO - llamafactory.extras.callbacks - {'loss': 2.5986, 'learning_rate': 2.9932e-04, 'epoch': 0.09}
06/03/2024 10:01:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.4700, 'learning_rate': 2.9929e-04, 'epoch': 0.09}
06/03/2024 10:01:32 - INFO - llamafactory.extras.callbacks - {'loss': 2.5446, 'learning_rate': 2.9926e-04, 'epoch': 0.10}
06/03/2024 10:01:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.4366, 'learning_rate': 2.9923e-04, 'epoch': 0.10}
06/03/2024 10:01:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.6270, 'learning_rate': 2.9920e-04, 'epoch': 0.10}
06/03/2024 10:01:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.5499, 'learning_rate': 2.9917e-04, 'epoch': 0.10}
06/03/2024 10:01:55 - INFO - llamafactory.extras.callbacks - {'loss': 2.5487, 'learning_rate': 2.9913e-04, 'epoch': 0.10}
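Because every optimization metric is emitted through llamafactory.extras.callbacks as a Python-dict literal, the loss curve can be recovered from the raw log with a few lines. A sketch (the file name train.log is an assumption):

```python
import ast
import re

# Matches the dict literal emitted by llamafactory.extras.callbacks.
PATTERN = re.compile(r"llamafactory\.extras\.callbacks - (\{[^{}]*\})")

records = []
with open("train.log", encoding="utf-8") as f:
    for line in f:
        for match in PATTERN.finditer(line):
            records.append(ast.literal_eval(match.group(1)))

losses = [r["loss"] for r in records]
print(f"{len(records)} entries, loss {losses[0]} -> {losses[-1]}")
```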
06/03/2024 10:02:01 - INFO - llamafactory.extras.callbacks - {'loss': 2.6916, 'learning_rate': 2.9910e-04, 'epoch': 0.11}
06/03/2024 10:02:07 - INFO - llamafactory.extras.callbacks - {'loss': 2.5951, 'learning_rate': 2.9906e-04, 'epoch': 0.11}
06/03/2024 10:02:13 - INFO - llamafactory.extras.callbacks - {'loss': 2.5271, 'learning_rate': 2.9903e-04, 'epoch': 0.11}
06/03/2024 10:02:19 - INFO - llamafactory.extras.callbacks - {'loss': 2.4868, 'learning_rate': 2.9899e-04, 'epoch': 0.11}
06/03/2024 10:02:25 - INFO - llamafactory.extras.callbacks - {'loss': 2.5136, 'learning_rate': 2.9896e-04, 'epoch': 0.11}
06/03/2024 10:02:31 - INFO - llamafactory.extras.callbacks - {'loss': 2.4441, 'learning_rate': 2.9892e-04, 'epoch': 0.12}
06/03/2024 10:02:37 - INFO - llamafactory.extras.callbacks - {'loss': 2.4815, 'learning_rate': 2.9888e-04, 'epoch': 0.12}
06/03/2024 10:02:43 - INFO - llamafactory.extras.callbacks - {'loss': 2.6206, 'learning_rate': 2.9884e-04, 'epoch': 0.12}
06/03/2024 10:02:43 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-300
06/03/2024 10:02:43 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-300/tokenizer_config.json
06/03/2024 10:02:43 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-300/special_tokens_map.json
06/03/2024 10:02:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.5436, 'learning_rate': 2.9880e-04, 'epoch': 0.12}
06/03/2024 10:02:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.6138, 'learning_rate': 2.9876e-04, 'epoch': 0.12}
06/03/2024 10:03:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.5605, 'learning_rate': 2.9872e-04, 'epoch': 0.13}
06/03/2024 10:03:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.6090, 'learning_rate': 2.9868e-04, 'epoch': 0.13}
06/03/2024 10:03:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.5863, 'learning_rate': 2.9864e-04, 'epoch': 0.13}
06/03/2024 10:03:20 - INFO - llamafactory.extras.callbacks - {'loss': 2.6525, 'learning_rate': 2.9860e-04, 'epoch': 0.13}
06/03/2024 10:03:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.5584, 'learning_rate': 2.9855e-04, 'epoch': 0.13}
06/03/2024 10:03:32 - INFO - llamafactory.extras.callbacks - {'loss': 2.5515, 'learning_rate': 2.9852e-04, 'epoch': 0.14}
06/03/2024 10:03:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.5811, 'learning_rate': 2.9848e-04, 'epoch': 0.14}
06/03/2024 10:03:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.6470, 'learning_rate': 2.9843e-04, 'epoch': 0.14}
06/03/2024 10:03:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.4557, 'learning_rate': 2.9839e-04, 'epoch': 0.14}
06/03/2024 10:03:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.4481, 'learning_rate': 2.9834e-04, 'epoch': 0.14}
06/03/2024 10:04:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.4257, 'learning_rate': 2.9829e-04, 'epoch': 0.15}
06/03/2024 10:04:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.5358, 'learning_rate': 2.9825e-04, 'epoch': 0.15}
06/03/2024 10:04:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.6415, 'learning_rate': 2.9820e-04, 'epoch': 0.15}
06/03/2024 10:04:20 - INFO - llamafactory.extras.callbacks - {'loss': 2.5689, 'learning_rate': 2.9815e-04, 'epoch': 0.15}
06/03/2024 10:04:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.4177, 'learning_rate': 2.9810e-04, 'epoch': 0.15}
06/03/2024 10:04:32 - INFO - llamafactory.extras.callbacks - {'loss': 2.5833, 'learning_rate': 2.9805e-04, 'epoch': 0.16}
06/03/2024 10:04:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.4390, 'learning_rate': 2.9800e-04, 'epoch': 0.16}
06/03/2024 10:04:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.6805, 'learning_rate': 2.9795e-04, 'epoch': 0.16}
06/03/2024 10:04:44 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-400
06/03/2024 10:04:44 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-400/tokenizer_config.json
06/03/2024 10:04:44 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-400/special_tokens_map.json
06/03/2024 10:04:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.8233, 'learning_rate': 2.9790e-04, 'epoch': 0.16}
06/03/2024 10:04:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.5363, 'learning_rate': 2.9784e-04, 'epoch': 0.16}
06/03/2024 10:05:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.5845, 'learning_rate': 2.9779e-04, 'epoch': 0.17}
06/03/2024 10:05:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.5257, 'learning_rate': 2.9774e-04, 'epoch': 0.17}
06/03/2024 10:05:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.5175, 'learning_rate': 2.9768e-04, 'epoch': 0.17}
06/03/2024 10:05:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.5718, 'learning_rate': 2.9763e-04, 'epoch': 0.17}
06/03/2024 10:05:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.5675, 'learning_rate': 2.9757e-04, 'epoch': 0.17}
06/03/2024 10:05:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.5359, 'learning_rate': 2.9751e-04, 'epoch': 0.18}
06/03/2024 10:05:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.5943, 'learning_rate': 2.9746e-04, 'epoch': 0.18}
06/03/2024 10:05:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.6092, 'learning_rate': 2.9740e-04, 'epoch': 0.18}
06/03/2024 10:05:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.5497, 'learning_rate': 2.9734e-04, 'epoch': 0.18}
06/03/2024 10:05:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.5328, 'learning_rate': 2.9728e-04, 'epoch': 0.18}
06/03/2024 10:06:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.6173, 'learning_rate': 2.9722e-04, 'epoch': 0.19}
06/03/2024 10:06:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.6796, 'learning_rate': 2.9716e-04, 'epoch': 0.19}
06/03/2024 10:06:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.4812, 'learning_rate': 2.9710e-04, 'epoch': 0.19}
06/03/2024 10:06:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.6853, 'learning_rate': 2.9704e-04, 'epoch': 0.19}
06/03/2024 10:06:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.5475, 'learning_rate': 2.9698e-04, 'epoch': 0.19}
06/03/2024 10:06:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.5562, 'learning_rate': 2.9691e-04, 'epoch': 0.20}
06/03/2024 10:06:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.5752, 'learning_rate': 2.9685e-04, 'epoch': 0.20}
06/03/2024 10:06:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.5315, 'learning_rate': 2.9679e-04, 'epoch': 0.20}
06/03/2024 10:06:45 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-500
06/03/2024 10:06:45 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-500/tokenizer_config.json
06/03/2024 10:06:45 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-500/special_tokens_map.json
06/03/2024 10:06:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.4976, 'learning_rate': 2.9672e-04, 'epoch': 0.20}
06/03/2024 10:06:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.4871, 'learning_rate': 2.9666e-04, 'epoch': 0.20}
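The learning-rate column decays from the peak 3.0000e-04 in the pattern of a cosine schedule over the 7,530 total steps; for instance, around the checkpoint-500 save the logged rate is 2.9679e-04. A check of that inference (a cosine schedule with no warmup is an assumption, not stated in the log):

```python
import math

def cosine_lr(step, peak=3.0e-4, total_steps=7530):
    # Standard cosine decay from peak to zero, no warmup.
    return peak * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

print(f"{cosine_lr(500):.4e}")  # -> 2.9675e-04, within rounding of the logged 2.9679e-04
```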
06/03/2024 10:07:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.6066, 'learning_rate': 2.9659e-04, 'epoch': 0.21}
06/03/2024 10:07:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.6226, 'learning_rate': 2.9652e-04, 'epoch': 0.21}
06/03/2024 10:07:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.5863, 'learning_rate': 2.9646e-04, 'epoch': 0.21}
06/03/2024 10:07:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.5719, 'learning_rate': 2.9639e-04, 'epoch': 0.21}
06/03/2024 10:07:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.5393, 'learning_rate': 2.9632e-04, 'epoch': 0.21}
06/03/2024 10:07:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.5966, 'learning_rate': 2.9625e-04, 'epoch': 0.22}
06/03/2024 10:07:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.6710, 'learning_rate': 2.9618e-04, 'epoch': 0.22}
06/03/2024 10:07:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.5870, 'learning_rate': 2.9611e-04, 'epoch': 0.22}
06/03/2024 10:07:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.6888, 'learning_rate': 2.9604e-04, 'epoch': 0.22}
06/03/2024 10:07:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.6974, 'learning_rate': 2.9597e-04, 'epoch': 0.22}
06/03/2024 10:08:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.5810, 'learning_rate': 2.9590e-04, 'epoch': 0.23}
06/03/2024 10:08:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.6273, 'learning_rate': 2.9582e-04, 'epoch': 0.23}
06/03/2024 10:08:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.5318, 'learning_rate': 2.9575e-04, 'epoch': 0.23}
06/03/2024 10:08:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.6444, 'learning_rate': 2.9567e-04, 'epoch': 0.23}
06/03/2024 10:08:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.5639, 'learning_rate': 2.9560e-04, 'epoch': 0.23}
06/03/2024 10:08:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.6057, 'learning_rate': 2.9552e-04, 'epoch': 0.23}
06/03/2024 10:08:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.5556, 'learning_rate': 2.9546e-04, 'epoch': 0.24}
06/03/2024 10:08:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.5598, 'learning_rate': 2.9539e-04, 'epoch': 0.24}
06/03/2024 10:08:46 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-600
06/03/2024 10:08:46 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-600/tokenizer_config.json
06/03/2024 10:08:46 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-600/special_tokens_map.json
06/03/2024 10:08:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.6364, 'learning_rate': 2.9531e-04, 'epoch': 0.24}
06/03/2024 10:08:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.5398, 'learning_rate': 2.9523e-04, 'epoch': 0.24}
06/03/2024 10:09:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.5041, 'learning_rate': 2.9515e-04, 'epoch': 0.24}
06/03/2024 10:09:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.5853, 'learning_rate': 2.9507e-04, 'epoch': 0.25}
06/03/2024 10:09:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.5740, 'learning_rate': 2.9499e-04, 'epoch': 0.25}
06/03/2024 10:09:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.6033, 'learning_rate': 2.9491e-04, 'epoch': 0.25}
06/03/2024 10:09:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.6626, 'learning_rate': 2.9483e-04, 'epoch': 0.25}
06/03/2024 10:09:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.5723, 'learning_rate': 2.9475e-04, 'epoch': 0.25}
06/03/2024 10:09:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.5944, 'learning_rate': 2.9467e-04, 'epoch': 0.26}
06/03/2024 10:09:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.5094, 'learning_rate': 2.9458e-04, 'epoch': 0.26}
06/03/2024 10:09:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.5138, 'learning_rate': 2.9450e-04, 'epoch': 0.26}
06/03/2024 10:09:59 - INFO - llamafactory.extras.callbacks - {'loss': 2.7075, 'learning_rate': 2.9442e-04, 'epoch': 0.26}
06/03/2024 10:10:05 - INFO - llamafactory.extras.callbacks - {'loss': 2.6431, 'learning_rate': 2.9433e-04, 'epoch': 0.26}
06/03/2024 10:10:11 - INFO - llamafactory.extras.callbacks - {'loss': 2.5232, 'learning_rate': 2.9425e-04, 'epoch': 0.27}
06/03/2024 10:10:17 - INFO - llamafactory.extras.callbacks - {'loss': 2.4920, 'learning_rate': 2.9416e-04, 'epoch': 0.27}
06/03/2024 10:10:23 - INFO - llamafactory.extras.callbacks - {'loss': 2.4998, 'learning_rate': 2.9407e-04, 'epoch': 0.27}
06/03/2024 10:10:29 - INFO - llamafactory.extras.callbacks - {'loss': 2.5161, 'learning_rate': 2.9399e-04, 'epoch': 0.27}
06/03/2024 10:10:35 - INFO - llamafactory.extras.callbacks - {'loss': 2.5674, 'learning_rate': 2.9390e-04, 'epoch': 0.27}
06/03/2024 10:10:41 - INFO - llamafactory.extras.callbacks - {'loss': 2.5864, 'learning_rate': 2.9381e-04, 'epoch': 0.28}
06/03/2024 10:10:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.6115, 'learning_rate': 2.9372e-04, 'epoch': 0.28}
06/03/2024 10:10:46 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-700
"num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 10:10:47 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-700/tokenizer_config.json 06/03/2024 10:10:47 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-700/special_tokens_map.json 06/03/2024 10:10:53 - INFO - llamafactory.extras.callbacks - {'loss': 2.5960, 'learning_rate': 2.9363e-04, 'epoch': 0.28} 06/03/2024 10:10:59 - INFO - llamafactory.extras.callbacks - {'loss': 2.6313, 'learning_rate': 2.9354e-04, 'epoch': 0.28} 06/03/2024 10:11:05 - INFO - llamafactory.extras.callbacks - {'loss': 2.6689, 'learning_rate': 2.9345e-04, 'epoch': 0.28} 06/03/2024 10:11:11 - INFO - llamafactory.extras.callbacks - {'loss': 2.6326, 'learning_rate': 2.9336e-04, 'epoch': 0.29} 06/03/2024 10:11:17 - INFO - llamafactory.extras.callbacks - {'loss': 2.5531, 'learning_rate': 2.9326e-04, 'epoch': 0.29} 06/03/2024 10:11:23 - INFO - llamafactory.extras.callbacks - {'loss': 2.5285, 'learning_rate': 2.9317e-04, 'epoch': 0.29} 06/03/2024 10:11:29 - INFO - llamafactory.extras.callbacks - {'loss': 2.5604, 'learning_rate': 2.9308e-04, 'epoch': 0.29} 06/03/2024 10:11:35 - INFO - llamafactory.extras.callbacks - {'loss': 2.5188, 'learning_rate': 2.9298e-04, 'epoch': 0.29} 06/03/2024 10:11:41 - INFO - llamafactory.extras.callbacks - {'loss': 2.5135, 'learning_rate': 2.9289e-04, 'epoch': 0.30} 06/03/2024 10:11:47 - INFO - llamafactory.extras.callbacks - {'loss': 2.5426, 'learning_rate': 2.9279e-04, 'epoch': 0.30} 06/03/2024 10:11:53 - INFO - llamafactory.extras.callbacks - {'loss': 2.4408, 'learning_rate': 2.9270e-04, 'epoch': 0.30} 06/03/2024 10:11:59 - INFO - llamafactory.extras.callbacks - {'loss': 2.8477, 'learning_rate': 2.9260e-04, 'epoch': 0.30} 06/03/2024 10:12:05 - INFO - llamafactory.extras.callbacks - {'loss': 2.4087, 'learning_rate': 2.9250e-04, 'epoch': 0.30} 06/03/2024 10:12:11 - INFO - llamafactory.extras.callbacks - {'loss': 2.5785, 'learning_rate': 2.9240e-04, 'epoch': 0.31} 06/03/2024 10:12:17 - INFO - llamafactory.extras.callbacks - {'loss': 2.5958, 'learning_rate': 2.9231e-04, 'epoch': 0.31} 06/03/2024 10:12:23 - INFO - llamafactory.extras.callbacks - {'loss': 2.6296, 'learning_rate': 2.9221e-04, 'epoch': 0.31} 06/03/2024 10:12:29 - INFO - llamafactory.extras.callbacks - {'loss': 2.5353, 'learning_rate': 2.9211e-04, 'epoch': 0.31} 06/03/2024 10:12:35 - INFO - llamafactory.extras.callbacks - {'loss': 2.6478, 'learning_rate': 2.9201e-04, 'epoch': 0.31} 06/03/2024 10:12:41 - INFO - llamafactory.extras.callbacks - {'loss': 2.7188, 'learning_rate': 2.9191e-04, 'epoch': 0.32} 06/03/2024 10:12:47 - INFO - llamafactory.extras.callbacks - {'loss': 2.4939, 'learning_rate': 2.9180e-04, 'epoch': 0.32} 06/03/2024 10:12:47 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-800 06/03/2024 10:12:47 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 10:12:47 - INFO - 
06/03/2024 10:12:47 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-800/tokenizer_config.json
06/03/2024 10:12:47 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-800/special_tokens_map.json
06/03/2024 10:12:53 - INFO - llamafactory.extras.callbacks - {'loss': 2.5447, 'learning_rate': 2.9170e-04, 'epoch': 0.32}
06/03/2024 10:12:59 - INFO - llamafactory.extras.callbacks - {'loss': 2.5208, 'learning_rate': 2.9160e-04, 'epoch': 0.32}
06/03/2024 10:13:05 - INFO - llamafactory.extras.callbacks - {'loss': 2.6699, 'learning_rate': 2.9150e-04, 'epoch': 0.32}
06/03/2024 10:13:11 - INFO - llamafactory.extras.callbacks - {'loss': 2.5921, 'learning_rate': 2.9139e-04, 'epoch': 0.33}
06/03/2024 10:13:17 - INFO - llamafactory.extras.callbacks - {'loss': 2.6926, 'learning_rate': 2.9129e-04, 'epoch': 0.33}
06/03/2024 10:13:23 - INFO - llamafactory.extras.callbacks - {'loss': 2.6289, 'learning_rate': 2.9118e-04, 'epoch': 0.33}
06/03/2024 10:13:29 - INFO - llamafactory.extras.callbacks - {'loss': 2.5597, 'learning_rate': 2.9107e-04, 'epoch': 0.33}
06/03/2024 10:13:35 - INFO - llamafactory.extras.callbacks - {'loss': 2.6819, 'learning_rate': 2.9097e-04, 'epoch': 0.33}
06/03/2024 10:13:41 - INFO - llamafactory.extras.callbacks - {'loss': 2.5476, 'learning_rate': 2.9086e-04, 'epoch': 0.34}
06/03/2024 10:13:47 - INFO - llamafactory.extras.callbacks - {'loss': 2.5689, 'learning_rate': 2.9075e-04, 'epoch': 0.34}
06/03/2024 10:13:53 - INFO - llamafactory.extras.callbacks - {'loss': 2.6044, 'learning_rate': 2.9064e-04, 'epoch': 0.34}
06/03/2024 10:13:59 - INFO - llamafactory.extras.callbacks - {'loss': 2.5326, 'learning_rate': 2.9054e-04, 'epoch': 0.34}
06/03/2024 10:14:05 - INFO - llamafactory.extras.callbacks - {'loss': 2.4232, 'learning_rate': 2.9043e-04, 'epoch': 0.34}
06/03/2024 10:14:11 - INFO - llamafactory.extras.callbacks - {'loss': 2.6581, 'learning_rate': 2.9032e-04, 'epoch': 0.35}
06/03/2024 10:14:17 - INFO - llamafactory.extras.callbacks - {'loss': 2.5621, 'learning_rate': 2.9020e-04, 'epoch': 0.35}
06/03/2024 10:14:23 - INFO - llamafactory.extras.callbacks - {'loss': 2.6615, 'learning_rate': 2.9009e-04, 'epoch': 0.35}
06/03/2024 10:14:29 - INFO - llamafactory.extras.callbacks - {'loss': 2.4399, 'learning_rate': 2.8998e-04, 'epoch': 0.35}
06/03/2024 10:14:35 - INFO - llamafactory.extras.callbacks - {'loss': 2.6114, 'learning_rate': 2.8987e-04, 'epoch': 0.35}
06/03/2024 10:14:41 - INFO - llamafactory.extras.callbacks - {'loss': 2.6281, 'learning_rate': 2.8975e-04, 'epoch': 0.36}
06/03/2024 10:14:47 - INFO - llamafactory.extras.callbacks - {'loss': 2.6551, 'learning_rate': 2.8964e-04, 'epoch': 0.36}
06/03/2024 10:14:47 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-900
06/03/2024 10:14:48 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-900/tokenizer_config.json
06/03/2024 10:14:48 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-900/special_tokens_map.json
06/03/2024 10:14:54 - INFO - llamafactory.extras.callbacks - {'loss': 2.7113, 'learning_rate': 2.8953e-04, 'epoch': 0.36}
06/03/2024 10:15:00 - INFO - llamafactory.extras.callbacks - {'loss': 2.7406, 'learning_rate': 2.8941e-04, 'epoch': 0.36}
06/03/2024 10:15:06 - INFO - llamafactory.extras.callbacks - {'loss': 2.5814, 'learning_rate': 2.8930e-04, 'epoch': 0.36}
06/03/2024 10:15:12 - INFO - llamafactory.extras.callbacks - {'loss': 2.7098, 'learning_rate': 2.8918e-04, 'epoch': 0.37}
06/03/2024 10:15:18 - INFO - llamafactory.extras.callbacks - {'loss': 2.5403, 'learning_rate': 2.8906e-04, 'epoch': 0.37}
06/03/2024 10:15:24 - INFO - llamafactory.extras.callbacks - {'loss': 2.5292, 'learning_rate': 2.8894e-04, 'epoch': 0.37}
06/03/2024 10:15:30 - INFO - llamafactory.extras.callbacks - {'loss': 2.5331, 'learning_rate': 2.8883e-04, 'epoch': 0.37}
06/03/2024 10:15:36 - INFO - llamafactory.extras.callbacks - {'loss': 2.6319, 'learning_rate': 2.8871e-04, 'epoch': 0.37}
06/03/2024 10:15:42 - INFO - llamafactory.extras.callbacks - {'loss': 2.4976, 'learning_rate': 2.8859e-04, 'epoch': 0.38}
06/03/2024 10:15:48 - INFO - llamafactory.extras.callbacks - {'loss': 2.5219, 'learning_rate': 2.8847e-04, 'epoch': 0.38}
06/03/2024 10:15:54 - INFO - llamafactory.extras.callbacks - {'loss': 2.5199, 'learning_rate': 2.8835e-04, 'epoch': 0.38}
06/03/2024 10:16:00 - INFO - llamafactory.extras.callbacks - {'loss': 2.5299, 'learning_rate': 2.8823e-04, 'epoch': 0.38}
06/03/2024 10:16:06 - INFO - llamafactory.extras.callbacks - {'loss': 2.5803, 'learning_rate': 2.8810e-04, 'epoch': 0.38}
06/03/2024 10:16:12 - INFO - llamafactory.extras.callbacks - {'loss': 2.5917, 'learning_rate': 2.8798e-04, 'epoch': 0.39}
06/03/2024 10:16:18 - INFO - llamafactory.extras.callbacks - {'loss': 2.6519, 'learning_rate': 2.8786e-04, 'epoch': 0.39}
06/03/2024 10:16:24 - INFO - llamafactory.extras.callbacks - {'loss': 2.7083, 'learning_rate': 2.8774e-04, 'epoch': 0.39}
06/03/2024 10:16:30 - INFO - llamafactory.extras.callbacks - {'loss': 2.4946, 'learning_rate': 2.8761e-04, 'epoch': 0.39}
06/03/2024 10:16:36 - INFO - llamafactory.extras.callbacks - {'loss': 2.6007, 'learning_rate': 2.8749e-04, 'epoch': 0.39}
06/03/2024 10:16:42 - INFO - llamafactory.extras.callbacks - {'loss': 2.5687, 'learning_rate': 2.8736e-04, 'epoch': 0.40}
06/03/2024 10:16:48 - INFO - llamafactory.extras.callbacks - {'loss': 2.5454, 'learning_rate': 2.8723e-04, 'epoch': 0.40}
06/03/2024 10:16:48 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1000
06/03/2024 10:16:48 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1000/tokenizer_config.json
06/03/2024 10:16:48 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1000/special_tokens_map.json
06/03/2024 10:16:54 - INFO - llamafactory.extras.callbacks - {'loss': 2.5901, 'learning_rate': 2.8711e-04, 'epoch': 0.40}
06/03/2024 10:17:00 - INFO - llamafactory.extras.callbacks - {'loss': 2.6213, 'learning_rate': 2.8698e-04, 'epoch': 0.40}
06/03/2024 10:17:06 - INFO - llamafactory.extras.callbacks - {'loss': 2.5593, 'learning_rate': 2.8685e-04, 'epoch': 0.40}
06/03/2024 10:17:12 - INFO - llamafactory.extras.callbacks - {'loss': 2.6130, 'learning_rate': 2.8672e-04, 'epoch': 0.41}
06/03/2024 10:17:18 - INFO - llamafactory.extras.callbacks - {'loss': 2.5590, 'learning_rate': 2.8660e-04, 'epoch': 0.41}
06/03/2024 10:17:24 - INFO - llamafactory.extras.callbacks - {'loss': 2.6535, 'learning_rate': 2.8647e-04, 'epoch': 0.41}
06/03/2024 10:17:30 - INFO - llamafactory.extras.callbacks - {'loss': 2.4873, 'learning_rate': 2.8634e-04, 'epoch': 0.41}
06/03/2024 10:17:36 - INFO - llamafactory.extras.callbacks - {'loss': 2.6035, 'learning_rate': 2.8621e-04, 'epoch': 0.41}
06/03/2024 10:17:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.6536, 'learning_rate': 2.8607e-04, 'epoch': 0.42}
06/03/2024 10:17:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.5994, 'learning_rate': 2.8594e-04, 'epoch': 0.42}
06/03/2024 10:17:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.6867, 'learning_rate': 2.8581e-04, 'epoch': 0.42}
06/03/2024 10:18:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.5302, 'learning_rate': 2.8568e-04, 'epoch': 0.42}
06/03/2024 10:18:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.5605, 'learning_rate': 2.8554e-04, 'epoch': 0.42}
06/03/2024 10:18:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.6312, 'learning_rate': 2.8541e-04, 'epoch': 0.43}
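Checkpoint timestamps also give a cheap wall-clock projection: training started at 09:56:42 and checkpoint-1000 was written at 10:16:48, so 100 steps take roughly two minutes. Extrapolated arithmetic (assuming the cadence holds for the full run):

```python
# 1,000 optimization steps between 09:56:42 and 10:16:48 -> 1,206 s.
elapsed = (10 * 3600 + 16 * 60 + 48) - (9 * 3600 + 56 * 60 + 42)
seconds_per_step = elapsed / 1000
print(f"{seconds_per_step:.2f} s/step, ~{seconds_per_step * 7530 / 3600:.1f} h for 7,530 steps")
# -> 1.21 s/step, ~2.5 h total
```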
06/03/2024 10:18:20 - INFO - llamafactory.extras.callbacks - {'loss': 2.5357, 'learning_rate': 2.8527e-04, 'epoch': 0.43}
06/03/2024 10:18:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.5713, 'learning_rate': 2.8514e-04, 'epoch': 0.43}
06/03/2024 10:18:32 - INFO - llamafactory.extras.callbacks - {'loss': 2.5214, 'learning_rate': 2.8500e-04, 'epoch': 0.43}
06/03/2024 10:18:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.6300, 'learning_rate': 2.8486e-04, 'epoch': 0.43}
06/03/2024 10:18:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.5809, 'learning_rate': 2.8473e-04, 'epoch': 0.44}
06/03/2024 10:18:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.5623, 'learning_rate': 2.8459e-04, 'epoch': 0.44}
06/03/2024 10:18:50 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1100
06/03/2024 10:18:50 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1100/tokenizer_config.json
06/03/2024 10:18:50 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1100/special_tokens_map.json
06/03/2024 10:18:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.6073, 'learning_rate': 2.8445e-04, 'epoch': 0.44}
06/03/2024 10:19:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.6203, 'learning_rate': 2.8431e-04, 'epoch': 0.44}
06/03/2024 10:19:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.3232, 'learning_rate': 2.8417e-04, 'epoch': 0.44}
06/03/2024 10:19:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.5590, 'learning_rate': 2.8403e-04, 'epoch': 0.45}
06/03/2024 10:19:20 - INFO - llamafactory.extras.callbacks - {'loss': 2.6238, 'learning_rate': 2.8389e-04, 'epoch': 0.45}
06/03/2024 10:19:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.5617, 'learning_rate': 2.8375e-04, 'epoch': 0.45}
06/03/2024 10:19:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.6933, 'learning_rate': 2.8361e-04, 'epoch': 0.45}
06/03/2024 10:19:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.4117, 'learning_rate': 2.8347e-04, 'epoch': 0.45}
06/03/2024 10:19:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.6071, 'learning_rate': 2.8332e-04, 'epoch': 0.46}
06/03/2024 10:19:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.6693, 'learning_rate': 2.8318e-04, 'epoch': 0.46}
06/03/2024 10:19:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.5838, 'learning_rate': 2.8303e-04, 'epoch': 0.46}
06/03/2024 10:20:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.5671, 'learning_rate': 2.8289e-04, 'epoch': 0.46}
06/03/2024 10:20:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.6354, 'learning_rate': 2.8274e-04, 'epoch': 0.46}
06/03/2024 10:20:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.5240, 'learning_rate': 2.8260e-04, 'epoch': 0.47}
06/03/2024 10:20:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.5369, 'learning_rate': 2.8245e-04, 'epoch': 0.47}
06/03/2024 10:20:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.5030, 'learning_rate': 2.8230e-04, 'epoch': 0.47}
06/03/2024 10:20:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.6666, 'learning_rate': 2.8216e-04, 'epoch': 0.47}
06/03/2024 10:20:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.5864, 'learning_rate': 2.8201e-04, 'epoch': 0.47}
06/03/2024 10:20:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.5652, 'learning_rate': 2.8186e-04, 'epoch': 0.48}
06/03/2024 10:20:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.4675, 'learning_rate': 2.8171e-04, 'epoch': 0.48}
06/03/2024 10:20:51 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1200
06/03/2024 10:20:51 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1200/tokenizer_config.json
06/03/2024 10:20:51 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1200/special_tokens_map.json
06/03/2024 10:20:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.7380, 'learning_rate': 2.8156e-04, 'epoch': 0.48}
06/03/2024 10:21:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.5925, 'learning_rate': 2.8141e-04, 'epoch': 0.48}
06/03/2024 10:21:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.5849, 'learning_rate': 2.8126e-04, 'epoch': 0.48}
06/03/2024 10:21:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.5966, 'learning_rate': 2.8111e-04, 'epoch': 0.49}
06/03/2024 10:21:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.5334, 'learning_rate': 2.8095e-04, 'epoch': 0.49}
06/03/2024 10:21:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.4928, 'learning_rate': 2.8080e-04, 'epoch': 0.49}
06/03/2024 10:21:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.6438, 'learning_rate': 2.8065e-04, 'epoch': 0.49}
06/03/2024 10:21:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.6136, 'learning_rate': 2.8049e-04, 'epoch': 0.49}
06/03/2024 10:21:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.7049, 'learning_rate': 2.8034e-04, 'epoch': 0.50}
06/03/2024 10:21:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.5448, 'learning_rate': 2.8018e-04, 'epoch': 0.50}
06/03/2024 10:21:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.4522, 'learning_rate': 2.8003e-04, 'epoch': 0.50}
06/03/2024 10:22:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.7182, 'learning_rate': 2.7987e-04, 'epoch': 0.50}
06/03/2024 10:22:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.5468, 'learning_rate': 2.7972e-04, 'epoch': 0.50}
06/03/2024 10:22:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.6180, 'learning_rate': 2.7956e-04, 'epoch': 0.51}
06/03/2024 10:22:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.6364, 'learning_rate': 2.7940e-04, 'epoch': 0.51}
06/03/2024 10:22:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.6681, 'learning_rate': 2.7924e-04, 'epoch': 0.51}
06/03/2024 10:22:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.6049, 'learning_rate': 2.7908e-04, 'epoch': 0.51}
06/03/2024 10:22:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.5811, 'learning_rate': 2.7892e-04, 'epoch': 0.51}
06/03/2024 10:22:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.5577, 'learning_rate': 2.7876e-04, 'epoch': 0.52}
06/03/2024 10:22:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.6275, 'learning_rate': 2.7860e-04, 'epoch': 0.52}
06/03/2024 10:22:52 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1300
06/03/2024 10:22:52 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1300/tokenizer_config.json
06/03/2024 10:22:52 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1300/special_tokens_map.json
06/03/2024 10:22:59 - INFO - llamafactory.extras.callbacks - {'loss': 2.5750, 'learning_rate': 2.7844e-04, 'epoch': 0.52}
06/03/2024 10:23:05 - INFO - llamafactory.extras.callbacks - {'loss': 2.5923, 'learning_rate': 2.7828e-04, 'epoch': 0.52}
06/03/2024 10:23:11 - INFO - llamafactory.extras.callbacks - {'loss': 2.6065, 'learning_rate': 2.7812e-04, 'epoch': 0.52}
06/03/2024 10:23:17 - INFO - llamafactory.extras.callbacks - {'loss': 2.5999, 'learning_rate': 2.7795e-04, 'epoch': 0.53}
06/03/2024 10:23:23 - INFO - llamafactory.extras.callbacks - {'loss': 2.6796, 'learning_rate': 2.7779e-04, 'epoch': 0.53}
06/03/2024 10:23:29 - INFO - llamafactory.extras.callbacks - {'loss': 2.6884, 'learning_rate': 2.7763e-04, 'epoch': 0.53}
06/03/2024 10:23:35 - INFO - llamafactory.extras.callbacks - {'loss': 2.5403, 'learning_rate': 2.7746e-04, 'epoch': 0.53}
06/03/2024 10:23:41 - INFO - llamafactory.extras.callbacks - {'loss': 2.6237, 'learning_rate': 2.7730e-04, 'epoch': 0.53}
06/03/2024 10:23:47 - INFO - llamafactory.extras.callbacks - {'loss': 2.6553, 'learning_rate': 2.7713e-04, 'epoch': 0.54}
06/03/2024 10:23:53 - INFO - llamafactory.extras.callbacks - {'loss': 2.6116, 'learning_rate': 2.7696e-04, 'epoch': 0.54}
06/03/2024 10:23:59 - INFO - llamafactory.extras.callbacks - {'loss': 2.5374, 'learning_rate': 2.7680e-04, 'epoch': 0.54}
06/03/2024 10:24:05 - INFO - llamafactory.extras.callbacks - {'loss': 2.5081, 'learning_rate': 2.7663e-04, 'epoch': 0.54}
06/03/2024 10:24:12 - INFO - llamafactory.extras.callbacks - {'loss': 2.5780, 'learning_rate': 2.7646e-04, 'epoch': 0.54}
06/03/2024 10:24:18 - INFO - llamafactory.extras.callbacks - {'loss': 2.7601, 'learning_rate': 2.7629e-04, 'epoch': 0.55}
06/03/2024 10:24:24 - INFO - llamafactory.extras.callbacks - {'loss': 2.6488, 'learning_rate': 2.7612e-04, 'epoch': 0.55}
06/03/2024 10:24:30 - INFO - llamafactory.extras.callbacks - {'loss': 2.5824, 'learning_rate': 2.7595e-04, 'epoch': 0.55}
06/03/2024 10:24:36 - INFO - llamafactory.extras.callbacks - {'loss': 2.5392, 'learning_rate': 2.7578e-04, 'epoch': 0.55}
06/03/2024 10:24:42 - INFO - llamafactory.extras.callbacks - {'loss': 2.6826, 'learning_rate': 2.7561e-04, 'epoch': 0.55}
06/03/2024 10:24:48 - INFO - llamafactory.extras.callbacks - {'loss': 2.6405, 'learning_rate': 2.7544e-04, 'epoch': 0.56}
06/03/2024 10:24:54 - INFO - llamafactory.extras.callbacks - {'loss': 2.6097, 'learning_rate': 2.7527e-04, 'epoch': 0.56}
06/03/2024 10:24:54 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1400
06/03/2024 10:24:54 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1400/tokenizer_config.json
06/03/2024 10:24:54 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1400/special_tokens_map.json
llamafactory.extras.callbacks - {'loss': 2.7201, 'learning_rate': 2.7510e-04, 'epoch': 0.56} 06/03/2024 10:25:06 - INFO - llamafactory.extras.callbacks - {'loss': 2.6121, 'learning_rate': 2.7492e-04, 'epoch': 0.56} 06/03/2024 10:25:12 - INFO - llamafactory.extras.callbacks - {'loss': 2.4831, 'learning_rate': 2.7475e-04, 'epoch': 0.56} 06/03/2024 10:25:18 - INFO - llamafactory.extras.callbacks - {'loss': 2.4708, 'learning_rate': 2.7458e-04, 'epoch': 0.57} 06/03/2024 10:25:24 - INFO - llamafactory.extras.callbacks - {'loss': 2.4762, 'learning_rate': 2.7440e-04, 'epoch': 0.57} 06/03/2024 10:25:30 - INFO - llamafactory.extras.callbacks - {'loss': 2.6241, 'learning_rate': 2.7423e-04, 'epoch': 0.57} 06/03/2024 10:25:36 - INFO - llamafactory.extras.callbacks - {'loss': 2.6665, 'learning_rate': 2.7405e-04, 'epoch': 0.57} 06/03/2024 10:25:42 - INFO - llamafactory.extras.callbacks - {'loss': 2.5111, 'learning_rate': 2.7388e-04, 'epoch': 0.57} 06/03/2024 10:25:48 - INFO - llamafactory.extras.callbacks - {'loss': 2.5384, 'learning_rate': 2.7370e-04, 'epoch': 0.58} 06/03/2024 10:25:54 - INFO - llamafactory.extras.callbacks - {'loss': 2.6777, 'learning_rate': 2.7352e-04, 'epoch': 0.58} 06/03/2024 10:26:00 - INFO - llamafactory.extras.callbacks - {'loss': 2.5877, 'learning_rate': 2.7334e-04, 'epoch': 0.58} 06/03/2024 10:26:06 - INFO - llamafactory.extras.callbacks - {'loss': 2.5990, 'learning_rate': 2.7317e-04, 'epoch': 0.58} 06/03/2024 10:26:12 - INFO - llamafactory.extras.callbacks - {'loss': 2.6303, 'learning_rate': 2.7299e-04, 'epoch': 0.58} 06/03/2024 10:26:18 - INFO - llamafactory.extras.callbacks - {'loss': 2.6477, 'learning_rate': 2.7281e-04, 'epoch': 0.59} 06/03/2024 10:26:24 - INFO - llamafactory.extras.callbacks - {'loss': 2.5973, 'learning_rate': 2.7263e-04, 'epoch': 0.59} 06/03/2024 10:26:30 - INFO - llamafactory.extras.callbacks - {'loss': 2.5756, 'learning_rate': 2.7245e-04, 'epoch': 0.59} 06/03/2024 10:26:36 - INFO - llamafactory.extras.callbacks - {'loss': 2.4608, 'learning_rate': 2.7227e-04, 'epoch': 0.59} 06/03/2024 10:26:42 - INFO - llamafactory.extras.callbacks - {'loss': 2.5340, 'learning_rate': 2.7208e-04, 'epoch': 0.59} 06/03/2024 10:26:48 - INFO - llamafactory.extras.callbacks - {'loss': 2.6079, 'learning_rate': 2.7190e-04, 'epoch': 0.60} 06/03/2024 10:26:54 - INFO - llamafactory.extras.callbacks - {'loss': 2.5866, 'learning_rate': 2.7172e-04, 'epoch': 0.60} 06/03/2024 10:26:54 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1500 06/03/2024 10:26:54 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 10:26:54 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, 
"vocab_size": 40960 } 06/03/2024 10:26:54 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1500/tokenizer_config.json 06/03/2024 10:26:54 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1500/special_tokens_map.json 06/03/2024 10:27:01 - INFO - llamafactory.extras.callbacks - {'loss': 2.5826, 'learning_rate': 2.7154e-04, 'epoch': 0.60} 06/03/2024 10:27:07 - INFO - llamafactory.extras.callbacks - {'loss': 2.4498, 'learning_rate': 2.7135e-04, 'epoch': 0.60} 06/03/2024 10:27:13 - INFO - llamafactory.extras.callbacks - {'loss': 2.7349, 'learning_rate': 2.7117e-04, 'epoch': 0.60} 06/03/2024 10:27:19 - INFO - llamafactory.extras.callbacks - {'loss': 2.4881, 'learning_rate': 2.7098e-04, 'epoch': 0.61} 06/03/2024 10:27:25 - INFO - llamafactory.extras.callbacks - {'loss': 2.6345, 'learning_rate': 2.7080e-04, 'epoch': 0.61} 06/03/2024 10:27:31 - INFO - llamafactory.extras.callbacks - {'loss': 2.5320, 'learning_rate': 2.7061e-04, 'epoch': 0.61} 06/03/2024 10:27:37 - INFO - llamafactory.extras.callbacks - {'loss': 2.6031, 'learning_rate': 2.7043e-04, 'epoch': 0.61} 06/03/2024 10:27:43 - INFO - llamafactory.extras.callbacks - {'loss': 2.6403, 'learning_rate': 2.7024e-04, 'epoch': 0.61} 06/03/2024 10:27:49 - INFO - llamafactory.extras.callbacks - {'loss': 2.5945, 'learning_rate': 2.7005e-04, 'epoch': 0.62} 06/03/2024 10:27:55 - INFO - llamafactory.extras.callbacks - {'loss': 2.5915, 'learning_rate': 2.6986e-04, 'epoch': 0.62} 06/03/2024 10:28:01 - INFO - llamafactory.extras.callbacks - {'loss': 2.6204, 'learning_rate': 2.6968e-04, 'epoch': 0.62} 06/03/2024 10:28:07 - INFO - llamafactory.extras.callbacks - {'loss': 2.6282, 'learning_rate': 2.6949e-04, 'epoch': 0.62} 06/03/2024 10:28:12 - INFO - llamafactory.extras.callbacks - {'loss': 2.5167, 'learning_rate': 2.6930e-04, 'epoch': 0.62} 06/03/2024 10:28:18 - INFO - llamafactory.extras.callbacks - {'loss': 2.4141, 'learning_rate': 2.6911e-04, 'epoch': 0.63} 06/03/2024 10:28:24 - INFO - llamafactory.extras.callbacks - {'loss': 2.6739, 'learning_rate': 2.6892e-04, 'epoch': 0.63} 06/03/2024 10:28:30 - INFO - llamafactory.extras.callbacks - {'loss': 2.6941, 'learning_rate': 2.6873e-04, 'epoch': 0.63} 06/03/2024 10:28:36 - INFO - llamafactory.extras.callbacks - {'loss': 2.5628, 'learning_rate': 2.6853e-04, 'epoch': 0.63} 06/03/2024 10:28:42 - INFO - llamafactory.extras.callbacks - {'loss': 2.6974, 'learning_rate': 2.6834e-04, 'epoch': 0.63} 06/03/2024 10:28:48 - INFO - llamafactory.extras.callbacks - {'loss': 2.7351, 'learning_rate': 2.6815e-04, 'epoch': 0.64} 06/03/2024 10:28:54 - INFO - llamafactory.extras.callbacks - {'loss': 2.6939, 'learning_rate': 2.6796e-04, 'epoch': 0.64} 06/03/2024 10:28:54 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1600 06/03/2024 10:28:55 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 10:28:55 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, 
"initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 10:28:55 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1600/tokenizer_config.json 06/03/2024 10:28:55 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1600/special_tokens_map.json 06/03/2024 10:29:01 - INFO - llamafactory.extras.callbacks - {'loss': 2.4927, 'learning_rate': 2.6776e-04, 'epoch': 0.64} 06/03/2024 10:29:07 - INFO - llamafactory.extras.callbacks - {'loss': 2.4791, 'learning_rate': 2.6757e-04, 'epoch': 0.64} 06/03/2024 10:29:13 - INFO - llamafactory.extras.callbacks - {'loss': 2.6630, 'learning_rate': 2.6737e-04, 'epoch': 0.64} 06/03/2024 10:29:19 - INFO - llamafactory.extras.callbacks - {'loss': 2.6147, 'learning_rate': 2.6718e-04, 'epoch': 0.65} 06/03/2024 10:29:25 - INFO - llamafactory.extras.callbacks - {'loss': 2.5842, 'learning_rate': 2.6698e-04, 'epoch': 0.65} 06/03/2024 10:29:31 - INFO - llamafactory.extras.callbacks - {'loss': 2.6003, 'learning_rate': 2.6679e-04, 'epoch': 0.65} 06/03/2024 10:29:37 - INFO - llamafactory.extras.callbacks - {'loss': 2.6356, 'learning_rate': 2.6659e-04, 'epoch': 0.65} 06/03/2024 10:29:43 - INFO - llamafactory.extras.callbacks - {'loss': 2.5649, 'learning_rate': 2.6639e-04, 'epoch': 0.65} 06/03/2024 10:29:49 - INFO - llamafactory.extras.callbacks - {'loss': 2.6785, 'learning_rate': 2.6620e-04, 'epoch': 0.66} 06/03/2024 10:29:55 - INFO - llamafactory.extras.callbacks - {'loss': 2.5379, 'learning_rate': 2.6600e-04, 'epoch': 0.66} 06/03/2024 10:30:01 - INFO - llamafactory.extras.callbacks - {'loss': 2.6151, 'learning_rate': 2.6580e-04, 'epoch': 0.66} 06/03/2024 10:30:07 - INFO - llamafactory.extras.callbacks - {'loss': 2.6609, 'learning_rate': 2.6560e-04, 'epoch': 0.66} 06/03/2024 10:30:13 - INFO - llamafactory.extras.callbacks - {'loss': 2.5232, 'learning_rate': 2.6540e-04, 'epoch': 0.66} 06/03/2024 10:30:19 - INFO - llamafactory.extras.callbacks - {'loss': 2.5927, 'learning_rate': 2.6520e-04, 'epoch': 0.67} 06/03/2024 10:30:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.6449, 'learning_rate': 2.6500e-04, 'epoch': 0.67} 06/03/2024 10:30:32 - INFO - llamafactory.extras.callbacks - {'loss': 2.5720, 'learning_rate': 2.6480e-04, 'epoch': 0.67} 06/03/2024 10:30:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.6565, 'learning_rate': 2.6460e-04, 'epoch': 0.67} 06/03/2024 10:30:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.5158, 'learning_rate': 2.6440e-04, 'epoch': 0.67} 06/03/2024 10:30:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.4979, 'learning_rate': 2.6419e-04, 'epoch': 0.68} 06/03/2024 10:30:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.6457, 'learning_rate': 2.6399e-04, 'epoch': 0.68} 06/03/2024 10:30:56 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1700 06/03/2024 10:30:56 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at 
/root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 10:30:56 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 10:30:56 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1700/tokenizer_config.json 06/03/2024 10:30:56 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1700/special_tokens_map.json 06/03/2024 10:31:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.5031, 'learning_rate': 2.6379e-04, 'epoch': 0.68} 06/03/2024 10:31:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.5488, 'learning_rate': 2.6358e-04, 'epoch': 0.68} 06/03/2024 10:31:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.6686, 'learning_rate': 2.6338e-04, 'epoch': 0.68} 06/03/2024 10:31:20 - INFO - llamafactory.extras.callbacks - {'loss': 2.6308, 'learning_rate': 2.6317e-04, 'epoch': 0.69} 06/03/2024 10:31:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.6768, 'learning_rate': 2.6297e-04, 'epoch': 0.69} 06/03/2024 10:31:32 - INFO - llamafactory.extras.callbacks - {'loss': 2.5665, 'learning_rate': 2.6276e-04, 'epoch': 0.69} 06/03/2024 10:31:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.6614, 'learning_rate': 2.6255e-04, 'epoch': 0.69} 06/03/2024 10:31:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.6992, 'learning_rate': 2.6235e-04, 'epoch': 0.69} 06/03/2024 10:31:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.6264, 'learning_rate': 2.6214e-04, 'epoch': 0.69} 06/03/2024 10:31:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.5872, 'learning_rate': 2.6193e-04, 'epoch': 0.70} 06/03/2024 10:32:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.5463, 'learning_rate': 2.6172e-04, 'epoch': 0.70} 06/03/2024 10:32:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.6829, 'learning_rate': 2.6151e-04, 'epoch': 0.70} 06/03/2024 10:32:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.5868, 'learning_rate': 2.6130e-04, 'epoch': 0.70} 06/03/2024 10:32:20 - INFO - llamafactory.extras.callbacks - {'loss': 2.4201, 'learning_rate': 2.6109e-04, 'epoch': 0.70} 06/03/2024 10:32:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.6233, 'learning_rate': 2.6088e-04, 'epoch': 0.71} 06/03/2024 10:32:32 - INFO - llamafactory.extras.callbacks - {'loss': 2.4135, 'learning_rate': 2.6067e-04, 'epoch': 0.71} 06/03/2024 10:32:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.6121, 'learning_rate': 2.6046e-04, 'epoch': 0.71} 06/03/2024 10:32:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.5794, 'learning_rate': 2.6025e-04, 'epoch': 0.71} 06/03/2024 10:32:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.4883, 'learning_rate': 
2.6004e-04, 'epoch': 0.71} 06/03/2024 10:32:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.5903, 'learning_rate': 2.5982e-04, 'epoch': 0.72} 06/03/2024 10:32:56 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1800 06/03/2024 10:32:56 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 10:32:56 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 10:32:56 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1800/tokenizer_config.json 06/03/2024 10:32:56 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1800/special_tokens_map.json 06/03/2024 10:33:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.5673, 'learning_rate': 2.5961e-04, 'epoch': 0.72} 06/03/2024 10:33:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.5210, 'learning_rate': 2.5940e-04, 'epoch': 0.72} 06/03/2024 10:33:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.5703, 'learning_rate': 2.5918e-04, 'epoch': 0.72} 06/03/2024 10:33:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.6104, 'learning_rate': 2.5897e-04, 'epoch': 0.72} 06/03/2024 10:33:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.6097, 'learning_rate': 2.5875e-04, 'epoch': 0.73} 06/03/2024 10:33:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.5560, 'learning_rate': 2.5854e-04, 'epoch': 0.73} 06/03/2024 10:33:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.7103, 'learning_rate': 2.5832e-04, 'epoch': 0.73} 06/03/2024 10:33:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.6172, 'learning_rate': 2.5810e-04, 'epoch': 0.73} 06/03/2024 10:33:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.6565, 'learning_rate': 2.5789e-04, 'epoch': 0.73} 06/03/2024 10:33:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.5157, 'learning_rate': 2.5767e-04, 'epoch': 0.74} 06/03/2024 10:34:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.4723, 'learning_rate': 2.5745e-04, 'epoch': 0.74} 06/03/2024 10:34:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.5944, 'learning_rate': 2.5723e-04, 'epoch': 0.74} 06/03/2024 10:34:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.5379, 'learning_rate': 2.5701e-04, 'epoch': 0.74} 06/03/2024 10:34:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.5925, 'learning_rate': 2.5679e-04, 'epoch': 0.74} 06/03/2024 10:34:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.6810, 'learning_rate': 2.5657e-04, 'epoch': 0.75} 06/03/2024 10:34:33 - INFO - llamafactory.extras.callbacks 
- {'loss': 2.6438, 'learning_rate': 2.5635e-04, 'epoch': 0.75} 06/03/2024 10:34:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.6089, 'learning_rate': 2.5613e-04, 'epoch': 0.75} 06/03/2024 10:34:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.6537, 'learning_rate': 2.5591e-04, 'epoch': 0.75} 06/03/2024 10:34:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.4471, 'learning_rate': 2.5569e-04, 'epoch': 0.75} 06/03/2024 10:34:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.7252, 'learning_rate': 2.5547e-04, 'epoch': 0.76} 06/03/2024 10:34:57 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1900 06/03/2024 10:34:57 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 10:34:57 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 10:34:57 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1900/tokenizer_config.json 06/03/2024 10:34:57 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-1900/special_tokens_map.json 06/03/2024 10:35:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.4402, 'learning_rate': 2.5524e-04, 'epoch': 0.76} 06/03/2024 10:35:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.7359, 'learning_rate': 2.5502e-04, 'epoch': 0.76} 06/03/2024 10:35:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.6348, 'learning_rate': 2.5480e-04, 'epoch': 0.76} 06/03/2024 10:35:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.6013, 'learning_rate': 2.5457e-04, 'epoch': 0.76} 06/03/2024 10:35:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.6093, 'learning_rate': 2.5435e-04, 'epoch': 0.77} 06/03/2024 10:35:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.6147, 'learning_rate': 2.5412e-04, 'epoch': 0.77} 06/03/2024 10:35:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.5970, 'learning_rate': 2.5390e-04, 'epoch': 0.77} 06/03/2024 10:35:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.7176, 'learning_rate': 2.5367e-04, 'epoch': 0.77} 06/03/2024 10:35:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.5272, 'learning_rate': 2.5345e-04, 'epoch': 0.77} 06/03/2024 10:35:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.5446, 'learning_rate': 2.5322e-04, 'epoch': 0.78} 06/03/2024 10:36:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.5218, 'learning_rate': 2.5299e-04, 'epoch': 0.78} 06/03/2024 10:36:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.6244, 'learning_rate': 2.5276e-04, 'epoch': 0.78} 06/03/2024 10:36:16 - 
INFO - llamafactory.extras.callbacks - {'loss': 2.6561, 'learning_rate': 2.5254e-04, 'epoch': 0.78} 06/03/2024 10:36:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.6324, 'learning_rate': 2.5231e-04, 'epoch': 0.78} 06/03/2024 10:36:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.6164, 'learning_rate': 2.5208e-04, 'epoch': 0.79} 06/03/2024 10:36:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.6098, 'learning_rate': 2.5185e-04, 'epoch': 0.79} 06/03/2024 10:36:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.5050, 'learning_rate': 2.5162e-04, 'epoch': 0.79} 06/03/2024 10:36:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.5602, 'learning_rate': 2.5139e-04, 'epoch': 0.79} 06/03/2024 10:36:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.4686, 'learning_rate': 2.5116e-04, 'epoch': 0.79} 06/03/2024 10:36:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.4901, 'learning_rate': 2.5093e-04, 'epoch': 0.80} 06/03/2024 10:36:58 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2000 06/03/2024 10:36:58 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 10:36:58 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 10:36:58 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2000/tokenizer_config.json 06/03/2024 10:36:58 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2000/special_tokens_map.json 06/03/2024 10:37:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.5631, 'learning_rate': 2.5069e-04, 'epoch': 0.80} 06/03/2024 10:37:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.6703, 'learning_rate': 2.5046e-04, 'epoch': 0.80} 06/03/2024 10:37:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.4696, 'learning_rate': 2.5023e-04, 'epoch': 0.80} 06/03/2024 10:37:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.7125, 'learning_rate': 2.5000e-04, 'epoch': 0.80} 06/03/2024 10:37:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.7074, 'learning_rate': 2.4976e-04, 'epoch': 0.81} 06/03/2024 10:37:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.7419, 'learning_rate': 2.4953e-04, 'epoch': 0.81} 06/03/2024 10:37:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.5039, 'learning_rate': 2.4930e-04, 'epoch': 0.81} 06/03/2024 10:37:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.6298, 'learning_rate': 2.4906e-04, 'epoch': 0.81} 06/03/2024 10:37:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.5468, 'learning_rate': 2.4883e-04, 
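The learning_rate column is not linear: the per-step decrement grows slowly as training proceeds, which is what the early part of a cosine decay looks like. A minimal sketch that approximately reproduces the logged values, assuming a standard cosine schedule with no warmup, the 3.0e-04 peak seen at the start of the run, and the run's 7,530 total optimization steps (the scheduler itself is never named in this log, so this is an inference, not a quote from the configuration):

```python
import math

PEAK_LR = 3.0e-4      # assumption: matches the first logged learning-rate values
TOTAL_STEPS = 7530    # total optimization steps reported by the trainer

def cosine_lr(step: int) -> float:
    """Cosine decay from PEAK_LR down to zero, with no warmup (an assumption)."""
    return 0.5 * PEAK_LR * (1.0 + math.cos(math.pi * step / TOTAL_STEPS))

# checkpoint-2000 above logs a learning rate of 2.5093e-04; the curve gives:
print(f"{cosine_lr(2000):.4e}")  # -> 2.5074e-04, within about 0.1% of the logged value
```

The small residual gap could come from warmup steps or from the exact step at which the callback samples the scheduler; the overall shape is consistent throughout the section.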
06/03/2024 10:37:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.5345, 'learning_rate': 2.4859e-04, 'epoch': 0.82}
06/03/2024 10:38:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.5416, 'learning_rate': 2.4835e-04, 'epoch': 0.82}
06/03/2024 10:38:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.5523, 'learning_rate': 2.4812e-04, 'epoch': 0.82}
06/03/2024 10:38:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.4763, 'learning_rate': 2.4788e-04, 'epoch': 0.82}
06/03/2024 10:38:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.5462, 'learning_rate': 2.4764e-04, 'epoch': 0.82}
06/03/2024 10:38:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.6977, 'learning_rate': 2.4741e-04, 'epoch': 0.83}
06/03/2024 10:38:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.5808, 'learning_rate': 2.4717e-04, 'epoch': 0.83}
06/03/2024 10:38:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.4861, 'learning_rate': 2.4693e-04, 'epoch': 0.83}
06/03/2024 10:38:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.6549, 'learning_rate': 2.4669e-04, 'epoch': 0.83}
06/03/2024 10:38:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.6052, 'learning_rate': 2.4645e-04, 'epoch': 0.83}
06/03/2024 10:38:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.5116, 'learning_rate': 2.4621e-04, 'epoch': 0.84}
06/03/2024 10:38:58 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2100
06/03/2024 10:38:58 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2100/tokenizer_config.json
06/03/2024 10:38:58 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2100/special_tokens_map.json
06/03/2024 10:39:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.4863, 'learning_rate': 2.4597e-04, 'epoch': 0.84}
06/03/2024 10:39:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.7366, 'learning_rate': 2.4573e-04, 'epoch': 0.84}
06/03/2024 10:39:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.4626, 'learning_rate': 2.4549e-04, 'epoch': 0.84}
06/03/2024 10:39:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.5505, 'learning_rate': 2.4525e-04, 'epoch': 0.84}
06/03/2024 10:39:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.5981, 'learning_rate': 2.4500e-04, 'epoch': 0.85}
06/03/2024 10:39:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.6405, 'learning_rate': 2.4476e-04, 'epoch': 0.85}
06/03/2024 10:39:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.4575, 'learning_rate': 2.4452e-04, 'epoch': 0.85}
06/03/2024 10:39:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.6075, 'learning_rate': 2.4428e-04, 'epoch': 0.85}
06/03/2024 10:39:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.6052, 'learning_rate': 2.4403e-04, 'epoch': 0.85}
06/03/2024 10:39:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.6822, 'learning_rate': 2.4379e-04, 'epoch': 0.86}
06/03/2024 10:40:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.4590, 'learning_rate': 2.4354e-04, 'epoch': 0.86}
06/03/2024 10:40:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.4760, 'learning_rate': 2.4330e-04, 'epoch': 0.86}
06/03/2024 10:40:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.5372, 'learning_rate': 2.4305e-04, 'epoch': 0.86}
06/03/2024 10:40:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.6880, 'learning_rate': 2.4281e-04, 'epoch': 0.86}
06/03/2024 10:40:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.4423, 'learning_rate': 2.4256e-04, 'epoch': 0.87}
06/03/2024 10:40:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.5889, 'learning_rate': 2.4232e-04, 'epoch': 0.87}
06/03/2024 10:40:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.5738, 'learning_rate': 2.4207e-04, 'epoch': 0.87}
06/03/2024 10:40:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.7525, 'learning_rate': 2.4182e-04, 'epoch': 0.87}
06/03/2024 10:40:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.5813, 'learning_rate': 2.4157e-04, 'epoch': 0.87}
06/03/2024 10:40:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.5728, 'learning_rate': 2.4133e-04, 'epoch': 0.88}
06/03/2024 10:40:58 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2200
06/03/2024 10:40:58 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2200/tokenizer_config.json
06/03/2024 10:40:58 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2200/special_tokens_map.json
06/03/2024 10:41:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.5748, 'learning_rate': 2.4108e-04, 'epoch': 0.88}
06/03/2024 10:41:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.5401, 'learning_rate': 2.4083e-04, 'epoch': 0.88}
06/03/2024 10:41:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.5143, 'learning_rate': 2.4058e-04, 'epoch': 0.88}
06/03/2024 10:41:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.6762, 'learning_rate': 2.4033e-04, 'epoch': 0.88}
06/03/2024 10:41:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.5675, 'learning_rate': 2.4008e-04, 'epoch': 0.89}
06/03/2024 10:41:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.5946, 'learning_rate': 2.3983e-04, 'epoch': 0.89}
06/03/2024 10:41:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.6189, 'learning_rate': 2.3958e-04, 'epoch': 0.89}
06/03/2024 10:41:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.5522, 'learning_rate': 2.3933e-04, 'epoch': 0.89}
06/03/2024 10:41:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.5890, 'learning_rate': 2.3908e-04, 'epoch': 0.89}
06/03/2024 10:41:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.5239, 'learning_rate': 2.3882e-04, 'epoch': 0.90}
06/03/2024 10:42:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.5722, 'learning_rate': 2.3857e-04, 'epoch': 0.90}
06/03/2024 10:42:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.5552, 'learning_rate': 2.3832e-04, 'epoch': 0.90}
06/03/2024 10:42:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.6506, 'learning_rate': 2.3807e-04, 'epoch': 0.90}
06/03/2024 10:42:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.6130, 'learning_rate': 2.3781e-04, 'epoch': 0.90}
06/03/2024 10:42:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.6393, 'learning_rate': 2.3756e-04, 'epoch': 0.91}
06/03/2024 10:42:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.4195, 'learning_rate': 2.3730e-04, 'epoch': 0.91}
06/03/2024 10:42:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.5637, 'learning_rate': 2.3705e-04, 'epoch': 0.91}
06/03/2024 10:42:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.4586, 'learning_rate': 2.3680e-04, 'epoch': 0.91}
06/03/2024 10:42:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.5845, 'learning_rate': 2.3654e-04, 'epoch': 0.91}
06/03/2024 10:42:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.6640, 'learning_rate': 2.3628e-04, 'epoch': 0.92}
06/03/2024 10:42:58 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2300
06/03/2024 10:42:59 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2300/tokenizer_config.json
06/03/2024 10:42:59 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2300/special_tokens_map.json
06/03/2024 10:43:05 - INFO - llamafactory.extras.callbacks - {'loss': 2.4598, 'learning_rate': 2.3603e-04, 'epoch': 0.92}
06/03/2024 10:43:11 - INFO - llamafactory.extras.callbacks - {'loss': 2.5356, 'learning_rate': 2.3577e-04, 'epoch': 0.92}
06/03/2024 10:43:17 - INFO - llamafactory.extras.callbacks - {'loss': 2.5352, 'learning_rate': 2.3551e-04, 'epoch': 0.92}
06/03/2024 10:43:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.5963, 'learning_rate': 2.3526e-04, 'epoch': 0.92}
06/03/2024 10:43:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.5556, 'learning_rate': 2.3500e-04, 'epoch': 0.93}
06/03/2024 10:43:35 - INFO - llamafactory.extras.callbacks - {'loss': 2.6199, 'learning_rate': 2.3474e-04, 'epoch': 0.93}
06/03/2024 10:43:41 - INFO - llamafactory.extras.callbacks - {'loss': 2.4189, 'learning_rate': 2.3448e-04, 'epoch': 0.93}
06/03/2024 10:43:47 - INFO - llamafactory.extras.callbacks - {'loss': 2.6203, 'learning_rate': 2.3422e-04, 'epoch': 0.93}
06/03/2024 10:43:53 - INFO - llamafactory.extras.callbacks - {'loss': 2.5481, 'learning_rate': 2.3397e-04, 'epoch': 0.93}
06/03/2024 10:43:59 - INFO - llamafactory.extras.callbacks - {'loss': 2.5498, 'learning_rate': 2.3371e-04, 'epoch': 0.94}
06/03/2024 10:44:05 - INFO - llamafactory.extras.callbacks - {'loss': 2.5629, 'learning_rate': 2.3345e-04, 'epoch': 0.94}
06/03/2024 10:44:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.5250, 'learning_rate': 2.3319e-04, 'epoch': 0.94}
06/03/2024 10:44:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.7078, 'learning_rate': 2.3293e-04, 'epoch': 0.94}
06/03/2024 10:44:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.6176, 'learning_rate': 2.3266e-04, 'epoch': 0.94}
06/03/2024 10:44:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.6501, 'learning_rate': 2.3240e-04, 'epoch': 0.95}
06/03/2024 10:44:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.4360, 'learning_rate': 2.3214e-04, 'epoch': 0.95}
06/03/2024 10:44:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.5451, 'learning_rate': 2.3188e-04, 'epoch': 0.95}
06/03/2024 10:44:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.5421, 'learning_rate': 2.3162e-04, 'epoch': 0.95}
06/03/2024 10:44:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.6161, 'learning_rate': 2.3135e-04, 'epoch': 0.95}
06/03/2024 10:44:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.5102, 'learning_rate': 2.3109e-04, 'epoch': 0.96}
06/03/2024 10:44:58 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2400
06/03/2024 10:44:58 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2400/tokenizer_config.json
06/03/2024 10:44:58 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2400/special_tokens_map.json
06/03/2024 10:45:05 - INFO - llamafactory.extras.callbacks - {'loss': 2.7259, 'learning_rate': 2.3083e-04, 'epoch': 0.96}
06/03/2024 10:45:11 - INFO - llamafactory.extras.callbacks - {'loss': 2.6417, 'learning_rate': 2.3056e-04, 'epoch': 0.96}
06/03/2024 10:45:17 - INFO - llamafactory.extras.callbacks - {'loss': 2.4390, 'learning_rate': 2.3030e-04, 'epoch': 0.96}
06/03/2024 10:45:23 - INFO - llamafactory.extras.callbacks - {'loss': 2.6322, 'learning_rate': 2.3004e-04, 'epoch': 0.96}
06/03/2024 10:45:29 - INFO - llamafactory.extras.callbacks - {'loss': 2.5879, 'learning_rate': 2.2977e-04, 'epoch': 0.97}
06/03/2024 10:45:35 - INFO - llamafactory.extras.callbacks - {'loss': 2.4739, 'learning_rate': 2.2951e-04, 'epoch': 0.97}
06/03/2024 10:45:41 - INFO - llamafactory.extras.callbacks - {'loss': 2.6461, 'learning_rate': 2.2924e-04, 'epoch': 0.97}
06/03/2024 10:45:47 - INFO - llamafactory.extras.callbacks - {'loss': 2.4895, 'learning_rate': 2.2897e-04, 'epoch': 0.97}
06/03/2024 10:45:53 - INFO - llamafactory.extras.callbacks - {'loss': 2.6011, 'learning_rate': 2.2871e-04, 'epoch': 0.97}
06/03/2024 10:45:59 - INFO - llamafactory.extras.callbacks - {'loss': 2.5588, 'learning_rate': 2.2844e-04, 'epoch': 0.98}
06/03/2024 10:46:05 - INFO - llamafactory.extras.callbacks - {'loss': 2.5556, 'learning_rate': 2.2817e-04, 'epoch': 0.98}
06/03/2024 10:46:11 - INFO - llamafactory.extras.callbacks - {'loss': 2.5654, 'learning_rate': 2.2791e-04, 'epoch': 0.98}
06/03/2024 10:46:17 - INFO - llamafactory.extras.callbacks - {'loss': 2.4522, 'learning_rate': 2.2764e-04, 'epoch': 0.98}
06/03/2024 10:46:23 - INFO - llamafactory.extras.callbacks - {'loss': 2.5592, 'learning_rate': 2.2737e-04, 'epoch': 0.98}
06/03/2024 10:46:29 - INFO - llamafactory.extras.callbacks - {'loss': 2.4735, 'learning_rate': 2.2710e-04, 'epoch': 0.99}
06/03/2024 10:46:35 - INFO - llamafactory.extras.callbacks - {'loss': 2.5008, 'learning_rate': 2.2684e-04, 'epoch': 0.99}
06/03/2024 10:46:41 - INFO - llamafactory.extras.callbacks - {'loss': 2.5271, 'learning_rate': 2.2657e-04, 'epoch': 0.99}
06/03/2024 10:46:47 - INFO - llamafactory.extras.callbacks - {'loss': 2.5740, 'learning_rate': 2.2630e-04, 'epoch': 0.99}
06/03/2024 10:46:53 - INFO - llamafactory.extras.callbacks - {'loss': 2.5452, 'learning_rate': 2.2603e-04, 'epoch': 0.99}
06/03/2024 10:46:59 - INFO - llamafactory.extras.callbacks - {'loss': 2.5663, 'learning_rate': 2.2576e-04, 'epoch': 1.00}
06/03/2024 10:46:59 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2500
06/03/2024 10:46:59 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2500/tokenizer_config.json
06/03/2024 10:46:59 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2500/special_tokens_map.json
06/03/2024 10:47:05 - INFO - llamafactory.extras.callbacks - {'loss': 2.4705, 'learning_rate': 2.2549e-04, 'epoch': 1.00}
06/03/2024 10:47:11 - INFO - llamafactory.extras.callbacks - {'loss': 2.4196, 'learning_rate': 2.2522e-04, 'epoch': 1.00}
06/03/2024 10:47:17 - INFO - llamafactory.extras.callbacks - {'loss': 2.3862, 'learning_rate': 2.2495e-04, 'epoch': 1.00}
06/03/2024 10:47:23 - INFO - llamafactory.extras.callbacks - {'loss': 2.2516, 'learning_rate': 2.2467e-04, 'epoch': 1.00}
06/03/2024 10:47:30 - INFO - llamafactory.extras.callbacks - {'loss': 2.0925, 'learning_rate': 2.2440e-04, 'epoch': 1.01}
06/03/2024 10:47:35 - INFO - llamafactory.extras.callbacks - {'loss': 2.2226, 'learning_rate': 2.2413e-04, 'epoch': 1.01}
06/03/2024 10:47:41 - INFO - llamafactory.extras.callbacks - {'loss': 2.1925, 'learning_rate': 2.2386e-04, 'epoch': 1.01}
06/03/2024 10:47:47 - INFO - llamafactory.extras.callbacks - {'loss': 2.0784, 'learning_rate': 2.2359e-04, 'epoch': 1.01}
06/03/2024 10:47:54 - INFO - llamafactory.extras.callbacks - {'loss': 2.1903, 'learning_rate': 2.2331e-04, 'epoch': 1.01}
06/03/2024 10:48:00 - INFO - llamafactory.extras.callbacks - {'loss': 2.0112, 'learning_rate': 2.2304e-04, 'epoch': 1.02}
06/03/2024 10:48:06 - INFO - llamafactory.extras.callbacks - {'loss': 2.1828, 'learning_rate': 2.2277e-04, 'epoch': 1.02}
06/03/2024 10:48:12 - INFO - llamafactory.extras.callbacks - {'loss': 2.2427, 'learning_rate': 2.2249e-04, 'epoch': 1.02}
06/03/2024 10:48:18 - INFO - llamafactory.extras.callbacks - {'loss': 2.0657, 'learning_rate': 2.2222e-04, 'epoch': 1.02}
06/03/2024 10:48:24 - INFO - llamafactory.extras.callbacks - {'loss': 2.1984, 'learning_rate': 2.2194e-04, 'epoch': 1.02}
06/03/2024 10:48:30 - INFO - llamafactory.extras.callbacks - {'loss': 2.2014, 'learning_rate': 2.2167e-04, 'epoch': 1.03}
06/03/2024 10:48:36 - INFO - llamafactory.extras.callbacks - {'loss': 2.1411, 'learning_rate': 2.2140e-04, 'epoch': 1.03}
06/03/2024 10:48:42 - INFO - llamafactory.extras.callbacks - {'loss': 2.1498, 'learning_rate': 2.2112e-04, 'epoch': 1.03}
06/03/2024 10:48:48 - INFO - llamafactory.extras.callbacks - {'loss': 2.1994, 'learning_rate': 2.2084e-04, 'epoch': 1.03}
06/03/2024 10:48:54 - INFO - llamafactory.extras.callbacks - {'loss': 2.1888, 'learning_rate': 2.2057e-04, 'epoch': 1.03}
06/03/2024 10:49:00 - INFO - llamafactory.extras.callbacks - {'loss': 2.1845, 'learning_rate': 2.2029e-04, 'epoch': 1.04}
06/03/2024 10:49:00 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2600
06/03/2024 10:49:01 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2600/tokenizer_config.json
06/03/2024 10:49:01 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2600/special_tokens_map.json
06/03/2024 10:49:07 - INFO - llamafactory.extras.callbacks - {'loss': 2.1797, 'learning_rate': 2.2002e-04, 'epoch': 1.04}
06/03/2024 10:49:13 - INFO - llamafactory.extras.callbacks - {'loss': 2.2326, 'learning_rate': 2.1974e-04, 'epoch': 1.04}
06/03/2024 10:49:19 - INFO - llamafactory.extras.callbacks - {'loss': 2.1578, 'learning_rate': 2.1946e-04, 'epoch': 1.04}
06/03/2024 10:49:25 - INFO - llamafactory.extras.callbacks - {'loss': 2.1630, 'learning_rate': 2.1918e-04, 'epoch': 1.04}
06/03/2024 10:49:31 - INFO - llamafactory.extras.callbacks - {'loss': 2.1310, 'learning_rate': 2.1891e-04, 'epoch': 1.05}
06/03/2024 10:49:36 - INFO - llamafactory.extras.callbacks - {'loss': 2.1385, 'learning_rate': 2.1863e-04, 'epoch': 1.05}
06/03/2024 10:49:42 - INFO - llamafactory.extras.callbacks - {'loss': 2.1297, 'learning_rate': 2.1835e-04, 'epoch': 1.05}
06/03/2024 10:49:49 - INFO - llamafactory.extras.callbacks - {'loss': 2.1600, 'learning_rate': 2.1807e-04, 'epoch': 1.05}
06/03/2024 10:49:54 - INFO - llamafactory.extras.callbacks - {'loss': 2.1703, 'learning_rate': 2.1779e-04, 'epoch': 1.05}
06/03/2024 10:50:01 - INFO - llamafactory.extras.callbacks - {'loss': 2.2871, 'learning_rate': 2.1751e-04, 'epoch': 1.06}
06/03/2024 10:50:07 - INFO - llamafactory.extras.callbacks - {'loss': 2.1182, 'learning_rate': 2.1723e-04, 'epoch': 1.06}
06/03/2024 10:50:12 - INFO - llamafactory.extras.callbacks - {'loss': 2.1868, 'learning_rate': 2.1695e-04, 'epoch': 1.06}
06/03/2024 10:50:19 - INFO - llamafactory.extras.callbacks - {'loss': 2.3034, 'learning_rate': 2.1667e-04, 'epoch': 1.06}
06/03/2024 10:50:25 - INFO - llamafactory.extras.callbacks - {'loss': 2.1462, 'learning_rate': 2.1639e-04, 'epoch': 1.06}
06/03/2024 10:50:31 - INFO - llamafactory.extras.callbacks - {'loss': 2.2264, 'learning_rate': 2.1611e-04, 'epoch': 1.07}
06/03/2024 10:50:37 - INFO - llamafactory.extras.callbacks - {'loss': 2.1483, 'learning_rate': 2.1583e-04, 'epoch': 1.07}
06/03/2024 10:50:43 - INFO - llamafactory.extras.callbacks - {'loss': 2.2671, 'learning_rate': 2.1555e-04, 'epoch': 1.07}
06/03/2024 10:50:49 - INFO - llamafactory.extras.callbacks - {'loss': 2.0288, 'learning_rate': 2.1527e-04, 'epoch': 1.07}
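Each llamafactory.extras.callbacks entry is a Python dict literal, so the loss and learning-rate curves can be recovered from this log with the standard library alone. A minimal sketch, assuming the console output has been saved to a text file (the name train.log is a hypothetical stand-in):

```python
import ast
import re

# Matches callback entries like:
# 06/03/2024 10:49:07 - INFO - llamafactory.extras.callbacks - {'loss': 2.1797, ...}
ENTRY = re.compile(
    r"(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}) - INFO - "
    r"llamafactory\.extras\.callbacks - (\{[^}]*\})"
)

def parse_log(text: str):
    """Yield (timestamp, loss, learning_rate, epoch) for every callback entry."""
    for ts, payload in ENTRY.findall(text):
        record = ast.literal_eval(payload)  # the payload is a valid Python dict literal
        yield ts, record["loss"], record["learning_rate"], record["epoch"]

if __name__ == "__main__":
    with open("train.log") as f:  # hypothetical path to this console output
        rows = list(parse_log(f.read()))
    # e.g. smooth the noisy per-step loss by averaging the last 20 logged values
    tail = [loss for _, loss, _, _ in rows[-20:]]
    print(f"{len(rows)} entries; mean loss of last 20: {sum(tail) / len(tail):.4f}")
```

Averaged this way, the curve shows a clear drop from roughly 2.5-2.6 during epoch 0 to roughly 2.1-2.2 once the second epoch begins, which is visible in the raw entries below.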
06/03/2024 10:50:55 - INFO - llamafactory.extras.callbacks - {'loss': 2.2455, 'learning_rate': 2.1499e-04, 'epoch': 1.07}
06/03/2024 10:51:01 - INFO - llamafactory.extras.callbacks - {'loss': 2.0936, 'learning_rate': 2.1470e-04, 'epoch': 1.08}
06/03/2024 10:51:01 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2700
06/03/2024 10:51:01 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2700/tokenizer_config.json
06/03/2024 10:51:01 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2700/special_tokens_map.json
06/03/2024 10:51:07 - INFO - llamafactory.extras.callbacks - {'loss': 2.2633, 'learning_rate': 2.1442e-04, 'epoch': 1.08}
06/03/2024 10:51:13 - INFO - llamafactory.extras.callbacks - {'loss': 2.0494, 'learning_rate': 2.1414e-04, 'epoch': 1.08}
06/03/2024 10:51:19 - INFO - llamafactory.extras.callbacks - {'loss': 2.2881, 'learning_rate': 2.1386e-04, 'epoch': 1.08}
06/03/2024 10:51:25 - INFO - llamafactory.extras.callbacks - {'loss': 2.2561, 'learning_rate': 2.1357e-04, 'epoch': 1.08}
06/03/2024 10:51:31 - INFO - llamafactory.extras.callbacks - {'loss': 2.1538, 'learning_rate': 2.1329e-04, 'epoch': 1.09}
06/03/2024 10:51:37 - INFO - llamafactory.extras.callbacks - {'loss': 2.2026, 'learning_rate': 2.1300e-04, 'epoch': 1.09}
06/03/2024 10:51:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.1910, 'learning_rate': 2.1272e-04, 'epoch': 1.09}
06/03/2024 10:51:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.2054, 'learning_rate': 2.1244e-04, 'epoch': 1.09}
06/03/2024 10:51:55 - INFO - llamafactory.extras.callbacks - {'loss': 2.0800, 'learning_rate': 2.1215e-04, 'epoch': 1.09}
06/03/2024 10:52:01 - INFO - llamafactory.extras.callbacks - {'loss': 2.1121, 'learning_rate': 2.1187e-04, 'epoch': 1.10}
06/03/2024 10:52:07 - INFO - llamafactory.extras.callbacks - {'loss': 2.1042, 'learning_rate': 2.1158e-04, 'epoch': 1.10}
06/03/2024 10:52:13 - INFO - llamafactory.extras.callbacks - {'loss': 2.1663, 'learning_rate': 2.1130e-04, 'epoch': 1.10}
06/03/2024 10:52:19 - INFO - llamafactory.extras.callbacks - {'loss': 2.1616, 'learning_rate': 2.1101e-04, 'epoch': 1.10}
06/03/2024 10:52:25 - INFO - llamafactory.extras.callbacks - {'loss': 2.2593, 'learning_rate': 2.1072e-04, 'epoch': 1.10}
06/03/2024 10:52:31 - INFO - llamafactory.extras.callbacks - {'loss': 1.9991, 'learning_rate': 2.1044e-04, 'epoch': 1.11}
06/03/2024 10:52:37 - INFO - llamafactory.extras.callbacks - {'loss': 2.1432, 'learning_rate': 2.1015e-04, 'epoch': 1.11}
06/03/2024 10:52:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.1829, 'learning_rate': 2.0986e-04, 'epoch': 1.11}
06/03/2024 10:52:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.2542, 'learning_rate': 2.0958e-04, 'epoch': 1.11}
06/03/2024 10:52:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.0991, 'learning_rate': 2.0929e-04, 'epoch': 1.11}
06/03/2024 10:53:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.1161, 'learning_rate': 2.0900e-04, 'epoch': 1.12}
06/03/2024 10:53:02 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2800
06/03/2024 10:53:02 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2800/tokenizer_config.json
06/03/2024 10:53:02 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2800/special_tokens_map.json
06/03/2024 10:53:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.1824, 'learning_rate': 2.0872e-04, 'epoch': 1.12}
06/03/2024 10:53:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.0616, 'learning_rate': 2.0843e-04, 'epoch': 1.12}
06/03/2024 10:53:20 - INFO - llamafactory.extras.callbacks - {'loss': 2.3593, 'learning_rate': 2.0814e-04, 'epoch': 1.12}
06/03/2024 10:53:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.1251, 'learning_rate': 2.0785e-04, 'epoch': 1.12}
06/03/2024 10:53:32 - INFO - llamafactory.extras.callbacks - {'loss': 2.1565, 'learning_rate': 2.0756e-04, 'epoch': 1.13}
06/03/2024 10:53:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.2081, 'learning_rate': 2.0727e-04, 'epoch': 1.13}
06/03/2024 10:53:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.1653, 'learning_rate': 2.0698e-04, 'epoch': 1.13}
06/03/2024 10:53:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.3108, 'learning_rate': 2.0669e-04, 'epoch': 1.13}
06/03/2024 10:53:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.1915, 'learning_rate': 2.0640e-04, 'epoch': 1.13}
06/03/2024 10:54:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.1557, 'learning_rate': 2.0611e-04, 'epoch': 1.14}
06/03/2024 10:54:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.1389, 'learning_rate': 2.0582e-04, 'epoch': 1.14}
06/03/2024 10:54:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.1938, 'learning_rate': 2.0553e-04, 'epoch': 1.14}
06/03/2024 10:54:20 - INFO - llamafactory.extras.callbacks - {'loss': 2.1704, 'learning_rate': 2.0524e-04, 'epoch': 1.14}
06/03/2024 10:54:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.1409, 'learning_rate': 2.0495e-04, 'epoch': 1.14}
06/03/2024 10:54:32 - INFO - llamafactory.extras.callbacks - {'loss': 2.2212, 'learning_rate': 2.0466e-04, 'epoch': 1.15}
06/03/2024 10:54:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.2452, 'learning_rate': 2.0437e-04, 'epoch': 1.15}
06/03/2024 10:54:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.1116, 'learning_rate': 2.0408e-04, 'epoch': 1.15}
06/03/2024 10:54:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.2473, 'learning_rate': 2.0378e-04, 'epoch': 1.15}
06/03/2024 10:54:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.0444, 'learning_rate': 2.0349e-04, 'epoch': 1.15}
06/03/2024 10:55:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.2091, 'learning_rate': 2.0320e-04, 'epoch': 1.15}
06/03/2024 10:55:02 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2900
06/03/2024 10:55:02 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2900/tokenizer_config.json
06/03/2024 10:55:02 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-2900/special_tokens_map.json
06/03/2024 10:55:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.1326, 'learning_rate': 2.0291e-04, 'epoch': 1.16}
06/03/2024 10:55:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.1845, 'learning_rate': 2.0261e-04, 'epoch': 1.16}
06/03/2024 10:55:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.2273, 'learning_rate': 2.0232e-04, 'epoch': 1.16}
06/03/2024 10:55:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.1177, 'learning_rate': 2.0203e-04, 'epoch': 1.16}
06/03/2024 10:55:32 - INFO - llamafactory.extras.callbacks - {'loss': 2.2723, 'learning_rate': 2.0173e-04, 'epoch': 1.16}
06/03/2024 10:55:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.2022, 'learning_rate': 2.0144e-04, 'epoch': 1.17}
06/03/2024 10:55:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.1843, 'learning_rate': 2.0115e-04, 'epoch': 1.17}
06/03/2024 10:55:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.2598, 'learning_rate': 2.0085e-04, 'epoch': 1.17}
06/03/2024 10:55:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.3102, 'learning_rate': 2.0056e-04, 'epoch': 1.17}
06/03/2024 10:56:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.2779, 'learning_rate': 2.0026e-04, 'epoch': 1.17}
06/03/2024 10:56:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.1848, 'learning_rate': 1.9997e-04, 'epoch': 1.18}
06/03/2024 10:56:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.1027, 'learning_rate': 1.9967e-04, 'epoch': 1.18}
06/03/2024 10:56:20 - INFO - llamafactory.extras.callbacks - {'loss': 2.2540, 'learning_rate': 1.9938e-04, 'epoch': 1.18}
06/03/2024 10:56:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.1361, 'learning_rate': 1.9908e-04, 'epoch': 1.18}
06/03/2024 10:56:32 - INFO - llamafactory.extras.callbacks - {'loss': 2.1516, 'learning_rate': 1.9879e-04, 'epoch': 1.18}
06/03/2024 10:56:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.2784, 'learning_rate': 1.9849e-04, 'epoch': 1.19}
06/03/2024 10:56:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.1853, 'learning_rate': 1.9819e-04, 'epoch': 1.19}
06/03/2024 10:56:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.3024, 'learning_rate': 1.9790e-04, 'epoch': 1.19}
06/03/2024 10:56:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.1318, 'learning_rate': 1.9760e-04, 'epoch': 1.19}
06/03/2024 10:57:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.1913, 'learning_rate': 1.9730e-04, 'epoch': 1.19}
06/03/2024 10:57:02 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3000
06/03/2024 10:57:02 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3000/tokenizer_config.json
06/03/2024 10:57:02 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3000/special_tokens_map.json
06/03/2024 10:57:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.2053, 'learning_rate': 1.9701e-04, 'epoch': 1.20}
06/03/2024 10:57:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.1844, 'learning_rate': 1.9671e-04, 'epoch': 1.20}
06/03/2024 10:57:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.1710, 'learning_rate': 1.9641e-04, 'epoch': 1.20}
06/03/2024 10:57:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.2840, 'learning_rate': 1.9611e-04, 'epoch': 1.20}
06/03/2024 10:57:33 - INFO - llamafactory.extras.callbacks - {'loss':
2.1361, 'learning_rate': 1.9582e-04, 'epoch': 1.20} 06/03/2024 10:57:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.1580, 'learning_rate': 1.9552e-04, 'epoch': 1.21} 06/03/2024 10:57:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.1897, 'learning_rate': 1.9522e-04, 'epoch': 1.21} 06/03/2024 10:57:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.2870, 'learning_rate': 1.9492e-04, 'epoch': 1.21} 06/03/2024 10:57:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.1044, 'learning_rate': 1.9462e-04, 'epoch': 1.21} 06/03/2024 10:58:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.1893, 'learning_rate': 1.9432e-04, 'epoch': 1.21} 06/03/2024 10:58:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.2271, 'learning_rate': 1.9403e-04, 'epoch': 1.22} 06/03/2024 10:58:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.1444, 'learning_rate': 1.9373e-04, 'epoch': 1.22} 06/03/2024 10:58:20 - INFO - llamafactory.extras.callbacks - {'loss': 2.2168, 'learning_rate': 1.9343e-04, 'epoch': 1.22} 06/03/2024 10:58:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.2518, 'learning_rate': 1.9313e-04, 'epoch': 1.22} 06/03/2024 10:58:32 - INFO - llamafactory.extras.callbacks - {'loss': 2.2739, 'learning_rate': 1.9283e-04, 'epoch': 1.22} 06/03/2024 10:58:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.0801, 'learning_rate': 1.9253e-04, 'epoch': 1.23} 06/03/2024 10:58:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.1793, 'learning_rate': 1.9223e-04, 'epoch': 1.23} 06/03/2024 10:58:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.2470, 'learning_rate': 1.9193e-04, 'epoch': 1.23} 06/03/2024 10:58:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.1663, 'learning_rate': 1.9163e-04, 'epoch': 1.23} 06/03/2024 10:59:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.0648, 'learning_rate': 1.9133e-04, 'epoch': 1.23} 06/03/2024 10:59:02 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3100 06/03/2024 10:59:02 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 10:59:02 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 10:59:02 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3100/tokenizer_config.json 06/03/2024 10:59:02 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3100/special_tokens_map.json 06/03/2024 10:59:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.1999, 'learning_rate': 1.9102e-04, 'epoch': 1.24} 06/03/2024 10:59:14 - INFO - 
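The callback records above each carry a small dict of loss, learning_rate, and epoch, logged every few seconds. Below is a minimal sketch for pulling those series out of a saved copy of this log and plotting the loss curve; the file name train.log and the use of matplotlib are assumptions for illustration, not part of this run:

    import re
    import matplotlib.pyplot as plt

    # Each callback record looks like:
    # {'loss': 2.1938, 'learning_rate': 2.0553e-04, 'epoch': 1.14}
    RECORD = re.compile(
        r"\{'loss': ([\d.]+), 'learning_rate': ([\d.e+-]+), 'epoch': ([\d.]+)\}"
    )

    losses, lrs, epochs = [], [], []
    with open("train.log") as f:  # assumed path to a saved copy of this log
        for m in RECORD.finditer(f.read()):
            losses.append(float(m.group(1)))
            lrs.append(float(m.group(2)))
            epochs.append(float(m.group(3)))

    plt.plot(epochs, losses)
    plt.xlabel("epoch")
    plt.ylabel("training loss")
    plt.savefig("loss_curve.png")

The log then continues: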
llamafactory.extras.callbacks - {'loss': 2.4797, 'learning_rate': 1.9072e-04, 'epoch': 1.24} 06/03/2024 10:59:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.3287, 'learning_rate': 1.9042e-04, 'epoch': 1.24} 06/03/2024 10:59:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.1932, 'learning_rate': 1.9012e-04, 'epoch': 1.24} 06/03/2024 10:59:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.2579, 'learning_rate': 1.8982e-04, 'epoch': 1.24} 06/03/2024 10:59:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.3502, 'learning_rate': 1.8952e-04, 'epoch': 1.25} 06/03/2024 10:59:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.2192, 'learning_rate': 1.8922e-04, 'epoch': 1.25} 06/03/2024 10:59:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.2133, 'learning_rate': 1.8891e-04, 'epoch': 1.25} 06/03/2024 10:59:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.3228, 'learning_rate': 1.8861e-04, 'epoch': 1.25} 06/03/2024 11:00:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.1752, 'learning_rate': 1.8831e-04, 'epoch': 1.25} 06/03/2024 11:00:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.1960, 'learning_rate': 1.8801e-04, 'epoch': 1.26} 06/03/2024 11:00:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.1835, 'learning_rate': 1.8770e-04, 'epoch': 1.26} 06/03/2024 11:00:20 - INFO - llamafactory.extras.callbacks - {'loss': 2.0839, 'learning_rate': 1.8740e-04, 'epoch': 1.26} 06/03/2024 11:00:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.2350, 'learning_rate': 1.8710e-04, 'epoch': 1.26} 06/03/2024 11:00:32 - INFO - llamafactory.extras.callbacks - {'loss': 2.1371, 'learning_rate': 1.8679e-04, 'epoch': 1.26} 06/03/2024 11:00:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.2860, 'learning_rate': 1.8649e-04, 'epoch': 1.27} 06/03/2024 11:00:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.1594, 'learning_rate': 1.8619e-04, 'epoch': 1.27} 06/03/2024 11:00:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.2272, 'learning_rate': 1.8588e-04, 'epoch': 1.27} 06/03/2024 11:00:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.1714, 'learning_rate': 1.8558e-04, 'epoch': 1.27} 06/03/2024 11:01:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.1461, 'learning_rate': 1.8528e-04, 'epoch': 1.27} 06/03/2024 11:01:03 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3200 06/03/2024 11:01:03 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:01:03 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:01:03 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in 
saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3200/tokenizer_config.json 06/03/2024 11:01:03 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3200/special_tokens_map.json 06/03/2024 11:01:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.3851, 'learning_rate': 1.8497e-04, 'epoch': 1.28} 06/03/2024 11:01:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.1879, 'learning_rate': 1.8467e-04, 'epoch': 1.28} 06/03/2024 11:01:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.2753, 'learning_rate': 1.8436e-04, 'epoch': 1.28} 06/03/2024 11:01:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.2598, 'learning_rate': 1.8406e-04, 'epoch': 1.28} 06/03/2024 11:01:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.1045, 'learning_rate': 1.8375e-04, 'epoch': 1.28} 06/03/2024 11:01:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.2614, 'learning_rate': 1.8345e-04, 'epoch': 1.29} 06/03/2024 11:01:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.1703, 'learning_rate': 1.8314e-04, 'epoch': 1.29} 06/03/2024 11:01:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.1891, 'learning_rate': 1.8284e-04, 'epoch': 1.29} 06/03/2024 11:01:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.2270, 'learning_rate': 1.8253e-04, 'epoch': 1.29} 06/03/2024 11:02:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.2460, 'learning_rate': 1.8223e-04, 'epoch': 1.29} 06/03/2024 11:02:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.3162, 'learning_rate': 1.8192e-04, 'epoch': 1.30} 06/03/2024 11:02:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.1449, 'learning_rate': 1.8162e-04, 'epoch': 1.30} 06/03/2024 11:02:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.1936, 'learning_rate': 1.8131e-04, 'epoch': 1.30} 06/03/2024 11:02:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.2076, 'learning_rate': 1.8100e-04, 'epoch': 1.30} 06/03/2024 11:02:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.3472, 'learning_rate': 1.8070e-04, 'epoch': 1.30} 06/03/2024 11:02:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.2042, 'learning_rate': 1.8039e-04, 'epoch': 1.31} 06/03/2024 11:02:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.2247, 'learning_rate': 1.8008e-04, 'epoch': 1.31} 06/03/2024 11:02:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.1685, 'learning_rate': 1.7978e-04, 'epoch': 1.31} 06/03/2024 11:02:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.2263, 'learning_rate': 1.7947e-04, 'epoch': 1.31} 06/03/2024 11:03:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.1982, 'learning_rate': 1.7916e-04, 'epoch': 1.31} 06/03/2024 11:03:03 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3300 06/03/2024 11:03:03 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:03:03 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": 
"llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:03:03 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3300/tokenizer_config.json 06/03/2024 11:03:03 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3300/special_tokens_map.json 06/03/2024 11:03:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.1192, 'learning_rate': 1.7886e-04, 'epoch': 1.32} 06/03/2024 11:03:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.3361, 'learning_rate': 1.7855e-04, 'epoch': 1.32} 06/03/2024 11:03:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.1940, 'learning_rate': 1.7824e-04, 'epoch': 1.32} 06/03/2024 11:03:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.1641, 'learning_rate': 1.7794e-04, 'epoch': 1.32} 06/03/2024 11:03:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.2781, 'learning_rate': 1.7763e-04, 'epoch': 1.32} 06/03/2024 11:03:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.2773, 'learning_rate': 1.7732e-04, 'epoch': 1.33} 06/03/2024 11:03:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.1069, 'learning_rate': 1.7701e-04, 'epoch': 1.33} 06/03/2024 11:03:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.0440, 'learning_rate': 1.7670e-04, 'epoch': 1.33} 06/03/2024 11:03:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.1530, 'learning_rate': 1.7640e-04, 'epoch': 1.33} 06/03/2024 11:04:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.2099, 'learning_rate': 1.7609e-04, 'epoch': 1.33} 06/03/2024 11:04:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.2104, 'learning_rate': 1.7578e-04, 'epoch': 1.34} 06/03/2024 11:04:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.2408, 'learning_rate': 1.7547e-04, 'epoch': 1.34} 06/03/2024 11:04:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.1546, 'learning_rate': 1.7516e-04, 'epoch': 1.34} 06/03/2024 11:04:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.2619, 'learning_rate': 1.7485e-04, 'epoch': 1.34} 06/03/2024 11:04:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.3145, 'learning_rate': 1.7455e-04, 'epoch': 1.34} 06/03/2024 11:04:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.0273, 'learning_rate': 1.7424e-04, 'epoch': 1.35} 06/03/2024 11:04:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.2441, 'learning_rate': 1.7393e-04, 'epoch': 1.35} 06/03/2024 11:04:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.2212, 'learning_rate': 1.7362e-04, 'epoch': 1.35} 06/03/2024 11:04:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.2526, 'learning_rate': 1.7331e-04, 'epoch': 1.35} 06/03/2024 11:05:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.2123, 'learning_rate': 1.7300e-04, 'epoch': 1.35} 06/03/2024 11:05:03 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3400 06/03/2024 11:05:03 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:05:03 - INFO - 
transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:05:04 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3400/tokenizer_config.json 06/03/2024 11:05:04 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3400/special_tokens_map.json 06/03/2024 11:05:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.2284, 'learning_rate': 1.7269e-04, 'epoch': 1.36} 06/03/2024 11:05:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.1369, 'learning_rate': 1.7238e-04, 'epoch': 1.36} 06/03/2024 11:05:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.2149, 'learning_rate': 1.7207e-04, 'epoch': 1.36} 06/03/2024 11:05:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.1926, 'learning_rate': 1.7176e-04, 'epoch': 1.36} 06/03/2024 11:05:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.1174, 'learning_rate': 1.7145e-04, 'epoch': 1.36} 06/03/2024 11:05:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.2366, 'learning_rate': 1.7114e-04, 'epoch': 1.37} 06/03/2024 11:05:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.3191, 'learning_rate': 1.7083e-04, 'epoch': 1.37} 06/03/2024 11:05:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.2159, 'learning_rate': 1.7052e-04, 'epoch': 1.37} 06/03/2024 11:05:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.1906, 'learning_rate': 1.7021e-04, 'epoch': 1.37} 06/03/2024 11:06:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.1427, 'learning_rate': 1.6990e-04, 'epoch': 1.37} 06/03/2024 11:06:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.1721, 'learning_rate': 1.6959e-04, 'epoch': 1.38} 06/03/2024 11:06:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.1693, 'learning_rate': 1.6928e-04, 'epoch': 1.38} 06/03/2024 11:06:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.2256, 'learning_rate': 1.6897e-04, 'epoch': 1.38} 06/03/2024 11:06:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.2384, 'learning_rate': 1.6866e-04, 'epoch': 1.38} 06/03/2024 11:06:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.0807, 'learning_rate': 1.6835e-04, 'epoch': 1.38} 06/03/2024 11:06:41 - INFO - llamafactory.extras.callbacks - {'loss': 2.2717, 'learning_rate': 1.6804e-04, 'epoch': 1.39} 06/03/2024 11:06:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.0908, 'learning_rate': 1.6773e-04, 'epoch': 1.39} 06/03/2024 11:06:53 - INFO - llamafactory.extras.callbacks - {'loss': 2.2445, 'learning_rate': 1.6742e-04, 'epoch': 1.39} 06/03/2024 11:06:59 - INFO - llamafactory.extras.callbacks - {'loss': 2.2985, 'learning_rate': 1.6711e-04, 'epoch': 1.39} 06/03/2024 11:07:05 - INFO - llamafactory.extras.callbacks - {'loss': 2.1415, 'learning_rate': 1.6680e-04, 'epoch': 1.39} 06/03/2024 11:07:05 - 
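The full LlamaConfig block is re-emitted at INFO level every time a checkpoint is written, which is why the identical dump keeps reappearing throughout this log. If that noise is unwanted, one line with the transformers logging API raises the threshold so only warnings and errors from that library are printed:

    import transformers

    # Raises the log threshold for the "transformers" loggers, which emit the
    # LlamaConfig dump and tokenizer-save messages on every checkpoint write.
    transformers.logging.set_verbosity_warning()

The llamafactory.extras.callbacks loss records come from a different logger namespace, so they should keep flowing; only the transformers-side chatter is silenced.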
INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3500 06/03/2024 11:07:05 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:07:05 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:07:05 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3500/tokenizer_config.json 06/03/2024 11:07:05 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3500/special_tokens_map.json 06/03/2024 11:07:11 - INFO - llamafactory.extras.callbacks - {'loss': 2.3455, 'learning_rate': 1.6649e-04, 'epoch': 1.40} 06/03/2024 11:07:17 - INFO - llamafactory.extras.callbacks - {'loss': 2.0707, 'learning_rate': 1.6618e-04, 'epoch': 1.40} 06/03/2024 11:07:23 - INFO - llamafactory.extras.callbacks - {'loss': 2.0146, 'learning_rate': 1.6587e-04, 'epoch': 1.40} 06/03/2024 11:07:29 - INFO - llamafactory.extras.callbacks - {'loss': 2.1838, 'learning_rate': 1.6555e-04, 'epoch': 1.40} 06/03/2024 11:07:35 - INFO - llamafactory.extras.callbacks - {'loss': 2.2518, 'learning_rate': 1.6524e-04, 'epoch': 1.40} 06/03/2024 11:07:41 - INFO - llamafactory.extras.callbacks - {'loss': 2.1785, 'learning_rate': 1.6493e-04, 'epoch': 1.41} 06/03/2024 11:07:47 - INFO - llamafactory.extras.callbacks - {'loss': 2.2149, 'learning_rate': 1.6462e-04, 'epoch': 1.41} 06/03/2024 11:07:53 - INFO - llamafactory.extras.callbacks - {'loss': 2.1552, 'learning_rate': 1.6431e-04, 'epoch': 1.41} 06/03/2024 11:07:59 - INFO - llamafactory.extras.callbacks - {'loss': 2.1950, 'learning_rate': 1.6400e-04, 'epoch': 1.41} 06/03/2024 11:08:05 - INFO - llamafactory.extras.callbacks - {'loss': 2.1918, 'learning_rate': 1.6369e-04, 'epoch': 1.41} 06/03/2024 11:08:11 - INFO - llamafactory.extras.callbacks - {'loss': 2.1733, 'learning_rate': 1.6337e-04, 'epoch': 1.42} 06/03/2024 11:08:17 - INFO - llamafactory.extras.callbacks - {'loss': 2.1643, 'learning_rate': 1.6306e-04, 'epoch': 1.42} 06/03/2024 11:08:23 - INFO - llamafactory.extras.callbacks - {'loss': 2.1423, 'learning_rate': 1.6275e-04, 'epoch': 1.42} 06/03/2024 11:08:29 - INFO - llamafactory.extras.callbacks - {'loss': 2.3586, 'learning_rate': 1.6244e-04, 'epoch': 1.42} 06/03/2024 11:08:36 - INFO - llamafactory.extras.callbacks - {'loss': 2.1143, 'learning_rate': 1.6213e-04, 'epoch': 1.42} 06/03/2024 11:08:42 - INFO - llamafactory.extras.callbacks - {'loss': 2.2403, 'learning_rate': 1.6182e-04, 'epoch': 1.43} 06/03/2024 11:08:48 - INFO - llamafactory.extras.callbacks - {'loss': 2.1688, 'learning_rate': 1.6150e-04, 
'epoch': 1.43} 06/03/2024 11:08:54 - INFO - llamafactory.extras.callbacks - {'loss': 2.1304, 'learning_rate': 1.6119e-04, 'epoch': 1.43} 06/03/2024 11:09:00 - INFO - llamafactory.extras.callbacks - {'loss': 2.2145, 'learning_rate': 1.6088e-04, 'epoch': 1.43} 06/03/2024 11:09:06 - INFO - llamafactory.extras.callbacks - {'loss': 2.2152, 'learning_rate': 1.6057e-04, 'epoch': 1.43} 06/03/2024 11:09:06 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3600 06/03/2024 11:09:06 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:09:06 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:09:06 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3600/tokenizer_config.json 06/03/2024 11:09:06 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3600/special_tokens_map.json 06/03/2024 11:09:13 - INFO - llamafactory.extras.callbacks - {'loss': 2.2349, 'learning_rate': 1.6026e-04, 'epoch': 1.44} 06/03/2024 11:09:19 - INFO - llamafactory.extras.callbacks - {'loss': 2.1943, 'learning_rate': 1.5994e-04, 'epoch': 1.44} 06/03/2024 11:09:25 - INFO - llamafactory.extras.callbacks - {'loss': 2.1507, 'learning_rate': 1.5963e-04, 'epoch': 1.44} 06/03/2024 11:09:31 - INFO - llamafactory.extras.callbacks - {'loss': 2.2447, 'learning_rate': 1.5932e-04, 'epoch': 1.44} 06/03/2024 11:09:37 - INFO - llamafactory.extras.callbacks - {'loss': 2.1024, 'learning_rate': 1.5901e-04, 'epoch': 1.44} 06/03/2024 11:09:43 - INFO - llamafactory.extras.callbacks - {'loss': 2.2081, 'learning_rate': 1.5869e-04, 'epoch': 1.45} 06/03/2024 11:09:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.1623, 'learning_rate': 1.5838e-04, 'epoch': 1.45} 06/03/2024 11:09:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.3001, 'learning_rate': 1.5807e-04, 'epoch': 1.45} 06/03/2024 11:10:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.2304, 'learning_rate': 1.5776e-04, 'epoch': 1.45} 06/03/2024 11:10:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.1608, 'learning_rate': 1.5744e-04, 'epoch': 1.45} 06/03/2024 11:10:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.1010, 'learning_rate': 1.5713e-04, 'epoch': 1.46} 06/03/2024 11:10:20 - INFO - llamafactory.extras.callbacks - {'loss': 2.1676, 'learning_rate': 1.5682e-04, 'epoch': 1.46} 06/03/2024 11:10:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.2546, 'learning_rate': 1.5651e-04, 'epoch': 1.46} 06/03/2024 11:10:33 - INFO - llamafactory.extras.callbacks - {'loss': 
2.2418, 'learning_rate': 1.5619e-04, 'epoch': 1.46} 06/03/2024 11:10:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.1296, 'learning_rate': 1.5588e-04, 'epoch': 1.46} 06/03/2024 11:10:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.2744, 'learning_rate': 1.5557e-04, 'epoch': 1.47} 06/03/2024 11:10:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.1773, 'learning_rate': 1.5526e-04, 'epoch': 1.47} 06/03/2024 11:10:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.2818, 'learning_rate': 1.5494e-04, 'epoch': 1.47} 06/03/2024 11:11:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.1761, 'learning_rate': 1.5463e-04, 'epoch': 1.47} 06/03/2024 11:11:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.2124, 'learning_rate': 1.5432e-04, 'epoch': 1.47} 06/03/2024 11:11:09 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3700 06/03/2024 11:11:09 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:11:09 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:11:09 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3700/tokenizer_config.json 06/03/2024 11:11:09 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3700/special_tokens_map.json 06/03/2024 11:11:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.1130, 'learning_rate': 1.5400e-04, 'epoch': 1.48} 06/03/2024 11:11:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.1965, 'learning_rate': 1.5369e-04, 'epoch': 1.48} 06/03/2024 11:11:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.2088, 'learning_rate': 1.5338e-04, 'epoch': 1.48} 06/03/2024 11:11:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.1675, 'learning_rate': 1.5307e-04, 'epoch': 1.48} 06/03/2024 11:11:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.1832, 'learning_rate': 1.5275e-04, 'epoch': 1.48} 06/03/2024 11:11:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.2723, 'learning_rate': 1.5244e-04, 'epoch': 1.49} 06/03/2024 11:11:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.1688, 'learning_rate': 1.5213e-04, 'epoch': 1.49} 06/03/2024 11:11:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.1291, 'learning_rate': 1.5181e-04, 'epoch': 1.49} 06/03/2024 11:12:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.1488, 'learning_rate': 1.5150e-04, 'epoch': 1.49} 06/03/2024 11:12:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.1914, 'learning_rate': 1.5119e-04, 'epoch': 1.49} 06/03/2024 11:12:16 - INFO - 
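Each checkpoint directory written here (checkpoint-3700 and its siblings) holds the trained LoRA adapter plus the tokenizer files whose saves are logged above, not a full copy of the 10.8B base model. A minimal inference sketch, assuming the directory contains adapter weights in the usual PEFT layout that LLaMA-Factory writes:

    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    BASE = "yanolja/EEVE-Korean-Instruct-10.8B-v1.0"
    ADAPTER = "saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3700"

    # The tokenizer files are saved alongside the adapter, per the log above.
    tokenizer = AutoTokenizer.from_pretrained(ADAPTER)
    base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
    model = PeftModel.from_pretrained(base, ADAPTER).eval()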
llamafactory.extras.callbacks - {'loss': 2.1354, 'learning_rate': 1.5088e-04, 'epoch': 1.50} 06/03/2024 11:12:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.1788, 'learning_rate': 1.5056e-04, 'epoch': 1.50} 06/03/2024 11:12:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.2506, 'learning_rate': 1.5025e-04, 'epoch': 1.50} 06/03/2024 11:12:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.3639, 'learning_rate': 1.4994e-04, 'epoch': 1.50} 06/03/2024 11:12:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.1029, 'learning_rate': 1.4962e-04, 'epoch': 1.50} 06/03/2024 11:12:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.2263, 'learning_rate': 1.4931e-04, 'epoch': 1.51} 06/03/2024 11:12:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.3244, 'learning_rate': 1.4900e-04, 'epoch': 1.51} 06/03/2024 11:12:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.2593, 'learning_rate': 1.4869e-04, 'epoch': 1.51} 06/03/2024 11:13:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.3214, 'learning_rate': 1.4837e-04, 'epoch': 1.51} 06/03/2024 11:13:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.1823, 'learning_rate': 1.4806e-04, 'epoch': 1.51} 06/03/2024 11:13:10 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3800 06/03/2024 11:13:10 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:13:10 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:13:10 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3800/tokenizer_config.json 06/03/2024 11:13:10 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3800/special_tokens_map.json 06/03/2024 11:13:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.0003, 'learning_rate': 1.4775e-04, 'epoch': 1.52} 06/03/2024 11:13:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.1975, 'learning_rate': 1.4743e-04, 'epoch': 1.52} 06/03/2024 11:13:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.1137, 'learning_rate': 1.4712e-04, 'epoch': 1.52} 06/03/2024 11:13:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.2060, 'learning_rate': 1.4681e-04, 'epoch': 1.52} 06/03/2024 11:13:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.2331, 'learning_rate': 1.4650e-04, 'epoch': 1.52} 06/03/2024 11:13:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.1523, 'learning_rate': 1.4618e-04, 'epoch': 1.53} 06/03/2024 11:13:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.2796, 'learning_rate': 1.4587e-04, 
'epoch': 1.53} 06/03/2024 11:13:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.2487, 'learning_rate': 1.4556e-04, 'epoch': 1.53} 06/03/2024 11:14:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.1701, 'learning_rate': 1.4524e-04, 'epoch': 1.53} 06/03/2024 11:14:11 - INFO - llamafactory.extras.callbacks - {'loss': 2.3521, 'learning_rate': 1.4493e-04, 'epoch': 1.53} 06/03/2024 11:14:17 - INFO - llamafactory.extras.callbacks - {'loss': 2.0861, 'learning_rate': 1.4462e-04, 'epoch': 1.54} 06/03/2024 11:14:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.2932, 'learning_rate': 1.4431e-04, 'epoch': 1.54} 06/03/2024 11:14:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.1267, 'learning_rate': 1.4399e-04, 'epoch': 1.54} 06/03/2024 11:14:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.1610, 'learning_rate': 1.4368e-04, 'epoch': 1.54} 06/03/2024 11:14:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.1416, 'learning_rate': 1.4337e-04, 'epoch': 1.54} 06/03/2024 11:14:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.1847, 'learning_rate': 1.4306e-04, 'epoch': 1.55} 06/03/2024 11:14:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.0351, 'learning_rate': 1.4274e-04, 'epoch': 1.55} 06/03/2024 11:14:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.2255, 'learning_rate': 1.4243e-04, 'epoch': 1.55} 06/03/2024 11:15:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.1551, 'learning_rate': 1.4212e-04, 'epoch': 1.55} 06/03/2024 11:15:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.1379, 'learning_rate': 1.4181e-04, 'epoch': 1.55} 06/03/2024 11:15:10 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3900 06/03/2024 11:15:10 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:15:10 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:15:10 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3900/tokenizer_config.json 06/03/2024 11:15:10 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-3900/special_tokens_map.json 06/03/2024 11:15:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.2812, 'learning_rate': 1.4149e-04, 'epoch': 1.56} 06/03/2024 11:15:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.1536, 'learning_rate': 1.4118e-04, 'epoch': 1.56} 06/03/2024 11:15:29 - INFO - llamafactory.extras.callbacks - {'loss': 2.0650, 'learning_rate': 1.4087e-04, 'epoch': 1.56} 06/03/2024 11:15:34 - INFO - llamafactory.extras.callbacks - {'loss': 
2.2897, 'learning_rate': 1.4056e-04, 'epoch': 1.56} 06/03/2024 11:15:41 - INFO - llamafactory.extras.callbacks - {'loss': 2.1539, 'learning_rate': 1.4024e-04, 'epoch': 1.56} 06/03/2024 11:15:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.1315, 'learning_rate': 1.3993e-04, 'epoch': 1.57} 06/03/2024 11:15:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.0840, 'learning_rate': 1.3962e-04, 'epoch': 1.57} 06/03/2024 11:15:59 - INFO - llamafactory.extras.callbacks - {'loss': 2.0406, 'learning_rate': 1.3931e-04, 'epoch': 1.57} 06/03/2024 11:16:05 - INFO - llamafactory.extras.callbacks - {'loss': 2.1606, 'learning_rate': 1.3900e-04, 'epoch': 1.57} 06/03/2024 11:16:11 - INFO - llamafactory.extras.callbacks - {'loss': 2.1084, 'learning_rate': 1.3868e-04, 'epoch': 1.57} 06/03/2024 11:16:17 - INFO - llamafactory.extras.callbacks - {'loss': 2.2465, 'learning_rate': 1.3837e-04, 'epoch': 1.58} 06/03/2024 11:16:23 - INFO - llamafactory.extras.callbacks - {'loss': 2.1984, 'learning_rate': 1.3806e-04, 'epoch': 1.58} 06/03/2024 11:16:29 - INFO - llamafactory.extras.callbacks - {'loss': 2.1909, 'learning_rate': 1.3775e-04, 'epoch': 1.58} 06/03/2024 11:16:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.2525, 'learning_rate': 1.3744e-04, 'epoch': 1.58} 06/03/2024 11:16:41 - INFO - llamafactory.extras.callbacks - {'loss': 2.2275, 'learning_rate': 1.3712e-04, 'epoch': 1.58} 06/03/2024 11:16:47 - INFO - llamafactory.extras.callbacks - {'loss': 2.2409, 'learning_rate': 1.3681e-04, 'epoch': 1.59} 06/03/2024 11:16:53 - INFO - llamafactory.extras.callbacks - {'loss': 2.1772, 'learning_rate': 1.3650e-04, 'epoch': 1.59} 06/03/2024 11:16:59 - INFO - llamafactory.extras.callbacks - {'loss': 2.2952, 'learning_rate': 1.3619e-04, 'epoch': 1.59} 06/03/2024 11:17:05 - INFO - llamafactory.extras.callbacks - {'loss': 2.0964, 'learning_rate': 1.3588e-04, 'epoch': 1.59} 06/03/2024 11:17:11 - INFO - llamafactory.extras.callbacks - {'loss': 2.0697, 'learning_rate': 1.3557e-04, 'epoch': 1.59} 06/03/2024 11:17:11 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4000 06/03/2024 11:17:11 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:17:11 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:17:11 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4000/tokenizer_config.json 06/03/2024 11:17:11 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4000/special_tokens_map.json 06/03/2024 11:17:17 - INFO - 
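A new checkpoint lands every 100 optimization steps (checkpoint-3900, checkpoint-4000, and so on, roughly two minutes apart), and by default the trainer retains all of them. A sketch of the transformers-level setting that caps retention; the values below are illustrative, and LLaMA-Factory accepts the same standard TrainingArguments fields in its training config:

    from transformers import TrainingArguments

    args = TrainingArguments(
        output_dir="saves/Custom/lora/train_2024-06-03-09-50-18",
        save_steps=100,      # matches the cadence visible in this log
        save_total_limit=3,  # keep only the three newest checkpoint-XXXX dirs
    )

Once the limit is reached, older checkpoint directories are deleted as new ones are written.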
llamafactory.extras.callbacks - {'loss': 2.1360, 'learning_rate': 1.3525e-04, 'epoch': 1.60} 06/03/2024 11:17:23 - INFO - llamafactory.extras.callbacks - {'loss': 2.2830, 'learning_rate': 1.3494e-04, 'epoch': 1.60} 06/03/2024 11:17:29 - INFO - llamafactory.extras.callbacks - {'loss': 2.2786, 'learning_rate': 1.3463e-04, 'epoch': 1.60} 06/03/2024 11:17:35 - INFO - llamafactory.extras.callbacks - {'loss': 2.2188, 'learning_rate': 1.3432e-04, 'epoch': 1.60} 06/03/2024 11:17:41 - INFO - llamafactory.extras.callbacks - {'loss': 2.0943, 'learning_rate': 1.3401e-04, 'epoch': 1.60} 06/03/2024 11:17:47 - INFO - llamafactory.extras.callbacks - {'loss': 2.1842, 'learning_rate': 1.3370e-04, 'epoch': 1.61} 06/03/2024 11:17:53 - INFO - llamafactory.extras.callbacks - {'loss': 2.1589, 'learning_rate': 1.3339e-04, 'epoch': 1.61} 06/03/2024 11:18:00 - INFO - llamafactory.extras.callbacks - {'loss': 2.1383, 'learning_rate': 1.3308e-04, 'epoch': 1.61} 06/03/2024 11:18:06 - INFO - llamafactory.extras.callbacks - {'loss': 2.1412, 'learning_rate': 1.3277e-04, 'epoch': 1.61} 06/03/2024 11:18:11 - INFO - llamafactory.extras.callbacks - {'loss': 2.2260, 'learning_rate': 1.3245e-04, 'epoch': 1.61} 06/03/2024 11:18:18 - INFO - llamafactory.extras.callbacks - {'loss': 2.1877, 'learning_rate': 1.3214e-04, 'epoch': 1.61} 06/03/2024 11:18:24 - INFO - llamafactory.extras.callbacks - {'loss': 2.1925, 'learning_rate': 1.3183e-04, 'epoch': 1.62} 06/03/2024 11:18:30 - INFO - llamafactory.extras.callbacks - {'loss': 2.0597, 'learning_rate': 1.3152e-04, 'epoch': 1.62} 06/03/2024 11:18:36 - INFO - llamafactory.extras.callbacks - {'loss': 2.2252, 'learning_rate': 1.3121e-04, 'epoch': 1.62} 06/03/2024 11:18:42 - INFO - llamafactory.extras.callbacks - {'loss': 2.2459, 'learning_rate': 1.3090e-04, 'epoch': 1.62} 06/03/2024 11:18:48 - INFO - llamafactory.extras.callbacks - {'loss': 2.2370, 'learning_rate': 1.3059e-04, 'epoch': 1.62} 06/03/2024 11:18:54 - INFO - llamafactory.extras.callbacks - {'loss': 2.1671, 'learning_rate': 1.3028e-04, 'epoch': 1.63} 06/03/2024 11:19:00 - INFO - llamafactory.extras.callbacks - {'loss': 2.1174, 'learning_rate': 1.2997e-04, 'epoch': 1.63} 06/03/2024 11:19:06 - INFO - llamafactory.extras.callbacks - {'loss': 2.1862, 'learning_rate': 1.2966e-04, 'epoch': 1.63} 06/03/2024 11:19:12 - INFO - llamafactory.extras.callbacks - {'loss': 2.1811, 'learning_rate': 1.2935e-04, 'epoch': 1.63} 06/03/2024 11:19:12 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4100 06/03/2024 11:19:12 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:19:12 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, 
"vocab_size": 40960 } 06/03/2024 11:19:12 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4100/tokenizer_config.json 06/03/2024 11:19:12 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4100/special_tokens_map.json 06/03/2024 11:19:19 - INFO - llamafactory.extras.callbacks - {'loss': 2.1627, 'learning_rate': 1.2904e-04, 'epoch': 1.63} 06/03/2024 11:19:25 - INFO - llamafactory.extras.callbacks - {'loss': 2.2155, 'learning_rate': 1.2873e-04, 'epoch': 1.64} 06/03/2024 11:19:31 - INFO - llamafactory.extras.callbacks - {'loss': 2.2358, 'learning_rate': 1.2842e-04, 'epoch': 1.64} 06/03/2024 11:19:37 - INFO - llamafactory.extras.callbacks - {'loss': 2.1274, 'learning_rate': 1.2811e-04, 'epoch': 1.64} 06/03/2024 11:19:43 - INFO - llamafactory.extras.callbacks - {'loss': 2.3124, 'learning_rate': 1.2780e-04, 'epoch': 1.64} 06/03/2024 11:19:49 - INFO - llamafactory.extras.callbacks - {'loss': 2.2095, 'learning_rate': 1.2749e-04, 'epoch': 1.64} 06/03/2024 11:19:55 - INFO - llamafactory.extras.callbacks - {'loss': 2.1183, 'learning_rate': 1.2718e-04, 'epoch': 1.65} 06/03/2024 11:20:01 - INFO - llamafactory.extras.callbacks - {'loss': 2.0655, 'learning_rate': 1.2687e-04, 'epoch': 1.65} 06/03/2024 11:20:07 - INFO - llamafactory.extras.callbacks - {'loss': 2.2523, 'learning_rate': 1.2657e-04, 'epoch': 1.65} 06/03/2024 11:20:13 - INFO - llamafactory.extras.callbacks - {'loss': 2.3545, 'learning_rate': 1.2626e-04, 'epoch': 1.65} 06/03/2024 11:20:19 - INFO - llamafactory.extras.callbacks - {'loss': 2.2055, 'learning_rate': 1.2595e-04, 'epoch': 1.65} 06/03/2024 11:20:25 - INFO - llamafactory.extras.callbacks - {'loss': 2.1300, 'learning_rate': 1.2564e-04, 'epoch': 1.66} 06/03/2024 11:20:31 - INFO - llamafactory.extras.callbacks - {'loss': 1.9513, 'learning_rate': 1.2533e-04, 'epoch': 1.66} 06/03/2024 11:20:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.2238, 'learning_rate': 1.2502e-04, 'epoch': 1.66} 06/03/2024 11:20:43 - INFO - llamafactory.extras.callbacks - {'loss': 2.2567, 'learning_rate': 1.2471e-04, 'epoch': 1.66} 06/03/2024 11:20:49 - INFO - llamafactory.extras.callbacks - {'loss': 2.1921, 'learning_rate': 1.2440e-04, 'epoch': 1.66} 06/03/2024 11:20:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.1026, 'learning_rate': 1.2410e-04, 'epoch': 1.67} 06/03/2024 11:21:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.2438, 'learning_rate': 1.2379e-04, 'epoch': 1.67} 06/03/2024 11:21:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.1973, 'learning_rate': 1.2348e-04, 'epoch': 1.67} 06/03/2024 11:21:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.1514, 'learning_rate': 1.2317e-04, 'epoch': 1.67} 06/03/2024 11:21:14 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4200 06/03/2024 11:21:14 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:21:14 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, 
"initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:21:14 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4200/tokenizer_config.json 06/03/2024 11:21:14 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4200/special_tokens_map.json 06/03/2024 11:21:20 - INFO - llamafactory.extras.callbacks - {'loss': 2.0853, 'learning_rate': 1.2286e-04, 'epoch': 1.67} 06/03/2024 11:21:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.1558, 'learning_rate': 1.2256e-04, 'epoch': 1.68} 06/03/2024 11:21:32 - INFO - llamafactory.extras.callbacks - {'loss': 2.0616, 'learning_rate': 1.2225e-04, 'epoch': 1.68} 06/03/2024 11:21:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.1184, 'learning_rate': 1.2194e-04, 'epoch': 1.68} 06/03/2024 11:21:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.2190, 'learning_rate': 1.2163e-04, 'epoch': 1.68} 06/03/2024 11:21:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.0718, 'learning_rate': 1.2133e-04, 'epoch': 1.68} 06/03/2024 11:21:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.1528, 'learning_rate': 1.2102e-04, 'epoch': 1.69} 06/03/2024 11:22:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.1236, 'learning_rate': 1.2071e-04, 'epoch': 1.69} 06/03/2024 11:22:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.2627, 'learning_rate': 1.2041e-04, 'epoch': 1.69} 06/03/2024 11:22:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.0575, 'learning_rate': 1.2010e-04, 'epoch': 1.69} 06/03/2024 11:22:20 - INFO - llamafactory.extras.callbacks - {'loss': 2.0092, 'learning_rate': 1.1979e-04, 'epoch': 1.69} 06/03/2024 11:22:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.0574, 'learning_rate': 1.1949e-04, 'epoch': 1.70} 06/03/2024 11:22:32 - INFO - llamafactory.extras.callbacks - {'loss': 2.1512, 'learning_rate': 1.1918e-04, 'epoch': 1.70} 06/03/2024 11:22:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.1205, 'learning_rate': 1.1887e-04, 'epoch': 1.70} 06/03/2024 11:22:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.1388, 'learning_rate': 1.1857e-04, 'epoch': 1.70} 06/03/2024 11:22:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.0661, 'learning_rate': 1.1826e-04, 'epoch': 1.70} 06/03/2024 11:22:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.0906, 'learning_rate': 1.1796e-04, 'epoch': 1.71} 06/03/2024 11:23:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.0528, 'learning_rate': 1.1765e-04, 'epoch': 1.71} 06/03/2024 11:23:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.2097, 'learning_rate': 1.1735e-04, 'epoch': 1.71} 06/03/2024 11:23:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.1171, 'learning_rate': 1.1704e-04, 'epoch': 1.71} 06/03/2024 11:23:14 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4300 06/03/2024 11:23:14 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at 
/root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:23:14 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:23:14 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4300/tokenizer_config.json 06/03/2024 11:23:14 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4300/special_tokens_map.json 06/03/2024 11:23:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.1346, 'learning_rate': 1.1674e-04, 'epoch': 1.71} 06/03/2024 11:23:26 - INFO - llamafactory.extras.callbacks - {'loss': 2.1895, 'learning_rate': 1.1643e-04, 'epoch': 1.72} 06/03/2024 11:23:32 - INFO - llamafactory.extras.callbacks - {'loss': 2.1633, 'learning_rate': 1.1613e-04, 'epoch': 1.72} 06/03/2024 11:23:38 - INFO - llamafactory.extras.callbacks - {'loss': 2.1786, 'learning_rate': 1.1582e-04, 'epoch': 1.72} 06/03/2024 11:23:44 - INFO - llamafactory.extras.callbacks - {'loss': 2.1119, 'learning_rate': 1.1552e-04, 'epoch': 1.72} 06/03/2024 11:23:50 - INFO - llamafactory.extras.callbacks - {'loss': 2.2628, 'learning_rate': 1.1521e-04, 'epoch': 1.72} 06/03/2024 11:23:56 - INFO - llamafactory.extras.callbacks - {'loss': 2.1769, 'learning_rate': 1.1491e-04, 'epoch': 1.73} 06/03/2024 11:24:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.2575, 'learning_rate': 1.1460e-04, 'epoch': 1.73} 06/03/2024 11:24:08 - INFO - llamafactory.extras.callbacks - {'loss': 2.2241, 'learning_rate': 1.1430e-04, 'epoch': 1.73} 06/03/2024 11:24:14 - INFO - llamafactory.extras.callbacks - {'loss': 2.2833, 'learning_rate': 1.1400e-04, 'epoch': 1.73} 06/03/2024 11:24:20 - INFO - llamafactory.extras.callbacks - {'loss': 2.3199, 'learning_rate': 1.1369e-04, 'epoch': 1.73} 06/03/2024 11:24:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.0978, 'learning_rate': 1.1339e-04, 'epoch': 1.74} 06/03/2024 11:24:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.0553, 'learning_rate': 1.1308e-04, 'epoch': 1.74} 06/03/2024 11:24:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.1411, 'learning_rate': 1.1278e-04, 'epoch': 1.74} 06/03/2024 11:24:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.2510, 'learning_rate': 1.1248e-04, 'epoch': 1.74} 06/03/2024 11:24:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.1400, 'learning_rate': 1.1218e-04, 'epoch': 1.74} 06/03/2024 11:24:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.1176, 'learning_rate': 1.1187e-04, 'epoch': 1.75} 06/03/2024 11:25:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.1246, 'learning_rate': 1.1157e-04, 'epoch': 1.75} 06/03/2024 11:25:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.2544, 'learning_rate': 
1.1127e-04, 'epoch': 1.75} 06/03/2024 11:25:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.0802, 'learning_rate': 1.1097e-04, 'epoch': 1.75} 06/03/2024 11:25:15 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4400 06/03/2024 11:25:15 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:25:15 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:25:15 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4400/tokenizer_config.json 06/03/2024 11:25:15 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4400/special_tokens_map.json 06/03/2024 11:25:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.0044, 'learning_rate': 1.1066e-04, 'epoch': 1.75} 06/03/2024 11:25:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.2617, 'learning_rate': 1.1036e-04, 'epoch': 1.76} 06/03/2024 11:25:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.2929, 'learning_rate': 1.1006e-04, 'epoch': 1.76} 06/03/2024 11:25:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.2589, 'learning_rate': 1.0976e-04, 'epoch': 1.76} 06/03/2024 11:25:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.2153, 'learning_rate': 1.0946e-04, 'epoch': 1.76} 06/03/2024 11:25:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.2352, 'learning_rate': 1.0916e-04, 'epoch': 1.76} 06/03/2024 11:25:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.2606, 'learning_rate': 1.0885e-04, 'epoch': 1.77} 06/03/2024 11:26:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.0726, 'learning_rate': 1.0855e-04, 'epoch': 1.77} 06/03/2024 11:26:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.1762, 'learning_rate': 1.0825e-04, 'epoch': 1.77} 06/03/2024 11:26:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.1124, 'learning_rate': 1.0795e-04, 'epoch': 1.77} 06/03/2024 11:26:21 - INFO - llamafactory.extras.callbacks - {'loss': 1.9115, 'learning_rate': 1.0765e-04, 'epoch': 1.77} 06/03/2024 11:26:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.1110, 'learning_rate': 1.0735e-04, 'epoch': 1.78} 06/03/2024 11:26:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.2250, 'learning_rate': 1.0705e-04, 'epoch': 1.78} 06/03/2024 11:26:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.2207, 'learning_rate': 1.0675e-04, 'epoch': 1.78} 06/03/2024 11:26:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.2278, 'learning_rate': 1.0645e-04, 'epoch': 1.78} 06/03/2024 11:26:51 - INFO - llamafactory.extras.callbacks 
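The learning_rate values have fallen smoothly from about 2.06e-04 at epoch 1.14 to about 1.11e-04 here, a curve consistent with cosine annealing rather than linear decay. A sketch that reproduces the logged schedule with the stock transformers helper; the 3e-4 peak rate, zero warmup, and 7,530 total optimization steps are inferred from the logged numbers, not stated in this excerpt:

    import torch
    from transformers import get_cosine_schedule_with_warmup

    probe = torch.nn.Linear(1, 1)  # dummy module; only its optimizer matters here
    optimizer = torch.optim.AdamW(probe.parameters(), lr=3e-4)  # assumed peak LR
    scheduler = get_cosine_schedule_with_warmup(
        optimizer, num_warmup_steps=0, num_training_steps=7530  # assumed totals
    )

    # PyTorch warns that optimizer.step() was never called; that is harmless
    # when only probing the schedule.
    for _ in range(4400):
        scheduler.step()
    print(scheduler.get_last_lr()[0])  # ~1.11e-04, near the records at checkpoint-4400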
- {'loss': 2.2037, 'learning_rate': 1.0615e-04, 'epoch': 1.78} 06/03/2024 11:26:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.1696, 'learning_rate': 1.0586e-04, 'epoch': 1.79} 06/03/2024 11:27:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.1788, 'learning_rate': 1.0556e-04, 'epoch': 1.79} 06/03/2024 11:27:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.1020, 'learning_rate': 1.0526e-04, 'epoch': 1.79} 06/03/2024 11:27:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.1638, 'learning_rate': 1.0496e-04, 'epoch': 1.79} 06/03/2024 11:27:15 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4500 06/03/2024 11:27:15 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:27:15 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:27:15 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4500/tokenizer_config.json 06/03/2024 11:27:15 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4500/special_tokens_map.json 06/03/2024 11:27:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.1953, 'learning_rate': 1.0466e-04, 'epoch': 1.79} 06/03/2024 11:27:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.0658, 'learning_rate': 1.0436e-04, 'epoch': 1.80} 06/03/2024 11:27:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.1306, 'learning_rate': 1.0406e-04, 'epoch': 1.80} 06/03/2024 11:27:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.1241, 'learning_rate': 1.0377e-04, 'epoch': 1.80} 06/03/2024 11:27:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.1246, 'learning_rate': 1.0347e-04, 'epoch': 1.80} 06/03/2024 11:27:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.1185, 'learning_rate': 1.0317e-04, 'epoch': 1.80} 06/03/2024 11:27:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.2005, 'learning_rate': 1.0287e-04, 'epoch': 1.81} 06/03/2024 11:28:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.2367, 'learning_rate': 1.0258e-04, 'epoch': 1.81} 06/03/2024 11:28:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.1091, 'learning_rate': 1.0228e-04, 'epoch': 1.81} 06/03/2024 11:28:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.2249, 'learning_rate': 1.0198e-04, 'epoch': 1.81} 06/03/2024 11:28:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.1306, 'learning_rate': 1.0169e-04, 'epoch': 1.81} 06/03/2024 11:28:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.2575, 'learning_rate': 1.0139e-04, 'epoch': 1.82} 06/03/2024 11:28:33 - 
INFO - llamafactory.extras.callbacks - {'loss': 2.2546, 'learning_rate': 1.0110e-04, 'epoch': 1.82} 06/03/2024 11:28:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.1074, 'learning_rate': 1.0080e-04, 'epoch': 1.82} 06/03/2024 11:28:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.2229, 'learning_rate': 1.0050e-04, 'epoch': 1.82} 06/03/2024 11:28:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.2268, 'learning_rate': 1.0021e-04, 'epoch': 1.82} 06/03/2024 11:28:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.0874, 'learning_rate': 9.9914e-05, 'epoch': 1.83} 06/03/2024 11:29:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.2942, 'learning_rate': 9.9619e-05, 'epoch': 1.83} 06/03/2024 11:29:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.3143, 'learning_rate': 9.9325e-05, 'epoch': 1.83} 06/03/2024 11:29:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.1610, 'learning_rate': 9.9030e-05, 'epoch': 1.83} 06/03/2024 11:29:15 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4600 06/03/2024 11:29:16 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:29:16 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:29:16 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4600/tokenizer_config.json 06/03/2024 11:29:16 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4600/special_tokens_map.json 06/03/2024 11:29:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.1210, 'learning_rate': 9.8736e-05, 'epoch': 1.83} 06/03/2024 11:29:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.1467, 'learning_rate': 9.8442e-05, 'epoch': 1.84} 06/03/2024 11:29:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.1369, 'learning_rate': 9.8148e-05, 'epoch': 1.84} 06/03/2024 11:29:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.2422, 'learning_rate': 9.7855e-05, 'epoch': 1.84} 06/03/2024 11:29:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.2922, 'learning_rate': 9.7562e-05, 'epoch': 1.84} 06/03/2024 11:29:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.1578, 'learning_rate': 9.7269e-05, 'epoch': 1.84} 06/03/2024 11:29:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.1355, 'learning_rate': 9.6976e-05, 'epoch': 1.85} 06/03/2024 11:30:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.1895, 'learning_rate': 9.6683e-05, 'epoch': 1.85} 06/03/2024 11:30:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.2200, 'learning_rate': 9.6391e-05, 
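The per-step callback entries above follow a fixed shape, so a saved console log is enough to recover the loss and learning-rate curves. A minimal parsing sketch; the regex and the `train.log` file name are illustrative assumptions, not part of LLaMA-Factory:

```python
import re

# Matches callback lines like:
# 06/03/2024 11:29:22 - INFO - llamafactory.extras.callbacks -
#   {'loss': 2.1210, 'learning_rate': 9.8736e-05, 'epoch': 1.83}
PATTERN = re.compile(
    r"\{'loss': (?P<loss>[\d.]+), "
    r"'learning_rate': (?P<lr>[\d.e+-]+), "
    r"'epoch': (?P<epoch>[\d.]+)\}"
)

def parse_log(path):
    """Yield (epoch, learning_rate, loss) tuples from a training log."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            m = PATTERN.search(line)
            if m:
                yield (float(m["epoch"]), float(m["lr"]), float(m["loss"]))

if __name__ == "__main__":
    records = list(parse_log("train.log"))  # hypothetical log file name
    if records:
        print(f"{len(records)} logged steps; last: {records[-1]}")
```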
06/03/2024 11:30:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.1724, 'learning_rate': 9.6099e-05, 'epoch': 1.85}
06/03/2024 11:30:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.1442, 'learning_rate': 9.5807e-05, 'epoch': 1.85}
06/03/2024 11:30:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.1636, 'learning_rate': 9.5515e-05, 'epoch': 1.86}
06/03/2024 11:30:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.2238, 'learning_rate': 9.5224e-05, 'epoch': 1.86}
06/03/2024 11:30:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.1910, 'learning_rate': 9.4933e-05, 'epoch': 1.86}
06/03/2024 11:30:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.2831, 'learning_rate': 9.4642e-05, 'epoch': 1.86}
06/03/2024 11:30:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.1966, 'learning_rate': 9.4351e-05, 'epoch': 1.86}
06/03/2024 11:30:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.1003, 'learning_rate': 9.4061e-05, 'epoch': 1.87}
06/03/2024 11:31:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.1290, 'learning_rate': 9.3770e-05, 'epoch': 1.87}
06/03/2024 11:31:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.1238, 'learning_rate': 9.3480e-05, 'epoch': 1.87}
06/03/2024 11:31:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.0219, 'learning_rate': 9.3191e-05, 'epoch': 1.87}
06/03/2024 11:31:16 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4700
06/03/2024 11:31:16 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json
06/03/2024 11:31:16 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4700/tokenizer_config.json
06/03/2024 11:31:16 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4700/special_tokens_map.json
06/03/2024 11:31:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.2963, 'learning_rate': 9.2901e-05, 'epoch': 1.87}
06/03/2024 11:31:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.0044, 'learning_rate': 9.2612e-05, 'epoch': 1.88}
06/03/2024 11:31:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.0720, 'learning_rate': 9.2323e-05, 'epoch': 1.88}
06/03/2024 11:31:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.1120, 'learning_rate': 9.2034e-05, 'epoch': 1.88}
06/03/2024 11:31:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.1151, 'learning_rate': 9.1746e-05, 'epoch': 1.88}
06/03/2024 11:31:53 - INFO - llamafactory.extras.callbacks - {'loss': 2.0835, 'learning_rate': 9.1458e-05, 'epoch': 1.88}
06/03/2024 11:31:59 - INFO - llamafactory.extras.callbacks - {'loss': 2.1682, 'learning_rate': 9.1170e-05, 'epoch': 1.89}
06/03/2024 11:32:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.1956, 'learning_rate': 9.0882e-05, 'epoch': 1.89}
06/03/2024 11:32:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.1155, 'learning_rate': 9.0594e-05, 'epoch': 1.89}
06/03/2024 11:32:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.0731, 'learning_rate': 9.0307e-05, 'epoch': 1.89}
06/03/2024 11:32:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.1500, 'learning_rate': 9.0020e-05, 'epoch': 1.89}
06/03/2024 11:32:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.0689, 'learning_rate': 8.9734e-05, 'epoch': 1.90}
06/03/2024 11:32:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.1428, 'learning_rate': 8.9447e-05, 'epoch': 1.90}
06/03/2024 11:32:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.1202, 'learning_rate': 8.9161e-05, 'epoch': 1.90}
06/03/2024 11:32:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.0323, 'learning_rate': 8.8875e-05, 'epoch': 1.90}
06/03/2024 11:32:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.1210, 'learning_rate': 8.8590e-05, 'epoch': 1.90}
06/03/2024 11:32:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.0498, 'learning_rate': 8.8304e-05, 'epoch': 1.91}
06/03/2024 11:33:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.1391, 'learning_rate': 8.8019e-05, 'epoch': 1.91}
06/03/2024 11:33:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.1965, 'learning_rate': 8.7734e-05, 'epoch': 1.91}
06/03/2024 11:33:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.0892, 'learning_rate': 8.7450e-05, 'epoch': 1.91}
06/03/2024 11:33:16 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4800
06/03/2024 11:33:16 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json
06/03/2024 11:33:16 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4800/tokenizer_config.json
06/03/2024 11:33:16 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4800/special_tokens_map.json
06/03/2024 11:33:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.2029, 'learning_rate': 8.7166e-05, 'epoch': 1.91}
06/03/2024 11:33:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.1529, 'learning_rate': 8.6882e-05, 'epoch': 1.92}
06/03/2024 11:33:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.1064, 'learning_rate': 8.6598e-05, 'epoch': 1.92}
06/03/2024 11:33:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.0913, 'learning_rate': 8.6314e-05, 'epoch': 1.92}
06/03/2024 11:33:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.1739, 'learning_rate': 8.6031e-05, 'epoch': 1.92}
06/03/2024 11:33:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.1532, 'learning_rate': 8.5748e-05, 'epoch': 1.92}
06/03/2024 11:33:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.0795, 'learning_rate': 8.5466e-05, 'epoch': 1.93}
06/03/2024 11:34:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.1881, 'learning_rate': 8.5183e-05, 'epoch': 1.93}
06/03/2024 11:34:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.0240, 'learning_rate': 8.4901e-05, 'epoch': 1.93}
06/03/2024 11:34:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.1415, 'learning_rate': 8.4620e-05, 'epoch': 1.93}
06/03/2024 11:34:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.0673, 'learning_rate': 8.4338e-05, 'epoch': 1.93}
06/03/2024 11:34:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.2054, 'learning_rate': 8.4057e-05, 'epoch': 1.94}
06/03/2024 11:34:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.0969, 'learning_rate': 8.3776e-05, 'epoch': 1.94}
06/03/2024 11:34:40 - INFO - llamafactory.extras.callbacks - {'loss': 2.1023, 'learning_rate': 8.3495e-05, 'epoch': 1.94}
06/03/2024 11:34:46 - INFO - llamafactory.extras.callbacks - {'loss': 2.2145, 'learning_rate': 8.3215e-05, 'epoch': 1.94}
06/03/2024 11:34:52 - INFO - llamafactory.extras.callbacks - {'loss': 2.1721, 'learning_rate': 8.2935e-05, 'epoch': 1.94}
06/03/2024 11:34:58 - INFO - llamafactory.extras.callbacks - {'loss': 2.1454, 'learning_rate': 8.2655e-05, 'epoch': 1.95}
06/03/2024 11:35:04 - INFO - llamafactory.extras.callbacks - {'loss': 2.1281, 'learning_rate': 8.2376e-05, 'epoch': 1.95}
06/03/2024 11:35:10 - INFO - llamafactory.extras.callbacks - {'loss': 2.2334, 'learning_rate': 8.2097e-05, 'epoch': 1.95}
06/03/2024 11:35:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.1161, 'learning_rate': 8.1818e-05, 'epoch': 1.95}
06/03/2024 11:35:15 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4900
06/03/2024 11:35:16 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json
06/03/2024 11:35:16 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4900/tokenizer_config.json
06/03/2024 11:35:16 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-4900/special_tokens_map.json
06/03/2024 11:35:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.0538, 'learning_rate': 8.1539e-05, 'epoch': 1.95}
06/03/2024 11:35:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.1474, 'learning_rate': 8.1261e-05, 'epoch': 1.96}
06/03/2024 11:35:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.0556, 'learning_rate': 8.0983e-05, 'epoch': 1.96}
06/03/2024 11:35:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.1850, 'learning_rate': 8.0705e-05, 'epoch': 1.96}
06/03/2024 11:35:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.0859, 'learning_rate': 8.0428e-05, 'epoch': 1.96}
06/03/2024 11:35:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.1597, 'learning_rate': 8.0151e-05, 'epoch': 1.96}
06/03/2024 11:35:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.0879, 'learning_rate': 7.9874e-05, 'epoch': 1.97}
06/03/2024 11:36:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.1786, 'learning_rate': 7.9653e-05, 'epoch': 1.97}
06/03/2024 11:36:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.2384, 'learning_rate': 7.9377e-05, 'epoch': 1.97}
06/03/2024 11:36:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.1620, 'learning_rate': 7.9101e-05, 'epoch': 1.97}
06/03/2024 11:36:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.0265, 'learning_rate': 7.8825e-05, 'epoch': 1.97}
06/03/2024 11:36:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.2106, 'learning_rate': 7.8550e-05, 'epoch': 1.98}
06/03/2024 11:36:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.0393, 'learning_rate': 7.8275e-05, 'epoch': 1.98}
06/03/2024 11:36:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.0138, 'learning_rate': 7.8000e-05, 'epoch': 1.98}
06/03/2024 11:36:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.0343, 'learning_rate': 7.7726e-05, 'epoch': 1.98}
06/03/2024 11:36:51 - INFO - llamafactory.extras.callbacks - {'loss': 2.2479, 'learning_rate': 7.7452e-05, 'epoch': 1.98}
06/03/2024 11:36:57 - INFO - llamafactory.extras.callbacks - {'loss': 2.2015, 'learning_rate': 7.7178e-05, 'epoch': 1.99}
06/03/2024 11:37:03 - INFO - llamafactory.extras.callbacks - {'loss': 2.0462, 'learning_rate': 7.6905e-05, 'epoch': 1.99}
06/03/2024 11:37:09 - INFO - llamafactory.extras.callbacks - {'loss': 2.1735, 'learning_rate': 7.6632e-05, 'epoch': 1.99}
06/03/2024 11:37:15 - INFO - llamafactory.extras.callbacks - {'loss': 2.1925, 'learning_rate': 7.6359e-05, 'epoch': 1.99}
06/03/2024 11:37:15 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5000
06/03/2024 11:37:15 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json
06/03/2024 11:37:15 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5000/tokenizer_config.json
06/03/2024 11:37:15 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5000/special_tokens_map.json
06/03/2024 11:37:21 - INFO - llamafactory.extras.callbacks - {'loss': 2.1225, 'learning_rate': 7.6087e-05, 'epoch': 1.99}
06/03/2024 11:37:27 - INFO - llamafactory.extras.callbacks - {'loss': 2.1722, 'learning_rate': 7.5814e-05, 'epoch': 2.00}
06/03/2024 11:37:33 - INFO - llamafactory.extras.callbacks - {'loss': 2.0786, 'learning_rate': 7.5543e-05, 'epoch': 2.00}
06/03/2024 11:37:39 - INFO - llamafactory.extras.callbacks - {'loss': 1.9824, 'learning_rate': 7.5271e-05, 'epoch': 2.00}
06/03/2024 11:37:45 - INFO - llamafactory.extras.callbacks - {'loss': 1.6428, 'learning_rate': 7.5000e-05, 'epoch': 2.00}
06/03/2024 11:37:51 - INFO - llamafactory.extras.callbacks - {'loss': 1.6547, 'learning_rate': 7.4729e-05, 'epoch': 2.00}
06/03/2024 11:37:57 - INFO - llamafactory.extras.callbacks - {'loss': 1.5545, 'learning_rate': 7.4459e-05, 'epoch': 2.01}
06/03/2024 11:38:03 - INFO - llamafactory.extras.callbacks - {'loss': 1.4708, 'learning_rate': 7.4189e-05, 'epoch': 2.01}
06/03/2024 11:38:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.4207, 'learning_rate': 7.3919e-05, 'epoch': 2.01}
06/03/2024 11:38:15 - INFO - llamafactory.extras.callbacks - {'loss': 1.5883, 'learning_rate': 7.3649e-05, 'epoch': 2.01}
06/03/2024 11:38:21 - INFO - llamafactory.extras.callbacks - {'loss': 1.3543, 'learning_rate': 7.3380e-05, 'epoch': 2.01}
06/03/2024 11:38:27 - INFO - llamafactory.extras.callbacks - {'loss': 1.5251, 'learning_rate': 7.3111e-05, 'epoch': 2.02}
06/03/2024 11:38:32 - INFO - llamafactory.extras.callbacks - {'loss': 1.4305, 'learning_rate': 7.2843e-05, 'epoch': 2.02}
06/03/2024 11:38:38 - INFO - llamafactory.extras.callbacks - {'loss': 1.4375, 'learning_rate': 7.2574e-05, 'epoch': 2.02}
06/03/2024 11:38:44 - INFO - llamafactory.extras.callbacks - {'loss': 1.5044, 'learning_rate': 7.2307e-05, 'epoch': 2.02}
06/03/2024 11:38:50 - INFO - llamafactory.extras.callbacks - {'loss': 1.4885, 'learning_rate': 7.2039e-05, 'epoch': 2.02}
06/03/2024 11:38:56 - INFO - llamafactory.extras.callbacks - {'loss': 1.4702, 'learning_rate': 7.1772e-05, 'epoch': 2.03}
06/03/2024 11:39:02 - INFO - llamafactory.extras.callbacks - {'loss': 1.4807, 'learning_rate': 7.1505e-05, 'epoch': 2.03}
06/03/2024 11:39:08 - INFO - llamafactory.extras.callbacks - {'loss': 1.3754, 'learning_rate': 7.1239e-05, 'epoch': 2.03}
06/03/2024 11:39:14 - INFO - llamafactory.extras.callbacks - {'loss': 1.3530, 'learning_rate': 7.0973e-05, 'epoch': 2.03}
"architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:39:14 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5100/tokenizer_config.json 06/03/2024 11:39:14 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5100/special_tokens_map.json 06/03/2024 11:39:20 - INFO - llamafactory.extras.callbacks - {'loss': 1.4694, 'learning_rate': 7.0707e-05, 'epoch': 2.03} 06/03/2024 11:39:26 - INFO - llamafactory.extras.callbacks - {'loss': 1.3246, 'learning_rate': 7.0441e-05, 'epoch': 2.04} 06/03/2024 11:39:32 - INFO - llamafactory.extras.callbacks - {'loss': 1.5211, 'learning_rate': 7.0176e-05, 'epoch': 2.04} 06/03/2024 11:39:38 - INFO - llamafactory.extras.callbacks - {'loss': 1.4369, 'learning_rate': 6.9912e-05, 'epoch': 2.04} 06/03/2024 11:39:44 - INFO - llamafactory.extras.callbacks - {'loss': 1.3727, 'learning_rate': 6.9647e-05, 'epoch': 2.04} 06/03/2024 11:39:50 - INFO - llamafactory.extras.callbacks - {'loss': 1.5538, 'learning_rate': 6.9383e-05, 'epoch': 2.04} 06/03/2024 11:39:56 - INFO - llamafactory.extras.callbacks - {'loss': 1.4582, 'learning_rate': 6.9119e-05, 'epoch': 2.05} 06/03/2024 11:40:02 - INFO - llamafactory.extras.callbacks - {'loss': 1.4463, 'learning_rate': 6.8856e-05, 'epoch': 2.05} 06/03/2024 11:40:08 - INFO - llamafactory.extras.callbacks - {'loss': 1.4782, 'learning_rate': 6.8593e-05, 'epoch': 2.05} 06/03/2024 11:40:14 - INFO - llamafactory.extras.callbacks - {'loss': 1.4206, 'learning_rate': 6.8330e-05, 'epoch': 2.05} 06/03/2024 11:40:20 - INFO - llamafactory.extras.callbacks - {'loss': 1.4499, 'learning_rate': 6.8068e-05, 'epoch': 2.05} 06/03/2024 11:40:26 - INFO - llamafactory.extras.callbacks - {'loss': 1.5276, 'learning_rate': 6.7806e-05, 'epoch': 2.06} 06/03/2024 11:40:32 - INFO - llamafactory.extras.callbacks - {'loss': 1.4228, 'learning_rate': 6.7545e-05, 'epoch': 2.06} 06/03/2024 11:40:38 - INFO - llamafactory.extras.callbacks - {'loss': 1.4222, 'learning_rate': 6.7283e-05, 'epoch': 2.06} 06/03/2024 11:40:45 - INFO - llamafactory.extras.callbacks - {'loss': 1.4701, 'learning_rate': 6.7023e-05, 'epoch': 2.06} 06/03/2024 11:40:50 - INFO - llamafactory.extras.callbacks - {'loss': 1.5687, 'learning_rate': 6.6762e-05, 'epoch': 2.06} 06/03/2024 11:40:56 - INFO - llamafactory.extras.callbacks - {'loss': 1.4237, 'learning_rate': 6.6502e-05, 'epoch': 2.07} 06/03/2024 11:41:02 - INFO - llamafactory.extras.callbacks - {'loss': 1.4510, 'learning_rate': 6.6242e-05, 'epoch': 2.07} 06/03/2024 11:41:08 - INFO - llamafactory.extras.callbacks - {'loss': 1.3937, 'learning_rate': 6.5983e-05, 'epoch': 2.07} 06/03/2024 11:41:14 - INFO - llamafactory.extras.callbacks - {'loss': 1.4379, 'learning_rate': 6.5724e-05, 'epoch': 2.07} 06/03/2024 11:41:14 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5200 
06/03/2024 11:41:14 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:41:14 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:41:14 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5200/tokenizer_config.json 06/03/2024 11:41:14 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5200/special_tokens_map.json 06/03/2024 11:41:21 - INFO - llamafactory.extras.callbacks - {'loss': 1.6361, 'learning_rate': 6.5465e-05, 'epoch': 2.07} 06/03/2024 11:41:27 - INFO - llamafactory.extras.callbacks - {'loss': 1.5065, 'learning_rate': 6.5207e-05, 'epoch': 2.07} 06/03/2024 11:41:33 - INFO - llamafactory.extras.callbacks - {'loss': 1.5345, 'learning_rate': 6.4949e-05, 'epoch': 2.08} 06/03/2024 11:41:39 - INFO - llamafactory.extras.callbacks - {'loss': 1.3541, 'learning_rate': 6.4691e-05, 'epoch': 2.08} 06/03/2024 11:41:45 - INFO - llamafactory.extras.callbacks - {'loss': 1.4141, 'learning_rate': 6.4434e-05, 'epoch': 2.08} 06/03/2024 11:41:51 - INFO - llamafactory.extras.callbacks - {'loss': 1.3367, 'learning_rate': 6.4177e-05, 'epoch': 2.08} 06/03/2024 11:41:57 - INFO - llamafactory.extras.callbacks - {'loss': 1.4730, 'learning_rate': 6.3921e-05, 'epoch': 2.08} 06/03/2024 11:42:03 - INFO - llamafactory.extras.callbacks - {'loss': 1.4581, 'learning_rate': 6.3665e-05, 'epoch': 2.09} 06/03/2024 11:42:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.4396, 'learning_rate': 6.3409e-05, 'epoch': 2.09} 06/03/2024 11:42:15 - INFO - llamafactory.extras.callbacks - {'loss': 1.4986, 'learning_rate': 6.3154e-05, 'epoch': 2.09} 06/03/2024 11:42:21 - INFO - llamafactory.extras.callbacks - {'loss': 1.3253, 'learning_rate': 6.2899e-05, 'epoch': 2.09} 06/03/2024 11:42:27 - INFO - llamafactory.extras.callbacks - {'loss': 1.4252, 'learning_rate': 6.2644e-05, 'epoch': 2.09} 06/03/2024 11:42:32 - INFO - llamafactory.extras.callbacks - {'loss': 1.3983, 'learning_rate': 6.2390e-05, 'epoch': 2.10} 06/03/2024 11:42:38 - INFO - llamafactory.extras.callbacks - {'loss': 1.4465, 'learning_rate': 6.2136e-05, 'epoch': 2.10} 06/03/2024 11:42:45 - INFO - llamafactory.extras.callbacks - {'loss': 1.4788, 'learning_rate': 6.1883e-05, 'epoch': 2.10} 06/03/2024 11:42:51 - INFO - llamafactory.extras.callbacks - {'loss': 1.5226, 'learning_rate': 6.1630e-05, 'epoch': 2.10} 06/03/2024 11:42:57 - INFO - llamafactory.extras.callbacks - {'loss': 1.4933, 'learning_rate': 6.1377e-05, 'epoch': 2.10} 06/03/2024 11:43:03 - INFO - llamafactory.extras.callbacks - {'loss': 1.4294, 'learning_rate': 
6.1125e-05, 'epoch': 2.11} 06/03/2024 11:43:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.4371, 'learning_rate': 6.0873e-05, 'epoch': 2.11} 06/03/2024 11:43:15 - INFO - llamafactory.extras.callbacks - {'loss': 1.4965, 'learning_rate': 6.0622e-05, 'epoch': 2.11} 06/03/2024 11:43:15 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5300 06/03/2024 11:43:15 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:43:15 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:43:15 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5300/tokenizer_config.json 06/03/2024 11:43:15 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5300/special_tokens_map.json 06/03/2024 11:43:21 - INFO - llamafactory.extras.callbacks - {'loss': 1.5039, 'learning_rate': 6.0370e-05, 'epoch': 2.11} 06/03/2024 11:43:27 - INFO - llamafactory.extras.callbacks - {'loss': 1.4780, 'learning_rate': 6.0120e-05, 'epoch': 2.11} 06/03/2024 11:43:33 - INFO - llamafactory.extras.callbacks - {'loss': 1.4578, 'learning_rate': 5.9869e-05, 'epoch': 2.12} 06/03/2024 11:43:39 - INFO - llamafactory.extras.callbacks - {'loss': 1.3406, 'learning_rate': 5.9619e-05, 'epoch': 2.12} 06/03/2024 11:43:45 - INFO - llamafactory.extras.callbacks - {'loss': 1.3709, 'learning_rate': 5.9370e-05, 'epoch': 2.12} 06/03/2024 11:43:51 - INFO - llamafactory.extras.callbacks - {'loss': 1.3714, 'learning_rate': 5.9121e-05, 'epoch': 2.12} 06/03/2024 11:43:57 - INFO - llamafactory.extras.callbacks - {'loss': 1.3924, 'learning_rate': 5.8872e-05, 'epoch': 2.12} 06/03/2024 11:44:03 - INFO - llamafactory.extras.callbacks - {'loss': 1.5259, 'learning_rate': 5.8624e-05, 'epoch': 2.13} 06/03/2024 11:44:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.4742, 'learning_rate': 5.8376e-05, 'epoch': 2.13} 06/03/2024 11:44:15 - INFO - llamafactory.extras.callbacks - {'loss': 1.3870, 'learning_rate': 5.8128e-05, 'epoch': 2.13} 06/03/2024 11:44:21 - INFO - llamafactory.extras.callbacks - {'loss': 1.4379, 'learning_rate': 5.7881e-05, 'epoch': 2.13} 06/03/2024 11:44:27 - INFO - llamafactory.extras.callbacks - {'loss': 1.4462, 'learning_rate': 5.7634e-05, 'epoch': 2.13} 06/03/2024 11:44:33 - INFO - llamafactory.extras.callbacks - {'loss': 1.4794, 'learning_rate': 5.7388e-05, 'epoch': 2.14} 06/03/2024 11:44:39 - INFO - llamafactory.extras.callbacks - {'loss': 1.4962, 'learning_rate': 5.7142e-05, 'epoch': 2.14} 06/03/2024 11:44:45 - INFO - llamafactory.extras.callbacks 
- {'loss': 1.3646, 'learning_rate': 5.6897e-05, 'epoch': 2.14} 06/03/2024 11:44:51 - INFO - llamafactory.extras.callbacks - {'loss': 1.4707, 'learning_rate': 5.6651e-05, 'epoch': 2.14} 06/03/2024 11:44:57 - INFO - llamafactory.extras.callbacks - {'loss': 1.4804, 'learning_rate': 5.6407e-05, 'epoch': 2.14} 06/03/2024 11:45:03 - INFO - llamafactory.extras.callbacks - {'loss': 1.4235, 'learning_rate': 5.6162e-05, 'epoch': 2.15} 06/03/2024 11:45:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.4927, 'learning_rate': 5.5918e-05, 'epoch': 2.15} 06/03/2024 11:45:15 - INFO - llamafactory.extras.callbacks - {'loss': 1.4286, 'learning_rate': 5.5675e-05, 'epoch': 2.15} 06/03/2024 11:45:15 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5400 06/03/2024 11:45:15 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:45:15 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:45:15 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5400/tokenizer_config.json 06/03/2024 11:45:15 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5400/special_tokens_map.json 06/03/2024 11:45:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.3760, 'learning_rate': 5.5432e-05, 'epoch': 2.15} 06/03/2024 11:45:27 - INFO - llamafactory.extras.callbacks - {'loss': 1.4612, 'learning_rate': 5.5189e-05, 'epoch': 2.15} 06/03/2024 11:45:33 - INFO - llamafactory.extras.callbacks - {'loss': 1.4151, 'learning_rate': 5.4947e-05, 'epoch': 2.16} 06/03/2024 11:45:39 - INFO - llamafactory.extras.callbacks - {'loss': 1.3365, 'learning_rate': 5.4705e-05, 'epoch': 2.16} 06/03/2024 11:45:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.3991, 'learning_rate': 5.4464e-05, 'epoch': 2.16} 06/03/2024 11:45:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.5015, 'learning_rate': 5.4223e-05, 'epoch': 2.16} 06/03/2024 11:45:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.5070, 'learning_rate': 5.3982e-05, 'epoch': 2.16} 06/03/2024 11:46:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.4358, 'learning_rate': 5.3742e-05, 'epoch': 2.17} 06/03/2024 11:46:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.4419, 'learning_rate': 5.3502e-05, 'epoch': 2.17} 06/03/2024 11:46:15 - INFO - llamafactory.extras.callbacks - {'loss': 1.3984, 'learning_rate': 5.3263e-05, 'epoch': 2.17} 06/03/2024 11:46:21 - INFO - llamafactory.extras.callbacks - {'loss': 1.4324, 'learning_rate': 5.3024e-05, 'epoch': 2.17} 06/03/2024 11:46:27 - 
INFO - llamafactory.extras.callbacks - {'loss': 1.3704, 'learning_rate': 5.2785e-05, 'epoch': 2.17} 06/03/2024 11:46:33 - INFO - llamafactory.extras.callbacks - {'loss': 1.4444, 'learning_rate': 5.2547e-05, 'epoch': 2.18} 06/03/2024 11:46:39 - INFO - llamafactory.extras.callbacks - {'loss': 1.4017, 'learning_rate': 5.2309e-05, 'epoch': 2.18} 06/03/2024 11:46:45 - INFO - llamafactory.extras.callbacks - {'loss': 1.4759, 'learning_rate': 5.2072e-05, 'epoch': 2.18} 06/03/2024 11:46:51 - INFO - llamafactory.extras.callbacks - {'loss': 1.4067, 'learning_rate': 5.1835e-05, 'epoch': 2.18} 06/03/2024 11:46:57 - INFO - llamafactory.extras.callbacks - {'loss': 1.3564, 'learning_rate': 5.1599e-05, 'epoch': 2.18} 06/03/2024 11:47:03 - INFO - llamafactory.extras.callbacks - {'loss': 1.4527, 'learning_rate': 5.1363e-05, 'epoch': 2.19} 06/03/2024 11:47:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.4816, 'learning_rate': 5.1128e-05, 'epoch': 2.19} 06/03/2024 11:47:15 - INFO - llamafactory.extras.callbacks - {'loss': 1.5056, 'learning_rate': 5.0892e-05, 'epoch': 2.19} 06/03/2024 11:47:15 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5500 06/03/2024 11:47:15 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:47:15 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:47:15 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5500/tokenizer_config.json 06/03/2024 11:47:15 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5500/special_tokens_map.json 06/03/2024 11:47:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.5091, 'learning_rate': 5.0658e-05, 'epoch': 2.19} 06/03/2024 11:47:27 - INFO - llamafactory.extras.callbacks - {'loss': 1.4491, 'learning_rate': 5.0424e-05, 'epoch': 2.19} 06/03/2024 11:47:33 - INFO - llamafactory.extras.callbacks - {'loss': 1.4854, 'learning_rate': 5.0190e-05, 'epoch': 2.20} 06/03/2024 11:47:39 - INFO - llamafactory.extras.callbacks - {'loss': 1.3953, 'learning_rate': 4.9956e-05, 'epoch': 2.20} 06/03/2024 11:47:45 - INFO - llamafactory.extras.callbacks - {'loss': 1.3548, 'learning_rate': 4.9723e-05, 'epoch': 2.20} 06/03/2024 11:47:51 - INFO - llamafactory.extras.callbacks - {'loss': 1.4881, 'learning_rate': 4.9491e-05, 'epoch': 2.20} 06/03/2024 11:47:57 - INFO - llamafactory.extras.callbacks - {'loss': 1.3972, 'learning_rate': 4.9259e-05, 'epoch': 2.20} 06/03/2024 11:48:03 - INFO - llamafactory.extras.callbacks - {'loss': 1.4337, 'learning_rate': 4.9027e-05, 
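Each of the checkpoint-NNNN directories being written every 100 steps holds LoRA adapter weights rather than a full copy of the 10.8B base model. Assuming a standard PEFT adapter layout (adapter_config.json plus the adapter weights), which is what LLaMA-Factory's LoRA training saves, a checkpoint such as checkpoint-5500 above could be attached to the base model roughly as follows; this is a sketch, not a command taken from this run:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "yanolja/EEVE-Korean-Instruct-10.8B-v1.0"
ADAPTER = "saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5500"

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA adapter saved at step 5500; merge_and_unload() folds the
# low-rank deltas back into the base weights for adapter-free inference.
model = PeftModel.from_pretrained(model, ADAPTER)
model = model.merge_and_unload()
```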
06/03/2024 11:48:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.3513, 'learning_rate': 4.8796e-05, 'epoch': 2.21}
06/03/2024 11:48:15 - INFO - llamafactory.extras.callbacks - {'loss': 1.5526, 'learning_rate': 4.8565e-05, 'epoch': 2.21}
06/03/2024 11:48:21 - INFO - llamafactory.extras.callbacks - {'loss': 1.3557, 'learning_rate': 4.8335e-05, 'epoch': 2.21}
06/03/2024 11:48:27 - INFO - llamafactory.extras.callbacks - {'loss': 1.4205, 'learning_rate': 4.8105e-05, 'epoch': 2.21}
06/03/2024 11:48:33 - INFO - llamafactory.extras.callbacks - {'loss': 1.4186, 'learning_rate': 4.7876e-05, 'epoch': 2.22}
06/03/2024 11:48:39 - INFO - llamafactory.extras.callbacks - {'loss': 1.4787, 'learning_rate': 4.7647e-05, 'epoch': 2.22}
06/03/2024 11:48:45 - INFO - llamafactory.extras.callbacks - {'loss': 1.4171, 'learning_rate': 4.7418e-05, 'epoch': 2.22}
06/03/2024 11:48:51 - INFO - llamafactory.extras.callbacks - {'loss': 1.4746, 'learning_rate': 4.7190e-05, 'epoch': 2.22}
06/03/2024 11:48:57 - INFO - llamafactory.extras.callbacks - {'loss': 1.4176, 'learning_rate': 4.6963e-05, 'epoch': 2.22}
06/03/2024 11:49:03 - INFO - llamafactory.extras.callbacks - {'loss': 1.3984, 'learning_rate': 4.6735e-05, 'epoch': 2.23}
06/03/2024 11:49:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.4745, 'learning_rate': 4.6509e-05, 'epoch': 2.23}
06/03/2024 11:49:15 - INFO - llamafactory.extras.callbacks - {'loss': 1.3082, 'learning_rate': 4.6282e-05, 'epoch': 2.23}
06/03/2024 11:49:15 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5600
06/03/2024 11:49:15 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json
06/03/2024 11:49:15 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5600/tokenizer_config.json
06/03/2024 11:49:15 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5600/special_tokens_map.json
06/03/2024 11:49:21 - INFO - llamafactory.extras.callbacks - {'loss': 1.3744, 'learning_rate': 4.6057e-05, 'epoch': 2.23}
06/03/2024 11:49:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.4119, 'learning_rate': 4.5831e-05, 'epoch': 2.23}
06/03/2024 11:49:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.4283, 'learning_rate': 4.5606e-05, 'epoch': 2.24}
06/03/2024 11:49:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.4476, 'learning_rate': 4.5382e-05, 'epoch': 2.24}
06/03/2024 11:49:45 - INFO - llamafactory.extras.callbacks - {'loss': 1.4383, 'learning_rate': 4.5158e-05, 'epoch': 2.24}
06/03/2024 11:49:51 - INFO - llamafactory.extras.callbacks - {'loss': 1.4488, 'learning_rate': 4.4934e-05, 'epoch': 2.24}
06/03/2024 11:49:57 - INFO - llamafactory.extras.callbacks - {'loss': 1.4049, 'learning_rate': 4.4711e-05, 'epoch': 2.24}
06/03/2024 11:50:03 - INFO - llamafactory.extras.callbacks - {'loss': 1.3719, 'learning_rate': 4.4489e-05, 'epoch': 2.25}
06/03/2024 11:50:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.4431, 'learning_rate': 4.4266e-05, 'epoch': 2.25}
06/03/2024 11:50:15 - INFO - llamafactory.extras.callbacks - {'loss': 1.4635, 'learning_rate': 4.4045e-05, 'epoch': 2.25}
06/03/2024 11:50:21 - INFO - llamafactory.extras.callbacks - {'loss': 1.3857, 'learning_rate': 4.3823e-05, 'epoch': 2.25}
06/03/2024 11:50:27 - INFO - llamafactory.extras.callbacks - {'loss': 1.4411, 'learning_rate': 4.3603e-05, 'epoch': 2.25}
06/03/2024 11:50:33 - INFO - llamafactory.extras.callbacks - {'loss': 1.4889, 'learning_rate': 4.3382e-05, 'epoch': 2.26}
06/03/2024 11:50:39 - INFO - llamafactory.extras.callbacks - {'loss': 1.3643, 'learning_rate': 4.3162e-05, 'epoch': 2.26}
06/03/2024 11:50:45 - INFO - llamafactory.extras.callbacks - {'loss': 1.5660, 'learning_rate': 4.2943e-05, 'epoch': 2.26}
06/03/2024 11:50:51 - INFO - llamafactory.extras.callbacks - {'loss': 1.2933, 'learning_rate': 4.2724e-05, 'epoch': 2.26}
06/03/2024 11:50:57 - INFO - llamafactory.extras.callbacks - {'loss': 1.4204, 'learning_rate': 4.2506e-05, 'epoch': 2.26}
06/03/2024 11:51:03 - INFO - llamafactory.extras.callbacks - {'loss': 1.3884, 'learning_rate': 4.2288e-05, 'epoch': 2.27}
06/03/2024 11:51:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.4120, 'learning_rate': 4.2070e-05, 'epoch': 2.27}
06/03/2024 11:51:15 - INFO - llamafactory.extras.callbacks - {'loss': 1.5060, 'learning_rate': 4.1853e-05, 'epoch': 2.27}
06/03/2024 11:51:15 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5700
06/03/2024 11:51:15 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json
06/03/2024 11:51:15 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5700/tokenizer_config.json
06/03/2024 11:51:15 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5700/special_tokens_map.json
06/03/2024 11:51:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.4290, 'learning_rate': 4.1636e-05, 'epoch': 2.27}
06/03/2024 11:51:27 - INFO - llamafactory.extras.callbacks - {'loss': 1.4705, 'learning_rate': 4.1420e-05, 'epoch': 2.27}
06/03/2024 11:51:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.4002, 'learning_rate': 4.1205e-05, 'epoch': 2.28}
06/03/2024 11:51:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.3614, 'learning_rate': 4.0989e-05, 'epoch': 2.28}
06/03/2024 11:51:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.4712, 'learning_rate': 4.0775e-05, 'epoch': 2.28}
06/03/2024 11:51:51 - INFO - llamafactory.extras.callbacks - {'loss': 1.3980, 'learning_rate': 4.0561e-05, 'epoch': 2.28}
06/03/2024 11:51:57 - INFO - llamafactory.extras.callbacks - {'loss': 1.4639, 'learning_rate': 4.0347e-05, 'epoch': 2.28}
06/03/2024 11:52:03 - INFO - llamafactory.extras.callbacks - {'loss': 1.4031, 'learning_rate': 4.0133e-05, 'epoch': 2.29}
06/03/2024 11:52:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.4173, 'learning_rate': 3.9921e-05, 'epoch': 2.29}
06/03/2024 11:52:15 - INFO - llamafactory.extras.callbacks - {'loss': 1.3303, 'learning_rate': 3.9708e-05, 'epoch': 2.29}
06/03/2024 11:52:21 - INFO - llamafactory.extras.callbacks - {'loss': 1.4079, 'learning_rate': 3.9497e-05, 'epoch': 2.29}
06/03/2024 11:52:27 - INFO - llamafactory.extras.callbacks - {'loss': 1.2561, 'learning_rate': 3.9285e-05, 'epoch': 2.29}
06/03/2024 11:52:33 - INFO - llamafactory.extras.callbacks - {'loss': 1.3237, 'learning_rate': 3.9074e-05, 'epoch': 2.30}
06/03/2024 11:52:39 - INFO - llamafactory.extras.callbacks - {'loss': 1.3766, 'learning_rate': 3.8864e-05, 'epoch': 2.30}
06/03/2024 11:52:45 - INFO - llamafactory.extras.callbacks - {'loss': 1.4697, 'learning_rate': 3.8654e-05, 'epoch': 2.30}
06/03/2024 11:52:51 - INFO - llamafactory.extras.callbacks - {'loss': 1.4737, 'learning_rate': 3.8445e-05, 'epoch': 2.30}
06/03/2024 11:52:57 - INFO - llamafactory.extras.callbacks - {'loss': 1.4341, 'learning_rate': 3.8236e-05, 'epoch': 2.30}
06/03/2024 11:53:03 - INFO - llamafactory.extras.callbacks - {'loss': 1.3941, 'learning_rate': 3.8027e-05, 'epoch': 2.31}
06/03/2024 11:53:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.4962, 'learning_rate': 3.7819e-05, 'epoch': 2.31}
06/03/2024 11:53:15 - INFO - llamafactory.extras.callbacks - {'loss': 1.3548, 'learning_rate': 3.7612e-05, 'epoch': 2.31}
06/03/2024 11:53:15 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5800
06/03/2024 11:53:15 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json
06/03/2024 11:53:15 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5800/tokenizer_config.json
06/03/2024 11:53:15 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5800/special_tokens_map.json
06/03/2024 11:53:21 - INFO - llamafactory.extras.callbacks - {'loss': 1.3996, 'learning_rate': 3.7405e-05, 'epoch': 2.31}
06/03/2024 11:53:27 - INFO - llamafactory.extras.callbacks - {'loss': 1.2514, 'learning_rate': 3.7198e-05, 'epoch': 2.31}
06/03/2024 11:53:33 - INFO - llamafactory.extras.callbacks - {'loss': 1.5508, 'learning_rate': 3.6992e-05, 'epoch': 2.32}
06/03/2024 11:53:39 - INFO - llamafactory.extras.callbacks - {'loss': 1.4132, 'learning_rate': 3.6787e-05, 'epoch': 2.32}
06/03/2024 11:53:45 - INFO - llamafactory.extras.callbacks - {'loss': 1.4769, 'learning_rate': 3.6582e-05, 'epoch': 2.32}
06/03/2024 11:53:51 - INFO - llamafactory.extras.callbacks - {'loss': 1.4430, 'learning_rate': 3.6377e-05, 'epoch': 2.32}
06/03/2024 11:53:57 - INFO - llamafactory.extras.callbacks - {'loss': 1.3258, 'learning_rate': 3.6173e-05, 'epoch': 2.32}
06/03/2024 11:54:03 - INFO - llamafactory.extras.callbacks - {'loss': 1.4307, 'learning_rate': 3.5970e-05, 'epoch': 2.33}
06/03/2024 11:54:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.3226, 'learning_rate': 3.5767e-05, 'epoch': 2.33}
06/03/2024 11:54:15 - INFO - llamafactory.extras.callbacks - {'loss': 1.5756, 'learning_rate': 3.5564e-05, 'epoch': 2.33}
06/03/2024 11:54:21 - INFO - llamafactory.extras.callbacks - {'loss': 1.4443, 'learning_rate': 3.5362e-05, 'epoch': 2.33}
06/03/2024 11:54:27 - INFO - llamafactory.extras.callbacks - {'loss': 1.3909, 'learning_rate': 3.5160e-05, 'epoch': 2.33}
06/03/2024 11:54:33 - INFO - llamafactory.extras.callbacks - {'loss': 1.4784, 'learning_rate': 3.4959e-05, 'epoch': 2.34}
06/03/2024 11:54:39 - INFO - llamafactory.extras.callbacks - {'loss': 1.4363, 'learning_rate': 3.4759e-05, 'epoch': 2.34}
06/03/2024 11:54:45 - INFO - llamafactory.extras.callbacks - {'loss': 1.4029, 'learning_rate': 3.4559e-05, 'epoch': 2.34}
06/03/2024 11:54:51 - INFO - llamafactory.extras.callbacks - {'loss': 1.3896, 'learning_rate': 3.4359e-05, 'epoch': 2.34}
06/03/2024 11:54:57 - INFO - llamafactory.extras.callbacks - {'loss': 1.4431, 'learning_rate': 3.4160e-05, 'epoch': 2.34}
06/03/2024 11:55:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.4417, 'learning_rate': 3.3962e-05, 'epoch': 2.35}
06/03/2024 11:55:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.3301, 'learning_rate': 3.3764e-05, 'epoch': 2.35}
06/03/2024 11:55:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.3286, 'learning_rate': 3.3566e-05, 'epoch': 2.35}
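One feature of the numbers above worth flagging: the logged loss sits around 2.0 to 2.3 while the epoch counter reads 1.7x to 1.9x, then drops abruptly to roughly 1.3 to 1.6 as soon as the counter crosses 2.00. A step change located exactly at an epoch boundary usually reflects the model re-encountering examples it has already fit once, so it is better read with some caution than as a genuine generalization gain. A small sketch that makes the per-epoch averages explicit, reusing the (epoch, lr, loss) tuples from the parser sketched earlier:

```python
from collections import defaultdict
from statistics import mean

def epoch_means(records):
    """Average logged loss per whole epoch; records = (epoch, lr, loss) tuples."""
    buckets = defaultdict(list)
    for epoch, _lr, loss in records:
        buckets[int(epoch)].append(loss)
    return {k: mean(v) for k, v in sorted(buckets.items())}

# A few points transcribed from the log, either side of the 2.00 boundary
sample = [
    (1.98, 7.8275e-05, 2.0393),
    (1.99, 7.6359e-05, 2.1925),
    (2.00, 7.5000e-05, 1.6428),
    (2.01, 7.4189e-05, 1.4708),
]
print(epoch_means(sample))  # -> {1: ~2.12, 2: ~1.56}
```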
"llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 11:55:16 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5900/tokenizer_config.json 06/03/2024 11:55:16 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-5900/special_tokens_map.json 06/03/2024 11:55:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.3933, 'learning_rate': 3.3369e-05, 'epoch': 2.35} 06/03/2024 11:55:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.4248, 'learning_rate': 3.3173e-05, 'epoch': 2.35} 06/03/2024 11:55:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.3682, 'learning_rate': 3.2977e-05, 'epoch': 2.36} 06/03/2024 11:55:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.4050, 'learning_rate': 3.2781e-05, 'epoch': 2.36} 06/03/2024 11:55:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.4808, 'learning_rate': 3.2586e-05, 'epoch': 2.36} 06/03/2024 11:55:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.4218, 'learning_rate': 3.2392e-05, 'epoch': 2.36} 06/03/2024 11:55:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.5413, 'learning_rate': 3.2198e-05, 'epoch': 2.36} 06/03/2024 11:56:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.4296, 'learning_rate': 3.2004e-05, 'epoch': 2.37} 06/03/2024 11:56:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.3827, 'learning_rate': 3.1811e-05, 'epoch': 2.37} 06/03/2024 11:56:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.5233, 'learning_rate': 3.1619e-05, 'epoch': 2.37} 06/03/2024 11:56:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.3816, 'learning_rate': 3.1427e-05, 'epoch': 2.37} 06/03/2024 11:56:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.3521, 'learning_rate': 3.1236e-05, 'epoch': 2.37} 06/03/2024 11:56:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.4244, 'learning_rate': 3.1045e-05, 'epoch': 2.38} 06/03/2024 11:56:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.3821, 'learning_rate': 3.0854e-05, 'epoch': 2.38} 06/03/2024 11:56:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.3369, 'learning_rate': 3.0664e-05, 'epoch': 2.38} 06/03/2024 11:56:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.4149, 'learning_rate': 3.0475e-05, 'epoch': 2.38} 06/03/2024 11:56:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.3382, 'learning_rate': 3.0286e-05, 'epoch': 2.38} 06/03/2024 11:57:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.4529, 'learning_rate': 3.0098e-05, 'epoch': 2.39} 06/03/2024 11:57:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.3111, 'learning_rate': 2.9910e-05, 'epoch': 2.39} 06/03/2024 11:57:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.4022, 'learning_rate': 2.9723e-05, 'epoch': 2.39} 06/03/2024 11:57:16 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6000 06/03/2024 11:57:16 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 11:57:16 - INFO - 
06/03/2024 11:57:16 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6000
06/03/2024 11:57:16 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6000/tokenizer_config.json
06/03/2024 11:57:16 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6000/special_tokens_map.json
06/03/2024 11:57:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.4136, 'learning_rate': 2.9536e-05, 'epoch': 2.39}
06/03/2024 11:57:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.4527, 'learning_rate': 2.9350e-05, 'epoch': 2.39}
06/03/2024 11:57:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.3211, 'learning_rate': 2.9165e-05, 'epoch': 2.40}
06/03/2024 11:57:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.4453, 'learning_rate': 2.8979e-05, 'epoch': 2.40}
06/03/2024 11:57:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.3070, 'learning_rate': 2.8795e-05, 'epoch': 2.40}
06/03/2024 11:57:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.3796, 'learning_rate': 2.8611e-05, 'epoch': 2.40}
06/03/2024 11:57:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.3723, 'learning_rate': 2.8427e-05, 'epoch': 2.40}
06/03/2024 11:58:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.6131, 'learning_rate': 2.8244e-05, 'epoch': 2.41}
06/03/2024 11:58:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.4068, 'learning_rate': 2.8062e-05, 'epoch': 2.41}
06/03/2024 11:58:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.5127, 'learning_rate': 2.7880e-05, 'epoch': 2.41}
06/03/2024 11:58:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.3749, 'learning_rate': 2.7698e-05, 'epoch': 2.41}
06/03/2024 11:58:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.4688, 'learning_rate': 2.7517e-05, 'epoch': 2.41}
06/03/2024 11:58:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.4385, 'learning_rate': 2.7337e-05, 'epoch': 2.42}
06/03/2024 11:58:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.3846, 'learning_rate': 2.7157e-05, 'epoch': 2.42}
06/03/2024 11:58:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.4896, 'learning_rate': 2.6978e-05, 'epoch': 2.42}
06/03/2024 11:58:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.3961, 'learning_rate': 2.6799e-05, 'epoch': 2.42}
06/03/2024 11:58:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.3814, 'learning_rate': 2.6621e-05, 'epoch': 2.42}
06/03/2024 11:59:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.4310, 'learning_rate': 2.6443e-05, 'epoch': 2.43}
06/03/2024 11:59:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.3264, 'learning_rate': 2.6266e-05, 'epoch': 2.43}
06/03/2024 11:59:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.5530, 'learning_rate': 2.6089e-05, 'epoch': 2.43}
06/03/2024 11:59:16 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6100
06/03/2024 11:59:16 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6100/tokenizer_config.json
06/03/2024 11:59:16 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6100/special_tokens_map.json
06/03/2024 11:59:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.4004, 'learning_rate': 2.5913e-05, 'epoch': 2.43}
06/03/2024 11:59:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.4310, 'learning_rate': 2.5738e-05, 'epoch': 2.43}
06/03/2024 11:59:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.3365, 'learning_rate': 2.5563e-05, 'epoch': 2.44}
06/03/2024 11:59:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.3417, 'learning_rate': 2.5388e-05, 'epoch': 2.44}
06/03/2024 11:59:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.3847, 'learning_rate': 2.5214e-05, 'epoch': 2.44}
06/03/2024 11:59:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.4237, 'learning_rate': 2.5041e-05, 'epoch': 2.44}
06/03/2024 11:59:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.4874, 'learning_rate': 2.4868e-05, 'epoch': 2.44}
06/03/2024 12:00:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.3981, 'learning_rate': 2.4696e-05, 'epoch': 2.45}
06/03/2024 12:00:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.6551, 'learning_rate': 2.4524e-05, 'epoch': 2.45}
06/03/2024 12:00:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.3940, 'learning_rate': 2.4353e-05, 'epoch': 2.45}
06/03/2024 12:00:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.3926, 'learning_rate': 2.4182e-05, 'epoch': 2.45}
06/03/2024 12:00:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.3861, 'learning_rate': 2.4012e-05, 'epoch': 2.45}
06/03/2024 12:00:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.2794, 'learning_rate': 2.3843e-05, 'epoch': 2.46}
06/03/2024 12:00:39 - INFO - llamafactory.extras.callbacks - {'loss': 1.3764, 'learning_rate': 2.3674e-05, 'epoch': 2.46}
06/03/2024 12:00:45 - INFO - llamafactory.extras.callbacks - {'loss': 1.3863, 'learning_rate': 2.3505e-05, 'epoch': 2.46}
06/03/2024 12:00:51 - INFO - llamafactory.extras.callbacks - {'loss': 1.4574, 'learning_rate': 2.3337e-05, 'epoch': 2.46}
06/03/2024 12:00:57 - INFO - llamafactory.extras.callbacks - {'loss': 1.4165, 'learning_rate': 2.3170e-05, 'epoch': 2.46}
06/03/2024 12:01:03 - INFO - llamafactory.extras.callbacks - {'loss': 1.4666, 'learning_rate': 2.3003e-05, 'epoch': 2.47}
06/03/2024 12:01:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.3601, 'learning_rate': 2.2837e-05, 'epoch': 2.47}
06/03/2024 12:01:15 - INFO - llamafactory.extras.callbacks - {'loss': 1.3728, 'learning_rate': 2.2671e-05, 'epoch': 2.47}
06/03/2024 12:01:15 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6200
06/03/2024 12:01:16 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6200/tokenizer_config.json
06/03/2024 12:01:16 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6200/special_tokens_map.json
06/03/2024 12:01:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.3483, 'learning_rate': 2.2506e-05, 'epoch': 2.47}
06/03/2024 12:01:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.3469, 'learning_rate': 2.2342e-05, 'epoch': 2.47}
06/03/2024 12:01:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.4293, 'learning_rate': 2.2178e-05, 'epoch': 2.48}
06/03/2024 12:01:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.4135, 'learning_rate': 2.2014e-05, 'epoch': 2.48}
06/03/2024 12:01:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.3905, 'learning_rate': 2.1851e-05, 'epoch': 2.48}
06/03/2024 12:01:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.4851, 'learning_rate': 2.1689e-05, 'epoch': 2.48}
06/03/2024 12:01:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.4202, 'learning_rate': 2.1527e-05, 'epoch': 2.48}
06/03/2024 12:02:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.3384, 'learning_rate': 2.1366e-05, 'epoch': 2.49}
06/03/2024 12:02:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.3893, 'learning_rate': 2.1205e-05, 'epoch': 2.49}
06/03/2024 12:02:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.4976, 'learning_rate': 2.1045e-05, 'epoch': 2.49}
06/03/2024 12:02:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.3458, 'learning_rate': 2.0886e-05, 'epoch': 2.49}
06/03/2024 12:02:27 - INFO - llamafactory.extras.callbacks - {'loss': 1.4209, 'learning_rate': 2.0727e-05, 'epoch': 2.49}
06/03/2024 12:02:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.4830, 'learning_rate': 2.0568e-05, 'epoch': 2.50}
06/03/2024 12:02:39 - INFO - llamafactory.extras.callbacks - {'loss': 1.3673, 'learning_rate': 2.0410e-05, 'epoch': 2.50}
06/03/2024 12:02:45 - INFO - llamafactory.extras.callbacks - {'loss': 1.3931, 'learning_rate': 2.0253e-05, 'epoch': 2.50}
06/03/2024 12:02:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.4295, 'learning_rate': 2.0096e-05, 'epoch': 2.50}
06/03/2024 12:02:57 - INFO - llamafactory.extras.callbacks - {'loss': 1.2439, 'learning_rate': 1.9940e-05, 'epoch': 2.50}
06/03/2024 12:03:03 - INFO - llamafactory.extras.callbacks - {'loss': 1.3480, 'learning_rate': 1.9784e-05, 'epoch': 2.51}
06/03/2024 12:03:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.4389, 'learning_rate': 1.9629e-05, 'epoch': 2.51}
06/03/2024 12:03:15 - INFO - llamafactory.extras.callbacks - {'loss': 1.3658, 'learning_rate': 1.9475e-05, 'epoch': 2.51}
06/03/2024 12:03:15 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6300
06/03/2024 12:03:16 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6300/tokenizer_config.json
06/03/2024 12:03:16 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6300/special_tokens_map.json
06/03/2024 12:03:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.4752, 'learning_rate': 1.9321e-05, 'epoch': 2.51}
06/03/2024 12:03:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.3493, 'learning_rate': 1.9168e-05, 'epoch': 2.51}
06/03/2024 12:03:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.4374, 'learning_rate': 1.9015e-05, 'epoch': 2.52}
06/03/2024 12:03:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.3024, 'learning_rate': 1.8863e-05, 'epoch': 2.52}
06/03/2024 12:03:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.4318, 'learning_rate': 1.8711e-05, 'epoch': 2.52}
06/03/2024 12:03:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.4620, 'learning_rate': 1.8560e-05, 'epoch': 2.52}
06/03/2024 12:03:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.3598, 'learning_rate': 1.8410e-05, 'epoch': 2.52}
06/03/2024 12:04:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.3982, 'learning_rate': 1.8260e-05, 'epoch': 2.53}
06/03/2024 12:04:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.3480, 'learning_rate': 1.8110e-05, 'epoch': 2.53}
06/03/2024 12:04:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.4032, 'learning_rate': 1.7962e-05, 'epoch': 2.53}
06/03/2024 12:04:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.3413, 'learning_rate': 1.7813e-05, 'epoch': 2.53}
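
Because checkpoints arrive every 100 steps, the save timestamps give a quick wall-clock pace: checkpoint-5800 landed at 11:53:15 and checkpoint-5900 at 11:55:16, roughly 121 seconds per 100 optimization steps. A back-of-the-envelope sketch projecting the finish time from checkpoint-6300 under that assumed pace:

```python
# Pace observed in the log: checkpoint-5800 at 11:53:15, checkpoint-5900 at 11:55:16,
# i.e. roughly 121 seconds per 100 optimization steps (assumed constant).
SECONDS_PER_STEP = 121 / 100

remaining_steps = 7530 - 6300               # steps left after checkpoint-6300
eta_minutes = remaining_steps * SECONDS_PER_STEP / 60
print(f"~{eta_minutes:.0f} min remaining")  # ~25 min, i.e. finishing around 12:28
```

The projection (~12:28) agrees closely with the actual completion time logged at the end of this run.
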
06/03/2024 12:04:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.3587, 'learning_rate': 1.7666e-05, 'epoch': 2.53}
06/03/2024 12:04:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.3324, 'learning_rate': 1.7519e-05, 'epoch': 2.53}
06/03/2024 12:04:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.3518, 'learning_rate': 1.7372e-05, 'epoch': 2.54}
06/03/2024 12:04:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.3107, 'learning_rate': 1.7226e-05, 'epoch': 2.54}
06/03/2024 12:04:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.3946, 'learning_rate': 1.7081e-05, 'epoch': 2.54}
06/03/2024 12:04:57 - INFO - llamafactory.extras.callbacks - {'loss': 1.3023, 'learning_rate': 1.6936e-05, 'epoch': 2.54}
06/03/2024 12:05:03 - INFO - llamafactory.extras.callbacks - {'loss': 1.3593, 'learning_rate': 1.6792e-05, 'epoch': 2.54}
06/03/2024 12:05:09 - INFO - llamafactory.extras.callbacks - {'loss': 1.3542, 'learning_rate': 1.6649e-05, 'epoch': 2.55}
06/03/2024 12:05:15 - INFO - llamafactory.extras.callbacks - {'loss': 1.4190, 'learning_rate': 1.6506e-05, 'epoch': 2.55}
06/03/2024 12:05:15 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6400
06/03/2024 12:05:15 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6400/tokenizer_config.json
06/03/2024 12:05:15 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6400/special_tokens_map.json
06/03/2024 12:05:21 - INFO - llamafactory.extras.callbacks - {'loss': 1.4539, 'learning_rate': 1.6363e-05, 'epoch': 2.55}
06/03/2024 12:05:27 - INFO - llamafactory.extras.callbacks - {'loss': 1.4295, 'learning_rate': 1.6221e-05, 'epoch': 2.55}
06/03/2024 12:05:33 - INFO - llamafactory.extras.callbacks - {'loss': 1.3533, 'learning_rate': 1.6080e-05, 'epoch': 2.55}
06/03/2024 12:05:39 - INFO - llamafactory.extras.callbacks - {'loss': 1.3518, 'learning_rate': 1.5940e-05, 'epoch': 2.56}
06/03/2024 12:05:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.4809, 'learning_rate': 1.5799e-05, 'epoch': 2.56}
06/03/2024 12:05:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.3105, 'learning_rate': 1.5660e-05, 'epoch': 2.56}
06/03/2024 12:05:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.3695, 'learning_rate': 1.5521e-05, 'epoch': 2.56}
06/03/2024 12:06:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.4548, 'learning_rate': 1.5383e-05, 'epoch': 2.56}
06/03/2024 12:06:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.3541, 'learning_rate': 1.5245e-05, 'epoch': 2.57}
06/03/2024 12:06:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.5401, 'learning_rate': 1.5108e-05, 'epoch': 2.57}
06/03/2024 12:06:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.4449, 'learning_rate': 1.4971e-05, 'epoch': 2.57}
06/03/2024 12:06:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.2817, 'learning_rate': 1.4835e-05, 'epoch': 2.57}
06/03/2024 12:06:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.4374, 'learning_rate': 1.4700e-05, 'epoch': 2.57}
06/03/2024 12:06:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.4975, 'learning_rate': 1.4565e-05, 'epoch': 2.58}
06/03/2024 12:06:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.3731, 'learning_rate': 1.4431e-05, 'epoch': 2.58}
06/03/2024 12:06:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.3796, 'learning_rate': 1.4297e-05, 'epoch': 2.58}
06/03/2024 12:06:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.4786, 'learning_rate': 1.4164e-05, 'epoch': 2.58}
06/03/2024 12:07:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.3543, 'learning_rate': 1.4032e-05, 'epoch': 2.58}
06/03/2024 12:07:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.3401, 'learning_rate': 1.3900e-05, 'epoch': 2.59}
06/03/2024 12:07:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.4265, 'learning_rate': 1.3769e-05, 'epoch': 2.59}
06/03/2024 12:07:16 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6500
06/03/2024 12:07:16 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6500/tokenizer_config.json
06/03/2024 12:07:16 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6500/special_tokens_map.json
06/03/2024 12:07:23 - INFO - llamafactory.extras.callbacks - {'loss': 1.4390, 'learning_rate': 1.3638e-05, 'epoch': 2.59}
06/03/2024 12:07:29 - INFO - llamafactory.extras.callbacks - {'loss': 1.4863, 'learning_rate': 1.3508e-05, 'epoch': 2.59}
06/03/2024 12:07:35 - INFO - llamafactory.extras.callbacks - {'loss': 1.4606, 'learning_rate': 1.3378e-05, 'epoch': 2.59}
06/03/2024 12:07:41 - INFO - llamafactory.extras.callbacks - {'loss': 1.4164, 'learning_rate': 1.3250e-05, 'epoch': 2.60}
06/03/2024 12:07:47 - INFO - llamafactory.extras.callbacks - {'loss': 1.4159, 'learning_rate': 1.3121e-05, 'epoch': 2.60}
06/03/2024 12:07:53 - INFO - llamafactory.extras.callbacks - {'loss': 1.3885, 'learning_rate': 1.2994e-05, 'epoch': 2.60}
06/03/2024 12:07:59 - INFO - llamafactory.extras.callbacks - {'loss': 1.3622, 'learning_rate': 1.2867e-05, 'epoch': 2.60}
06/03/2024 12:08:05 - INFO - llamafactory.extras.callbacks - {'loss': 1.3941, 'learning_rate': 1.2740e-05, 'epoch': 2.60}
06/03/2024 12:08:11 - INFO - llamafactory.extras.callbacks - {'loss': 1.4176, 'learning_rate': 1.2614e-05, 'epoch': 2.61}
06/03/2024 12:08:17 - INFO - llamafactory.extras.callbacks - {'loss': 1.4261, 'learning_rate': 1.2489e-05, 'epoch': 2.61}
06/03/2024 12:08:23 - INFO - llamafactory.extras.callbacks - {'loss': 1.4162, 'learning_rate': 1.2364e-05, 'epoch': 2.61}
06/03/2024 12:08:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.4112, 'learning_rate': 1.2240e-05, 'epoch': 2.61}
06/03/2024 12:08:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.3148, 'learning_rate': 1.2117e-05, 'epoch': 2.61}
06/03/2024 12:08:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.4669, 'learning_rate': 1.1994e-05, 'epoch': 2.62}
06/03/2024 12:08:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.4846, 'learning_rate': 1.1871e-05, 'epoch': 2.62}
06/03/2024 12:08:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.3225, 'learning_rate': 1.1750e-05, 'epoch': 2.62}
06/03/2024 12:08:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.3188, 'learning_rate': 1.1629e-05, 'epoch': 2.62}
06/03/2024 12:09:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.4801, 'learning_rate': 1.1508e-05, 'epoch': 2.62}
06/03/2024 12:09:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.3978, 'learning_rate': 1.1388e-05, 'epoch': 2.63}
06/03/2024 12:09:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.3319, 'learning_rate': 1.1269e-05, 'epoch': 2.63}
06/03/2024 12:09:16 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6600
06/03/2024 12:09:16 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6600/tokenizer_config.json
06/03/2024 12:09:16 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6600/special_tokens_map.json
06/03/2024 12:09:23 - INFO - llamafactory.extras.callbacks - {'loss': 1.3908, 'learning_rate': 1.1150e-05, 'epoch': 2.63}
06/03/2024 12:09:29 - INFO - llamafactory.extras.callbacks - {'loss': 1.3999, 'learning_rate': 1.1032e-05, 'epoch': 2.63}
06/03/2024 12:09:35 - INFO - llamafactory.extras.callbacks - {'loss': 1.3493, 'learning_rate': 1.0915e-05, 'epoch': 2.63}
06/03/2024 12:09:41 - INFO - llamafactory.extras.callbacks - {'loss': 1.3622, 'learning_rate': 1.0798e-05, 'epoch': 2.64}
06/03/2024 12:09:47 - INFO - llamafactory.extras.callbacks - {'loss': 1.4264, 'learning_rate': 1.0681e-05, 'epoch': 2.64}
06/03/2024 12:09:53 - INFO - llamafactory.extras.callbacks - {'loss': 1.3576, 'learning_rate': 1.0566e-05, 'epoch': 2.64}
06/03/2024 12:09:59 - INFO - llamafactory.extras.callbacks - {'loss': 1.4206, 'learning_rate': 1.0451e-05, 'epoch': 2.64}
06/03/2024 12:10:05 - INFO - llamafactory.extras.callbacks - {'loss': 1.4038, 'learning_rate': 1.0336e-05, 'epoch': 2.64}
06/03/2024 12:10:11 - INFO - llamafactory.extras.callbacks - {'loss': 1.3237, 'learning_rate': 1.0222e-05, 'epoch': 2.65}
06/03/2024 12:10:17 - INFO - llamafactory.extras.callbacks - {'loss': 1.3490, 'learning_rate': 1.0109e-05, 'epoch': 2.65}
06/03/2024 12:10:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.3641, 'learning_rate': 9.9966e-06, 'epoch': 2.65}
06/03/2024 12:10:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.2903, 'learning_rate': 9.8846e-06, 'epoch': 2.65}
06/03/2024 12:10:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.4015, 'learning_rate': 9.7732e-06, 'epoch': 2.65}
06/03/2024 12:10:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.3503, 'learning_rate': 9.6624e-06, 'epoch': 2.66}
06/03/2024 12:10:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.3357, 'learning_rate': 9.5522e-06, 'epoch': 2.66}
06/03/2024 12:10:53 - INFO - llamafactory.extras.callbacks - {'loss': 1.4308, 'learning_rate': 9.4426e-06, 'epoch': 2.66}
06/03/2024 12:10:59 - INFO - llamafactory.extras.callbacks - {'loss': 1.2818, 'learning_rate': 9.3337e-06, 'epoch': 2.66}
06/03/2024 12:11:05 - INFO - llamafactory.extras.callbacks - {'loss': 1.3870, 'learning_rate': 9.2253e-06, 'epoch': 2.66}
06/03/2024 12:11:11 - INFO - llamafactory.extras.callbacks - {'loss': 1.3665, 'learning_rate': 9.1176e-06, 'epoch': 2.67}
06/03/2024 12:11:17 - INFO - llamafactory.extras.callbacks - {'loss': 1.5225, 'learning_rate': 9.0105e-06, 'epoch': 2.67}
06/03/2024 12:11:17 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6700
"vocab_size": 40960 } 06/03/2024 12:11:17 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6700/tokenizer_config.json 06/03/2024 12:11:17 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6700/special_tokens_map.json 06/03/2024 12:11:23 - INFO - llamafactory.extras.callbacks - {'loss': 1.3151, 'learning_rate': 8.9040e-06, 'epoch': 2.67} 06/03/2024 12:11:29 - INFO - llamafactory.extras.callbacks - {'loss': 1.3935, 'learning_rate': 8.7981e-06, 'epoch': 2.67} 06/03/2024 12:11:35 - INFO - llamafactory.extras.callbacks - {'loss': 1.3811, 'learning_rate': 8.6928e-06, 'epoch': 2.67} 06/03/2024 12:11:41 - INFO - llamafactory.extras.callbacks - {'loss': 1.4715, 'learning_rate': 8.5881e-06, 'epoch': 2.68} 06/03/2024 12:11:47 - INFO - llamafactory.extras.callbacks - {'loss': 1.2541, 'learning_rate': 8.4841e-06, 'epoch': 2.68} 06/03/2024 12:11:53 - INFO - llamafactory.extras.callbacks - {'loss': 1.3637, 'learning_rate': 8.3806e-06, 'epoch': 2.68} 06/03/2024 12:11:59 - INFO - llamafactory.extras.callbacks - {'loss': 1.4220, 'learning_rate': 8.2778e-06, 'epoch': 2.68} 06/03/2024 12:12:05 - INFO - llamafactory.extras.callbacks - {'loss': 1.4463, 'learning_rate': 8.1756e-06, 'epoch': 2.68} 06/03/2024 12:12:11 - INFO - llamafactory.extras.callbacks - {'loss': 1.4117, 'learning_rate': 8.0740e-06, 'epoch': 2.69} 06/03/2024 12:12:17 - INFO - llamafactory.extras.callbacks - {'loss': 1.3940, 'learning_rate': 7.9731e-06, 'epoch': 2.69} 06/03/2024 12:12:23 - INFO - llamafactory.extras.callbacks - {'loss': 1.4076, 'learning_rate': 7.8727e-06, 'epoch': 2.69} 06/03/2024 12:12:29 - INFO - llamafactory.extras.callbacks - {'loss': 1.3680, 'learning_rate': 7.7730e-06, 'epoch': 2.69} 06/03/2024 12:12:35 - INFO - llamafactory.extras.callbacks - {'loss': 1.4427, 'learning_rate': 7.6739e-06, 'epoch': 2.69} 06/03/2024 12:12:41 - INFO - llamafactory.extras.callbacks - {'loss': 1.3994, 'learning_rate': 7.5754e-06, 'epoch': 2.70} 06/03/2024 12:12:47 - INFO - llamafactory.extras.callbacks - {'loss': 1.4891, 'learning_rate': 7.4775e-06, 'epoch': 2.70} 06/03/2024 12:12:53 - INFO - llamafactory.extras.callbacks - {'loss': 1.4082, 'learning_rate': 7.3802e-06, 'epoch': 2.70} 06/03/2024 12:12:59 - INFO - llamafactory.extras.callbacks - {'loss': 1.3687, 'learning_rate': 7.2836e-06, 'epoch': 2.70} 06/03/2024 12:13:05 - INFO - llamafactory.extras.callbacks - {'loss': 1.3346, 'learning_rate': 7.1876e-06, 'epoch': 2.70} 06/03/2024 12:13:11 - INFO - llamafactory.extras.callbacks - {'loss': 1.4623, 'learning_rate': 7.0922e-06, 'epoch': 2.71} 06/03/2024 12:13:17 - INFO - llamafactory.extras.callbacks - {'loss': 1.3667, 'learning_rate': 6.9975e-06, 'epoch': 2.71} 06/03/2024 12:13:17 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6800 06/03/2024 12:13:17 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--yanolja--EEVE-Korean-Instruct-10.8B-v1.0/snapshots/665c18a64bf18ed1755074427f81cc5a8fdc3287/config.json 06/03/2024 12:13:17 - INFO - transformers.configuration_utils - Model config LlamaConfig { "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 32000, "hidden_act": "silu", "hidden_size": 4096, 
"initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 4096, "mlp_bias": false, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 48, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": false, "vocab_size": 40960 } 06/03/2024 12:13:17 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6800/tokenizer_config.json 06/03/2024 12:13:17 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6800/special_tokens_map.json 06/03/2024 12:13:23 - INFO - llamafactory.extras.callbacks - {'loss': 1.3206, 'learning_rate': 6.9033e-06, 'epoch': 2.71} 06/03/2024 12:13:29 - INFO - llamafactory.extras.callbacks - {'loss': 1.4675, 'learning_rate': 6.8098e-06, 'epoch': 2.71} 06/03/2024 12:13:35 - INFO - llamafactory.extras.callbacks - {'loss': 1.2444, 'learning_rate': 6.7169e-06, 'epoch': 2.71} 06/03/2024 12:13:41 - INFO - llamafactory.extras.callbacks - {'loss': 1.3859, 'learning_rate': 6.6246e-06, 'epoch': 2.72} 06/03/2024 12:13:47 - INFO - llamafactory.extras.callbacks - {'loss': 1.3183, 'learning_rate': 6.5330e-06, 'epoch': 2.72} 06/03/2024 12:13:53 - INFO - llamafactory.extras.callbacks - {'loss': 1.3464, 'learning_rate': 6.4419e-06, 'epoch': 2.72} 06/03/2024 12:13:59 - INFO - llamafactory.extras.callbacks - {'loss': 1.4351, 'learning_rate': 6.3515e-06, 'epoch': 2.72} 06/03/2024 12:14:05 - INFO - llamafactory.extras.callbacks - {'loss': 1.3552, 'learning_rate': 6.2617e-06, 'epoch': 2.72} 06/03/2024 12:14:11 - INFO - llamafactory.extras.callbacks - {'loss': 1.4057, 'learning_rate': 6.1726e-06, 'epoch': 2.73} 06/03/2024 12:14:17 - INFO - llamafactory.extras.callbacks - {'loss': 1.3631, 'learning_rate': 6.0841e-06, 'epoch': 2.73} 06/03/2024 12:14:23 - INFO - llamafactory.extras.callbacks - {'loss': 1.3564, 'learning_rate': 5.9962e-06, 'epoch': 2.73} 06/03/2024 12:14:29 - INFO - llamafactory.extras.callbacks - {'loss': 1.3883, 'learning_rate': 5.9089e-06, 'epoch': 2.73} 06/03/2024 12:14:35 - INFO - llamafactory.extras.callbacks - {'loss': 1.4344, 'learning_rate': 5.8223e-06, 'epoch': 2.73} 06/03/2024 12:14:41 - INFO - llamafactory.extras.callbacks - {'loss': 1.4617, 'learning_rate': 5.7362e-06, 'epoch': 2.74} 06/03/2024 12:14:47 - INFO - llamafactory.extras.callbacks - {'loss': 1.3243, 'learning_rate': 5.6508e-06, 'epoch': 2.74} 06/03/2024 12:14:53 - INFO - llamafactory.extras.callbacks - {'loss': 1.3606, 'learning_rate': 5.5661e-06, 'epoch': 2.74} 06/03/2024 12:14:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.4057, 'learning_rate': 5.4819e-06, 'epoch': 2.74} 06/03/2024 12:15:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.2948, 'learning_rate': 5.3984e-06, 'epoch': 2.74} 06/03/2024 12:15:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.4363, 'learning_rate': 5.3156e-06, 'epoch': 2.75} 06/03/2024 12:15:17 - INFO - llamafactory.extras.callbacks - {'loss': 1.4127, 'learning_rate': 5.2333e-06, 'epoch': 2.75} 06/03/2024 12:15:17 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6900 06/03/2024 12:15:17 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at 
06/03/2024 12:15:17 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6900
06/03/2024 12:15:17 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6900/tokenizer_config.json
06/03/2024 12:15:17 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-6900/special_tokens_map.json
06/03/2024 12:15:23 - INFO - llamafactory.extras.callbacks - {'loss': 1.3238, 'learning_rate': 5.1517e-06, 'epoch': 2.75}
06/03/2024 12:15:29 - INFO - llamafactory.extras.callbacks - {'loss': 1.4469, 'learning_rate': 5.0707e-06, 'epoch': 2.75}
06/03/2024 12:15:35 - INFO - llamafactory.extras.callbacks - {'loss': 1.4263, 'learning_rate': 4.9904e-06, 'epoch': 2.75}
06/03/2024 12:15:41 - INFO - llamafactory.extras.callbacks - {'loss': 1.3478, 'learning_rate': 4.9106e-06, 'epoch': 2.76}
06/03/2024 12:15:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.4268, 'learning_rate': 4.8315e-06, 'epoch': 2.76}
06/03/2024 12:15:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.3866, 'learning_rate': 4.7531e-06, 'epoch': 2.76}
06/03/2024 12:15:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.4328, 'learning_rate': 4.6752e-06, 'epoch': 2.76}
06/03/2024 12:16:05 - INFO - llamafactory.extras.callbacks - {'loss': 1.4405, 'learning_rate': 4.5980e-06, 'epoch': 2.76}
06/03/2024 12:16:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.4027, 'learning_rate': 4.5215e-06, 'epoch': 2.77}
06/03/2024 12:16:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.4402, 'learning_rate': 4.4456e-06, 'epoch': 2.77}
06/03/2024 12:16:23 - INFO - llamafactory.extras.callbacks - {'loss': 1.3285, 'learning_rate': 4.3703e-06, 'epoch': 2.77}
06/03/2024 12:16:29 - INFO - llamafactory.extras.callbacks - {'loss': 1.3919, 'learning_rate': 4.2956e-06, 'epoch': 2.77}
06/03/2024 12:16:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.3757, 'learning_rate': 4.2216e-06, 'epoch': 2.77}
06/03/2024 12:16:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.2786, 'learning_rate': 4.1482e-06, 'epoch': 2.78}
06/03/2024 12:16:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.3229, 'learning_rate': 4.0754e-06, 'epoch': 2.78}
06/03/2024 12:16:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.3014, 'learning_rate': 4.0033e-06, 'epoch': 2.78}
06/03/2024 12:16:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.3824, 'learning_rate': 3.9318e-06, 'epoch': 2.78}
06/03/2024 12:17:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.3002, 'learning_rate': 3.8609e-06, 'epoch': 2.78}
06/03/2024 12:17:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.3736, 'learning_rate': 3.7907e-06, 'epoch': 2.79}
06/03/2024 12:17:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.2088, 'learning_rate': 3.7211e-06, 'epoch': 2.79}
06/03/2024 12:17:16 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7000
06/03/2024 12:17:16 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7000/tokenizer_config.json
06/03/2024 12:17:16 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7000/special_tokens_map.json
06/03/2024 12:17:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.4813, 'learning_rate': 3.6522e-06, 'epoch': 2.79}
06/03/2024 12:17:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.4579, 'learning_rate': 3.5839e-06, 'epoch': 2.79}
06/03/2024 12:17:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.3265, 'learning_rate': 3.5162e-06, 'epoch': 2.79}
06/03/2024 12:17:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.2436, 'learning_rate': 3.4492e-06, 'epoch': 2.80}
06/03/2024 12:17:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.4201, 'learning_rate': 3.3828e-06, 'epoch': 2.80}
06/03/2024 12:17:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.2915, 'learning_rate': 3.3170e-06, 'epoch': 2.80}
06/03/2024 12:17:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.3436, 'learning_rate': 3.2519e-06, 'epoch': 2.80}
06/03/2024 12:18:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.3628, 'learning_rate': 3.1874e-06, 'epoch': 2.80}
06/03/2024 12:18:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.3498, 'learning_rate': 3.1236e-06, 'epoch': 2.81}
06/03/2024 12:18:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.4384, 'learning_rate': 3.0604e-06, 'epoch': 2.81}
06/03/2024 12:18:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.3800, 'learning_rate': 2.9978e-06, 'epoch': 2.81}
06/03/2024 12:18:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.3070, 'learning_rate': 2.9359e-06, 'epoch': 2.81}
06/03/2024 12:18:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.4385, 'learning_rate': 2.8746e-06, 'epoch': 2.81}
06/03/2024 12:18:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.4435, 'learning_rate': 2.8139e-06, 'epoch': 2.82}
06/03/2024 12:18:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.3494, 'learning_rate': 2.7539e-06, 'epoch': 2.82}
06/03/2024 12:18:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.5222, 'learning_rate': 2.6946e-06, 'epoch': 2.82}
06/03/2024 12:18:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.4859, 'learning_rate': 2.6358e-06, 'epoch': 2.82}
06/03/2024 12:19:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.4203, 'learning_rate': 2.5778e-06, 'epoch': 2.82}
06/03/2024 12:19:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.3568, 'learning_rate': 2.5203e-06, 'epoch': 2.83}
06/03/2024 12:19:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.4753, 'learning_rate': 2.4635e-06, 'epoch': 2.83}
06/03/2024 12:19:16 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7100
06/03/2024 12:19:16 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7100/tokenizer_config.json
06/03/2024 12:19:16 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7100/special_tokens_map.json
06/03/2024 12:19:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.3780, 'learning_rate': 2.4074e-06, 'epoch': 2.83}
06/03/2024 12:19:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.4273, 'learning_rate': 2.3519e-06, 'epoch': 2.83}
06/03/2024 12:19:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.4064, 'learning_rate': 2.2970e-06, 'epoch': 2.83}
06/03/2024 12:19:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.4521, 'learning_rate': 2.2428e-06, 'epoch': 2.84}
06/03/2024 12:19:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.3218, 'learning_rate': 2.1892e-06, 'epoch': 2.84}
06/03/2024 12:19:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.3948, 'learning_rate': 2.1362e-06, 'epoch': 2.84}
06/03/2024 12:19:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.3663, 'learning_rate': 2.0839e-06, 'epoch': 2.84}
06/03/2024 12:20:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.4420, 'learning_rate': 2.0323e-06, 'epoch': 2.84}
06/03/2024 12:20:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.5015, 'learning_rate': 1.9813e-06, 'epoch': 2.85}
06/03/2024 12:20:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.3389, 'learning_rate': 1.9309e-06, 'epoch': 2.85}
06/03/2024 12:20:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.3325, 'learning_rate': 1.8812e-06, 'epoch': 2.85}
06/03/2024 12:20:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.4162, 'learning_rate': 1.8321e-06, 'epoch': 2.85}
06/03/2024 12:20:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.4418, 'learning_rate': 1.7837e-06, 'epoch': 2.85}
06/03/2024 12:20:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.4855, 'learning_rate': 1.7359e-06, 'epoch': 2.86}
06/03/2024 12:20:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.4014, 'learning_rate': 1.6887e-06, 'epoch': 2.86}
06/03/2024 12:20:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.4108, 'learning_rate': 1.6422e-06, 'epoch': 2.86}
06/03/2024 12:20:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.3386, 'learning_rate': 1.5964e-06, 'epoch': 2.86}
06/03/2024 12:21:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.4485, 'learning_rate': 1.5512e-06, 'epoch': 2.86}
06/03/2024 12:21:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.4852, 'learning_rate': 1.5066e-06, 'epoch': 2.87}
06/03/2024 12:21:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.3835, 'learning_rate': 1.4627e-06, 'epoch': 2.87}
06/03/2024 12:21:16 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7200
06/03/2024 12:21:16 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7200/tokenizer_config.json
06/03/2024 12:21:16 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7200/special_tokens_map.json
06/03/2024 12:21:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.4499, 'learning_rate': 1.4194e-06, 'epoch': 2.87}
06/03/2024 12:21:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.2806, 'learning_rate': 1.3768e-06, 'epoch': 2.87}
06/03/2024 12:21:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.4481, 'learning_rate': 1.3348e-06, 'epoch': 2.87}
06/03/2024 12:21:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.4859, 'learning_rate': 1.2935e-06, 'epoch': 2.88}
06/03/2024 12:21:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.3335, 'learning_rate': 1.2528e-06, 'epoch': 2.88}
06/03/2024 12:21:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.3900, 'learning_rate': 1.2128e-06, 'epoch': 2.88}
06/03/2024 12:21:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.3790, 'learning_rate': 1.1734e-06, 'epoch': 2.88}
06/03/2024 12:22:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.3816, 'learning_rate': 1.1347e-06, 'epoch': 2.88}
06/03/2024 12:22:11 - INFO - llamafactory.extras.callbacks - {'loss': 1.3705, 'learning_rate': 1.0966e-06, 'epoch': 2.89}
06/03/2024 12:22:17 - INFO - llamafactory.extras.callbacks - {'loss': 1.4106, 'learning_rate': 1.0591e-06, 'epoch': 2.89}
06/03/2024 12:22:23 - INFO - llamafactory.extras.callbacks - {'loss': 1.3795, 'learning_rate': 1.0223e-06, 'epoch': 2.89}
06/03/2024 12:22:29 - INFO - llamafactory.extras.callbacks - {'loss': 1.4139, 'learning_rate': 9.8619e-07, 'epoch': 2.89}
06/03/2024 12:22:35 - INFO - llamafactory.extras.callbacks - {'loss': 1.3067, 'learning_rate': 9.5069e-07, 'epoch': 2.89}
06/03/2024 12:22:41 - INFO - llamafactory.extras.callbacks - {'loss': 1.3011, 'learning_rate': 9.1584e-07, 'epoch': 2.90}
06/03/2024 12:22:47 - INFO - llamafactory.extras.callbacks - {'loss': 1.4344, 'learning_rate': 8.8164e-07, 'epoch': 2.90}
06/03/2024 12:22:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.3629, 'learning_rate': 8.4809e-07, 'epoch': 2.90}
06/03/2024 12:22:59 - INFO - llamafactory.extras.callbacks - {'loss': 1.2844, 'learning_rate': 8.1519e-07, 'epoch': 2.90}
06/03/2024 12:23:05 - INFO - llamafactory.extras.callbacks - {'loss': 1.4011, 'learning_rate': 7.8293e-07, 'epoch': 2.90}
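
The points collected by the parsing sketch above can then be plotted to stand in for the missing eval_loss chart; a short sketch, assuming matplotlib is installed:

```python
import matplotlib.pyplot as plt

epochs = [p[0] for p in points]   # `points` from the parsing sketch above
losses = [p[1] for p in points]

plt.plot(epochs, losses, linewidth=0.5, alpha=0.6, label="training loss (per step)")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("train_loss.png", dpi=150)
```

Raw per-step loss is noisy at this logging frequency, so a moving average (e.g. over 50 points) usually reads better.
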
06/03/2024 12:23:11 - INFO - llamafactory.extras.callbacks - {'loss': 1.3853, 'learning_rate': 7.5133e-07, 'epoch': 2.91}
06/03/2024 12:23:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.3160, 'learning_rate': 7.2038e-07, 'epoch': 2.91}
06/03/2024 12:23:16 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7300
06/03/2024 12:23:17 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7300/tokenizer_config.json
06/03/2024 12:23:17 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7300/special_tokens_map.json
06/03/2024 12:23:23 - INFO - llamafactory.extras.callbacks - {'loss': 1.3602, 'learning_rate': 6.9007e-07, 'epoch': 2.91}
06/03/2024 12:23:29 - INFO - llamafactory.extras.callbacks - {'loss': 1.4213, 'learning_rate': 6.6042e-07, 'epoch': 2.91}
06/03/2024 12:23:35 - INFO - llamafactory.extras.callbacks - {'loss': 1.5167, 'learning_rate': 6.3141e-07, 'epoch': 2.91}
06/03/2024 12:23:41 - INFO - llamafactory.extras.callbacks - {'loss': 1.4030, 'learning_rate': 6.0305e-07, 'epoch': 2.92}
06/03/2024 12:23:47 - INFO - llamafactory.extras.callbacks - {'loss': 1.4157, 'learning_rate': 5.7535e-07, 'epoch': 2.92}
06/03/2024 12:23:53 - INFO - llamafactory.extras.callbacks - {'loss': 1.3336, 'learning_rate': 5.4829e-07, 'epoch': 2.92}
06/03/2024 12:23:59 - INFO - llamafactory.extras.callbacks - {'loss': 1.3912, 'learning_rate': 5.2189e-07, 'epoch': 2.92}
06/03/2024 12:24:05 - INFO - llamafactory.extras.callbacks - {'loss': 1.4181, 'learning_rate': 4.9614e-07, 'epoch': 2.92}
06/03/2024 12:24:11 - INFO - llamafactory.extras.callbacks - {'loss': 1.4416, 'learning_rate': 4.7103e-07, 'epoch': 2.93}
06/03/2024 12:24:17 - INFO - llamafactory.extras.callbacks - {'loss': 1.4493, 'learning_rate': 4.4658e-07, 'epoch': 2.93}
06/03/2024 12:24:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.4544, 'learning_rate': 4.2278e-07, 'epoch': 2.93}
06/03/2024 12:24:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.3583, 'learning_rate': 3.9963e-07, 'epoch': 2.93}
06/03/2024 12:24:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.3814, 'learning_rate': 3.7713e-07, 'epoch': 2.93}
06/03/2024 12:24:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.4387, 'learning_rate': 3.5528e-07, 'epoch': 2.94}
06/03/2024 12:24:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.3866, 'learning_rate': 3.3408e-07, 'epoch': 2.94}
06/03/2024 12:24:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.3422, 'learning_rate': 3.1353e-07, 'epoch': 2.94}
06/03/2024 12:24:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.3686, 'learning_rate': 2.9364e-07, 'epoch': 2.94}
06/03/2024 12:25:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.4133, 'learning_rate': 2.7439e-07, 'epoch': 2.94}
06/03/2024 12:25:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.4139, 'learning_rate': 2.5580e-07, 'epoch': 2.95}
06/03/2024 12:25:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.4123, 'learning_rate': 2.3786e-07, 'epoch': 2.95}
06/03/2024 12:25:16 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7400
06/03/2024 12:25:16 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7400/tokenizer_config.json
06/03/2024 12:25:16 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7400/special_tokens_map.json
06/03/2024 12:25:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.5799, 'learning_rate': 2.2057e-07, 'epoch': 2.95}
06/03/2024 12:25:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.4038, 'learning_rate': 2.0394e-07, 'epoch': 2.95}
06/03/2024 12:25:34 - INFO - llamafactory.extras.callbacks - {'loss': 1.3085, 'learning_rate': 1.8795e-07, 'epoch': 2.95}
06/03/2024 12:25:40 - INFO - llamafactory.extras.callbacks - {'loss': 1.4544, 'learning_rate': 1.7262e-07, 'epoch': 2.96}
06/03/2024 12:25:46 - INFO - llamafactory.extras.callbacks - {'loss': 1.3811, 'learning_rate': 1.5794e-07, 'epoch': 2.96}
06/03/2024 12:25:52 - INFO - llamafactory.extras.callbacks - {'loss': 1.3174, 'learning_rate': 1.4391e-07, 'epoch': 2.96}
06/03/2024 12:25:59 - INFO - llamafactory.extras.callbacks - {'loss': 1.4519, 'learning_rate': 1.3053e-07, 'epoch': 2.96}
06/03/2024 12:26:05 - INFO - llamafactory.extras.callbacks - {'loss': 1.4098, 'learning_rate': 1.1780e-07, 'epoch': 2.96}
06/03/2024 12:26:11 - INFO - llamafactory.extras.callbacks - {'loss': 1.3563, 'learning_rate': 1.0573e-07, 'epoch': 2.97}
06/03/2024 12:26:17 - INFO - llamafactory.extras.callbacks - {'loss': 1.3960, 'learning_rate': 9.4311e-08, 'epoch': 2.97}
06/03/2024 12:26:23 - INFO - llamafactory.extras.callbacks - {'loss': 1.3253, 'learning_rate': 8.3543e-08, 'epoch': 2.97}
06/03/2024 12:26:29 - INFO - llamafactory.extras.callbacks - {'loss': 1.4452, 'learning_rate': 7.3427e-08, 'epoch': 2.97}
06/03/2024 12:26:35 - INFO - llamafactory.extras.callbacks - {'loss': 1.3387, 'learning_rate': 6.3964e-08, 'epoch': 2.97}
06/03/2024 12:26:41 - INFO - llamafactory.extras.callbacks - {'loss': 1.4156, 'learning_rate': 5.5153e-08, 'epoch': 2.98}
06/03/2024 12:26:47 - INFO - llamafactory.extras.callbacks - {'loss': 1.3523, 'learning_rate': 4.6995e-08, 'epoch': 2.98}
06/03/2024 12:26:53 - INFO - llamafactory.extras.callbacks - {'loss': 1.3493, 'learning_rate': 3.9489e-08, 'epoch': 2.98}
06/03/2024 12:26:58 - INFO - llamafactory.extras.callbacks - {'loss': 1.3492, 'learning_rate': 3.2636e-08, 'epoch': 2.98}
06/03/2024 12:27:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.4875, 'learning_rate': 2.6435e-08, 'epoch': 2.98}
06/03/2024 12:27:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.3215, 'learning_rate': 2.0887e-08, 'epoch': 2.99}
06/03/2024 12:27:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.3576, 'learning_rate': 1.5992e-08, 'epoch': 2.99}
06/03/2024 12:27:16 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7500
06/03/2024 12:27:17 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7500/tokenizer_config.json
06/03/2024 12:27:17 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/checkpoint-7500/special_tokens_map.json
06/03/2024 12:27:23 - INFO - llamafactory.extras.callbacks - {'loss': 1.4502, 'learning_rate': 1.1749e-08, 'epoch': 2.99}
06/03/2024 12:27:29 - INFO - llamafactory.extras.callbacks - {'loss': 1.3403, 'learning_rate': 8.1592e-09, 'epoch': 2.99}
06/03/2024 12:27:35 - INFO - llamafactory.extras.callbacks - {'loss': 1.4512, 'learning_rate': 5.2219e-09, 'epoch': 2.99}
06/03/2024 12:27:41 - INFO - llamafactory.extras.callbacks - {'loss': 1.5030, 'learning_rate': 2.9373e-09, 'epoch': 2.99}
06/03/2024 12:27:47 - INFO - llamafactory.extras.callbacks - {'loss': 1.2154, 'learning_rate': 1.3055e-09, 'epoch': 3.00}
06/03/2024 12:27:53 - INFO - llamafactory.extras.callbacks - {'loss': 1.5118, 'learning_rate': 3.2637e-10, 'epoch': 3.00}
06/03/2024 12:27:53 - INFO - transformers.trainer - Training completed. Do not forget to share your model on huggingface.co/models =)
06/03/2024 12:27:53 - INFO - transformers.trainer - Saving model checkpoint to saves/Custom/lora/train_2024-06-03-09-50-18
06/03/2024 12:27:53 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/Custom/lora/train_2024-06-03-09-50-18/tokenizer_config.json
06/03/2024 12:27:53 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/Custom/lora/train_2024-06-03-09-50-18/special_tokens_map.json
06/03/2024 12:27:53 - WARNING - llamafactory.extras.ploting - No metric eval_loss to plot.
06/03/2024 12:27:53 - INFO - transformers.modelcard - Dropping the following result as it does not have all the necessary fields: {'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
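
What was saved here is a LoRA adapter plus tokenizer files, not a full model. A minimal inference sketch using peft, assuming the adapter weights and adapter_config.json are present in the output directory (the exact adapter file names are not shown in this log):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "yanolja/EEVE-Korean-Instruct-10.8B-v1.0"
ADAPTER = "saves/Custom/lora/train_2024-06-03-09-50-18"

# The tokenizer files were saved alongside the adapter (see the lines above).
tokenizer = AutoTokenizer.from_pretrained(ADAPTER)
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER)
model = model.merge_and_unload()  # optional: fold the LoRA deltas into the base weights
model.eval()
```

Any of the intermediate checkpoint-NNNN directories can be loaded the same way if an earlier snapshot is preferred.
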