csujeong committed
Commit dd09c60
Parent: f199e7b

End of training

README.md ADDED
@@ -0,0 +1,59 @@
+ ---
+ license: other
+ library_name: peft
+ tags:
+ - trl
+ - sft
+ - generated_from_trainer
+ base_model: google/gemma-7b
+ model-index:
+ - name: Gemma-7B-Finetuning-JCS
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Gemma-7B-Finetuning-JCS
+
+ This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on an unknown dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the sketch after this list):
+ - learning_rate: 0.0002
+ - train_batch_size: 2
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 4
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.03
+ - training_steps: 60
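+
+ The training script itself is not part of this commit. The following is a minimal sketch of a TRL `SFTTrainer` QLoRA setup consistent with the hyperparameters above and with the settings recorded in `wandb/debug.log` (4-bit NF4 quantization with bfloat16 compute, `paged_adamw_32bit` optimizer, `max_grad_norm` 0.3). The dataset, the LoRA rank/alpha, and the text column name are assumptions, not facts from this repo.
+
+ ```python
+ import torch
+ from datasets import Dataset
+ from peft import LoraConfig
+ from transformers import (AutoModelForCausalLM, AutoTokenizer,
+                           BitsAndBytesConfig, TrainingArguments)
+ from trl import SFTTrainer
+
+ base = "google/gemma-7b"
+ bnb = BitsAndBytesConfig(            # quantization_config as logged in wandb/debug.log
+     load_in_4bit=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_use_double_quant=True,
+     bnb_4bit_compute_dtype=torch.bfloat16,
+ )
+ model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
+ tokenizer = AutoTokenizer.from_pretrained(base)
+
+ args = TrainingArguments(
+     output_dir="Gemma-7B-Finetuning-JCS",
+     per_device_train_batch_size=2,   # train_batch_size above
+     per_device_eval_batch_size=8,
+     gradient_accumulation_steps=2,   # gives total_train_batch_size = 4
+     learning_rate=2e-4,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.03,
+     max_steps=60,                    # training_steps above
+     max_grad_norm=0.3,               # from the logged config
+     optim="paged_adamw_32bit",       # logged optimizer; betas/epsilon as listed above
+     seed=42,
+     logging_steps=10,
+     save_steps=10,
+ )
+
+ # Placeholder dataset and LoRA settings -- both are assumptions for illustration.
+ train_dataset = Dataset.from_dict({"text": ["### Question: ...\n### Answer: ..."]})
+ peft_config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16)
+
+ trainer = SFTTrainer(
+     model=model,
+     tokenizer=tokenizer,
+     args=args,
+     train_dataset=train_dataset,
+     peft_config=peft_config,
+     dataset_text_field="text",       # assumed column name
+     max_seq_length=512,              # assumed
+ )
+ trainer.train()
+ ```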
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - PEFT 0.8.2
+ - Transformers 4.38.1
+ - PyTorch 2.1.0+cu121
+ - Datasets 2.17.1
+ - Tokenizers 0.15.2
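+
+ ### How to use
+
+ A minimal usage sketch for the PEFT adapter, assuming it is published at `csujeong/Gemma-7B-Finetuning-JCS` (the Hub id is inferred from the committer and model name, not stated in the card):
+
+ ```python
+ import torch
+ from peft import AutoPeftModelForCausalLM
+ from transformers import AutoTokenizer
+
+ # Downloads google/gemma-7b and applies the LoRA adapter on top of it.
+ model = AutoPeftModelForCausalLM.from_pretrained(
+     "csujeong/Gemma-7B-Finetuning-JCS",
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+ tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
+
+ prompt = "Briefly explain LoRA fine-tuning."
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ out = model.generate(**inputs, max_new_tokens=64)
+ print(tokenizer.decode(out[0], skip_special_tokens=True))
+ ```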
wandb/debug-internal.log CHANGED
@@ -359,3 +359,29 @@
 2024-02-26 10:21:38,924 INFO Thread-12 :5808 [dir_watcher.py:_on_file_modified():288] file/dir modified: /content/gdrive/MyDrive/LLM/Gemma-7B-Finetuning-JCS/wandb/run-20240226_101059-z4vsrt4l/files/wandb-summary.json
 2024-02-26 10:21:42,731 DEBUG HandlerThread:5808 [handler.py:handle_request():146] handle_request: status_report
 2024-02-26 10:21:47,732 DEBUG HandlerThread:5808 [handler.py:handle_request():146] handle_request: status_report
+ 2024-02-26 10:21:50,553 DEBUG HandlerThread:5808 [handler.py:handle_request():146] handle_request: pause
+ 2024-02-26 10:21:50,554 INFO HandlerThread:5808 [handler.py:handle_request_pause():708] stopping system metrics thread
+ 2024-02-26 10:21:50,554 INFO HandlerThread:5808 [system_monitor.py:finish():203] Stopping system monitor
+ 2024-02-26 10:21:50,557 DEBUG SystemMonitor:5808 [system_monitor.py:_start():179] Finished system metrics aggregation loop
+ 2024-02-26 10:21:50,565 DEBUG SystemMonitor:5808 [system_monitor.py:_start():183] Publishing last batch of metrics
+ 2024-02-26 10:21:50,557 INFO HandlerThread:5808 [interfaces.py:finish():202] Joined cpu monitor
+ 2024-02-26 10:21:50,567 INFO HandlerThread:5808 [interfaces.py:finish():202] Joined disk monitor
+ 2024-02-26 10:21:50,576 INFO HandlerThread:5808 [interfaces.py:finish():202] Joined gpu monitor
+ 2024-02-26 10:21:50,576 INFO HandlerThread:5808 [interfaces.py:finish():202] Joined memory monitor
+ 2024-02-26 10:21:50,577 INFO HandlerThread:5808 [interfaces.py:finish():202] Joined network monitor
+ 2024-02-26 10:21:50,578 DEBUG SenderThread:5808 [sender.py:send():382] send: stats
+ 2024-02-26 10:21:51,012 DEBUG HandlerThread:5808 [handler.py:handle_request():146] handle_request: internal_messages
+ 2024-02-26 10:21:51,013 DEBUG HandlerThread:5808 [handler.py:handle_request():146] handle_request: stop_status
+ 2024-02-26 10:21:51,014 DEBUG SenderThread:5808 [sender.py:send_request():409] send_request: stop_status
+ 2024-02-26 10:21:53,235 DEBUG HandlerThread:5808 [handler.py:handle_request():146] handle_request: status_report
+ 2024-02-26 10:21:58,240 DEBUG HandlerThread:5808 [handler.py:handle_request():146] handle_request: status_report
+ 2024-02-26 10:21:58,560 DEBUG HandlerThread:5808 [handler.py:handle_request():146] handle_request: resume
+ 2024-02-26 10:21:58,561 INFO HandlerThread:5808 [handler.py:handle_request_resume():699] starting system metrics thread
+ 2024-02-26 10:21:58,561 INFO HandlerThread:5808 [system_monitor.py:start():194] Starting system monitor
+ 2024-02-26 10:21:58,561 INFO SystemMonitor:5808 [system_monitor.py:_start():158] Starting system asset monitoring threads
+ 2024-02-26 10:21:58,563 INFO SystemMonitor:5808 [interfaces.py:start():190] Started cpu monitoring
+ 2024-02-26 10:21:58,567 INFO SystemMonitor:5808 [interfaces.py:start():190] Started disk monitoring
+ 2024-02-26 10:21:58,568 INFO SystemMonitor:5808 [interfaces.py:start():190] Started gpu monitoring
+ 2024-02-26 10:21:58,574 INFO SystemMonitor:5808 [interfaces.py:start():190] Started memory monitoring
+ 2024-02-26 10:21:58,574 INFO SystemMonitor:5808 [interfaces.py:start():190] Started network monitoring
+ 2024-02-26 10:21:58,975 INFO Thread-12 :5808 [dir_watcher.py:_on_file_modified():288] file/dir modified: /content/gdrive/MyDrive/LLM/Gemma-7B-Finetuning-JCS/wandb/run-20240226_101059-z4vsrt4l/files/config.yaml
wandb/debug.log CHANGED
@@ -28,3 +28,6 @@ config: {}
 2024-02-26 10:11:02,661 INFO MainThread:149 [wandb_run.py:_redirect():2186] Redirects installed.
 2024-02-26 10:11:02,663 INFO MainThread:149 [wandb_init.py:init():847] run started, returning control to user process
  2024-02-26 10:11:02,670 INFO MainThread:149 [wandb_run.py:_config_callback():1343] config_cb None None {'vocab_size': 256000, 'max_position_embeddings': 8192, 'hidden_size': 3072, 'intermediate_size': 24576, 'num_hidden_layers': 28, 'num_attention_heads': 16, 'head_dim': 256, 'num_key_value_heads': 16, 'hidden_act': 'gelu', 'initializer_range': 0.02, 'rms_norm_eps': 1e-06, 'use_cache': False, 'rope_theta': 10000.0, 'attention_bias': False, 'attention_dropout': 0.0, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'bfloat16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['GemmaForCausalLM'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': 2, 'pad_token_id': 0, 'eos_token_id': 1, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'google/gemma-7b', 'transformers_version': '4.38.1', 'model_type': 'gemma', 'rope_scaling': None, 'quantization_config': {'quant_method': 'QuantizationMethod.BITS_AND_BYTES', '_load_in_8bit': False, '_load_in_4bit': True, 'llm_int8_threshold': 6.0, 'llm_int8_skip_modules': None, 'llm_int8_enable_fp32_cpu_offload': False, 'llm_int8_has_fp16_weight': False, 'bnb_4bit_quant_type': 'nf4', 'bnb_4bit_use_double_quant': True, 'bnb_4bit_compute_dtype': 'bfloat16', 'load_in_4bit': True, 'load_in_8bit': False}, 'output_dir': '/content/gdrive/MyDrive/LLM/Gemma-7B-Finetuning-JCS', 'overwrite_output_dir': False, 'do_train': False, 'do_eval': False, 'do_predict': False, 'evaluation_strategy': 'no', 'prediction_loss_only': False, 'per_device_train_batch_size': 2, 'per_device_eval_batch_size': 8, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 2, 'eval_accumulation_steps': None, 'eval_delay': 0, 'learning_rate': 0.0002, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 0.3, 'num_train_epochs': 3.0, 'max_steps': 60, 'lr_scheduler_type': 'cosine', 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.03, 'warmup_steps': 0, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': '/content/gdrive/MyDrive/LLM/Gemma-7B-Finetuning-JCS/runs/Feb26_10-10-29_30a0ffea74aa', 'logging_strategy': 'steps', 'logging_first_step': False, 'logging_steps': 10, 'logging_nan_inf_filter': True, 'save_strategy': 'steps', 'save_steps': 10, 'save_total_limit': None, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'no_cuda': False, 'use_cpu': 
False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': False, 'fp16': False, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': False, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': None, 'dataloader_num_workers': 0, 'dataloader_prefetch_factor': None, 'past_index': -1, 'run_name': '/content/gdrive/MyDrive/LLM/Gemma-7B-Finetuning-JCS', 'disable_tqdm': False, 'remove_unused_columns': True, 'label_names': None, 'load_best_model_at_end': False, 'metric_for_best_model': None, 'greater_is_better': None, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'accelerator_config': {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': 'paged_adamw_32bit', 'optim_args': None, 'adafactor': False, 'group_by_length': True, 'length_column_name': 'length', 'report_to': ['tensorboard', 'wandb'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': True, 'resume_from_checkpoint': None, 'hub_model_id': None, 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'hub_private_repo': False, 'hub_always_push': False, 'gradient_checkpointing': False, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'fp16_backend': 'auto', 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': None, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None}
+ 2024-02-26 10:21:50,549 INFO MainThread:149 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-02-26 10:21:50,550 INFO MainThread:149 [wandb_init.py:_pause_backend():437] pausing backend
+ 2024-02-26 10:21:58,558 INFO MainThread:149 [wandb_init.py:_resume_backend():442] resuming backend
wandb/run-20240226_101059-z4vsrt4l/files/config.yaml CHANGED
@@ -72,6 +72,26 @@ _wandb:
 5: 1
 6:
 - 1
+ - 1: train/train_runtime
+ 5: 1
+ 6:
+ - 1
+ - 1: train/train_samples_per_second
+ 5: 1
+ 6:
+ - 1
+ - 1: train/train_steps_per_second
+ 5: 1
+ 6:
+ - 1
+ - 1: train/total_flos
+ 5: 1
+ 6:
+ - 1
+ - 1: train/train_loss
+ 5: 1
+ 6:
+ - 1
 vocab_size:
 desc: null
 value: 256000
wandb/run-20240226_101059-z4vsrt4l/logs/debug-internal.log CHANGED
@@ -359,3 +359,29 @@
 2024-02-26 10:21:38,924 INFO Thread-12 :5808 [dir_watcher.py:_on_file_modified():288] file/dir modified: /content/gdrive/MyDrive/LLM/Gemma-7B-Finetuning-JCS/wandb/run-20240226_101059-z4vsrt4l/files/wandb-summary.json
 2024-02-26 10:21:42,731 DEBUG HandlerThread:5808 [handler.py:handle_request():146] handle_request: status_report
 2024-02-26 10:21:47,732 DEBUG HandlerThread:5808 [handler.py:handle_request():146] handle_request: status_report
+ 2024-02-26 10:21:50,553 DEBUG HandlerThread:5808 [handler.py:handle_request():146] handle_request: pause
+ 2024-02-26 10:21:50,554 INFO HandlerThread:5808 [handler.py:handle_request_pause():708] stopping system metrics thread
+ 2024-02-26 10:21:50,554 INFO HandlerThread:5808 [system_monitor.py:finish():203] Stopping system monitor
+ 2024-02-26 10:21:50,557 DEBUG SystemMonitor:5808 [system_monitor.py:_start():179] Finished system metrics aggregation loop
+ 2024-02-26 10:21:50,565 DEBUG SystemMonitor:5808 [system_monitor.py:_start():183] Publishing last batch of metrics
+ 2024-02-26 10:21:50,557 INFO HandlerThread:5808 [interfaces.py:finish():202] Joined cpu monitor
+ 2024-02-26 10:21:50,567 INFO HandlerThread:5808 [interfaces.py:finish():202] Joined disk monitor
+ 2024-02-26 10:21:50,576 INFO HandlerThread:5808 [interfaces.py:finish():202] Joined gpu monitor
+ 2024-02-26 10:21:50,576 INFO HandlerThread:5808 [interfaces.py:finish():202] Joined memory monitor
+ 2024-02-26 10:21:50,577 INFO HandlerThread:5808 [interfaces.py:finish():202] Joined network monitor
+ 2024-02-26 10:21:50,578 DEBUG SenderThread:5808 [sender.py:send():382] send: stats
+ 2024-02-26 10:21:51,012 DEBUG HandlerThread:5808 [handler.py:handle_request():146] handle_request: internal_messages
+ 2024-02-26 10:21:51,013 DEBUG HandlerThread:5808 [handler.py:handle_request():146] handle_request: stop_status
+ 2024-02-26 10:21:51,014 DEBUG SenderThread:5808 [sender.py:send_request():409] send_request: stop_status
+ 2024-02-26 10:21:53,235 DEBUG HandlerThread:5808 [handler.py:handle_request():146] handle_request: status_report
+ 2024-02-26 10:21:58,240 DEBUG HandlerThread:5808 [handler.py:handle_request():146] handle_request: status_report
+ 2024-02-26 10:21:58,560 DEBUG HandlerThread:5808 [handler.py:handle_request():146] handle_request: resume
+ 2024-02-26 10:21:58,561 INFO HandlerThread:5808 [handler.py:handle_request_resume():699] starting system metrics thread
+ 2024-02-26 10:21:58,561 INFO HandlerThread:5808 [system_monitor.py:start():194] Starting system monitor
+ 2024-02-26 10:21:58,561 INFO SystemMonitor:5808 [system_monitor.py:_start():158] Starting system asset monitoring threads
+ 2024-02-26 10:21:58,563 INFO SystemMonitor:5808 [interfaces.py:start():190] Started cpu monitoring
+ 2024-02-26 10:21:58,567 INFO SystemMonitor:5808 [interfaces.py:start():190] Started disk monitoring
+ 2024-02-26 10:21:58,568 INFO SystemMonitor:5808 [interfaces.py:start():190] Started gpu monitoring
+ 2024-02-26 10:21:58,574 INFO SystemMonitor:5808 [interfaces.py:start():190] Started memory monitoring
+ 2024-02-26 10:21:58,574 INFO SystemMonitor:5808 [interfaces.py:start():190] Started network monitoring
+ 2024-02-26 10:21:58,975 INFO Thread-12 :5808 [dir_watcher.py:_on_file_modified():288] file/dir modified: /content/gdrive/MyDrive/LLM/Gemma-7B-Finetuning-JCS/wandb/run-20240226_101059-z4vsrt4l/files/config.yaml
wandb/run-20240226_101059-z4vsrt4l/logs/debug.log CHANGED
@@ -28,3 +28,6 @@ config: {}
 2024-02-26 10:11:02,661 INFO MainThread:149 [wandb_run.py:_redirect():2186] Redirects installed.
 2024-02-26 10:11:02,663 INFO MainThread:149 [wandb_init.py:init():847] run started, returning control to user process
  2024-02-26 10:11:02,670 INFO MainThread:149 [wandb_run.py:_config_callback():1343] config_cb None None {'vocab_size': 256000, 'max_position_embeddings': 8192, 'hidden_size': 3072, 'intermediate_size': 24576, 'num_hidden_layers': 28, 'num_attention_heads': 16, 'head_dim': 256, 'num_key_value_heads': 16, 'hidden_act': 'gelu', 'initializer_range': 0.02, 'rms_norm_eps': 1e-06, 'use_cache': False, 'rope_theta': 10000.0, 'attention_bias': False, 'attention_dropout': 0.0, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'bfloat16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['GemmaForCausalLM'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': 2, 'pad_token_id': 0, 'eos_token_id': 1, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'google/gemma-7b', 'transformers_version': '4.38.1', 'model_type': 'gemma', 'rope_scaling': None, 'quantization_config': {'quant_method': 'QuantizationMethod.BITS_AND_BYTES', '_load_in_8bit': False, '_load_in_4bit': True, 'llm_int8_threshold': 6.0, 'llm_int8_skip_modules': None, 'llm_int8_enable_fp32_cpu_offload': False, 'llm_int8_has_fp16_weight': False, 'bnb_4bit_quant_type': 'nf4', 'bnb_4bit_use_double_quant': True, 'bnb_4bit_compute_dtype': 'bfloat16', 'load_in_4bit': True, 'load_in_8bit': False}, 'output_dir': '/content/gdrive/MyDrive/LLM/Gemma-7B-Finetuning-JCS', 'overwrite_output_dir': False, 'do_train': False, 'do_eval': False, 'do_predict': False, 'evaluation_strategy': 'no', 'prediction_loss_only': False, 'per_device_train_batch_size': 2, 'per_device_eval_batch_size': 8, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 2, 'eval_accumulation_steps': None, 'eval_delay': 0, 'learning_rate': 0.0002, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 0.3, 'num_train_epochs': 3.0, 'max_steps': 60, 'lr_scheduler_type': 'cosine', 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.03, 'warmup_steps': 0, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': '/content/gdrive/MyDrive/LLM/Gemma-7B-Finetuning-JCS/runs/Feb26_10-10-29_30a0ffea74aa', 'logging_strategy': 'steps', 'logging_first_step': False, 'logging_steps': 10, 'logging_nan_inf_filter': True, 'save_strategy': 'steps', 'save_steps': 10, 'save_total_limit': None, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'no_cuda': False, 'use_cpu': 
False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': False, 'fp16': False, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': False, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': None, 'dataloader_num_workers': 0, 'dataloader_prefetch_factor': None, 'past_index': -1, 'run_name': '/content/gdrive/MyDrive/LLM/Gemma-7B-Finetuning-JCS', 'disable_tqdm': False, 'remove_unused_columns': True, 'label_names': None, 'load_best_model_at_end': False, 'metric_for_best_model': None, 'greater_is_better': None, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'accelerator_config': {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': 'paged_adamw_32bit', 'optim_args': None, 'adafactor': False, 'group_by_length': True, 'length_column_name': 'length', 'report_to': ['tensorboard', 'wandb'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': True, 'resume_from_checkpoint': None, 'hub_model_id': None, 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'hub_private_repo': False, 'hub_always_push': False, 'gradient_checkpointing': False, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'fp16_backend': 'auto', 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': None, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None}
+ 2024-02-26 10:21:50,549 INFO MainThread:149 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-02-26 10:21:50,550 INFO MainThread:149 [wandb_init.py:_pause_backend():437] pausing backend
+ 2024-02-26 10:21:58,558 INFO MainThread:149 [wandb_init.py:_resume_backend():442] resuming backend