---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 81ee6aa8-1abc-4905-8446-40b88b66ce39
  results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - e8ca0ac66aa11e96_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/e8ca0ac66aa11e96_train_data.json
  type:
    field_instruction: Hausa
    field_output: English
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dimasik87/81ee6aa8-1abc-4905-8446-40b88b66ce39
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
  0: 70GiB
max_steps: 50
micro_batch_size: 1
mlflow_experiment_name: /tmp/e8ca0ac66aa11e96_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 4
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 2028
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 81ee6aa8-1abc-4905-8446-40b88b66ce39
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 81ee6aa8-1abc-4905-8446-40b88b66ce39
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```

</details><br>
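For reference, the `datasets` entry above consumes a custom JSON file whose records hold the source text under the key named by `field_instruction` (`Hausa`) and the target text under the key named by `field_output` (`English`). The sketch below shows what building such a file might look like; the two sentence pairs are invented for illustration and are not from the actual training data.

```python
import json

# Hypothetical records for illustration only; the real contents of
# e8ca0ac66aa11e96_train_data.json are not part of this card.
# Key names come from the config (field_instruction / field_output).
records = [
    {"Hausa": "Ina kwana?", "English": "Good morning."},
    {"Hausa": "Na gode.", "English": "Thank you."},
]

# Written as JSON lines, one object per line; the Hugging Face datasets
# JSON loader that Axolotl uses (ds_type: json) accepts this layout.
with open("e8ca0ac66aa11e96_train_data.json", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```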

# 81ee6aa8-1abc-4905-8446-40b88b66ce39

This model is a LoRA fine-tune of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on a Hausa-to-English translation dataset (see the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 3.2479

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.969        | 0.0003 | 1    | 11.5726         |
| 9.5194        | 0.0012 | 4    | 11.4298         |
| 9.8745        | 0.0024 | 8    | 9.6370          |
| 5.6577        | 0.0036 | 12   | 5.3106          |
| 2.7349        | 0.0048 | 16   | 4.8557          |
| 3.5522        | 0.0060 | 20   | 4.0696          |
| 3.0378        | 0.0072 | 24   | 3.7071          |
| 3.2366        | 0.0084 | 28   | 3.5185          |
| 3.5875        | 0.0096 | 32   | 3.4018          |
| 4.0769        | 0.0108 | 36   | 3.3534          |
| 2.8831        | 0.0120 | 40   | 3.2889          |
| 3.5078        | 0.0132 | 44   | 3.2595          |
| 2.8259        | 0.0144 | 48   | 3.2479          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
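
## Usage

The card does not ship a usage snippet. The following is a minimal loading sketch, assuming the LoRA adapter and tokenizer were pushed to the `hub_model_id` from the config (`dimasik87/81ee6aa8-1abc-4905-8446-40b88b66ce39`); the Hausa prompt is a hypothetical example, not taken from the training data.

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "dimasik87/81ee6aa8-1abc-4905-8446-40b88b66ce39"

# AutoPeftModelForCausalLM reads the adapter config and loads the base model
# (unsloth/Llama-3.1-Storm-8B) underneath it automatically.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Assumes the tokenizer was pushed alongside the adapter; if not, load it
# from the base model repo instead.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# The config's prompt format is '{instruction}', i.e. the raw Hausa text.
prompt = "Ina kwana?"  # hypothetical Hausa input
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)  # matches eval_max_new_tokens

# Strip the prompt tokens and print only the generated continuation.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```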