See axolotl config

axolotl version: `0.4.1`

```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
chat_template: chatml
datasets:
  - path: Howard881010/medical
    type: alpaca
    train_on_split: train
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./finetune/outputs/medical
adapter: qlora
lora_model_dir:
sequence_len: 1500
sample_packing: false
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: finetune
wandb_entity:
wandb_watch:
wandb_name: medical
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 1
num_epochs: 10
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
eval_sample_packing: false
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
# For finetune
seed: 42
```
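The QLoRA stanza above (4-bit base weights, rank-32 LoRA on all linear projections) corresponds roughly to the following PEFT setup. This is a minimal illustrative sketch, not axolotl's internal code; anything not taken from the config values is an assumption.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# load_in_4bit: true
bnb_config = BitsAndBytesConfig(load_in_4bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",  # base_model
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,            # bf16: auto
    device_map="auto",
)
# Prepare the quantized model for training (casts norms, enables input grads),
# in line with gradient_checkpointing: true.
model = prepare_model_for_kbit_training(model, use_gradient_checkpointing=True)

lora_config = LoraConfig(
    r=32,                         # lora_r
    lora_alpha=16,                # lora_alpha
    lora_dropout=0.05,            # lora_dropout
    target_modules="all-linear",  # lora_target_linear: true
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```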
# finetune/outputs/medical
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the Howard881010/medical dataset. It achieves the following results on the evaluation set:

- Loss: 2.4607

Note that this is the final-epoch value: validation loss bottoms out at 0.9957 (step 536) and climbs steadily thereafter while training loss approaches zero, so earlier checkpoints are likely to generalize better (see the training results below).
## Model description
More information needed
## Intended uses & limitations
More information needed
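Pending more detail, here is a hedged inference sketch. It assumes the LoRA adapter is published as Rose-STL-Lab/medical (the repo this card belongs to) and that the saved tokenizer carries the ChatML template used for fine-tuning; adjust both if that does not match your copy.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, BitsAndBytesConfig

adapter_id = "Rose-STL-Lab/medical"  # assumption: the adapter repo id

# Load the base model in 4-bit (mirroring training) with the adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# chat_template: chatml was used during fine-tuning.
messages = [{"role": "user", "content": "What are common symptoms of type 2 diabetes?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```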
## Training and evaluation data
More information needed
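What the config does establish: training used the train split of Howard881010/medical with `type: alpaca` and a 5% validation split (`val_set_size: 0.05`). A hypothetical record of the shape the alpaca format expects (the dataset's actual contents are not documented here):

```python
# Hypothetical alpaca-format record; field contents are illustrative only.
example_record = {
    "instruction": "Answer the following medical question.",
    "input": "What are common symptoms of type 2 diabetes?",  # may be empty
    "output": "Increased thirst, frequent urination, fatigue, and blurred vision.",
}
```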
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: paged AdamW (`paged_adamw_32bit`) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 10
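The total batch sizes above follow from the per-device batch size, gradient accumulation, and device count; a quick sanity check:

```python
# Effective batch sizes implied by the hyperparameters above.
micro_batch_size = 1             # train_batch_size per device
gradient_accumulation_steps = 2
num_devices = 8

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = 1 * num_devices  # evaluation does not accumulate gradients

print(total_train_batch_size)  # 16
print(total_eval_batch_size)   # 8
```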
### Training results
Training Loss | Epoch | Step | Validation Loss |
---|---|---|---|
2.581 | 0.0005 | 1 | 2.3899 |
0.7439 | 0.2502 | 536 | 0.9957 |
0.6364 | 0.5004 | 1072 | 1.0250 |
0.4046 | 0.7505 | 1608 | 1.0972 |
0.2551 | 1.0007 | 2144 | 1.2306 |
0.1894 | 1.2509 | 2680 | 1.2541 |
0.1015 | 1.5011 | 3216 | 1.3733 |
0.1441 | 1.7512 | 3752 | 1.4618 |
0.0604 | 2.0014 | 4288 | 1.5229 |
0.058 | 2.2516 | 4824 | 1.5635 |
0.0669 | 2.5018 | 5360 | 1.6184 |
0.0604 | 2.7519 | 5896 | 1.6690 |
0.0352 | 3.0021 | 6432 | 1.6985 |
0.0296 | 3.2523 | 6968 | 1.7366 |
0.0262 | 3.5025 | 7504 | 1.7928 |
0.0214 | 3.7526 | 8040 | 1.8352 |
0.0134 | 4.0028 | 8576 | 1.9588 |
0.0108 | 4.2530 | 9112 | 1.9946 |
0.0112 | 4.5032 | 9648 | 1.9847 |
0.0107 | 4.7533 | 10184 | 1.9900 |
0.0052 | 5.0035 | 10720 | 2.0806 |
0.0067 | 5.2537 | 11256 | 2.1444 |
0.0053 | 5.5039 | 11792 | 2.2294 |
0.0055 | 5.7540 | 12328 | 2.3097 |
0.0067 | 6.0042 | 12864 | 2.3069 |
0.0004 | 6.2544 | 13400 | 2.3435 |
0.0005 | 6.5046 | 13936 | 2.2964 |
0.0004 | 6.7547 | 14472 | 2.3073 |
0.0002 | 7.0049 | 15008 | 2.3668 |
0.0002 | 7.2551 | 15544 | 2.3933 |
0.0001 | 7.5053 | 16080 | 2.4192 |
0.0002 | 7.7554 | 16616 | 2.4246 |
0.0001 | 8.0056 | 17152 | 2.4351 |
0.0001 | 8.2558 | 17688 | 2.4445 |
0.0002 | 8.5060 | 18224 | 2.4529 |
0.0002 | 8.7561 | 18760 | 2.4571 |
0.0001 | 9.0063 | 19296 | 2.4593 |
0.0001 | 9.2565 | 19832 | 2.4603 |
0.0001 | 9.5067 | 20368 | 2.4605 |
0.0013 | 9.7568 | 20904 | 2.4607 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.43.1
- PyTorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1