<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`

```yaml
base_model: AlekseyKorshuk/ultrachat-phi-2-sft-chatml
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true
hub_model_id: AlekseyKorshuk/ultrachat-evolcode-phi-2-sft-chatml
hub_strategy: every_save
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: AlekseyKorshuk/evol-codealpaca-v1-sft
type: sharegpt
conversation: chatml
dataset_prepared_path:
val_set_size: 0
output_dir: ./output
sequence_len: 2048
sample_packing: false
pad_to_sequence_len:
lora_r:
lora_alpha:
lora_dropout:
lora_target_modules:
lora_target_linear:
lora_fan_in_fan_out:
wandb_project: ui-thesis
wandb_entity:
wandb_watch:
wandb_name: ultrachat-evolcode-phi-2-sft-chatml
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 16
num_epochs: 1
optimizer: paged_adamw_8bit
adam_beta1: 0.9
adam_beta2: 0.95
max_grad_norm: 1.0
adam_epsilon: 0.00001
lr_scheduler: cosine
cosine_min_lr_ratio: 0.1
learning_rate: 2e-5
warmup_ratio: 0.1
weight_decay: 0.1
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: true
#bf16: false
#fp16: false
#tf32: false
#float16: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
evals_per_epoch: 0
eval_table_size: 8 # Approximate number of predictions sent to wandb, depending on batch size. Enabled when set above 0; default is 0
eval_table_max_new_tokens: 768 # Maximum number of new tokens generated per prediction sent to wandb; default is 128
eval_sample_packing: false
chat_template: chatml
saves_per_epoch: 5
save_total_limit: 1
seed: 42
debug:
deepspeed:
fsdp:
fsdp_config:
resize_token_embeddings_to_32x: true
```

</details>
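For reference, a run with this config can be launched with axolotl's CLI, e.g. `accelerate launch -m axolotl.cli.train config.yaml`, assuming the YAML above is saved as `config.yaml`.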
# ultrachat-evolcode-phi-2-sft-chatml
This model is a fine-tuned version of AlekseyKorshuk/ultrachat-phi-2-sft-chatml on the AlekseyKorshuk/evol-codealpaca-v1-sft dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
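Pending a fuller description, the checkpoint can be exercised directly with transformers. A minimal inference sketch, assuming the model was pushed to the `hub_model_id` above and that the tokenizer ships the ChatML chat template from the config:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo name taken from hub_model_id in the config above.
model_id = "AlekseyKorshuk/ultrachat-evolcode-phi-2-sft-chatml"

# trust_remote_code mirrors the config setting.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", trust_remote_code=True
)

# chat_template: chatml, so this should render
# <|im_start|>role\n...<|im_end|> turns plus a generation prompt.
messages = [
    {"role": "user", "content": "Write a Python function that checks if a string is a palindrome."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```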
## Training and evaluation data

The model was trained on AlekseyKorshuk/evol-codealpaca-v1-sft, a ShareGPT-format dataset rendered with the ChatML conversation template (see the config above). No validation split was held out (`val_set_size: 0`) and no evaluation ran during training (`evals_per_epoch: 0`).
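To inspect the training data, the dataset can be loaded with the datasets library. A minimal sketch; the `conversations` field with `from`/`value` keys is the conventional ShareGPT layout and is an assumption about this dataset's schema:

```python
from datasets import load_dataset

# Dataset path taken from the axolotl config above.
ds = load_dataset("AlekseyKorshuk/evol-codealpaca-v1-sft", split="train")

# ShareGPT-style records usually keep turns in a "conversations" list of
# {"from": ..., "value": ...} dicts -- the field names are an assumption here.
for turn in ds[0].get("conversations", []):
    print(f'{turn["from"]}: {turn["value"][:80]}')
```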
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Paged AdamW (8-bit, `paged_adamw_8bit`) with betas=(0.9, 0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 7
- num_epochs: 1
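The total batch sizes follow from the per-device settings and the device count; as a sanity check:

$$
\text{total\_train\_batch\_size} = \text{micro\_batch\_size} \times \text{grad\_accum} \times \text{num\_devices} = 16 \times 2 \times 4 = 128
$$

$$
\text{total\_eval\_batch\_size} = 16 \times 4 = 64
$$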
### Training results

No evaluation metrics were logged during training (`val_set_size: 0`, `evals_per_epoch: 0`), so no results table is available.
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
## Model tree

Base model: microsoft/phi-2, fine-tuned as AlekseyKorshuk/ultrachat-phi-2-sft-chatml, then further fine-tuned as this model (AlekseyKorshuk/ultrachat-evolcode-phi-2-sft-chatml).