|
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-72B
tags:
- generated_from_trainer
model-index:
- name: EVA-Qwen2.5-72B-SFFT-v0.2
  results: []
---
|
|
|
|
|
|
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) |
|
<details><summary>See axolotl config</summary> |
|
|
|
axolotl version: `0.4.1` |
|
```yaml |
|
base_model: Qwen/Qwen2.5-72B |
|
|
|
load_in_8bit: false |
|
load_in_4bit: false |
|
strict: false |
|
|
|
plugins: |
|
- axolotl.integrations.liger.LigerPlugin |
|
liger_rope: true |
|
liger_rms_norm: true |
|
liger_swiglu: true |
|
liger_fused_linear_cross_entropy: true |
|
|
|
# plugins: |
|
# - axolotl.integrations.spectrum.SpectrumPlugin |
|
|
|
# spectrum_top_fraction: 0.5 |
|
# # Optional if using a pre-scanned model as your base_model. Useful if using a model mirror |
|
# spectrum_model_name: Qwen/Qwen2.5-32B |
|
|
|
datasets: |
|
- path: datasets/Celeste_Filtered_utf8fix.jsonl |
|
type: sharegpt |
|
- path: datasets/deduped_not_samantha_norefusals.jsonl |
|
type: sharegpt |
|
- path: datasets/deduped_SynthRP-Gens_processed_ShareGPT_converted_cleaned.jsonl |
|
type: sharegpt |
|
- path: datasets/deduped_Synthstruct-Gens_processed_sharegpt_converted_cleaned.jsonl |
|
type: sharegpt |
|
- path: datasets/Gryphe-4o-WP-filtered-sharegpt_utf8fix.jsonl |
|
type: sharegpt |
|
- path: datasets/opus-instruct-22k-no_refusals-filtered_utf8fix.jsonl |
|
type: sharegpt |
|
- path: datasets/Sonnet3-5-charcard-names-filtered-sharegpt_utf8fix.jsonl |
|
type: sharegpt |
|
- path: datasets/SystemChat_subset_filtered_sharegpt_utf8fix.jsonl |
|
type: sharegpt |
|
|
|
chat_template: chatml |
|
shuffle_merged_datasets: true |
|
val_set_size: 0.001 |
|
output_dir: EVA-Qwen2.5-72B-SFFT-v0.2 |
|
|
|
sequence_len: 10240 |
|
sample_packing: true |
|
eval_sample_packing: false |
|
pad_to_sequence_len: false |
|
|
|
# adapter: qlora |
|
# lora_model_dir: |
|
# lora_r: 64 |
|
# lora_alpha: 128 |
|
# lora_dropout: 0.05 |
|
# lora_target_linear: true |
|
# peft_use_dora: true |
|
|
|
unfrozen_parameters: |
|
- ^lm_head.weight$ |
|
- ^model.embed_tokens.weight$ |
|
# mlp.down_proj layers |
|
- model.layers.62.mlp.down_proj |
|
- model.layers.64.mlp.down_proj |
|
- model.layers.63.mlp.down_proj |
|
- model.layers.66.mlp.down_proj |
|
- model.layers.65.mlp.down_proj |
|
- model.layers.67.mlp.down_proj |
|
- model.layers.68.mlp.down_proj |
|
- model.layers.31.mlp.down_proj |
|
- model.layers.60.mlp.down_proj |
|
- model.layers.69.mlp.down_proj |
|
- model.layers.61.mlp.down_proj |
|
- model.layers.59.mlp.down_proj |
|
- model.layers.30.mlp.down_proj |
|
- model.layers.70.mlp.down_proj |
|
- model.layers.32.mlp.down_proj |
|
- model.layers.34.mlp.down_proj |
|
- model.layers.33.mlp.down_proj |
|
- model.layers.76.mlp.down_proj |
|
- model.layers.72.mlp.down_proj |
|
- model.layers.71.mlp.down_proj |
|
- model.layers.58.mlp.down_proj |
|
- model.layers.75.mlp.down_proj |
|
- model.layers.29.mlp.down_proj |
|
- model.layers.56.mlp.down_proj |
|
- model.layers.26.mlp.down_proj |
|
- model.layers.35.mlp.down_proj |
|
- model.layers.28.mlp.down_proj |
|
- model.layers.57.mlp.down_proj |
|
- model.layers.77.mlp.down_proj |
|
- model.layers.36.mlp.down_proj |
|
- model.layers.27.mlp.down_proj |
|
- model.layers.25.mlp.down_proj |
|
- model.layers.78.mlp.down_proj |
|
- model.layers.37.mlp.down_proj |
|
- model.layers.73.mlp.down_proj |
|
- model.layers.55.mlp.down_proj |
|
- model.layers.54.mlp.down_proj |
|
- model.layers.74.mlp.down_proj |
|
- model.layers.24.mlp.down_proj |
|
- model.layers.53.mlp.down_proj |
|
# mlp.gate_proj layers |
|
- model.layers.78.mlp.gate_proj |
|
- model.layers.77.mlp.gate_proj |
|
- model.layers.76.mlp.gate_proj |
|
- model.layers.79.mlp.gate_proj |
|
- model.layers.75.mlp.gate_proj |
|
- model.layers.74.mlp.gate_proj |
|
- model.layers.73.mlp.gate_proj |
|
- model.layers.72.mlp.gate_proj |
|
- model.layers.71.mlp.gate_proj |
|
- model.layers.70.mlp.gate_proj |
|
- model.layers.69.mlp.gate_proj |
|
- model.layers.57.mlp.gate_proj |
|
- model.layers.54.mlp.gate_proj |
|
- model.layers.55.mlp.gate_proj |
|
- model.layers.68.mlp.gate_proj |
|
- model.layers.63.mlp.gate_proj |
|
- model.layers.53.mlp.gate_proj |
|
- model.layers.44.mlp.gate_proj |
|
- model.layers.45.mlp.gate_proj |
|
- model.layers.49.mlp.gate_proj |
|
- model.layers.58.mlp.gate_proj |
|
- model.layers.46.mlp.gate_proj |
|
- model.layers.56.mlp.gate_proj |
|
- model.layers.67.mlp.gate_proj |
|
- model.layers.62.mlp.gate_proj |
|
- model.layers.50.mlp.gate_proj |
|
- model.layers.64.mlp.gate_proj |
|
- model.layers.52.mlp.gate_proj |
|
- model.layers.40.mlp.gate_proj |
|
- model.layers.43.mlp.gate_proj |
|
- model.layers.48.mlp.gate_proj |
|
- model.layers.66.mlp.gate_proj |
|
- model.layers.47.mlp.gate_proj |
|
- model.layers.59.mlp.gate_proj |
|
- model.layers.65.mlp.gate_proj |
|
- model.layers.61.mlp.gate_proj |
|
- model.layers.60.mlp.gate_proj |
|
- model.layers.42.mlp.gate_proj |
|
- model.layers.51.mlp.gate_proj |
|
- model.layers.41.mlp.gate_proj |
|
# mlp.up_proj layers |
|
- model.layers.70.mlp.up_proj |
|
- model.layers.69.mlp.up_proj |
|
- model.layers.71.mlp.up_proj |
|
- model.layers.68.mlp.up_proj |
|
- model.layers.72.mlp.up_proj |
|
- model.layers.67.mlp.up_proj |
|
- model.layers.66.mlp.up_proj |
|
- model.layers.73.mlp.up_proj |
|
- model.layers.46.mlp.up_proj |
|
- model.layers.63.mlp.up_proj |
|
- model.layers.75.mlp.up_proj |
|
- model.layers.76.mlp.up_proj |
|
- model.layers.74.mlp.up_proj |
|
- model.layers.45.mlp.up_proj |
|
- model.layers.62.mlp.up_proj |
|
- model.layers.64.mlp.up_proj |
|
- model.layers.65.mlp.up_proj |
|
- model.layers.44.mlp.up_proj |
|
- model.layers.53.mlp.up_proj |
|
- model.layers.47.mlp.up_proj |
|
- model.layers.49.mlp.up_proj |
|
- model.layers.48.mlp.up_proj |
|
- model.layers.57.mlp.up_proj |
|
- model.layers.43.mlp.up_proj |
|
- model.layers.42.mlp.up_proj |
|
- model.layers.56.mlp.up_proj |
|
- model.layers.61.mlp.up_proj |
|
- model.layers.54.mlp.up_proj |
|
- model.layers.40.mlp.up_proj |
|
- model.layers.55.mlp.up_proj |
|
- model.layers.77.mlp.up_proj |
|
- model.layers.60.mlp.up_proj |
|
- model.layers.41.mlp.up_proj |
|
- model.layers.35.mlp.up_proj |
|
- model.layers.37.mlp.up_proj |
|
- model.layers.58.mlp.up_proj |
|
- model.layers.34.mlp.up_proj |
|
- model.layers.38.mlp.up_proj |
|
- model.layers.33.mlp.up_proj |
|
- model.layers.39.mlp.up_proj |
|
# self_attn.k_proj layers |
|
- model.layers.36.self_attn.k_proj |
|
- model.layers.79.self_attn.k_proj |
|
- model.layers.35.self_attn.k_proj |
|
- model.layers.34.self_attn.k_proj |
|
- model.layers.37.self_attn.k_proj |
|
- model.layers.33.self_attn.k_proj |
|
- model.layers.38.self_attn.k_proj |
|
- model.layers.39.self_attn.k_proj |
|
- model.layers.74.self_attn.k_proj |
|
- model.layers.77.self_attn.k_proj |
|
- model.layers.41.self_attn.k_proj |
|
- model.layers.69.self_attn.k_proj |
|
- model.layers.32.self_attn.k_proj |
|
- model.layers.78.self_attn.k_proj |
|
- model.layers.30.self_attn.k_proj |
|
- model.layers.70.self_attn.k_proj |
|
- model.layers.25.self_attn.k_proj |
|
- model.layers.42.self_attn.k_proj |
|
- model.layers.29.self_attn.k_proj |
|
- model.layers.31.self_attn.k_proj |
|
- model.layers.68.self_attn.k_proj |
|
- model.layers.66.self_attn.k_proj |
|
- model.layers.22.self_attn.k_proj |
|
- model.layers.65.self_attn.k_proj |
|
- model.layers.44.self_attn.k_proj |
|
- model.layers.40.self_attn.k_proj |
|
- model.layers.63.self_attn.k_proj |
|
- model.layers.23.self_attn.k_proj |
|
- model.layers.28.self_attn.k_proj |
|
- model.layers.24.self_attn.k_proj |
|
- model.layers.26.self_attn.k_proj |
|
- model.layers.67.self_attn.k_proj |
|
- model.layers.75.self_attn.k_proj |
|
- model.layers.27.self_attn.k_proj |
|
- model.layers.57.self_attn.k_proj |
|
- model.layers.64.self_attn.k_proj |
|
- model.layers.71.self_attn.k_proj |
|
- model.layers.61.self_attn.k_proj |
|
- model.layers.72.self_attn.k_proj |
|
- model.layers.73.self_attn.k_proj |
|
# self_attn.o_proj layers |
|
- model.layers.69.self_attn.o_proj |
|
- model.layers.39.self_attn.o_proj |
|
- model.layers.16.self_attn.o_proj |
|
- model.layers.14.self_attn.o_proj |
|
- model.layers.19.self_attn.o_proj |
|
- model.layers.42.self_attn.o_proj |
|
- model.layers.12.self_attn.o_proj |
|
- model.layers.15.self_attn.o_proj |
|
- model.layers.17.self_attn.o_proj |
|
- model.layers.38.self_attn.o_proj |
|
- model.layers.23.self_attn.o_proj |
|
- model.layers.22.self_attn.o_proj |
|
- model.layers.13.self_attn.o_proj |
|
- model.layers.29.self_attn.o_proj |
|
- model.layers.41.self_attn.o_proj |
|
- model.layers.44.self_attn.o_proj |
|
- model.layers.46.self_attn.o_proj |
|
- model.layers.45.self_attn.o_proj |
|
- model.layers.43.self_attn.o_proj |
|
- model.layers.49.self_attn.o_proj |
|
- model.layers.30.self_attn.o_proj |
|
- model.layers.26.self_attn.o_proj |
|
- model.layers.25.self_attn.o_proj |
|
- model.layers.37.self_attn.o_proj |
|
- model.layers.47.self_attn.o_proj |
|
- model.layers.11.self_attn.o_proj |
|
- model.layers.18.self_attn.o_proj |
|
- model.layers.28.self_attn.o_proj |
|
- model.layers.20.self_attn.o_proj |
|
- model.layers.27.self_attn.o_proj |
|
- model.layers.53.self_attn.o_proj |
|
- model.layers.52.self_attn.o_proj |
|
- model.layers.35.self_attn.o_proj |
|
- model.layers.71.self_attn.o_proj |
|
- model.layers.10.self_attn.o_proj |
|
- model.layers.3.self_attn.o_proj |
|
- model.layers.21.self_attn.o_proj |
|
- model.layers.24.self_attn.o_proj |
|
- model.layers.68.self_attn.o_proj |
|
- model.layers.48.self_attn.o_proj |
|
# self_attn.q_proj layers |
|
- model.layers.1.self_attn.q_proj |
|
- model.layers.2.self_attn.q_proj |
|
- model.layers.3.self_attn.q_proj |
|
- model.layers.0.self_attn.q_proj |
|
- model.layers.5.self_attn.q_proj |
|
- model.layers.4.self_attn.q_proj |
|
- model.layers.6.self_attn.q_proj |
|
- model.layers.8.self_attn.q_proj |
|
- model.layers.7.self_attn.q_proj |
|
- model.layers.9.self_attn.q_proj |
|
- model.layers.10.self_attn.q_proj |
|
- model.layers.68.self_attn.q_proj |
|
- model.layers.25.self_attn.q_proj |
|
- model.layers.12.self_attn.q_proj |
|
- model.layers.54.self_attn.q_proj |
|
- model.layers.55.self_attn.q_proj |
|
- model.layers.61.self_attn.q_proj |
|
- model.layers.18.self_attn.q_proj |
|
- model.layers.49.self_attn.q_proj |
|
- model.layers.66.self_attn.q_proj |
|
- model.layers.72.self_attn.q_proj |
|
- model.layers.11.self_attn.q_proj |
|
- model.layers.52.self_attn.q_proj |
|
- model.layers.64.self_attn.q_proj |
|
- model.layers.15.self_attn.q_proj |
|
- model.layers.60.self_attn.q_proj |
|
- model.layers.50.self_attn.q_proj |
|
- model.layers.59.self_attn.q_proj |
|
- model.layers.53.self_attn.q_proj |
|
- model.layers.48.self_attn.q_proj |
|
- model.layers.57.self_attn.q_proj |
|
- model.layers.70.self_attn.q_proj |
|
- model.layers.17.self_attn.q_proj |
|
- model.layers.67.self_attn.q_proj |
|
- model.layers.71.self_attn.q_proj |
|
- model.layers.62.self_attn.q_proj |
|
- model.layers.51.self_attn.q_proj |
|
- model.layers.19.self_attn.q_proj |
|
- model.layers.58.self_attn.q_proj |
|
- model.layers.13.self_attn.q_proj |
|
# self_attn.v_proj layers |
|
- model.layers.23.self_attn.v_proj |
|
- model.layers.25.self_attn.v_proj |
|
- model.layers.26.self_attn.v_proj |
|
- model.layers.27.self_attn.v_proj |
|
- model.layers.28.self_attn.v_proj |
|
- model.layers.29.self_attn.v_proj |
|
- model.layers.30.self_attn.v_proj |
|
- model.layers.31.self_attn.v_proj |
|
- model.layers.34.self_attn.v_proj |
|
- model.layers.35.self_attn.v_proj |
|
- model.layers.36.self_attn.v_proj |
|
- model.layers.37.self_attn.v_proj |
|
- model.layers.38.self_attn.v_proj |
|
- model.layers.42.self_attn.v_proj |
|
- model.layers.48.self_attn.v_proj |
|
- model.layers.57.self_attn.v_proj |
|
- model.layers.58.self_attn.v_proj |
|
- model.layers.61.self_attn.v_proj |
|
- model.layers.63.self_attn.v_proj |
|
- model.layers.64.self_attn.v_proj |
|
- model.layers.65.self_attn.v_proj |
|
- model.layers.66.self_attn.v_proj |
|
- model.layers.69.self_attn.v_proj |
|
- model.layers.70.self_attn.v_proj |
|
- model.layers.74.self_attn.v_proj |
|
- model.layers.75.self_attn.v_proj |
|
- model.layers.72.self_attn.v_proj |
|
- model.layers.39.self_attn.v_proj |
|
- model.layers.41.self_attn.v_proj |
|
- model.layers.40.self_attn.v_proj |
|
- model.layers.33.self_attn.v_proj |
|
- model.layers.59.self_attn.v_proj |
|
- model.layers.16.self_attn.v_proj |
|
- model.layers.15.self_attn.v_proj |
|
- model.layers.76.self_attn.v_proj |
|
- model.layers.24.self_attn.v_proj |
|
- model.layers.68.self_attn.v_proj |
|
- model.layers.67.self_attn.v_proj |
|
- model.layers.55.self_attn.v_proj |
|
- model.layers.44.self_attn.v_proj |
|
|
|
|
|
|
|
wandb_project: EVA-Qwen2.5-72B-SFFT-v0.2 |
|
wandb_entity: |
|
wandb_watch: |
|
wandb_name: Unit-02 |
|
wandb_log_model: |
|
|
|
gradient_accumulation_steps: 8 |
|
micro_batch_size: 1 |
|
num_epochs: 3 |
|
optimizer: paged_ademamix_8bit |
|
lr_scheduler: cosine |
|
learning_rate: 0.00003 |
|
max_grad_norm: 1.5 |
|
|
|
train_on_inputs: false |
|
group_by_length: false |
|
bf16: auto |
|
fp16: |
|
tf32: false |
|
|
|
gradient_checkpointing: "unsloth" |
|
# gradient_checkpointing_kwargs: |
|
# use_reentrant: true |
|
early_stopping_patience: |
|
resume_from_checkpoint: EVA-Qwen2.5-72B-SFFT-v0.2/checkpoint-128 |
|
local_rank: |
|
logging_steps: 1 |
|
xformers_attention: |
|
flash_attention: true |
|
|
|
warmup_steps: 20 |
|
evals_per_epoch: 4 |
|
saves_per_epoch: 4 |
|
save_safetensors: true |
|
save_total_limit: 1 |
|
hub_model_id: |
|
hub_strategy: |
|
debug: |
|
deepspeed: deepspeed_configs/zero3_bf16_cpuoffload_params.json |
|
weight_decay: 0.12 |
|
# fsdp: |
|
# - full_shard |
|
# - auto_wrap |
|
# fsdp_config: |
|
# fsdp_limit_all_gathers: true |
|
# fsdp_sync_module_states: false |
|
# fsdp_offload_params: true |
|
# fsdp_cpu_ram_efficient_loading: true |
|
# fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP |
|
# fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer |
|
# fsdp_activation_checkpointing: true |
|
# fsdp_state_dict_type: SHARDED_STATE_DICT # Changed from FULL_STATE_DICT |
|
# fsdp_sharding_strategy: FULL_SHARD |
|
# fsdp_forward_prefetch: false # Added |
|
# fsdp_backward_prefetch: "BACKWARD_PRE" # Added |
|
# fsdp_backward_prefetch_limit: 1 # Added |
|
# fsdp_mixed_precision: BF16 # Added |
|
``` |
|
|
|
</details><br> |
|
|
|
# EVA-Qwen2.5-72B-SFFT-v0.2 |
|
|
|
This model is a fine-tuned version of [Qwen/Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B), trained on a mix of filtered roleplay, creative-writing, and instruction datasets (listed under `datasets` in the axolotl config above). |
|
It achieves the following results on the evaluation set: |
|
- Loss: 1.0002 |
|
|
|
## Model description |
|
|
|
EVA-Qwen2.5-72B-SFFT-v0.2 is a selective full-finetune (SFFT) of Qwen2.5-72B: instead of updating every weight, training unfreezes only the embeddings, the LM head, and the subset of attention and MLP projections listed under `unfrozen_parameters` in the config above (a Spectrum-style selection, judging by the commented-out `SpectrumPlugin` section). All other parameters keep their base-model values. |
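
To make that concrete, here is a minimal sketch of how such a pattern list can be applied to a model in plain PyTorch. It illustrates the idea only; it is not axolotl's implementation, and `apply_unfrozen_parameters` is a hypothetical helper.

```python
import re

# Patterns taken from the `unfrozen_parameters` list above (truncated here);
# anchored patterns match exactly, unanchored ones match as substrings.
unfrozen_patterns = [
    r"^lm_head.weight$",
    r"^model.embed_tokens.weight$",
    r"model.layers.62.mlp.down_proj",
    # ... remaining entries from the config ...
]

def apply_unfrozen_parameters(model, patterns):
    """Freeze every parameter except those matching one of `patterns`."""
    compiled = [re.compile(p) for p in patterns]
    for name, param in model.named_parameters():
        param.requires_grad = any(rx.search(name) for rx in compiled)
```

Only parameters with `requires_grad=True` receive gradients and optimizer state, which is what makes a selective finetune of a 72B model markedly cheaper in optimizer memory than a full finetune.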
|
|
|
## Intended uses & limitations |
|
|
|
The training mix is dominated by roleplay, character-card, writing-prompt, and instruction data, with several subsets filtered to remove refusals, so the model is aimed at creative writing and conversational roleplay rather than general assistant work. Prompts should use the ChatML format the model was trained with (`chat_template: chatml`). Use is subject to the base model's license (`license: other`, inherited from Qwen2.5-72B). |
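
As a quick start, here is a minimal inference sketch using the standard `transformers` API. The repo id is a placeholder and the sampling settings are illustrative assumptions, not author recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/EVA-Qwen2.5-72B-SFFT-v0.2"  # placeholder: use the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # training ran in bf16
    device_map="auto",           # shard the 72B weights across available GPUs
)

# The model was trained on ChatML-formatted conversations, so build prompts
# through the tokenizer's chat template rather than raw strings.
messages = [
    {"role": "system", "content": "You are a creative writing partner."},
    {"role": "user", "content": "Open a mystery story set on a night train."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

At bf16 the weights alone are roughly 145 GB, so multi-GPU or offloaded inference (or a quantized variant) is required in practice.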
|
|
|
## Training and evaluation data |
|
|
|
Training used the eight ShareGPT-format JSONL datasets listed in the config: filtered Celeste data, a de-duplicated no-refusals instruction set, SynthRP and Synthstruct generations, Gryphe's GPT-4o writing-prompt data, a filtered opus-instruct-22k, Sonnet 3.5 character-card data, and a filtered SystemChat subset. The merged data was shuffled, and evaluation used a random 0.1% hold-out (`val_set_size: 0.001`). |
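
For reference, axolotl's `type: sharegpt` loader expects each JSONL line to carry one conversation in the ShareGPT convention, roughly as below (the contents are invented placeholders):

```python
# Hypothetical shape of a single record in the JSONL files above:
# "from" names the speaker role, "value" holds the message text.
record = {
    "conversations": [
        {"from": "system", "value": "You are Mira, a ship's engineer. ..."},
        {"from": "human", "value": "The reactor is making that noise again."},
        {"from": "gpt", "value": "*wipes her hands* Again? Let's have a look."},
    ]
}
```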
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training: |
|
- learning_rate: 3e-05 |
|
- train_batch_size: 1 |
|
- eval_batch_size: 1 |
|
- seed: 42 |
|
- distributed_type: multi-GPU |
|
- num_devices: 8 |
|
- gradient_accumulation_steps: 8 |
|
- total_train_batch_size: 64 |
|
- total_eval_batch_size: 8 |
|
- optimizer: paged_ademamix_8bit |
 |
- weight_decay: 0.12 |
 |
- max_grad_norm: 1.5 |
|
- lr_scheduler_type: cosine |
|
- lr_scheduler_warmup_steps: 20 |
|
- num_epochs: 3 |
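
The effective batch size follows directly from the parallelism settings: total_train_batch_size = micro_batch_size × gradient_accumulation_steps × num_devices = 1 × 8 × 8 = 64 sequences per optimizer step (and 8 for evaluation, one per device).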
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3381 | 0.0078 | 1 | 1.2881 |
| 1.0803 | 0.2493 | 32 | 0.9745 |
| 0.8733 | 0.4985 | 64 | 0.9413 |
| 0.9273 | 0.7478 | 96 | 0.9224 |
| 0.8413 | 0.9971 | 128 | 0.9133 |
| 0.5342 | 1.2493 | 160 | 0.9449 |
| 0.5184 | 1.4985 | 192 | 0.9414 |
| 0.5177 | 1.7478 | 224 | 0.9300 |
| 0.5494 | 1.9971 | 256 | 0.9273 |
| 0.303 | 2.2449 | 288 | 0.9968 |
| 0.314 | 2.4946 | 320 | 1.0009 |
| 0.3022 | 2.7444 | 352 | 1.0004 |
| 0.2901 | 2.9941 | 384 | 1.0002 |
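
Validation loss bottoms out at 0.9133 near the end of epoch 1 and drifts upward through epochs 2 and 3 while training loss keeps falling, the usual signature of the model beginning to memorize the training mix; the epoch-1 checkpoints may therefore generalize best.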
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.45.1 |
|
- Pytorch 2.3.1+cu121 |
|
- Datasets 2.21.0 |
|
- Tokenizers 0.20.3 |
|
|