
Built with Axolotl

Model description

WitchLM is cool! It is a fine-tune of Qwen/Qwen2-1.5B (1.54B parameters, BF16 Safetensors weights), trained with Axolotl.
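
A minimal usage sketch, assuming the checkpoint loads with the standard Transformers Auto classes; the prompt and generation settings below are illustrative, not taken from the original card:

```python
# Hypothetical usage example (prompt and generation settings are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/WitchLM-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are published in BF16
    device_map="auto",
)

inputs = tokenizer("Tell me about witches.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```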

Benchmarks


"leaderboard": { "inst_level_strict_acc,none": 0.33573141486810554, "inst_level_strict_acc_stderr,none": "N/A", "inst_level_loose_acc,none": 0.39568345323741005, "inst_level_loose_acc_stderr,none": "N/A", "acc_norm,none": 0.3493319496692178, "acc_norm_stderr,none": 0.005120138265236575, "acc,none": 0.24418218085106383, "acc_stderr,none": 0.003916649280281885, "exact_match,none": 0.04078549848942598, "exact_match_stderr,none": 0.005354025092648956, "prompt_level_strict_acc,none": 0.1977818853974122, "prompt_level_strict_acc_stderr,none": 0.01714125471908492, "prompt_level_loose_acc,none": 0.25693160813308685, "prompt_level_loose_acc_stderr,none": 0.018802962575636854, "alias": "leaderboard" }, "leaderboard_bbh": { "acc_norm,none": 0.3591390383613956, "acc_norm_stderr,none": 0.0058684522608536275, "alias": " - leaderboard_bbh" }, "leaderboard_gpqa": { "acc_norm,none": 0.29194630872483224, "acc_norm_stderr,none": 0.013178882651123217, "alias": " - leaderboard_gpqa" }, "leaderboard_ifeval": { "prompt_level_strict_acc,none": 0.1977818853974122, "prompt_level_strict_acc_stderr,none": 0.01714125471908492, "inst_level_strict_acc,none": 0.33573141486810554, "inst_level_strict_acc_stderr,none": "N/A", "prompt_level_loose_acc,none": 0.25693160813308685, "prompt_level_loose_acc_stderr,none": 0.018802962575636854, "inst_level_loose_acc,none": 0.39568345323741005, "inst_level_loose_acc_stderr,none": "N/A", "alias": " - leaderboard_ifeval" }, "leaderboard_math_hard": { "exact_match,none": 0.04078549848942598, "exact_match_stderr,none": 0.005354025092648956, "alias": " - leaderboard_math_hard" }, "leaderboard_mmlu_pro": { "acc,none": 0.24418218085106383, "acc_stderr,none": 0.003916649280281885, "alias": " - leaderboard_mmlu_pro" }, "leaderboard_musr": { "acc_norm,none": 0.36507936507936506, "acc_norm_stderr,none": 0.01715613678641816, "alias": " - leaderboard_musr" }

Training hyperparameters

The following hyperparameters were used during training (a reconstruction sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • total_eval_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 5
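
These settings map one-to-one onto `transformers.TrainingArguments`; the sketch below is a reconstruction under that assumption (Axolotl drives the Hugging Face Trainer with an equivalent config), not the authors' actual training script. `output_dir` and `bf16` are guesses:

```python
# Hypothetical reconstruction of the training configuration above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="witchlm-1.5b",      # assumed; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=4,  # train_batch_size
    per_device_eval_batch_size=4,   # eval_batch_size
    gradient_accumulation_steps=4,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_steps=50,
    seed=42,
    adam_beta1=0.9,                 # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,                      # assumed from the BF16 weights
)
# Launched across 4 GPUs (e.g. via torchrun or accelerate), the effective
# train batch is 4 per device x 4 devices x 4 accumulation steps = 64.
```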

Framework versions

  • Transformers 4.44.0
  • PyTorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
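
A quick check that a local environment matches these pins (a convenience sketch, not part of the original card):

```python
# Verify the local environment against the versions listed above.
import datasets, tokenizers, torch, transformers

expected = {
    "Transformers": (transformers, "4.44.0"),
    "PyTorch": (torch, "2.3.1+cu121"),
    "Datasets": (datasets, "2.20.0"),
    "Tokenizers": (tokenizers, "0.19.1"),
}
for name, (mod, want) in expected.items():
    mark = "OK" if mod.__version__ == want else f"mismatch (got {mod.__version__})"
    print(f"{name} {want}: {mark}")
```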

Model tree for arcee-ai/WitchLM-1.5B

Base model: Qwen/Qwen2-1.5B (this model is a direct fine-tune)
