Update Model Categorization System? #2
by CombinHorizon - opened
btw the main leaderboard has been updated to use a different categorization system:
🟢 pretrained
🟩 continuously pretrained
🔶 fine-tuned on domain-specific datasets
💬 chat models (RLHF, DPO, IFT, ...)
🤝 base merges and moerges
maybe also add a [category]:
🌐 language adapted (FP, FT, ...)
the change was made because, from a practical standpoint, it wasn't always clear which category to put some models in
so it's
🟢 pretrained → 🟢 or 🟩
🔶 fine-tuned on domain-specific datasets → 🔶
⭕ instruction-tuned → 💬
🟦 RL-tuned → 💬
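for whoever updates the metadata, here's a minimal sketch of that remapping in Python (the label strings and the `remap` helper are placeholders I made up, not the leaderboard's actual schema):

```python
# Sketch of the old -> new model-type remapping described above.
# Label strings are illustrative placeholders, not the leaderboard's real field values.
OLD_TO_NEW = {
    "instruction-tuned": "chat",   # ⭕ -> 💬
    "RL-tuned": "chat",            # 🟦 -> 💬
    "fine-tuned": "fine-tuned",    # 🔶 -> 🔶 (domain-specific fine-tunes)
}

def remap(old_type: str) -> str:
    """Return the new category; the old 'pretrained' bucket needs manual review."""
    if old_type == "pretrained":
        # splits into pretrained (🟢) or continuously pretrained (🟩) per model
        raise ValueError("needs manual review: pretrained vs. continuously pretrained")
    return OLD_TO_NEW.get(old_type, old_type)
```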
the following are direct merges/MoEs (🤝):
- SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_SLERP
- uygarkurt/llama-3-merged-linear
- kekmodel/StopCarbon-10.7B-v5
- jeonsworld/CarbonVillain-en-10.7B-v4
- invalid-coder/Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp
- shadowml/BeagSake-7B
- zhengr/MixTAO-7Bx2-MoE-v8.1
- yunconglong/DARE_TIES_13B
- yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B
- shanchen/llama3-8B-slerp-med-chinese
🟩 continuously pretrained:
- yam-peleg/Hebrew-Mistral-7B
- yam-peleg/Hebrew-Gemma-11B-V2
🌐 language adapted (FP, FT, ...):
- ronigold/dictalm2.0-instruct-fine-tuned-alpaca-gpt4-hebrew
- SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned
- ronigold/dictalm2.0-instruct-fine-tuned
- SicariusSicariiStuff/Zion_Alpha
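to make the above easier to apply, here's the same set of suggestions as a plain dict (sketch only; the category strings are placeholders, not whatever the leaderboard stores internally):

```python
# Proposed per-model reassignments from the lists above (category strings are placeholders).
PROPOSED_CATEGORY = {
    # direct merges / MoEs -> 🤝
    "SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_SLERP": "merge",
    "uygarkurt/llama-3-merged-linear": "merge",
    "kekmodel/StopCarbon-10.7B-v5": "merge",
    "jeonsworld/CarbonVillain-en-10.7B-v4": "merge",
    "invalid-coder/Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp": "merge",
    "shadowml/BeagSake-7B": "merge",
    "zhengr/MixTAO-7Bx2-MoE-v8.1": "merge",
    "yunconglong/DARE_TIES_13B": "merge",
    "yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B": "merge",
    "shanchen/llama3-8B-slerp-med-chinese": "merge",
    # continuously pretrained -> 🟩
    "yam-peleg/Hebrew-Mistral-7B": "continuously-pretrained",
    "yam-peleg/Hebrew-Gemma-11B-V2": "continuously-pretrained",
    # language adapted (proposed new category) -> 🌐
    "ronigold/dictalm2.0-instruct-fine-tuned-alpaca-gpt4-hebrew": "language-adapted",
    "SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned": "language-adapted",
    "ronigold/dictalm2.0-instruct-fine-tuned": "language-adapted",
    "SicariusSicariiStuff/Zion_Alpha": "language-adapted",
}
```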
maybe add these models?
- 🟩 yam-peleg/Hebrew-Mistral-7B-200K (FP32, BF16 is closest)
- 🟩 yam-peleg/Hebrew-Mixtral-8x22B (FP16)
- 💬 yam-peleg/Hebrew-Gemma-11B-Instruct (FP16)
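and in case it helps, a quick sketch of loading one of them at the suggested precision with the standard transformers API (the dtype choices just mirror the notes above):

```python
# Sketch: load a suggested addition at the precision noted above.
# Use torch.bfloat16 for Hebrew-Mistral-7B-200K (weights are FP32),
# torch.float16 for the FP16 models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yam-peleg/Hebrew-Gemma-11B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
```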