---
license: other
tags:
- axolotl
- generated_from_trainer
- Mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
base_model: alpindale/Mistral-7B-v0.2-hf
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- piqa
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
model-index:
- name: Einstein-v5-v0.2-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 60.92
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 80.99
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 61.02
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 52.59
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.69
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 59.67
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
      name: Open LLM Leaderboard
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/4dxDYjxqLOaALlh4xUXjF.png)

# 🔬 Einstein-v5-v0.2-7B

This model is a fully fine-tuned version of [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) on diverse datasets.

The model was fine-tuned on `8xRTX3090` + `1xRTXA6000` using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).

This model's training was sponsored by [sablo.ai](https://sablo.ai).
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: alpindale/Mistral-7B-v0.2-hf
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true

load_in_8bit: false
load_in_4bit: false
strict: false

chat_template: chatml
datasets:
  - path: data/merged_all.json
    ds_type: json
    type: alpaca
    conversation: chatml
  - path: data/gpteacher-instruct-special-alpaca.json
    ds_type: json
    type: gpteacher
    conversation: chatml
  - path: data/capybara_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/synthia-v1.3_sharegpt_12500.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/cot_alpaca_gpt4_extracted_openhermes_2.5_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/slimorca_dedup_filtered_95k_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/airoboros_3.2_without_contextual_slimorca_orca_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/allenai_wild_chat_gpt4_english_toxic_random_half_4k_sharegpt.json
    ds_type: json
    type: sharegpt
    strict: false
    conversation: chatml
  - path: data/pippa_bagel_repo_3k_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/gpt4_data_lmys_1m_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/sharegpt_gpt4_english.json
    ds_type: json
    type: sharegpt
    conversation: chatml

dataset_prepared_path: last_run_prepared
# val_set_size: 0.005
val_set_size: 0.0

do_bench_eval: true

output_dir: ./Einstein-v5-Mistral-v0.2-beta-model

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false

wandb_project: Einstein
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
hub_model_id: Weyaxi/Einstein-v5-Mistral-v0.2-beta

save_safetensors: true

gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 3 # changed
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 3 # changed
debug:

deepspeed: zero3_bf16.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "<|im_end|>"
  unk_token: "<unk>"
tokens:
  - "<|im_start|>"
```

</details>
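The config above registers the ChatML control tokens: `<|im_start|>` is added to the vocabulary and `<|im_end|>` becomes the EOS token. A minimal sanity check, not part of the original card and assuming the released repo ships the trained tokenizer, can confirm they survived the export:

```python
from transformers import AutoTokenizer

# Load the tokenizer published with this model (assumed repo id).
tokenizer = AutoTokenizer.from_pretrained("Weyaxi/Einstein-v5-v0.2-7B")

# The config sets eos_token to <|im_end|>, so this should print it.
print(tokenizer.eos_token)

# <|im_start|> was added via `tokens:`, so it should map to a real id,
# not the unknown-token id.
im_start_id = tokenizer.convert_tokens_to_ids("<|im_start|>")
print(im_start_id, im_start_id != tokenizer.unk_token_id)
```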

# 💬 Prompt Template

You can use this prompt template while using the model:

### ChatML

```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```

This prompt template is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method:

```python
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"}
]
# add_generation_prompt=True appends the assistant header so the model replies.
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
model.generate(gen_input)
```

# 🔄 Quantized versions

Quantized versions of this model are available.

## GGUF [@bartowski](https://huggingface.co/bartowski)

- https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF

## ExLlamaV2 [@bartowski](https://huggingface.co/bartowski)

- https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-exl2

# 🎯 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-v5-v0.2-7B)

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |65.65|
|AI2 Reasoning Challenge (25-Shot)|60.92|
|HellaSwag (10-Shot)              |80.99|
|MMLU (5-Shot)                    |61.02|
|TruthfulQA (0-shot)              |52.59|
|Winogrande (5-shot)              |78.69|
|GSM8k (5-shot)                   |59.67|

# 🤖 Additional information about training

This model was fully fine-tuned for 1 epoch; the total number of steps was 1124.
<details><summary>Loss graph</summary>

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/TkzKdxZZHznGjYLWiSmLS.png)

</details>
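
Putting the pieces together, here is a minimal end-to-end generation sketch using the ChatML chat template from the prompt-template section above. The model id comes from this repo; the dtype, `device_map`, and generation settings are assumptions, not the card's prescribed setup:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/Einstein-v5-v0.2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# bf16 + device_map="auto" are assumed defaults; device_map needs `accelerate`.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "State Newton's second law in one sentence."},
]
# add_generation_prompt=True appends the `<|im_start|>assistant` header.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```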

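If you prefer the GGUF quantizations linked above, a minimal sketch with `llama-cpp-python` looks like the following. The quant filename is hypothetical, so substitute whichever file you download from the GGUF repo:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical filename; pick any quant from
# https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF
llm = Llama(model_path="Einstein-v5-v0.2-7B-Q4_K_M.gguf", n_ctx=8192)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "Hello!"},
    ]
)
print(out["choices"][0]["message"]["content"])
```
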
# 🤝 Acknowledgments

Thanks to [sablo.ai](https://sablo.ai) for sponsoring this model.

Thanks to all the dataset authors mentioned in the datasets section.

Thanks to [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for the repository I used to train this model.

Thanks to the entire open-source AI community.

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)

If you would like to support me:

[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)