---
license: other
tags:
- axolotl
- generated_from_trainer
- Mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
base_model: alpindale/Mistral-7B-v0.2-hf
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- piqa
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
quantized_by: suparious
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of Weyaxi/Einstein-v5-v0.2-7B

Quantized using [turboderp's ExLlamaV2](https://github.com/turboderp/exllamav2) v0.0.16.

Each branch contains an individual bits-per-weight quantization; the `main` branch contains only the measurement.json needed for further conversions.

Original model: [Weyaxi/Einstein-v5-v0.2-7B](https://huggingface.co/Weyaxi/Einstein-v5-v0.2-7B)

Model size: 7B

| Branch | Bits | lm_head bits | Dataset | Size | Description |
| ------ | ---- | ------------ | ------- | ---- | ----------- |
| [8_0](https://huggingface.co/suparious/Einstein-v5-v0.2-7B-exl2/tree/8_0) | 8.0 | 8.0 | Default | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near-unquantized performance. |
| [6_5](https://huggingface.co/suparious/Einstein-v5-v0.2-7B-exl2/tree/6_5) | 6.5 | 8.0 | Default | 8.6 GB | Very similar to 8.0, good tradeoff of size vs. performance, **recommended**. |
| [5_0](https://huggingface.co/suparious/Einstein-v5-v0.2-7B-exl2/tree/5_0) | 5.0 | 6.0 | Default | 7.4 GB | Slightly lower quality (higher perplexity) than 6.5, but smaller. |
| [4_0](https://huggingface.co/suparious/Einstein-v5-v0.2-7B-exl2/tree/4_0) | 4.0 | 6.0 | Default | 6.5 GB | Just under GPTQ-equivalent bits per weight. |

All VRAM requirements are estimated for 16k context; for 32k context, add ~2 GB.

## Download instructions

With git:

```shell
git clone --single-branch --branch 4_0 https://huggingface.co/suparious/Einstein-v5-v0.2-7B-exl2
```

With huggingface hub (credit to TheBloke and bartowski for the instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you only care about the measurement.json) to a folder called `Einstein-v5-v0.2-7B-exl2`:

```shell
mkdir Einstein-v5-v0.2-7B-exl2
huggingface-cli download suparious/Einstein-v5-v0.2-7B-exl2 --local-dir Einstein-v5-v0.2-7B-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir Einstein-v5-v0.2-7B-exl2-6_5
huggingface-cli download suparious/Einstein-v5-v0.2-7B-exl2 --revision 6_5 --local-dir Einstein-v5-v0.2-7B-exl2-6_5 --local-dir-use-symlinks False
```
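The same download can also be scripted from Python via `huggingface_hub.snapshot_download`; a minimal sketch, with the branch and target folder chosen as examples:

```python
from huggingface_hub import snapshot_download

# Download the 6_5 branch into a local folder, copying real files
# rather than symlinking into the Hugging Face cache.
snapshot_download(
    repo_id="suparious/Einstein-v5-v0.2-7B-exl2",
    revision="6_5",
    local_dir="Einstein-v5-v0.2-7B-exl2-6_5",
    local_dir_use_symlinks=False,
)
```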
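Once a branch is downloaded, loading and prompting it with ExLlamaV2's Python API looks roughly like the sketch below. The class and method names follow the exllamav2 examples around v0.0.16; the model directory, sampling settings, and prompt are placeholders, and the ChatML prompt format follows this card's `chatml` tag.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point at the folder a branch was downloaded into (example path).
config = ExLlamaV2Config()
config.model_dir = "Einstein-v5-v0.2-7B-exl2-6_5"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPU(s)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # placeholder sampling value

# The model is tagged as ChatML-tuned, so wrap the prompt accordingly.
prompt = (
    "<|im_start|>user\n"
    "Explain the photoelectric effect in two sentences.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

print(generator.generate_simple(prompt, settings, num_tokens=200))
```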