# Exllama v2 Quantizations of DPOpenHermes-7B-v2-experimental
Using turboderp's experimental ExLlamaV2 for quantization.

Each branch contains an individual bits per weight. This is an experimental ExLlamaV2 quantization, so no measurement.json was produced.

Default arguments were used, except when the bits per weight is above 6.0; in that case the lm_head layer is quantized at 8 bits per weight instead of the default 6.
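For reference, a minimal sketch of the kind of conversion command this describes, assuming exllamav2's `convert.py` script with its `-b` (bits per weight) and `-hb` (lm_head bits) flags; the paths below are placeholders, not the ones actually used:

```shell
# Sketch only: quantizing above 6.0 bpw, so lm_head is bumped to 8 bits (default is 6)
python convert.py \
  -i ./DPOpenHermes-7B-v2 \
  -o ./work \
  -cf ./DPOpenHermes-7B-v2-8.0bpw-exl2 \
  -b 8.0 \
  -hb 8
```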
Original model: https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2
* 8.0 bits per weight (branch `8_0`)
* 6.0 bits per weight (branch `6_0`)
* 5.0 bits per weight (branch `5_0`)
* 4.0 bits per weight (branch `4_0`)
* 3.5 bits per weight (branch `3_5`)
## Download instructions
With git:

```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/DPOpenHermes-7B-v2-experimental-exl2
```
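Hugging Face repositories store the large weight files with Git LFS; if the clone comes back with small pointer files instead of the actual weights, set up Git LFS first:

```shell
git lfs install
```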
With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `DPOpenHermes-7B-v2-experimental-exl2`:
```shell
mkdir DPOpenHermes-7B-v2-experimental-exl2
huggingface-cli download bartowski/DPOpenHermes-7B-v2-experimental-exl2 --local-dir DPOpenHermes-7B-v2-experimental-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir DPOpenHermes-7B-v2-experimental-exl2
huggingface-cli download bartowski/DPOpenHermes-7B-v2-experimental-exl2 --revision 4_0 --local-dir DPOpenHermes-7B-v2-experimental-exl2 --local-dir-use-symlinks False
```
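The same pattern works for the other quants, assuming the remaining branches follow the same naming scheme as `4_0` (e.g. `3_5` for 3.5 bits per weight); using a separate folder per branch keeps the downloads from overwriting each other:

```shell
# Example: fetch the 3.5 bpw quant into its own folder
mkdir DPOpenHermes-7B-v2-experimental-exl2-3_5
huggingface-cli download bartowski/DPOpenHermes-7B-v2-experimental-exl2 --revision 3_5 --local-dir DPOpenHermes-7B-v2-experimental-exl2-3_5 --local-dir-use-symlinks False
```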