---
inference: false
license: llama2
model_creator: Zaraki Quem Parte
model_link: https://huggingface.co/zarakiquemparte/zarablend-l2-7b
model_name: Zarablend L2 7B
model_type: llama
quantized_by: TheBloke
tags:
  - llama2
---

Zarablend L2 7B - GGML

Description

This repo contains GGML format model files for Zaraki Quem Parte's Zarablend L2 7B.

GGML files are for CPU + GPU inference using llama.cpp and libraries and UIs which support this format, such as:

  • text-generation-webui, the most popular web UI. Supports NVidia CUDA GPU acceleration.
  • KoboldCpp, a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for storytelling.
  • LM Studio, a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
  • LoLLMS Web UI, a great web UI with CUDA GPU acceleration via the c_transformers backend.
  • ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
  • llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
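
As a quick illustration, here is a minimal llama-cpp-python sketch for loading one of these files. Note that GGML v3 files such as these require an older llama-cpp-python release (roughly 0.1.78 or earlier), since later versions moved to the GGUF format; the file name and sampling settings below simply mirror the examples later in this card.

```python
from llama_cpp import Llama

# Load the 4-bit k-quant file; n_gpu_layers offloads layers to the GPU
# (set it to 0 for CPU-only inference).
llm = Llama(
    model_path="zarablend-l2-7b.ggmlv3.q4_K_M.bin",
    n_ctx=2048,
    n_threads=10,
    n_gpu_layers=32,
)

prompt = "### Instruction:\n\nWrite a story about llamas\n\n### Response:\n"
output = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```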

Repositories available

  • Zaraki Quem Parte's original unquantised fp16 model: https://huggingface.co/zarakiquemparte/zarablend-l2-7b

Prompt template: Alpaca-InstructOnly

### Instruction:

{prompt}

### Response:
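
The template is easy to assemble programmatically. A minimal helper (the function name is an invention for this sketch):

```python
def format_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca-InstructOnly template above."""
    return f"### Instruction:\n\n{instruction}\n\n### Response:\n"

print(format_prompt("Write a story about llamas"))
```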

Compatibility

These quantised GGML files are compatible with llama.cpp as of June 6th, commit 2d43387.

They should also be compatible with all UIs, libraries and utilities which use GGML.

Explanation of the new k-quant methods

The new methods available are:

  • GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
  • GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
  • GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
  • GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
  • GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
  • GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.

Refer to the Provided Files table below to see what files use which methods, and how.
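
As a sanity check, the bpw figures can be reproduced with a little arithmetic. For example, for GGML_TYPE_Q3_K (assuming one additional fp16 scale per super-block, which is not stated above):

```python
# Reproduce the GGML_TYPE_Q3_K figure of 3.4375 bpw from the description above.
weights = 16 * 16            # 16 blocks of 16 weights per super-block
quant_bits = 3 * weights     # 3-bit quantized values: 768 bits
scale_bits = 16 * 6          # one 6-bit scale per block: 96 bits
super_scale_bits = 16        # one fp16 super-block scale (assumed): 16 bits
bpw = (quant_bits + scale_bits + super_scale_bits) / weights
print(bpw)  # 3.4375
```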

Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| zarablend-l2-7b.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| zarablend-l2-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| zarablend-l2-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| zarablend-l2-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| zarablend-l2-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.83 GB | 6.33 GB | Original quant method, 4-bit. |
| zarablend-l2-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.24 GB | 6.74 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0, with quicker inference than the q5 models. |
| zarablend-l2-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
| zarablend-l2-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| zarablend-l2-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.65 GB | 7.15 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| zarablend-l2-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
| zarablend-l2-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
| zarablend-l2-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| zarablend-l2-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q6_K for all tensors (6-bit quantization). |
| zarablend-l2-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.13 GB | 9.63 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
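
One pattern worth noting: every Max RAM figure in the table is exactly the file size plus 2.50 GB of overhead, so the requirement for any file can be estimated directly:

```python
def max_ram_gb(file_size_gb: float) -> float:
    # Every row in the table above is file size + 2.50 GB of overhead.
    return file_size_gb + 2.50

print(max_ram_gb(4.08))  # 6.58 -> matches the q4_K_M row
```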

How to run in llama.cpp

I use the following command line; adjust for your tastes and needs:

./main -t 10 -ngl 32 -m zarablend-l2-7b.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction:\n\nWrite a story about llamas\n\n### Response:"

Change -t 10 to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use -t 8.

Change -ngl 32 to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change -c 2048 to the desired sequence length for this model. For example, use -c 4096 for a Llama 2 model. For extended-context models that rely on RoPE scaling, add --rope-freq-base 10000 --rope-freq-scale 0.5 for doubled context, or --rope-freq-base 10000 --rope-freq-scale 0.25 for 4x context.

If you want to have a chat-style conversation, replace the -p <PROMPT> argument with -i -ins.

For other parameters and how to use them, please refer to the llama.cpp documentation

How to run in text-generation-webui

Further instructions here: text-generation-webui/docs/llama.cpp.md.

Discord

For further support, and discussions on these models and AI in general, join us at:

TheBloke AI's Discord server

Thanks, and how to contribute.

Thanks to the chirper.ai team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

Special thanks to: Aemon Algiz.

Patreon special mentions: Ajan Kanaga, David Ziegler, Raymond Fosdick, SuperWojo, Sam, webtim, Steven Wood, knownsqashed, Tony Hughes, Junyu Yang, J, Olakabola, Dan Guido, Stephen Murray, John Villwock, vamX, William Sang, Sean Connelly, LangChain4j, Olusegun Samson, Fen Risland, Derek Yates, Karl Bernard, transmissions 11, Trenton Dambrowitz, Pieter, Preetika Verma, Swaroop Kallakuri, Andrey, Slarti, Jonathan Leane, Michael Levine, Kalila, Joseph William Delisle, Rishabh Srivastava, Deo Leter, Luke Pendergrass, Spencer Kim, Geoffrey Montalvo, Thomas Belote, Jeffrey Morgan, Mandus, ya boyyy, Matthew Berman, Magnesian, Ai Maven, senxiiz, Alps Aficionado, Luke @flexchar, Raven Klaugh, Imad Khwaja, Gabriel Puliatti, Johann-Peter Hartmann, usrbinkat, Spiking Neurons AB, Artur Olbinski, chris gileta, danny, Willem Michiel, WelcomeToTheClub, Deep Realms, alfie_i, Dave, Leonard Tan, NimbleBox.ai, Randy H, Daniel P. Andersen, Pyrater, Will Dee, Elle, Space Cruiser, Gabriel Tamborski, Asp the Wyvern, Illia Dulskyi, Nikolai Manek, Sid, Brandon Frisco, Nathan LeClaire, Edmond Seymore, Enrico Ros, Pedro Madruga, Eugene Pentland, John Detwiler, Mano Prime, Stanislav Ovsiannikov, Alex, Vitor Caleffi, K, biorpg, Michael Davis, Lone Striker, Pierre Kircher, theTransient, Fred von Graf, Sebastain Graf, Vadim, Iucharbius, Clay Pascal, Chadd, Mesiah Bishop, terasurfer, Rainer Wilmers, Alexandros Triantafyllidis, Stefan Sabev, Talal Aujan, Cory Kujawski, Viktor Bowallius, subjectnull, ReadyPlayerEmma, zynix

Thank you to all my generous patrons and donators!

Original model card: Zaraki Quem Parte's Zarablend L2 7B

Model Card: Zarablend L2 7b

This model uses Nous Hermes Llama2 7B (66%) as a base, merged with Airoboros L2 7B GPT4 2.0 (34%); the result of that merge was then merged with the LimaRP Llama2 7B LoRA.

The merge of the two models (Hermes and Airoboros) was done with this script.

The merge of the LoRA with the merged model was done with this script.
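
The linked scripts are the authoritative method. As a rough illustration of what a 66%/34% weighted linear merge looks like, here is a minimal sketch using transformers and torch; the model paths are placeholders, and the real scripts may differ in details such as tensor selection and dtype handling:

```python
import torch
from transformers import AutoModelForCausalLM

# Placeholder local paths; the actual merge combined Nous Hermes Llama2 7B
# and Airoboros L2 7B GPT4 2.0 at 66% / 34%, as described above.
base = AutoModelForCausalLM.from_pretrained("path/to/nous-hermes-llama2-7b", torch_dtype=torch.float16)
other = AutoModelForCausalLM.from_pretrained("path/to/airoboros-l2-7b-gpt4-2.0", torch_dtype=torch.float16)

merged_state = base.state_dict()
for name, tensor in other.state_dict().items():
    # Weighted linear interpolation of every parameter tensor.
    merged_state[name] = 0.66 * merged_state[name] + 0.34 * tensor

base.load_state_dict(merged_state)
base.save_pretrained("path/to/zarablend-merge")
```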

Merge illustration: [image]

Usage:

Since this is a merge between Nous Hermes, Airoboros and LimaRP, the following instruction formats should work:

Alpaca 2:

### Instruction:
<prompt>

### Response:
<leave a newline blank for model to respond>

LimaRP instruction format:

<<SYSTEM>>
<character card and system prompt>

<<USER>>
<prompt>

<<AIBOT>>
<leave a newline blank for model to respond>
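
To make the format concrete, here is a small illustrative Python helper for the LimaRP template above (the function name is an invention for this sketch):

```python
def limarp_prompt(system: str, user: str) -> str:
    """Assemble the LimaRP instruction format shown above."""
    return (
        f"<<SYSTEM>>\n{system}\n\n"
        f"<<USER>>\n{user}\n\n"
        "<<AIBOT>>\n"
    )

print(limarp_prompt("You are a helpful storyteller.", "Write a story about llamas"))
```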

Bias, Risks, and Limitations

This model is not intended for supplying factual information or advice in any form.

Training Details

This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.