metadata
inference: false
language:
  - en
license: llama2
model_creator: ddobokki
model_link: https://huggingface.co/ddobokki/Llama-2-70b-orca-200k
model_name: Llama 2 70B Orca 200k
model_type: llama
pipeline_tag: text-generation
quantized_by: TheBloke
tags:
  - llama-2
  - instruct
  - instruction

TheBloke's LLM work is generously supported by a grant from Andreessen Horowitz (a16z)


Llama 2 70B Orca 200k - GGUF

Description

This repo contains GGUF format model files for ddobokki's Llama 2 70B Orca 200k.

About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including, for the first time, full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.

As of August 25th, here is a list of clients and libraries that are known to support GGUF:

  • llama.cpp
  • text-generation-webui, the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - llama-cpp-python backend should work soon too.
  • KoboldCpp, now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU accel. Especially good for storytelling.
  • LoLLMS Web UI, should now work; choose the c_transformers backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
  • ctransformers, now supports GGUF as of version 0.2.24! A Python library with GPU accel, LangChain support, and OpenAI-compatible AI server (see the Python sketch after this list).
  • llama-cpp-python, supports GGUF as of version 0.1.79. A Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
  • candle, added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.
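
As a concrete illustration of the ctransformers route mentioned above, here is a minimal sketch. The file name matches the Q4_K_M file from this repo; the gpu_layers value is only an example to tune to your hardware.

from ctransformers import AutoModelForCausalLM

# Load a local GGUF file with the ctransformers Python library (0.2.24 or later).
# gpu_layers controls how many layers are offloaded to the GPU; use 0 for CPU only.
llm = AutoModelForCausalLM.from_pretrained(
    "llama-2-70b-orca-200k.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,  # example value; tune to your VRAM
)

print(llm("### Human: Write a story about llamas\n### Assistant:"))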

The clients and libraries below are expected to add GGUF support shortly:

  • LM Studio, which should be updated by the end of August 25th.

Repositories available

Prompt template: Guanaco

### Human: {prompt}
### Assistant:
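
For scripted use, the template can be filled in with a small helper like the one below. build_prompt is a hypothetical name for illustration, not part of any library.

def build_prompt(user_message: str) -> str:
    # Fill the Guanaco-style template shown above.
    return f"### Human: {user_message}\n### Assistant:"

print(build_prompt("Write a story about llamas"))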

Compatibility

These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit 6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9.

As of August 24th 2023 they are now compatible with KoboldCpp, release 1.41 and later.

They are not yet compatible with any other third-party UIs, libraries or utilities, but this is expected to change very soon.

Explanation of quantisation methods

Click to see details

The new methods available are:

  • GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
  • GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
  • GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw (a worked check follows this list).
  • GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
  • GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
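
To make the Q4_K figure concrete, the arithmetic below reproduces the 4.5 bpw value from the structure described above. The 16-bit super-block scale and min are an assumption about the k-quant layout, not something stated in this card.

weights_per_superblock = 8 * 32              # 8 blocks of 32 weights = 256 weights
weight_bits = 4 * weights_per_superblock     # 4-bit quants -> 1024 bits
block_meta_bits = 8 * (6 + 6)                # per-block 6-bit scale + 6-bit min -> 96 bits
superblock_meta_bits = 2 * 16                # assumed fp16 super-block scale and min -> 32 bits
total_bits = weight_bits + block_meta_bits + superblock_meta_bits
print(total_bits / weights_per_superblock)   # 4.5 bits per weight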

Refer to the Provided Files table below to see what files use which methods, and how.

Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| --- | --- | --- | --- | --- | --- |
| llama-2-70b-orca-200k.Q2_K.gguf | Q2_K | 2 | 29.28 GB | 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| llama-2-70b-orca-200k.Q3_K_S.gguf | Q3_K_S | 3 | 29.92 GB | 32.42 GB | very small, high quality loss |
| llama-2-70b-orca-200k.Q3_K_M.gguf | Q3_K_M | 3 | 33.19 GB | 35.69 GB | very small, high quality loss |
| llama-2-70b-orca-200k.Q3_K_L.gguf | Q3_K_L | 3 | 36.15 GB | 38.65 GB | small, substantial quality loss |
| llama-2-70b-orca-200k.Q4_K_S.gguf | Q4_K_S | 4 | 39.07 GB | 41.57 GB | small, greater quality loss |
| llama-2-70b-orca-200k.Q4_K_M.gguf | Q4_K_M | 4 | 41.42 GB | 43.92 GB | medium, balanced quality - recommended |
| llama-2-70b-orca-200k.Q5_K_S.gguf | Q5_K_S | 5 | 47.46 GB | 49.96 GB | large, low quality loss - recommended |
| llama-2-70b-orca-200k.Q5_K_M.gguf | Q5_K_M | 5 | 48.75 GB | 51.25 GB | large, very low quality loss - recommended |
| llama-2-70b-orca-200k.Q6_K.gguf | Q6_K | 6 | 56.82 GB | 59.32 GB | very large, extremely low quality loss |
| llama-2-70b-orca-200k.Q8_0.gguf | Q8_0 | 8 | 73.29 GB | 75.79 GB | very large, extremely low quality loss - not recommended |

Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
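
In every row above, the Max RAM figure is the file size plus 2.50 GB of overhead, so a rough estimate for any file can be sketched as below. The fixed 2.5 GB constant is simply read off the table, not a guarantee.

def estimate_max_ram_gb(file_size_gb: float, overhead_gb: float = 2.5) -> float:
    # Rough no-offload RAM estimate implied by the table above.
    return file_size_gb + overhead_gb

print(estimate_max_ram_gb(41.42))  # ~43.92 GB for the Q4_K_M file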

Q6_K and Q8_0 files are split and require joining

Note: HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.

Click for instructions regarding Q6_K and Q8_0 files

q6_K

Please download:

  • llama-2-70b-orca-200k.Q6_K.gguf-split-a
  • llama-2-70b-orca-200k.Q6_K.gguf-split-b

q8_0

Please download:

  • llama-2-70b-orca-200k.Q8_0.gguf-split-a
  • llama-2-70b-orca-200k.Q8_0.gguf-split-b

To join the files, do the following:

Linux and macOS:

cat llama-2-70b-orca-200k.Q6_K.gguf-split-* > llama-2-70b-orca-200k.Q6_K.gguf && rm llama-2-70b-orca-200k.Q6_K.gguf-split-*
cat llama-2-70b-orca-200k.Q8_0.gguf-split-* > llama-2-70b-orca-200k.Q8_0.gguf && rm llama-2-70b-orca-200k.Q8_0.gguf-split-*

Windows command line:

COPY /B llama-2-70b-orca-200k.Q6_K.gguf-split-a + llama-2-70b-orca-200k.Q6_K.gguf-split-b llama-2-70b-orca-200k.Q6_K.gguf
del llama-2-70b-orca-200k.Q6_K.gguf-split-a llama-2-70b-orca-200k.Q6_K.gguf-split-b

COPY /B llama-2-70b-orca-200k.Q8_0.gguf-split-a + llama-2-70b-orca-200k.Q8_0.gguf-split-b llama-2-70b-orca-200k.Q8_0.gguf
del llama-2-70b-orca-200k.Q8_0.gguf-split-a llama-2-70b-orca-200k.Q8_0.gguf-split-b
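
If you would rather not use the shell, the same join can be done from Python. This is a minimal cross-platform sketch, assuming the split parts sit in the current directory; the same pattern applies to the Q8_0 parts.

import glob
import shutil

# Concatenate the Q6_K split parts in name order into a single GGUF file.
parts = sorted(glob.glob("llama-2-70b-orca-200k.Q6_K.gguf-split-*"))
with open("llama-2-70b-orca-200k.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)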

How to run in llama.cpp

Make sure you are using llama.cpp from commit 6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9 or later.

For compatibility with older versions of llama.cpp, or for use with third-party clients and libraries, please use GGML files instead.

./main -t 10 -ngl 32 -m llama-2-70b-orca-200k.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Human: Write a story about llamas\n### Assistant:"

Change -t 10 to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use -t 8.

Change -ngl 32 to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change -c 4096 to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the -p <PROMPT> argument with -i -ins

For other parameters and how to use them, please refer to the llama.cpp documentation
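
If you would rather drive the same file from Python, llama-cpp-python (version 0.1.79 or later, as noted earlier) can load it directly. The sketch below mirrors the command-line flags above; all values are examples to tune for your hardware.

from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-70b-orca-200k.Q4_K_M.gguf",
    n_ctx=4096,       # matches -c 4096
    n_gpu_layers=32,  # matches -ngl 32; set to 0 without GPU acceleration
    n_threads=10,     # matches -t 10; use your physical core count
)

output = llm(
    "### Human: Write a story about llamas\n### Assistant:",
    max_tokens=512,
    temperature=0.7,     # matches --temp 0.7
    repeat_penalty=1.1,  # matches --repeat_penalty 1.1
)
print(output["choices"][0]["text"])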

How to run in text-generation-webui

Further instructions here: text-generation-webui/docs/llama.cpp.md.

Discord

For further support, and discussions on these models and AI in general, join us at:

TheBloke AI's Discord server

Thanks, and how to contribute.

Thanks to the chirper.ai team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

Special thanks to: Aemon Algiz.

Patreon special mentions: Kacper Wikieł, knownsqashed, Leonard Tan, Asp the Wyvern, Daniel P. Andersen, Luke Pendergrass, Stanislav Ovsiannikov, RoA, Dave, Ai Maven, Kalila, Will Dee, Imad Khwaja, Nitin Borwankar, Joseph William Delisle, Tony Hughes, Cory Kujawski, Rishabh Srivastava, Russ Johnson, Stephen Murray, Lone Striker, Johann-Peter Hartmann, Elle, J, Deep Realms, SuperWojo, Raven Klaugh, Sebastain Graf, ReadyPlayerEmma, Alps Aficionado, Mano Prime, Derek Yates, Gabriel Puliatti, Mesiah Bishop, Magnesian, Sean Connelly, biorpg, Iucharbius, Olakabola, Fen Risland, Space Cruiser, theTransient, Illia Dulskyi, Thomas Belote, Spencer Kim, Pieter, John Detwiler, Fred von Graf, Michael Davis, Swaroop Kallakuri, subjectnull, Clay Pascal, Subspace Studios, Chris Smitley, Enrico Ros, usrbinkat, Steven Wood, alfie_i, David Ziegler, Willem Michiel, Matthew Berman, Andrey, Pyrater, Jeffrey Morgan, vamX, LangChain4j, Luke @flexchar, Trenton Dambrowitz, Pierre Kircher, Alex, Sam, James Bentley, Edmond Seymore, Eugene Pentland, Pedro Madruga, Rainer Wilmers, Dan Guido, Nathan LeClaire, Spiking Neurons AB, Talal Aujan, zynix, Artur Olbinski, Michael Levine, 阿明, K, John Villwock, Nikolai Manek, Femi Adebogun, senxiiz, Deo Leter, NimbleBox.ai, Viktor Bowallius, Geoffrey Montalvo, Mandus, Ajan Kanaga, ya boyyy, Jonathan Leane, webtim, Brandon Frisco, danny, Alexandros Triantafyllidis, Gabriel Tamborski, Randy H, terasurfer, Vadim, Junyu Yang, Vitor Caleffi, Chadd, transmissions 11

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

Original model card: ddobokki's Llama 2 70B Orca 200k

Llama-2-70b-orca-200k model card

Used Datasets

  • OpenOrca (200k sampling; an illustrative sampling sketch follows this list)
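
For illustration only, a 200k-example sample of OpenOrca could be drawn with the Hugging Face datasets library roughly as below. The seed and the exact selection procedure used by the author are not stated in this card, so this is not a reproduction of the training data.

from datasets import load_dataset

# Illustrative only: draw a random 200k-example sample from OpenOrca.
orca = load_dataset("Open-Orca/OpenOrca", split="train")
sample = orca.shuffle(seed=42).select(range(200_000))
print(len(sample))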

Prompt Template

### Human: {Human}
### Assistant: {Assistant}

Contribute

ddobokki

YooSungHyun

License

LICENSE.txt

USE_POLICY

USE_POLICY.md

Responsible Use Guide

Responsible-Use-Guide.pdf