---
license: other
library_name: transformers
pipeline_tag: text-generation
datasets:
  - RyokoAI/ShareGPT52K
  - Hello-SimpleAI/HC3
tags:
  - koala
  - ShareGPT
  - llama
  - gptq
---

# Koala: A Dialogue Model for Academic Research

This repo contains the weights of the Koala 13B model produced at Berkeley. It is the result of applying the delta weights from https://huggingface.co/young-geng/koala to the original LLaMA 13B model.

The merged weights have then been converted to Hugging Face (Transformers) format.

## My Koala repos

I have the following Koala model repositories available:

**13B models:**

**7B models:**

## How the Koala delta weights were merged

The Koala delta weights were merged using the following commands:

```bash
git clone https://github.com/young-geng/EasyLM

git clone https://huggingface.co/TheBloke/llama-13b

mkdir koala_diffs && cd koala_diffs && wget https://huggingface.co/young-geng/koala/resolve/main/koala_13b_diff_v2

cd EasyLM

PYTHONPATH="${PWD}:$PYTHONPATH" python \
-m EasyLM.models.llama.convert_torch_to_easylm \
--checkpoint_dir=/content/llama-13b \
--output_file=/content/llama-13b-LM \
--streaming=True

PYTHONPATH="${PWD}:$PYTHONPATH" python \
-m EasyLM.scripts.diff_checkpoint --recover_diff=True \
--load_base_checkpoint='params::/content/llama-13b-LM' \
--load_target_checkpoint='params::/content/koala_diffs/koala_13b_diff_v2' \
--output_file=/content/koala_13b.diff.weights \
--streaming=True

PYTHONPATH="${PWD}:$PYTHONPATH" python \
-m EasyLM.models.llama.convert_easylm_to_hf --model_size=13b \
--output_dir=/content/koala-13B-HF \
--load_checkpoint='params::/content/koala_13b.diff.weights' \
--tokenizer_path=/content/llama-13b/tokenizer.model
```
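Conceptually, the `diff_checkpoint --recover_diff=True` step adds each delta tensor in the published diff checkpoint to the matching tensor of the base model, recovering the full fine-tuned weights. Here is a toy sketch of that idea using plain Python dicts of floats in place of real tensors; it is not the actual EasyLM implementation, which streams large checkpoints from disk, and all names here are illustrative:

```python
# Toy illustration of delta-weight recovery: merged = base + diff,
# applied per parameter tensor. Small lists of floats stand in for
# the real multi-gigabyte arrays.

def recover_from_diff(base, diff):
    """Add each diff 'tensor' to the matching base 'tensor' by name."""
    if base.keys() != diff.keys():
        raise ValueError("base and diff checkpoints must share parameter names")
    return {
        name: [b + d for b, d in zip(base[name], diff[name])]
        for name in base
    }

# Minimal example: a tiny base "model" and the diff distributed for it.
base_ckpt = {"layer.weight": [1.0, 2.0], "layer.bias": [0.5]}
diff_ckpt = {"layer.weight": [0.25, -0.5], "layer.bias": [0.0]}

merged = recover_from_diff(base_ckpt, diff_ckpt)
print(merged)  # {'layer.weight': [1.25, 1.5], 'layer.bias': [0.5]}
```

Distributing diffs rather than full weights is what allows the Koala authors to share their fine-tune without redistributing the LLaMA base weights themselves.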

## Want to support my work?

I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.

So if you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models, and work on various AI projects.

Donors will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.

## Further info

Check out the following links to learn more about the Berkeley Koala model.

## License

The model weights are intended for academic research only, subject to the model License of LLaMA, Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited.