---
license: other
---
|
# Koala: A Dialogue Model for Academic Research
|
This repo contains the weights of the Koala 7B model produced at Berkeley. It is the result of applying the diffs from https://huggingface.co/young-geng/koala to the original LLaMA 7B weights.
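Koala is distributed as a weight diff rather than as full weights: recovering the full model amounts to adding each diff tensor to the corresponding base LLaMA tensor. Below is a minimal sketch of that idea using a hypothetical dict-of-lists representation; the actual recovery is done with EasyLM's conversion scripts on full LLaMA checkpoints, not plain Python dicts.

```python
def apply_weight_diff(base, diff):
    """Add a published weight diff back onto base model weights.

    Simplified illustration only: real checkpoints are tensors, and the
    real Koala recovery is performed by EasyLM's conversion tooling.
    """
    if base.keys() != diff.keys():
        raise ValueError("base and diff must cover the same tensors")
    return {name: [b + d for b, d in zip(base[name], diff[name])]
            for name in base}

# Toy demonstration with two fake "tensors":
base = {"embed": [0.1, 0.2], "lm_head": [1.0, -1.0]}
diff = {"embed": [0.05, -0.05], "lm_head": [0.5, 0.5]}
merged = apply_weight_diff(base, diff)
print(merged["lm_head"])
```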
|
|
|
These weights were then quantized to 4-bit using GPTQ-for-LLaMa: https://github.com/qwopqwop200/GPTQ-for-LLaMa
|
|
|
### WARNING: At present, the GPTQ files uploaded here produce garbage output. Using them is not recommended.
|
|
|
I'm working on diagnosing this issue and producing working files.
|
|
|
The quantization command was:

```
python3 llama.py /content/koala-7B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save /content/koala-7B-4bit-128g.pt
```
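To give a rough intuition for what the `--wbits 4 --groupsize 128` options imply, here is a pure-Python sketch of group-wise 4-bit round-to-nearest quantization, storing one scale and zero-point per group of 128 weights. This is illustrative only: GPTQ itself goes further by compensating rounding error with second-order information, and the real checkpoint format is packed tensors, not Python lists.

```python
import random

def quantize_4bit_groupwise(weights, groupsize=128):
    """Round-to-nearest 4-bit quantization with one scale/zero per group.

    Illustrative sketch of the storage scheme implied by
    `--wbits 4 --groupsize 128`; GPTQ additionally corrects
    quantization error, which this sketch does not.
    """
    q, scales, zeros = [], [], []
    for i in range(0, len(weights), groupsize):
        group = weights[i:i + groupsize]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / 15 or 1.0  # 4 bits -> 16 levels (codes 0..15)
        q.append([min(15, max(0, round((w - lo) / scale))) for w in group])
        scales.append(scale)
        zeros.append(lo)
    return q, scales, zeros

def dequantize_4bit_groupwise(q, scales, zeros):
    """Reconstruct approximate float weights from the 4-bit codes."""
    out = []
    for group, scale, zero in zip(q, scales, zeros):
        out.extend(code * scale + zero for code in group)
    return out

# Demo: quantize 512 random weights and measure reconstruction error.
random.seed(0)
w = [random.gauss(0, 1) for _ in range(512)]
q, s, z = quantize_4bit_groupwise(w)
w_hat = dequantize_4bit_groupwise(q, s, z)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(f"max reconstruction error: {max_err:.3f}")
```

Note that round-to-nearest bounds the per-weight error by half a quantization step (the group's range divided by 30), which is why larger group sizes trade accuracy for a smaller file.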
|
|
|
Check out the following links to learn more about the Berkeley Koala model.

* [Blog post](https://bair.berkeley.edu/blog/2023/04/03/koala/)
* [Online demo](https://koala.lmsys.org/)
* [EasyLM: training and serving framework on GitHub](https://github.com/young-geng/EasyLM)
* [Documentation for running Koala locally](https://github.com/young-geng/EasyLM/blob/main/docs/koala.md)
|
|
|
## License

The model weights are intended for academic research only, subject to the
[model License of LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md),
[Terms of Use of the data generated by OpenAI](https://openai.com/policies/terms-of-use),
and [Privacy Practices of ShareGPT](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb).
Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited.
|
|