This repo contains GGML format model files for Meta's CodeLlama 13B Instruct.

The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third-party clients and libraries are expected to still support it for a time, but many may also drop support.

Please use the GGUF models instead.

### About GGML

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
## Prompt template: CodeLlama

```
[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:
{prompt}
[/INST]

```
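To see how the template is applied in code, here is a minimal sketch using llama-cpp-python. It assumes a release from before the GGUF transition (0.1.78 or earlier), since later releases no longer load GGML files; the file name and the example task are placeholders, so substitute whichever quantised file you downloaded.

```
# Minimal sketch: fill the CodeLlama instruct template and run a GGML file
# with llama-cpp-python <= 0.1.78 (later releases are GGUF-only).
from llama_cpp import Llama

TEMPLATE = """[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:
{prompt}
[/INST]"""

llm = Llama(
    model_path="codellama-13b-instruct.ggmlv3.q4_K_M.bin",  # any quant from this repo
    n_ctx=2048,       # context window, matching the -c 2048 CLI example below
    n_gpu_layers=32,  # layers to offload to GPU; set to 0 for CPU-only inference
)

output = llm(
    TEMPLATE.format(prompt="Write a function that reverses a string."),  # placeholder task
    max_tokens=512,
    temperature=0.7,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```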
<!-- compatibility_ggml start -->

For compatibility with latest llama.cpp, please use GGUF files instead.

```
./main -t 10 -ngl 32 -m codellama-13b-instruct.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:\nWrite a story about llamas\n[/INST]"
```
Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
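
If you script the invocation, the thread count can be derived rather than hard-coded. The following is a small sketch, not part of the original instructions: it assumes the `main` binary from a pre-GGUF llama.cpp build is in the working directory, and it passes the same prompt as the example above.

```
import os
import subprocess

# Rough heuristic: on SMT/hyper-threaded systems the physical core count is
# about half the logical count; psutil.cpu_count(logical=False) is more exact.
threads = max(1, (os.cpu_count() or 2) // 2)

subprocess.run([
    "./main",
    "-t", str(threads),
    "-ngl", "32",  # layers offloaded to GPU; lower or drop without GPU acceleration
    "-m", "codellama-13b-instruct.ggmlv3.q4_K_M.bin",
    "--color",
    "-c", "2048",
    "--temp", "0.7",
    "--repeat_penalty", "1.1",
    "-n", "-1",
    # Python's \n escape produces real newlines in the prompt, which is what
    # the [INST] template expects.
    "-p", "[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:\nWrite a story about llamas\n[/INST]",
])
```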
**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser

Thank you to all my generous patrons and donators!

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).
## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks; Code Llama - Python is designed specifically to handle the Python programming language; and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.