---
base_model: eryk-mazus/polka-1.1b-chat
inference: false
language:
- pl
license: apache-2.0
model_name: Polka-1.1B-Chat
model_type: tinyllama
model_creator: Eryk Mazuś
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
---
*I've copy-pasted some information from TheBloke's model cards, hope it's ok.*
For a model of this size, quality appears to degrade much more under aggressive quantization than it does for larger models. Personally, I would advise sticking with `fp16` or `int8` (`Q8_0`) for this model.
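If you want to produce those files yourself, here is a minimal sketch using llama.cpp's own conversion tools (assuming a local llama.cpp checkout with `convert.py` and the `quantize` binary; the paths and output names are placeholders):

```shell
# Sketch: convert the HF checkpoint to fp16 GGUF, then quantize to 8-bit (Q8_0).
# Run from a llama.cpp checkout; adjust paths to your setup.
python convert.py /path/to/polka-1.1b-chat --outtype f16 \
  --outfile polka-1.1b-chat-f16.gguf
./quantize polka-1.1b-chat-f16.gguf polka-1.1b-chat-Q8_0.gguf Q8_0
```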
## Prompt template: ChatML
```
<|im_start|>system
Jesteś pomocnym asystentem.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
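Filled in with a placeholder user message, the rendered prompt looks like this (the system line is Polish for "You are a helpful assistant."):

```
<|im_start|>system
Jesteś pomocnym asystentem.<|im_end|>
<|im_start|>user
Jak się masz?<|im_end|>
<|im_start|>assistant
```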
## Example `llama.cpp` command
```shell
./main -ngl 32 -m ./polka-1.1b-chat-gguf/polka-1.1b-chat-Q8_0.gguf --color -c 2048 --temp 0.2 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\nJesteś pomocnym asystentem.<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to the GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended-sequence models (e.g. 8K, 16K, 32K), the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`, as shown below.
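For example (a sketch with the same model and sampling settings as above; drop `-ngl 32` if you don't have GPU offload):

```shell
# Interactive, instruction-following mode instead of a one-shot prompt.
./main -ngl 32 -m ./polka-1.1b-chat-gguf/polka-1.1b-chat-Q8_0.gguf --color \
  -c 2048 --temp 0.2 --repeat_penalty 1.1 -n -1 -i -ins
```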