---
library_name: llama-cpp
language:
- ru
- en
---
- Quantized from the original BF16 version: [Vikhrmodels/Vikhr-7B-instruct_0.4](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.4)
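If you want to reproduce one of these quants yourself, the following is a minimal sketch using llama.cpp's conversion and quantization tools (exact script and binary names vary between llama.cpp releases, and a local clone of the original repository is assumed):

```shell
# Convert the original HF checkpoint to a full-precision GGUF
python convert_hf_to_gguf.py ./Vikhr-7B-instruct_0.4 \
  --outtype bf16 \
  --outfile vikhr-7b-instruct_0.4.BF16.gguf

# Quantize to the desired bpw, e.g. Q4_1
./llama-quantize vikhr-7b-instruct_0.4.BF16.gguf vikhr-7b-instruct_0.4.Q4_1.gguf Q4_1
```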
### Ollama
To get the `Q4_1` version of the model, simply run
```shell
ollama pull wavecut/vikhr
```
or create the model from one of the other bpw versions using an Ollama Modelfile:
```Modelfile
# Point FROM at the locally downloaded GGUF of the quant you want to use
FROM ./vikhr-7b-instruct_0.4.INSERT_YOUR_QUANT_HERE.gguf
PARAMETER temperature 0.25
PARAMETER top_k 50
PARAMETER top_p 0.98
PARAMETER num_ctx 1512
# Stop generation at the ChatML turn delimiters
PARAMETER stop <|im_end|>
PARAMETER stop <|im_start|>
# Empty default system prompt; put your own instructions here if needed
SYSTEM """"""
TEMPLATE """<s>{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
"""
```
```shell
ollama create vikhr -f Modelfile
ollama run vikhr
```
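Once created, the model can also be queried through Ollama's local REST API (a minimal sketch, assuming the Ollama server is running on its default port 11434; the prompt text is only an example):

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "vikhr",
  "prompt": "Hello! Tell me about yourself.",
  "stream": false
}'
```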