---
library_name: llama-cpp
language:
- ru
- en
---

Quantized from the original BF16 transformers checkpoint: Vikhrmodels/Vikhr-7B-instruct_0.4.
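The GGUF files can also be run directly with llama.cpp. Below is a minimal sketch, assuming a recent build that ships the `llama-cli` binary and a locally downloaded Q4_1 file named `vikhr-7b-instruct_0.4.Q4_1.gguf` (both the binary name and the filename are assumptions; adjust them to your setup):

```shell
# Interactive ChatML chat with the quantized GGUF via llama.cpp's llama-cli.
# Sampling settings mirror the Ollama Modelfile below; flag names may vary
# slightly between llama.cpp releases.
llama-cli -m ./vikhr-7b-instruct_0.4.Q4_1.gguf \
  -cnv --chat-template chatml \
  --temp 0.25 --top-k 50 --top-p 0.98 -c 1512
```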
## Ollama

To get the Q4_1 version of the model, simply run `ollama pull wavecut/vikhr`, or build it from any other bpw (bits-per-weight) quant with an Ollama Modelfile:
```
FROM ./vikhr-7b-instruct_0.4.INSERT_YOUR_QUANT_HERE.gguf
PARAMETER temperature 0.25
PARAMETER top_k 50
PARAMETER top_p 0.98
PARAMETER num_ctx 1512
PARAMETER stop <|im_end|>
PARAMETER stop <|im_start|>
SYSTEM """"""
TEMPLATE """<s>{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
"""
```
Then create and run the model:

```shell
ollama create vikhr -f Modelfile
ollama run vikhr
```
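Once created, the model can also be queried programmatically through Ollama's local REST API instead of the interactive `ollama run` prompt. A minimal sketch, assuming the Ollama server is running on its default port 11434 and the model was created as `vikhr` per the steps above:

```shell
# Single non-streaming completion from the "vikhr" model created above.
curl http://localhost:11434/api/generate -d '{
  "model": "vikhr",
  "prompt": "Hello! Briefly introduce yourself.",
  "stream": false
}'
```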