---
datasets:
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/oasst1_ru_main_branch
- IlyaGusev/gpt_roleplay_realm
- lksy/ru_instruct_gpt4
language:
- ru
pipeline_tag: conversational
license: cc-by-4.0
base_model: IlyaGusev/saiga_mistral_7b_lora
tags:
- llama-cpp
- gguf-my-lora
---
# Mortido/saiga_mistral_7b_lora-F16-GGUF
This LoRA adapter was converted to GGUF format from [`IlyaGusev/saiga_mistral_7b_lora`](https://huggingface.co/IlyaGusev/saiga_mistral_7b_lora) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/IlyaGusev/saiga_mistral_7b_lora) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora saiga_mistral_7b_lora-f16.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora saiga_mistral_7b_lora-f16.gguf (...other args)
```
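If you want to tune how strongly the adapter influences generation, llama.cpp also provides a `--lora-scaled` option that takes the adapter path followed by a scaling factor. A minimal sketch (the `base_model.gguf` path, prompt, and scale value are placeholders, not recommendations):

```bash
# Apply the adapter at half strength (default with --lora is 1.0).
# --lora-scaled takes the adapter file and a numeric scale.
llama-cli -m base_model.gguf \
  --lora-scaled saiga_mistral_7b_lora-f16.gguf 0.5 \
  -p "Привет! Расскажи о себе." -n 128
```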
To learn more about LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
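Recent llama.cpp server builds also expose a `/lora-adapters` HTTP endpoint for inspecting and rescaling loaded adapters at runtime. A sketch, assuming the server above is running on its default port 8080 and your build includes this endpoint (check the server documentation linked above for your version):

```bash
# List the adapters the server loaded via --lora (id 0 is the first one).
curl http://localhost:8080/lora-adapters

# Change the adapter's scale at runtime without restarting the server.
curl -X POST http://localhost:8080/lora-adapters \
  -H "Content-Type: application/json" \
  -d '[{"id": 0, "scale": 0.5}]'
```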