---
license: apache-2.0
datasets:
- Helsinki-NLP/opus_paracrawl
- turuta/Multi30k-uk
language:
- uk
- en
metrics:
- bleu
library_name: peft
pipeline_tag: text-generation
base_model: lang-uk/dragoman
tags:
- translation
- llama-cpp
- gguf-my-lora
widget:
- text: '[INST] who holds this neighborhood? [/INST]'
model-index:
- name: Dragoman
  results:
  - task:
      type: translation
      name: English-Ukrainian Translation
    dataset:
      name: FLORES-101
      type: facebook/flores
      config: eng_Latn-ukr_Cyrl
      split: devtest
    metrics:
    - type: bleu
      value: 32.34
      name: Test BLEU
---

# Tymkolt/dragoman-F16-GGUF

This LoRA adapter was converted to GGUF format from [`lang-uk/dragoman`](https://huggingface.co/lang-uk/dragoman) using ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.

Refer to the [original adapter repository](https://huggingface.co/lang-uk/dragoman) for more details.

## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora dragoman-f16.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora dragoman-f16.gguf (...other args)
```

To learn more about LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
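The `--lora` flag expects both GGUF files to be present locally. A minimal sketch of fetching them with `huggingface-cli` is shown below; the adapter filename follows the usage example above, while the base-model repository and filename are placeholders that should be replaced with a GGUF build of the adapter's base model:

```bash
# download the converted LoRA adapter from this repo
huggingface-cli download Tymkolt/dragoman-F16-GGUF dragoman-f16.gguf --local-dir .

# download a GGUF build of the base model
# (placeholder repo/filename -- replace with the actual GGUF conversion
# of the base model this adapter was trained on)
huggingface-cli download <base-model-gguf-repo> base_model.gguf --local-dir .

# run the adapter on top of the base model with the prompt format from the widget above
llama-cli -m base_model.gguf --lora dragoman-f16.gguf \
  -p "[INST] who holds this neighborhood? [/INST]"
```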