---
license: apache-2.0
datasets:
  - Helsinki-NLP/opus_paracrawl
  - turuta/Multi30k-uk
language:
  - uk
  - en
metrics:
  - bleu
library_name: peft
pipeline_tag: text-generation
base_model: lang-uk/dragoman
tags:
  - translation
  - llama-cpp
  - gguf-my-lora
widget:
  - text: '[INST] who holds this neighborhood? [/INST]'
model-index:
  - name: Dragoman
    results:
      - task:
          type: translation
          name: English-Ukrainian Translation
        dataset:
          name: FLORES-101
          type: facebook/flores
          config: eng_Latn-ukr_Cyrl
          split: devtest
        metrics:
          - type: bleu
            value: 32.34
            name: Test BLEU
---

# Tymkolt/dragoman-F16-GGUF

This LoRA adapter was converted to GGUF format from [`lang-uk/dragoman`](https://huggingface.co/lang-uk/dragoman) via ggml.ai's GGUF-my-lora space. Refer to the original adapter repository for more details.

## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora dragoman-f16.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora dragoman-f16.gguf (...other args)
```

To learn more about LoRA usage with the llama.cpp server, refer to the llama.cpp server documentation.
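As a concrete starting point, the adapter file can be fetched with `huggingface-cli` and applied on top of a local base-model GGUF. This is a minimal sketch: the adapter filename `dragoman-f16.gguf` and the base-model path `base_model.gguf` are assumptions — check this repository's file list and point `-m` at your own converted `lang-uk/dragoman` base model.

```shell
# Sketch: download the LoRA adapter from this repo (filename assumed;
# verify it in the "Files" tab), then run llama-cli with the adapter
# applied to a locally available base-model GGUF.
huggingface-cli download Tymkolt/dragoman-F16-GGUF dragoman-f16.gguf --local-dir .

# base_model.gguf is a placeholder for your GGUF conversion of the base model
llama-cli -m base_model.gguf --lora dragoman-f16.gguf \
  -p "[INST] who holds this neighborhood? [/INST]"
```

The `--lora` flag merges the adapter at load time, so the base model file stays unmodified and the same base GGUF can be reused with other adapters.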