
Model Card for Capybara-Finnish-V1-8B

This is a merge of the LoRA adapter mpasila/Capybara-Finnish-V1-8B-LoRA into its base model.

Base model used: mpasila/gpt3-finnish-8B-gptq-4bit (a GPTQ 4-bit quantization; the original unquantized model is TurkuNLP/gpt3-finnish-8B).

The LoRA was trained on the Finnish-NLP/Capybara-fi-deepl-translated-sft dataset, modified so that it uses Alpaca formatting.

It uses the Alpaca format, but with the system instruction translated into Finnish (English: "Below is an instruction that describes a task. Write a response that appropriately completes the request."):

{
    "instruction,output": "Alla on ohje, jossa kuvataan tehtävä. Kirjoita vastaus, joka täyttää pyynnön asianmukaisesti.\n\n### Instruction:\n%instruction%\n\n### Response:\n%output%",
    "instruction,input,output": "Alla on ohje, jossa kuvataan tehtävä ja joka on yhdistetty kontekstia lisäävään syötteeseen. Kirjoita vastaus, joka täyttää pyynnön asianmukaisesti.\n\n### Instruction:\n%instruction%\n\n### Input:\n%input%\n\n### Response:\n%output%"
}
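As a minimal sketch (not the card author's training code), the %placeholder% template above can be filled like this; the example instruction is a placeholder:

```python
# Template string copied from the model card (the no-input variant).
TEMPLATE_NO_INPUT = (
    "Alla on ohje, jossa kuvataan tehtävä. Kirjoita vastaus, joka "
    "täyttää pyynnön asianmukaisesti.\n\n"
    "### Instruction:\n%instruction%\n\n### Response:\n%output%"
)

def format_prompt(instruction: str, output: str = "") -> str:
    """Substitute the %placeholders% in the Alpaca-style template.

    Leaving output empty produces an inference prompt ending at
    '### Response:' so the model continues from there.
    """
    return (TEMPLATE_NO_INPUT
            .replace("%instruction%", instruction)
            .replace("%output%", output))

prompt = format_prompt("Kerro lyhyesti Suomen historiasta.")
print(prompt)
```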

Merged using this Colab notebook. This may not be the best way to merge a quantized LoRA onto a float16 model, but I just wanted to get something working quickly. You can try merging it more carefully if you want.
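Numerically, merging a LoRA (what PEFT's `merge_and_unload` does) just folds the scaled low-rank update into the base weight, so the merged model needs no adapter at inference. A toy sketch with placeholder shapes, not the real 8B weights:

```python
# Toy illustration of a LoRA merge: W_merged = W + (alpha / r) * (A @ B).
# Shapes and values are placeholders, not the actual model's.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 8, 2          # hidden sizes and LoRA rank (toy values)
W = rng.standard_normal((d_in, d_out))   # base weight
A = rng.standard_normal((d_in, r))       # LoRA down-projection
B = rng.standard_normal((r, d_out))      # LoRA up-projection
alpha = 4.0                              # LoRA scaling numerator

# Fold the adapter into the base weight.
W_merged = W + (alpha / r) * (A @ B)

# The merged weight reproduces the base-plus-adapter forward pass.
x = rng.standard_normal((1, d_in))
y_adapter = x @ W + (alpha / r) * (x @ A) @ B
y_merged = x @ W_merged
print(np.allclose(y_adapter, y_merged))
```

The quantization wrinkle the card mentions is that a GPTQ base stores 4-bit weights, so merging into the float16 original involves dequantization error the adapter was not trained against.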

Framework versions

  • PEFT 0.8.2
Model size: 6.98B params · Tensor type: FP16 (Safetensors)
