
# SwahiliInstruct-v0.2

This is a Mistral model that has been fine-tuned on the Swahili Alpaca dataset for 3 epochs.
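For reference, below is a minimal sketch of what such a fine-tuning run could look like. This is not the author's actual training script: the base checkpoint, dataset id, column names, and hyperparameters are all assumptions; only the prompt template, the 3 epochs, and the Mistral base are taken from this card.

```python
# Hypothetical fine-tuning sketch, NOT the released training script.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM, AutoTokenizer,
    DataCollatorForLanguageModeling, Trainer, TrainingArguments,
)

base = "mistralai/Mistral-7B-v0.1"          # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token   # Mistral ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Assumed dataset id and Alpaca-style columns ("instruction", "output").
ds = load_dataset("mwitiderrick/swahili_alpaca", split="train")

def tokenize(example):
    # Serialize each instruction/response pair into the card's prompt template.
    text = (f"### Maelekezo:\n{example['instruction']}\n"
            f"### Jibu:\n{example['output']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = ds.map(tokenize, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="swahili-instruct-v0.2",
        num_train_epochs=3,                 # matches the 3 epochs stated above
        per_device_train_batch_size=4,      # illustrative hyperparameters
        learning_rate=2e-5,
        fp16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```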

## Prompt Template

```
### Maelekezo:

{query}

### Jibu:
<leave a blank line for the model to respond>
```
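Because generation quality depends on reproducing this template exactly, a small formatting helper can be handy. `build_prompt` below is a name introduced here for illustration; it is not part of the released code.

```python
def build_prompt(query: str) -> str:
    # Wrap a user query in the training template; the model writes its
    # answer after the "### Jibu:" header.
    return f"### Maelekezo:\n{query}\n### Jibu:\n"
```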

## Usage

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("mwitiderrick/SwahiliInstruct-v0.2")
model = AutoModelForCausalLM.from_pretrained("mwitiderrick/SwahiliInstruct-v0.2", device_map="auto")

query = "Nipe maagizo ya kutengeneza mkate wa mandizi"  # roughly: "Give me instructions for making banana bread"
text_gen = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200, do_sample=True, repetition_penalty=1.1)
output = text_gen(f"### Maelekezo:\n{query}\n### Jibu:\n")
print(output[0]['generated_text'])
```


"""
 Maagizo ya kutengeneza mkate wa mandazi:
1. Preheat tanuri hadi 375°F (190°C).
2. Paka sufuria ya uso na siagi au jotoa sufuria.
3. Katika bakuli la chumvi, ongeza viungo vifuatavyo: unga, sukari ya kahawa, chumvi, mdalasini, na unga wa kakao.
Koroga mchanganyiko pamoja na mbegu za kikombe 1 1/2 za mtindi wenye jamii na hatua ya maji nyepesi.
4. Kando ya uwanja, changanya zaini ya yai 2
"""

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                             | Value |
|------------------------------------|------:|
| Avg.                               | 54.25 |
| AI2 Reasoning Challenge (25-shot)  | 55.20 |
| HellaSwag (10-shot)                | 78.22 |
| MMLU (5-shot)                      | 50.30 |
| TruthfulQA (0-shot)                | 57.08 |
| Winogrande (5-shot)                | 73.24 |
| GSM8k (5-shot)                     | 11.45 |
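These scores come from the Open LLM Leaderboard. To re-run a single task locally you could use EleutherAI's lm-evaluation-harness; a minimal sketch assuming its v0.4 Python API (`lm_eval.simple_evaluate`). The leaderboard's exact harness version and settings may differ, so local numbers may not match the table exactly.

```python
# Sketch: re-run one leaderboard task locally (pip install lm-eval).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mwitiderrick/SwahiliInstruct-v0.2,dtype=float16",
    tasks=["arc_challenge"],   # AI2 Reasoning Challenge
    num_fewshot=25,            # the leaderboard uses 25-shot for ARC
)
print(results["results"]["arc_challenge"])
```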