
Model Card for BiniyamAjaw/llama-2-7b-finetuned-adapters

A Llama-2-7B model fine-tuned with LoRA on an Amharic corpus collected from public Telegram channels and groups.

Model Details

Model Description

  • Developed by: Biniyam Ajaw, Elias Assamnew
  • Funded by: 10 Academy
  • Shared by: Biniyam Ajaw
  • Model type: Text generation
  • Language(s) (NLP): Amharic, English
  • License: MIT
  • Finetuned from model: NousResearch/Llama-2-7b-hf

Uses

The model is still in development and was trained on a limited amount of data, so it may not generate the content you expect.
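
A minimal sketch of loading the adapter for generation, using the base and adapter repo ids from this card; the prompt and decoding settings are illustrative assumptions, not recommendations.

```python
# Minimal sketch: attach the LoRA adapter to the base model and generate text.
# Repo ids come from this card; decoding settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Llama-2-7b-hf"
adapter_id = "BiniyamAjaw/llama-2-7b-finetuned-adapters"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # LoRA weights on top of the frozen base
model.eval()

prompt = "ሰላም"  # replace with your own Amharic prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```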

Downstream Use

You can fine-tune this model on labeled data for a specific domain to get better results; a minimal sketch follows below.
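
A sketch of how further fine-tuning might look with PEFT and the transformers Trainer. The data file my_domain_data.jsonl (with a "text" field), output paths, and hyperparameters are hypothetical placeholders, not settings used for this model.

```python
# Sketch: continue training the LoRA adapter on domain-specific labeled data.
# Data file, output paths, and hyperparameters are hypothetical placeholders.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from peft import PeftModel

base_id = "NousResearch/Llama-2-7b-hf"
adapter_id = "BiniyamAjaw/llama-2-7b-finetuned-adapters"

tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships without a pad token

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
# is_trainable=True leaves the LoRA weights unfrozen so they can keep learning.
model = PeftModel.from_pretrained(base, adapter_id, is_trainable=True)

dataset = load_dataset("json", data_files="my_domain_data.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama2-amharic-domain",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
        bf16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama2-amharic-domain-adapter")  # saves adapter weights only
```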

Bias, Risks, and Limitations

The model is heavily biased toward generating news content. It may also repeat specific words, because it was trained on cleaned but unfiltered data owing to the limited number of tokens available.

Recommendations

The model performs better if you fine-tune it on labeled data for the kind of content you want it to generate.

Framework versions

  • PEFT 0.7.2.dev0