At the time of writing, the 🤗 transformers library doesn't have a Llama implementation for token classification (although there is an open PR).

This model is based on an implementation by community member @KoichiYasuoka.

  • Base model: unsloth/llama-2-7b-bnb-4bit
  • LoRA adaptation with rank 16 and alpha 32; the other adapter settings can be found in adapter_config.json
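The rank/alpha pair determines the scaling applied to the low-rank update (alpha / rank). A minimal sketch of the corresponding fields, assuming a typical PEFT adapter_config.json layout — only r and lora_alpha are taken from this card; the remaining fields are illustrative, not copied from the actual file:

```python
import json

# Key LoRA fields as they would appear in a PEFT adapter_config.json.
# r and lora_alpha come from the card; other fields are illustrative.
adapter_config = {
    "base_model_name_or_path": "unsloth/llama-2-7b-bnb-4bit",
    "peft_type": "LORA",
    "r": 16,           # LoRA rank (dimension of the low-rank matrices)
    "lora_alpha": 32,  # scaling numerator
}

# Effective scaling applied to the LoRA update: alpha / r
scaling = adapter_config["lora_alpha"] / adapter_config["r"]
print(json.dumps(adapter_config, indent=2))
print("scaling =", scaling)  # 2.0
```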

This model was trained for only a single epoch; however, a notebook is available for those who want to train on other datasets or for more epochs.
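For working with the model's predictions, here is a sketch of the CoNLL-2003 NER label set, assuming the standard ordering used by the Hugging Face conll2003 dataset (this id2label mapping is an assumption, not read from the model's config):

```python
# CoNLL-2003 NER tags in the order used by the HF `conll2003` dataset
# (assumed to match the fine-tuned classification head's id2label).
LABELS = [
    "O",
    "B-PER", "I-PER",
    "B-ORG", "I-ORG",
    "B-LOC", "I-LOC",
    "B-MISC", "I-MISC",
]
id2label = dict(enumerate(LABELS))
label2id = {label: i for i, label in enumerate(LABELS)}

# Example: decode a sequence of predicted tag ids into labels.
pred_ids = [1, 2, 0, 5]
print([id2label[i] for i in pred_ids])  # ['B-PER', 'I-PER', 'O', 'B-LOC']
```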
