---
license: mit
datasets:
- conll2003
language:
- en
metrics:
- f1
library_name: peft
pipeline_tag: token-classification
tags:
- unsloth
- llama-2
---

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="150"/>](https://github.com/unslothai/unsloth)


At the time of writing, the 🤗 Transformers library does not have a Llama implementation for token classification ([although there is an open PR](https://github.com/huggingface/transformers/pull/29878)).

This model is based on an [implementation](https://github.com/huggingface/transformers/issues/26521#issuecomment-1868284434) by community member [@KoichiYasuoka](https://github.com/KoichiYasuoka); a minimal sketch of that approach is shown below.
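
The implementation follows the standard token-classification pattern: a `LlamaModel` backbone with a dropout layer and a linear classification head over the per-token hidden states. The sketch below illustrates the idea under those assumptions; it is not a verbatim copy of the linked code, and the dropout rate is a placeholder.

```python
from torch import nn
from transformers import LlamaModel, LlamaPreTrainedModel
from transformers.modeling_outputs import TokenClassifierOutput


class LlamaForTokenClassification(LlamaPreTrainedModel):
    """Llama backbone with a linear per-token classification head."""

    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.model = LlamaModel(config)
        self.dropout = nn.Dropout(0.1)  # assumed dropout rate
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.post_init()

    def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):
        outputs = self.model(input_ids=input_ids, attention_mask=attention_mask, **kwargs)
        # Score every token position from the final hidden states.
        logits = self.classifier(self.dropout(outputs[0]))

        loss = None
        if labels is not None:
            # CrossEntropyLoss ignores positions labelled -100, per the HF convention.
            loss = nn.CrossEntropyLoss()(
                logits.view(-1, self.num_labels), labels.view(-1)
            )

        return TokenClassifierOutput(loss=loss, logits=logits)
```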

* Base model: `unsloth/llama-2-7b-bnb-4bit`
* LoRA adapter with rank 4 and alpha 32; the full set of adapter settings can be found in [`adapter_config.json`](https://huggingface.co/SauravMaheshkar/unsloth-llama-2-7b-bnb-4bit-conll2003-rank-4/blob/main/adapter_config.json) (a configuration sketch follows this list)
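
For illustration, an adapter with these hyperparameters could be created with PEFT roughly as follows. The `target_modules` and `lora_dropout` values here are placeholders, not necessarily what was used; the authoritative values are in `adapter_config.json`.

```python
from peft import LoraConfig, get_peft_model

# Base model built with the LlamaForTokenClassification sketch above.
base_model = LlamaForTokenClassification.from_pretrained(
    "unsloth/llama-2-7b-bnb-4bit",
    num_labels=9,  # CoNLL-2003 NER: O plus B-/I- tags for PER, ORG, LOC, MISC
)

lora_config = LoraConfig(
    r=4,            # rank, as stated above
    lora_alpha=32,  # alpha, as stated above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    lora_dropout=0.05,  # assumption
    bias="none",
    task_type="TOKEN_CLS",  # also marks the classification head as trainable
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```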

This model was trained for only a single epoch; however, a notebook is made available for those who want to train on other datasets or for longer.
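
For inference, one possible way to load the adapter on top of the base model (again assuming the `LlamaForTokenClassification` class sketched above) is:

```python
import torch
from transformers import AutoTokenizer
from peft import PeftModel

base = LlamaForTokenClassification.from_pretrained(
    "unsloth/llama-2-7b-bnb-4bit",
    num_labels=9,
)
model = PeftModel.from_pretrained(
    base, "SauravMaheshkar/unsloth-llama-2-7b-bnb-4bit-conll2003-rank-4"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-2-7b-bnb-4bit")

# First sentence of the CoNLL-2003 training split.
inputs = tokenizer("EU rejects German call to boycott British lamb .", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_label_ids = logits.argmax(dim=-1)
```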