---
license: openrail
datasets:
- NbAiLab/norwegian-alpaca
language:
- 'no'
- nb
pipeline_tag: text-generation
---

# NB-Alpaca-LoRA 7B

This is a Norwegian LoRA adapter generated by fine-tuning LLaMA-7B on the [Norwegian Alpaca](https://huggingface.co/datasets/NbAiLab/norwegian-alpaca) dataset.

## Usage

```python
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM

base_model = "decapoda-research/llama-7b-hf"
tokenizer = LlamaTokenizer.from_pretrained(base_model)
# Load the base model in 8-bit so it fits on a single GPU
model = LlamaForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=True,
    device_map="auto",
)
# Attach the Norwegian LoRA adapter on top of the base weights
model = PeftModel.from_pretrained(model, "NbAiLab/nb-alpaca-lora-7b")
```

For generation, the prompt still needs to follow the English Alpaca template:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
instruction = "Skriv en e-post der du ønsker velkommen til en ny medarbeider ved navn Svein"
# Pipelines are called directly (there is no `generate` method on a pipeline)
pipe(f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
""")
# Kjære Svein,
#
# Velkommen til vårt team! Vi er så glade for å ha deg med oss. Vi ser frem til å hjelpe deg med å nå dine mål og oppnå dine drømmer.
#
# Vi er alltid tilgjengelige hvis du har noen spørsmål eller ønsker å diskutere noen av våre prosjekter.
#
# Vi ser frem til å jobbe sammen med deg!
#
# Med vennlig
```
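The template above can be wrapped in a small helper. This is a sketch: `build_prompt` is our own name, not part of the released code, and it follows the standard Alpaca templates with and without an input field:

```python
def build_prompt(instruction: str, model_input: str = "") -> str:
    """Format an instruction (and optional input) with the English Alpaca template."""
    if model_input:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{model_input}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_prompt(
    "Skriv en e-post der du ønsker velkommen til en ny medarbeider ved navn Svein"
)
```

The resulting string can be passed directly to the pipeline shown above.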


## Data

The dataset is a translation into Norwegian Bokmål of [alpaca_data_cleaned.json](https://github.com/tloen/alpaca-lora/blob/main/alpaca_data_cleaned.json) (a cleaned version of the [Alpaca dataset made at Stanford](https://huggingface.co/datasets/tatsu-lab/alpaca)) using OpenAI's `gpt-3.5-turbo` model. We translated using a full-sample prompt instead of translating string by string, which resulted in more coherent `(instruction, input, output)` tuples and cost around $60.
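The full-sample approach can be illustrated with a sketch: whole records are serialized as JSON so the translator sees instruction, input, and output together rather than as isolated strings. The helper below is illustrative only, not the actual translation script:

```python
import json


def build_translation_prompt(records: list) -> str:
    """Pack complete (instruction, input, output) records into one translation
    request, so each tuple is translated coherently rather than string by string."""
    payload = json.dumps(records, ensure_ascii=False, indent=2)
    return (
        "Translate the values of the following JSON records into Norwegian "
        "Bokmål. Keep the keys and the JSON structure unchanged.\n\n" + payload
    )


sample = [
    {
        "instruction": "Give three tips for staying healthy.",
        "input": "",
        "output": "1. Eat a balanced diet...",
    }
]
prompt = build_translation_prompt(sample)
```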

**This dataset cannot be used to create models that compete in any way with OpenAI.**

## Finetuning

To fine-tune the LLaMA model we used the code available in [Alpaca Lora](https://github.com/tloen/alpaca-lora), which fine-tunes LLaMA using Hugging Face's PEFT library. We ran fine-tuning for 3 epochs with a sequence length of 512 and no gradient accumulation, which took 12 hours on a single A6000 with 48GB of VRAM on top of LLaMA-7B.
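An invocation along these lines reproduces the setup (flag names follow `finetune.py` in the Alpaca Lora repository; this is a sketch, and any values beyond the epochs and sequence length stated above are assumptions):

```shell
# Fine-tune LLaMA-7B with a LoRA adapter on the Norwegian Alpaca data
python finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path 'NbAiLab/norwegian-alpaca' \
    --output_dir './nb-alpaca-lora-7b' \
    --num_epochs 3 \
    --cutoff_len 512
```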

## Example outputs

Stanford Alpaca:
```
Instruction: Kom med en kreativ unnskyldning for å si at jeg ikke trenger å gå på festen.
Response: I'm sorry, but I don't need to go to the party.
```

NB-Alpaca:
```
Instruction: Kom med en kreativ unnskyldning for å si at jeg ikke trenger å gå på festen.
Response: Jeg beklager at jeg ikke kan gå på festen, men jeg tror at det er bedre at jeg ikke gjør det.
```


Stanford Alpaca:
```
Instruction: Hvordan kan jeg starte en karriere som dataforsker? Skriv i listeform.
Response:
Starting a career as a data scientist requires a combination of education, experience, and skills. 

1. Obtain a Bachelor's or Master's degree in computer science, statistics, mathematics, or a related field. 
2. Gain practical experience through internships or part-time jobs. 
3. Learn programming languages such as Python, R, and SQL. 
4. Develop strong analytical and problem-solving skills. 
5. Build a portfolio of projects to showcase your work. 
6. Network with other data scientists to learn
```

NB-Alpaca:
```
Instruction: Hvordan kan jeg starte en karriere som dataforsker? Skriv i listeform.
Response:
1. Søk etter en universitetsgrad i datavitenskap, matematikk, informatikk eller en relatert fag. 
2. Søk etter en jobb som dataforsker eller dataanalytiker. 
3. Utvikle dine ferdigheter i datavitenskap, matematikk, informatikk eller en relatert fag. 
4. Utvikle dine ferdigheter innen dataanalyse.
```

You can test it using the eval notebook [here](https://colab.research.google.com/github/22-hours/cabrita/blob/main/notebooks/cabrita-lora.ipynb).

## References

- [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)
- [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Norwegian Alpaca](https://huggingface.co/datasets/NbAiLab/norwegian-alpaca)
- [Alpaca LoRA](https://github.com/tloen/alpaca-lora)
- [ChatGPT](https://openai.com/blog/chatgpt)
- [Hugging Face](https://huggingface.co/)
 
## Hardware Requirements

For training we used an NVIDIA A6000 GPU with 48GB of VRAM. For evaluation, a T4 is sufficient.