---
license: cc
datasets:
- VMware/open-instruct-v1-oasst-dolly-hhrlhf
language:
- en
library_name: transformers
pipeline_tag: conversational
---

# VMware/open-llama-0.3T-7B-open-instruct-v1.1

---

# UPDATE: Final Version Now Available!

Please use the final version: [Open LLaMA 7B Open Instruct](https://huggingface.co/VMware/open-llama-7b-open-instruct)

---

## License
- <b>Commercially Viable</b>
- Instruction dataset, [VMware/open-instruct-v1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf) is under cc-by-sa-3.0
- Language Model ([openlm-research/open_llama_7b_preview_300bt](https://huggingface.co/openlm-research/open_llama_7b_preview_300bt/tree/main/open_llama_7b_preview_300bt_transformers_weights)) is under apache-2.0


## Nomenclature 

- Model: Open-LLaMA
- Model trained on: 300B (0.3T) tokens
- Model size: 7B parameters
- Dataset: Open-instruct-v1.1 (oasst, dolly, hhrlhf)
- Version: V1 (Alpaca prompt template; see the example below)
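
The "Alpaca prompt template" refers to the instruction-wrapping format used in the Transformers example below. A minimal sketch of how a raw instruction is rendered with it (the instruction string here is purely illustrative):

```python
# Alpaca-style template used by this model (same string as in the Transformers example below).
prompt_template = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request."
    "\n\n### Instruction:\n{instruction}\n\n### Response:"
)

# Illustrative instruction; substitute any task description.
print(prompt_template.format(instruction="Summarize the plot of Hamlet in two sentences."))
```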


## Use in Transformers


```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'VMware/open-llama-0.3T-7B-open-instruct-v1.1'

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='sequential')

# Alpaca-style prompt template used during instruction tuning
prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

prompt = 'Explain in simple terms how the attention mechanism of a transformer model works'

input_text = prompt_template.format(instruction=prompt)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

output_ids = model.generate(input_ids, max_length=512)

# Strip the prompt tokens so only the generated response is decoded
input_length = input_ids.shape[1]
output = tokenizer.decode(output_ids[0, input_length:])

print(output)

'''
The attention mechanism of a transformer model is designed to help the model understand the relationship between different parts of a sentence.
The model uses a weighted attention score to determine how much each input token contributes to the output.
The attention score is calculated by looking at the similarity between each input token and the output token, and assigning a weight to each input token based on this similarity.
This way, the model can better understand the relationship between different parts of a sentence and generate more accurate predictions.

'''
```
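
The example above uses greedy decoding. For more varied outputs, standard sampling arguments can be passed to `generate`; a minimal sketch reusing `model`, `tokenizer`, and `input_ids` from above (the temperature and top_p values are illustrative assumptions, not tuned for this model):

```python
# Sampling-based generation; the decoding hyperparameters below are illustrative, not tuned.
sampled_ids = model.generate(
    input_ids,
    max_length=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

# Decode only the newly generated tokens, as in the example above.
sampled_text = tokenizer.decode(sampled_ids[0, input_ids.shape[1]:], skip_special_tokens=True)
print(sampled_text)
```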

## Drawbacks

- The model was trained on a partially trained Open-LLaMA checkpoint (300B tokens, roughly 30% of the full training run), so there is significant room for improvement once fully trained Open-LLaMA checkpoints are available
- From what we have observed, the model struggles with few-shot prompting (we plan to address this in future iterations)
- When asked for code, it may or may not wrap the code in markdown code fences (```)
- It does not indent Python code
  

## Evaluation

<B>TODO</B>