Teja-Gollapudi committed
Commit 9488e4b
Parent(s): c2fa2d4
Update README.md
README.md CHANGED
@@ -14,7 +14,7 @@ Instruction-tuned version of the fully trained Open LLama 7B v2 model. The mode
  - This model performs better on code compared to v1 due to the improvements made on the base model by the openlm-research team.
  - The instruction model is trained on an improved instruction tuning dataset compared to v1

- **NOTE**: The model was trained using the Alpaca prompt template
+ **NOTE**: The model was trained using the Alpaca prompt template <br>
  **NOTE**: The fast tokenizer results in incorrect encoding; set the ```use_fast = False``` parameter when instantiating the tokenizer
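For context on the two notes in the changed section, below is a minimal sketch of how they would typically be applied when loading and querying the model, assuming the standard Hugging Face ```transformers``` API and the standard Alpaca no-input prompt format. The repository id is a placeholder, not something stated in this commit.

```python
# Hypothetical sketch illustrating the two README notes:
#   1) the model expects the Alpaca prompt template, and
#   2) the tokenizer should be created with use_fast=False.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "open-llama-7b-v2-instruct"  # placeholder; substitute the actual repo id

# Note 2: the fast tokenizer encodes incorrectly for this model, so disable it.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Note 1: wrap the instruction in the standard Alpaca (no-input) prompt template.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

prompt = ALPACA_TEMPLATE.format(
    instruction="Write a Python function that reverses a string."
)
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)

# Decode only the tokens generated after the prompt.
print(
    tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
)
```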