---
license: llama2
datasets:
- bertin-project/alpaca-spanish
language:
- es
library_name: transformers
---
|
## Llama 2-13b-alpaca-spanish LoRA
|
This is a LoRA for Llama 2 13B trained on a translated [Alpaca dataset](https://huggingface.co/datasets/bertin-project/alpaca-spanish) in an attempt to improve the Spanish performance of the Llama 2 foundation model, with a conversational focus.
|
|
|
The base model was [The Bloke's Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16), trained in 4-bit precision with an added padding token.
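Loading a base model plus a LoRA adapter with `transformers` and `peft` can be sketched as below. This is a hypothetical sketch, not the card's official usage snippet: the adapter repo id is a placeholder, the Alpaca-style Spanish prompt template is an assumption about the translated dataset's format, and `[PAD]` stands in for whatever padding token was actually added during training.

```python
def build_prompt(instruction: str) -> str:
    """Format an instruction in Alpaca style (Spanish translation assumed)."""
    return (
        "A continuación hay una instrucción que describe una tarea. "
        "Escribe una respuesta que complete adecuadamente la petición.\n\n"
        f"### Instrucción:\n{instruction}\n\n### Respuesta:\n"
    )

if __name__ == "__main__":
    # Heavy imports kept inside the guard so the prompt helper is importable alone.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "TheBloke/Llama-2-13B-fp16"
    adapter_id = "your-username/llama-2-13b-alpaca-spanish-lora"  # placeholder repo id

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    # The LoRA was trained with an added padding token; mirror that here
    # ("[PAD]" is an assumption about the token actually used).
    tokenizer.add_special_tokens({"pad_token": "[PAD]"})

    model = AutoModelForCausalLM.from_pretrained(
        base_id, device_map="auto", load_in_4bit=True
    )
    model.resize_token_embeddings(len(tokenizer))  # account for the new pad token
    model = PeftModel.from_pretrained(model, adapter_id)

    inputs = tokenizer(
        build_prompt("Explica qué es un modelo de lenguaje."), return_tensors="pt"
    ).to(model.device)
    output = model.generate(**inputs, max_new_tokens=200)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```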
|
|
|
| Training parameter | Value |
| ------------------ | ----- |
| LoRA scale | 2 |
| Epochs | 0.75 |
| Learning rate | 2e-5 |
| Warmup steps | 100 |
| Loss | 1.07 |
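In `peft`, the LoRA scale in the table corresponds to `lora_alpha / r`, so a scale of 2 means `lora_alpha = 2 * r`. The card does not state the rank or target modules, so the values below are placeholders in a hedged config sketch, not the actual training configuration.

```python
from peft import LoraConfig

r = 16  # rank not given in the card; placeholder value
config = LoraConfig(
    r=r,
    lora_alpha=2 * r,  # LoRA scale = lora_alpha / r = 2 (from the table)
    target_modules=["q_proj", "v_proj"],  # assumption; actual modules unknown
    lora_dropout=0.05,  # assumption; not stated in the card
    bias="none",
    task_type="CAUSAL_LM",
)
```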