---
license: mit
datasets:
- tatsu-lab/alpaca
---

This repo contains a low-rank adapter (LoRA) for LLaMA-13B, fine-tuned on the Stanford Alpaca dataset.
### How to use (8-bit)

```python
import torch
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

# Load the base model's tokenizer and weights in 8-bit (requires bitsandbytes)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-13b-hf")
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-13b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Apply the LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(
    model,
    "baruga/alpaca-lora-13b",
    torch_dtype=torch.float16,
)
```
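Once the adapter is loaded, you can prompt the model. Below is a minimal generation sketch using an Alpaca-style instruction prompt; the instruction text and decoding settings are illustrative choices, not values prescribed by this repo.

```python
# Alpaca-style prompt template (instruction-only variant); the
# instruction below is just an example.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a low-rank adapter (LoRA) is.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Conservative decoding settings; tune these for your use case.
generation_config = GenerationConfig(
    temperature=0.1,
    top_p=0.75,
    num_beams=4,
)

with torch.no_grad():
    output = model.generate(
        **inputs,
        generation_config=generation_config,
        max_new_tokens=256,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```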
For further information, check out the GitHub repo: https://github.com/tloen/alpaca-lora.