# 🦙🎛️ LLaMA-LoRA
Makes it easy to evaluate and fine-tune LLaMA models with low-rank adaptation (LoRA).
## Features
* [1-click up and running in Google Colab](https://colab.research.google.com/github/zetavg/LLaMA-LoRA/blob/main/LLaMA_LoRA.ipynb).
* Loads and stores data in Google Drive.
* Evaluate various LLaMA LoRA models stored in your folder or on Hugging Face.
* Fine-tune LLaMA models with different prompt templates and training dataset formats.
* Load JSON and JSONL datasets from your folder, or even paste plain text directly into the UI.
* Supports the Stanford Alpaca [seed_tasks](https://github.com/tatsu-lab/stanford_alpaca/blob/main/seed_tasks.jsonl), [alpaca_data](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json) and [OpenAI "prompt"-"completion"](https://platform.openai.com/docs/guides/fine-tuning/data-formatting) formats.
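As a quick orientation, here is a minimal sketch of what the two main dataset formats look like on disk. The field names follow the linked Stanford Alpaca and OpenAI documentation; the file names and example records are illustrative only.

```python
import json

# Stanford Alpaca format: a JSON array of instruction/input/output records.
alpaca_data = [
    {
        "instruction": "Translate the sentence to French.",
        "input": "Hello, world!",
        "output": "Bonjour, le monde !",
    }
]
with open("my_dataset.json", "w") as f:
    json.dump(alpaca_data, f, indent=2)

# OpenAI "prompt"-"completion" format: one JSON object per line (JSONL).
openai_data = [
    {"prompt": "Translate to French: Hello, world!",
     "completion": "Bonjour, le monde !"}
]
with open("my_dataset.jsonl", "w") as f:
    for record in openai_data:
        f.write(json.dumps(record) + "\n")
```

Either file can then be placed in your data folder (or its contents pasted into the UI) for fine-tuning.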
## Acknowledgements
* https://github.com/tloen/alpaca-lora
* https://github.com/lxe/simple-llama-finetuner
* ...
TBC