LLaMA-LoRA-Tuner-UI-Demo / LLaMA_LoRA.ipynb
zetavg
latest peft makes model.save_pretrained in finetune.py save a 443 B adapter_model.bin, which is clearly incorrect (normally adapter_model.bin should be > 16 MB)
7dd8f96
Open in Colab
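The commit message above describes a symptom of a peft version mismatch: after fine-tuning, `model.save_pretrained` writes an adapter_model.bin that is only a few hundred bytes, i.e. an essentially empty state dict instead of the LoRA weights. The sketch below is not part of the notebook; it is a minimal, hedged illustration of how one might detect this after training and re-save the adapter weights explicitly via peft's `get_peft_model_state_dict`. The helper name `save_and_check_adapter`, the output directory, and the size threshold are illustrative assumptions, not values from finetune.py.

```python
# Minimal sketch (not from the notebook): verify that the saved LoRA adapter
# is not an empty checkpoint, and fall back to saving the weights explicitly.
import os

import torch
from peft import get_peft_model_state_dict  # real peft helper for LoRA weights


def save_and_check_adapter(model, output_dir="./lora-output", min_bytes=1_000_000):
    """Save the PEFT adapter and warn/repair if the checkpoint looks empty.

    A ~443 B adapter_model.bin usually means only metadata (an empty state
    dict) was serialized; a healthy LoRA checkpoint is typically several MB.
    `output_dir` and `min_bytes` are hypothetical defaults for illustration.
    """
    model.save_pretrained(output_dir)

    path = os.path.join(output_dir, "adapter_model.bin")
    size = os.path.getsize(path) if os.path.exists(path) else 0

    if size < min_bytes:
        # Fallback: extract the LoRA weights directly from the model and save
        # them, bypassing whatever state_dict override the newer peft broke.
        torch.save(get_peft_model_state_dict(model), path)
        size = os.path.getsize(path)

    print(f"adapter_model.bin: {size / 1e6:.2f} MB")
```

Another common remedy, if the explicit re-save is not desired, is simply to pin peft to a version known to work with the training script's save path.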