How to save and load a PEFT/LoRA finetune
I am trying to further finetune Starchat-Beta: save my progress, load my progress, and continue training. But whatever I do, it doesn't come together. Whenever I load my progress and continue training, my loss starts back at its initial value (3.xxx in my case).
I'll run you through my code and then the problem.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained(BASEPATH)
model = AutoModelForCausalLM.from_pretrained(
    "/notebooks/starbaseplus",
    ...
)
# I get both the Tokenizer and the Foundation model from the starbaseplus repo (which I have locally).
from peft import LoraConfig, get_peft_model

peftconfig = LoraConfig(
    "/notebooks/starchat-beta",
    base_model_name_or_path="/notebooks/starbaseplus",
    ...
)
model = get_peft_model(model, peftconfig)
# All Gucci so far, the model and the LoRA fine-tune are loaded from the starchat-beta repo (also local).
# important for later:
print_trainable_parameters(model)
# trainable params: 306 Million || all params: 15 Billion || trainable: 1.971%
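(For reference, print_trainable_parameters is the usual helper from the PEFT examples; a minimal sketch of it:)

def print_trainable_parameters(model):
    # count parameters that still receive gradients vs. all parameters
    trainable, total = 0, 0
    for _, param in model.named_parameters():
        total += param.numel()
        if param.requires_grad:
            trainable += param.numel()
    print(f"trainable params: {trainable} || all params: {total} || trainable: {100 * trainable / total:.3f}%")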
from transformers import Trainer

trainer = Trainer(
    model=model,
    ...
)
trainer.train()
# I train, and the loss drops from 3.xx to 1.xx.
# Now, I either follow the Hugging Face docs:
model.save_pretrained("./huggingface_model")
# -> saves /notebooks/huggingface_model/adapter_model.bin (16 MB).
# or an alternative I found on SO:
trainer.save_model("./torch_model")
# -> saves /notebooks/torch_model/pytorch_model.bin (60 GB).
I now have two alternatives saved to disk. Let's restart and try each of these approaches.
First, the Hugging Face docs approach:
I now have three sets of weights:
- the foundation model - starbaseplus
- the chat finetune - starchat-beta
- the 16 MB saved adapter - adapter_model.bin

But I only have two opportunities to load weights:
- AutoModelForCausalLM.from_pretrained
- either get_peft_model or PeftModel.from_pretrained
Neither combination works; training restarts at a loss of 3.x.
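Concretely, the two loading patterns I tried look roughly like this (a sketch from memory, same paths as above):

from transformers import AutoModelForCausalLM
from peft import PeftModel, get_peft_model

base = AutoModelForCausalLM.from_pretrained("/notebooks/starbaseplus")
# attempt A: wrap the base model in a fresh LoRA config
model = get_peft_model(base, peftconfig)
# attempt B (instead of A): load the saved 16 MB adapter onto the base model
model = PeftModel.from_pretrained(base, "./huggingface_model")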
Second, the SO approach: load the 60 GB file instead of the old starchat-beta repo:

model = get_peft_model("/notebooks/torch_model/pytorch_model.bin", peftconfig)
This also doesn't work. print_trainable_parameters(model) drops to trainable: 0.02%, and training restarts at a loss of 3.x.
So there are four different ways to save a model:
- model.save_pretrained(PATH)
- torch.save({'model_state_dict': model.state_dict()}, PATH)
- trainer.save_model(PATH)
- TrainingArguments(save_strategy='steps') (the checkpoint route, sketched below)
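For the checkpoint route, my understanding is it would look something like this (output_dir and save_steps are just illustrative values):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./checkpoints",
    save_strategy="steps",
    save_steps=500,
)
# pass these to the Trainer; a later run can then restore weights,
# optimizer state, and step count with:
# trainer.train(resume_from_checkpoint=True)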
Which one can I use to store the PeftModelForCausalLM(AutoModelForCausalLM()), and how do I load it again?
Any update on this? I want to continue training my LoRA-tuned model.
Actually, I have finetuned T5-large with LoRA on my task for one epoch. Now I want to tune it for several more epochs. Is it possible to use the existing LoRA weights and update them on the new dataset, so that I don't have to start from scratch?
Please share some links; I followed the https://www.philschmid.de/fine-tune-flan-t5-peft guide for training.
Thanks 🙏🏻
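The pattern I am hoping for looks something like this (a sketch, untested; "./lora-t5-checkpoint" stands in for wherever the adapter was saved):

from transformers import AutoModelForSeq2SeqLM
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("t5-large")
# is_trainable=True should keep the loaded LoRA weights unfrozen so training can continue
model = PeftModel.from_pretrained(base, "./lora-t5-checkpoint", is_trainable=True)
# then build a Trainer around `model` and train for the additional epochs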
Use the Trainer API to save to a local path:

trainer.save_model("./torch_model")

Then load the model from the saved path:
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel, PeftConfig

peft_model_id = "./torch_model"
config = PeftConfig.from_pretrained(peft_model_id)
# load the base model the adapter was trained on top of
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
# attach the saved LoRA adapter
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
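One caveat if you want to keep training rather than just run inference: an adapter loaded with PeftModel.from_pretrained is frozen by default, which can show up as a near-zero trainable-parameter count. Passing is_trainable=True keeps the LoRA weights trainable:

model = PeftModel.from_pretrained(model, peft_model_id, is_trainable=True)
# the adapter parameters now have requires_grad=True, so trainer.train() can update them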