Fine-tuning with LoRA

by konbraphat51

I tried to fine-tune this model with the PEFT library and the Trainer class from the transformers library.

I got this error:

```
japanese-stablelm-base-alpha-7b\69e1599948909cccca2369385fb6c82ef59086f4\modeling_japanese_stablelm_alpha.py", line 383, in apply_rotary_pos_emb
    q_embed = (q * cos) + (rotate_half(q) * sin)
RuntimeError: The size of tensor a (32) must match the size of tensor b (64) at non-singleton dimension 3
```

The model is loaded like this:

```python
tokenizer = LlamaTokenizer.from_pretrained("novelai/nerdstash-tokenizer-v1", load_in_8bit=self.finetuner_properties.useint8, device_map="auto")

lm_model = AutoModelForCausalLM.from_pretrained("stabilityai/japanese-stablelm-base-alpha-7b", load_in_8bit=True, device_map="auto", trust_remote_code=True)
```

My arguments are as follows (how they are wired into LoraConfig and Trainer is sketched after this block):

```python
# for LoraConfig
lora_r: int = 8,
lora_alpha: int = 16,
lora_dropout: float = 0.05,
lora_bias: str = "none",

# for Trainer
ta_epochs: int = 4,
ta_logging_steps: int = 200,
ta_save_steps: int = 100000,
ta_save_total_limit: int = 3,
ta_train_batch_size: int = 8,
ta_warmup_steps: int = 200,
ta_weight_decay: float = 0.1,
ta_learning_rate: float = 5e-4,
```
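
These parameters feed into the training setup roughly like this (a simplified sketch, not my exact code: `target_modules`, `output_dir`, `train_dataset`, and `data_collator` are placeholders, since the real module names come from the remote modeling code and my own data pipeline):

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import Trainer, TrainingArguments

# lora_r, lora_alpha, ta_* etc. are the parameters listed above

# Attach the LoRA adapter to the 8-bit base model
lora_config = LoraConfig(
    r=lora_r,
    lora_alpha=lora_alpha,
    lora_dropout=lora_dropout,
    bias=lora_bias,
    task_type="CAUSAL_LM",
    target_modules=["query_key_value"],  # placeholder: actual names depend on the remote code
)
lm_model = prepare_model_for_kbit_training(lm_model)
lm_model = get_peft_model(lm_model, lora_config)

training_args = TrainingArguments(
    output_dir="output",  # placeholder
    num_train_epochs=ta_epochs,
    logging_steps=ta_logging_steps,
    save_steps=ta_save_steps,
    save_total_limit=ta_save_total_limit,
    per_device_train_batch_size=ta_train_batch_size,
    warmup_steps=ta_warmup_steps,
    weight_decay=ta_weight_decay,
    learning_rate=ta_learning_rate,
)

trainer = Trainer(
    model=lm_model,
    args=training_args,
    train_dataset=train_dataset,  # placeholder
    data_collator=data_collator,  # placeholder
)
trainer.train()  # the RuntimeError above is raised during training
```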

What's wrong? Is this model unable to be fine-tuned when loaded in int8?

I also tried variant="int8", but that didn't solve it.
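
That is, selecting the int8 weight variant at load time, roughly like this (a sketch of that attempt; the other arguments were kept as in the loading code above):

```python
lm_model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/japanese-stablelm-base-alpha-7b",
    variant="int8",  # load the int8 weight files from the repo
    device_map="auto",
    trust_remote_code=True,
)
```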
