LoRA fine-tuning error
Hi, thanks for releasing this model. I'm using text-generation-webui to fine-tune a LoRA, with AutoGPTQ as the model loader and the dataset formatted in Alpaca format. However, I get this error:
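For reference, each record in my training JSON follows the standard Alpaca schema. A minimal sketch of the format (the field values below are illustrative, not my actual data):

```python
import json

# One Alpaca-format record: the standard instruction/input/output triple.
record = {
    "instruction": "Summarize the following text.",
    "input": "LoRA adds small trainable adapter matrices to a frozen base model.",
    "output": "LoRA fine-tunes a model by training low-rank adapters.",
}

# The dataset file is a JSON list of such records.
dataset = [record]
print(json.dumps(dataset, indent=2))
```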
2023-08-14 09:51:01 INFO:Loading TheBloke_Llama-2-7b-Chat-GPTQ...
2023-08-14 09:51:02 INFO:The AutoGPTQ params are: {'model_basename': 'gptq_model-4bit-128g', 'device': 'cuda:0', 'use_triton': False, 'inject_fused_attention': True, 'inject_fused_mlp': True, 'use_safetensors': True, 'trust_remote_code': False, 'max_memory': {0: '30000MiB', 1: '30000MiB', 'cpu': '2000MiB'}, 'quantize_config': None, 'use_cuda_fp16': True}
2023-08-14 09:51:02 WARNING:CUDA extension not installed.
2023-08-14 09:51:03 WARNING:The safetensors archive passed at models/TheBloke_Llama-2-7b-Chat-GPTQ/gptq_model-4bit-128g.safetensors does not contain metadata. Make sure to save your model with the `save_pretrained` method. Defaulting to 'pt' metadata.
2023-08-14 09:51:10 WARNING:skip module injection for FusedLlamaMLPForQuantizedModel not support integrate without triton yet.
2023-08-14 09:51:10 INFO:Loaded the model in 8.85 seconds.
2023-08-14 09:51:10 INFO:Loading the extension "gallery"...
Running on local URL: http://0.0.0.0:7860
To create a public link, set `share=True` in `launch()`.
2023-08-14 09:52:55 WARNING:LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. (Found model type: LlamaGPTQForCausalLM)
2023-08-14 09:53:00 INFO:Loading JSON datasets...
Map: 100%|██████████████████████████████████████████████████████████████████████████████████| 18980/18980 [00:16<00:00, 1163.44 examples/s]
2023-08-14 09:53:17 INFO:Getting model ready...
/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/peft/utils/other.py:102: FutureWarning: prepare_model_for_int8_training is deprecated and will be removed in a future version. Use prepare_model_for_kbit_training instead.
warnings.warn(
2023-08-14 09:53:17 INFO:Prepping for training...
2023-08-14 09:53:17 INFO:Backing up existing LoRA adapter...
- Backup already exists. Skipping backup process.
2023-08-14 09:53:17 INFO:Creating LoRA model...
2023-08-14 09:53:17 INFO:Loading existing LoRA data...
2023-08-14 09:53:17 INFO:Starting training...
Training 'llama' model using (q, v) projections
Trainable params: 16,777,216 (1.5391 %), All params: 1,090,048,000 (Model: 1,073,270,784)
2023-08-14 09:53:17 INFO:Log file 'train_dataset_sample.json' created in the 'logs' directory.
wandb: Tracking run with wandb version 0.15.7
wandb: W&B syncing is set to `offline` in this directory.
wandb: Run `wandb online` or set WANDB_MODE=online to enable cloud syncing.
Exception in thread Thread-3 (threaded_run):
Traceback (most recent call last):
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/data/01/dv/data/ve2/dtl/ecp1/phi/no_gbd/r000/work/llama-2/text-generation-webui/modules/training.py", line 665, in threaded_run
trainer.train()
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/trainer.py", line 1809, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/trainer.py", line 2654, in training_step
loss = self.compute_loss(model, inputs)
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/trainer.py", line 2679, in compute_loss
outputs = model(**inputs)
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/accelerate/utils/operations.py", line 581, in forward
return model_forward(*args, **kwargs)
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/accelerate/utils/operations.py", line 569, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 14, in decorate_autocast
return func(*args, **kwargs)
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/peft/peft_model.py", line 947, in forward
return self.base_model(
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/auto_gptq/modeling/_base.py", line 433, in forward
return self.model(*args, **kwargs)
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 806, in forward
outputs = self.model(
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 693, in forward
layer_outputs = decoder_layer(
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 408, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/auto_gptq/nn_modules/fused_llama_attn.py", line 53, in forward
qkv_states = self.qkv_proj(hidden_states)
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/local_home/al50860/anaconda3/envs/textgen/lib/python3.10/site-packages/peft/tuners/lora.py", line 840, in forward
result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)
RuntimeError: self and mat2 must have the same dtype
2023-08-14 09:53:26 INFO:Training complete, saving...
2023-08-14 09:53:26 INFO:Training complete!
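I can reproduce the final RuntimeError outside the web UI: it happens whenever `F.linear` (the same call as in peft's lora.py frame above) receives activations and weights of different dtypes. My suspicion is that the fused attention path feeds fp16 activations into a LoRA weight that is still fp32. A minimal sketch in plain PyTorch (not AutoGPTQ code, just the dtype mismatch in isolation):

```python
import torch
import torch.nn.functional as F

# fp16 activations, as produced under autocast by the quantized model
x = torch.randn(2, 64, dtype=torch.float16)
# fp32 weight, as a freshly initialized LoRA linear layer would hold
w = torch.randn(32, 64, dtype=torch.float32)

try:
    F.linear(x, w)  # mixed-dtype matmul
except RuntimeError as e:
    # PyTorch refuses to multiply tensors of different dtypes,
    # which matches the error in the traceback above.
    print(type(e).__name__, e)
```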