---
library_name: peft
---

# OpenLLaMa 3B PersonaChat

This is a LoRA fine-tune of OpenLLaMa 3B on the personachat-truecased dataset, trained for 3 epochs of 500 steps each.
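As a rough illustration, a LoRA setup for such a run might look like the sketch below; the rank, alpha, target modules, and base checkpoint id are assumptions for illustration, not values recorded on this card.

```python
# Hedged sketch of a LoRA fine-tuning setup; the rank, alpha, target
# modules, and base checkpoint id are assumptions, not from this card.
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_3b",  # assumed OpenLLaMa 3B base checkpoint
    load_in_8bit=True,                # matches the quantization config below
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)  # stabilize training on the 8-bit model

lora_config = LoraConfig(
    r=16,                                 # assumed LoRA rank
    lora_alpha=32,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # typical LLaMA attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```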

## Naming Format

`[model name]-finetuned-[dataset]-e[number of epochs]-s[number of steps]`

## Training procedure

The following `bitsandbytes` quantization config was used during training (translated into a `BitsAndBytesConfig` sketch after the list):

- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
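The list above maps one-to-one onto a `BitsAndBytesConfig` from 🤗 Transformers, sketched here for convenience:

```python
# Direct translation of the quantization settings listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
# Pass as `quantization_config=bnb_config` to AutoModelForCausalLM.from_pretrained.
```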

### Framework versions

- PEFT 0.4.0.dev0
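
For inference, a minimal sketch of loading the adapter on top of the base model; both repo ids below are placeholders (the adapter id simply follows the naming format above), so substitute the real ones.

```python
# Minimal inference sketch; BASE_ID and ADAPTER_ID are placeholder repo ids.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "openlm-research/open_llama_3b"  # assumed base checkpoint
ADAPTER_ID = "your-username/openllama-3b-finetuned-personachat-truecased-e3-s500"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(BASE_ID, load_in_8bit=True, device_map="auto")

# Attach the trained LoRA adapter to the quantized base model.
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)
model.eval()

inputs = tokenizer("Hi! Tell me about yourself.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```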