---
library_name: peft
datasets:
  - ohtaman/kokkai2022
language:
  - ja
pipeline_tag: text-generation
---

## Training procedure

Fine-tuned tiiuae/falcon-7b on the ohtaman/kokkai2022 dataset (currently private) with LoRA. The training parameters are:

| param | value |
|---|---|
| r | 4 |
| lora_alpha | 2 |
| target_modules | query_key_value, dense, dense_h_to_4h, dense_4h_to_h |
| lora_dropout | 0.01 |
| bias | None |
| task_type | CAUSAL_LM |
| optimizer | AdamW |
| lr | 4e-4 |
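
For reference, these parameters correspond to a PEFT LoRA configuration along the following lines (a minimal sketch; the original training script is not part of this card):

```python
import peft

# LoRA configuration matching the table above (sketch, not the original training script)
lora_config = peft.LoraConfig(
    r=4,
    lora_alpha=2,
    target_modules=[
        "query_key_value",
        "dense",
        "dense_h_to_4h",
        "dense_4h_to_h",
    ],
    lora_dropout=0.01,
    bias="none",  # "None" in the table corresponds to bias="none" in peft
    task_type="CAUSAL_LM",
)
```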

The prompt is something like:

```
# question
{questioner}

{question_text}

# answer
{answerer}

{answer_text}
```
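
A small helper that renders this template might look like the following (a hypothetical sketch; the field names are taken from the template above):

```python
def build_prompt(questioner: str, question_text: str, answerer: str, answer_text: str = "") -> str:
    """Render the question/answer template used for training (hypothetical helper)."""
    return (
        f"# question\n{questioner}\n\n{question_text}\n\n"
        f"# answer\n{answerer}\n\n{answer_text}"
    )
```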

## Framework versions

- PEFT 0.4.0.dev0

## Example Notebook (Colab)

A Colaboratory notebook is available (Colab Pro is not needed).

## Example Code

```python
import torch
import transformers
import peft

base_model_name = "tiiuae/falcon-7b"
peft_model_name = "..."  # replace with this adapter's repository id
max_length = 128         # maximum total sequence length; adjust as needed

# Load the tokenizer and base model, then attach the LoRA adapter.
tokenizer = transformers.AutoTokenizer.from_pretrained(base_model_name, trust_remote_code=True)
base_model = transformers.AutoModelForCausalLM.from_pretrained(
    base_model_name, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True
)
peft_model = peft.PeftModelForCausalLM.from_pretrained(base_model, peft_model_name, torch_dtype=torch.bfloat16)

# Example prompt: Taro Aso asks "Do you think taxes should be raised?";
# the answer is attributed to Fumio Kishida ("[Prime Minister Fumio Kishida takes the podium]").
prompt = "# question\n麻生太郎\n\n増税すべきとお考えか?\n# answer\n岸田文雄\n\n〔内閣総理大臣岸田文雄君登壇〕"
input_tokens = tokenizer(prompt, return_tensors="pt").to(peft_model.device)
input_length = input_tokens.input_ids.shape[1]

with torch.no_grad():
    outputs = peft_model.generate(
        input_ids=input_tokens["input_ids"],
        attention_mask=input_tokens["attention_mask"],
        return_dict_in_generate=True,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
        max_length=max_length,
        do_sample=True,  # required for temperature/top_p to take effect
        temperature=0.7,
        top_p=0.9,
        repetition_penalty=1.05,
    )
    # Drop the prompt tokens and the trailing EOS token before decoding.
    output_tokens = outputs.sequences[0, input_length:-1]

print(tokenizer.decode(output_tokens))
```
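
Generation stops at the EOS token or once `max_length` total tokens are reached; the final slice removes both the prompt and the trailing EOS token so that only the newly generated answer is decoded.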