---
library_name: peft
datasets:
- ohtaman/kokkai2022
language:
- ja
pipeline_tag: text-generation
---

## Training procedure

Finetuned [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the [ohtaman/kokkai2022](https://huggingface.co/datasets/ohtaman/kokkai2022) dataset (currently private) using LoRA.

The training parameters are:

|param|value|
|:--:|:--:|
|r|4|
|lora_alpha|2|
|target_modules|query_key_value<br>dense<br>dense_h_to_4h<br>dense_4h_to_h|
|lora_dropout|0.01|
|bias|none|
|task_type|CAUSAL_LM|
|optimizer|AdamW|
|lr|4e-4|
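
For reference, here is a minimal sketch of the adapter configuration implied by the table above (an assumed reconstruction using `peft.LoraConfig`, not the exact training script; `optimizer` and `lr` belong to the training loop rather than to PEFT):

```python
import peft

# Hypothetical reconstruction of the LoRA config from the parameter table.
lora_config = peft.LoraConfig(
    r=4,
    lora_alpha=2,
    target_modules=[
        "query_key_value",
        "dense",
        "dense_h_to_4h",
        "dense_4h_to_h",
    ],
    lora_dropout=0.01,
    bias="none",
    task_type="CAUSAL_LM",
)
```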

The training prompt has the following format:

```
# question
{questioner}

{question_text}

# answer
{answerer}

{answer_text}
```
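
Filling this template at inference time is straightforward; here is a minimal sketch (the function and argument names are illustrative, mirroring the placeholders above). Leave `answer_text` empty so the model generates the answer:

```python
def build_prompt(questioner: str, question_text: str, answerer: str, answer_text: str = "") -> str:
    # Render one Q&A pair in the training prompt format.
    return (
        "# question\n"
        f"{questioner}\n\n"
        f"{question_text}\n\n"
        "# answer\n"
        f"{answerer}\n\n"
        f"{answer_text}"
    )
```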

### Framework versions

- PEFT 0.4.0.dev0

### Example Notebook (Colab)

[Colaboratory](https://colab.research.google.com/drive/1oWHM5_DbltvrD27oZL4-fumXChkMkrC5?usp=sharing) (Colab Pro is not needed.)

### Example Code

```python
import peft
import torch
import transformers

base_model_name = "tiiuae/falcon-7b"
peft_model_name = "..."  # the Hub id of this LoRA adapter repository

# Load the tokenizer and base model, then attach the LoRA adapter.
tokenizer = transformers.AutoTokenizer.from_pretrained(base_model_name, trust_remote_code=True)
base_model = transformers.AutoModelForCausalLM.from_pretrained(
    base_model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
peft_model = peft.PeftModelForCausalLM.from_pretrained(base_model, peft_model_name, torch_dtype=torch.bfloat16)

# A prompt in the training format: questioner, question text, then the
# answerer's name primed with the Diet record's stage direction.
prompt = "# question\n麻生太郎\n\n増税すべきとお考えか？\n# answer\n岸田文雄\n\n〔内閣総理大臣岸田文雄君登壇〕"
input_tokens = tokenizer(prompt, return_tensors="pt").to(peft_model.device)
input_length = input_tokens.input_ids.shape[1]

max_length = 256  # total length of prompt + generation; adjust as needed

with torch.no_grad():
    outputs = peft_model.generate(
        input_ids=input_tokens["input_ids"],
        attention_mask=input_tokens["attention_mask"],
        return_dict_in_generate=True,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
        max_length=max_length,
        do_sample=True,  # required for temperature/top_p to take effect
        temperature=0.7,
        top_p=0.9,
        repetition_penalty=1.05,
    )

# Drop the prompt tokens and the trailing EOS token before decoding.
output_tokens = outputs.sequences[0, input_length:-1]
print(tokenizer.decode(output_tokens))
```
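
Note that `device_map="auto"` requires the `accelerate` package, and `do_sample=True` is what activates `temperature` and `top_p`; without it, `generate` decodes greedily and the sampling parameters have no effect.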