ohtaman committed
Commit c5b33c1 (1 parent: 2f33103)

Update README.md

Files changed (1): README.md (+6 -7)
README.md CHANGED
@@ -43,14 +43,13 @@ and the response is:
 
 ## Training procedure
 
-Finetune [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) with the [ohtaman/kokkai2022](https://huggingface.co/datasets/ohtaman/kokkai2022) (currently private) dataset with LoRA.
-The training parameters are
+Finetune [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) with the [ohtaman/kokkai2022](https://huggingface.co/datasets/ohtaman/kokkai2022) (currently private) dataset using LoRA with the following configuration.
 
 |param|value|
 |:--:|:--:|
 |r| 4|
 |lora_alpha| 2|
-|target_modules|- query_key_value<br> - dense<br> - dense_h_to_4h<br> - dense_4h_to_h|
+|target_modules|query_key_value<br>dense<br>dense_h_to_4h<br>dense_4h_to_h|
 |lora_dropout| 0.01|
 |bias| None|
 |task_type| CAUSAL_LM|
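
For reference, the LoRA configuration listed in the table above corresponds roughly to the following `peft.LoraConfig`. This is a minimal sketch assuming the standard `peft` API; the actual training script is not part of this diff, and the `lora_config` name is illustrative.

```python
import peft

# Hypothetical reconstruction of the LoRA configuration in the table above,
# using the standard peft API. All values are taken from the table.
lora_config = peft.LoraConfig(
    r=4,                    # rank of the LoRA update matrices
    lora_alpha=2,           # scaling factor applied to the LoRA updates
    target_modules=[        # Falcon-7B modules that receive LoRA adapters
        "query_key_value",
        "dense",
        "dense_h_to_4h",
        "dense_4h_to_h",
    ],
    lora_dropout=0.01,      # dropout applied to the LoRA layers during training
    bias="none",            # bias terms are not trained
    task_type="CAUSAL_LM",  # causal language modeling task
)
```

A config like this is typically applied with `peft.get_peft_model(base_model, lora_config)` before training.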
@@ -72,12 +71,12 @@ the prompt template is as follows:
 
 ```
 
-### Example Notebook (Colab)
-
-[Colaboratory](https://colab.research.google.com/drive/1oWHM5_DbltvrD27oZL4-fumXChkMkrC5?usp=sharing) (Pro is not needed.)
-
 ### Example Code
 
+You can try the model with [Colaboratory](https://colab.research.google.com/drive/1oWHM5_DbltvrD27oZL4-fumXChkMkrC5?usp=sharing).
+No Pro or Pro+ subscription is needed.
+Typical code to generate text with this model is as follows:
+
 ```python
 tokenizer = transformers.AutoTokenizer.from_pretrained(base_model_name, trust_remote_code=True)
 base_model = transformers.AutoModelForCausalLM.from_pretrained(base_model_name, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
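
The diff hunk cuts the example off after the base model is loaded. A hedged sketch of how such an example typically continues, assuming the adapter is attached with `peft.PeftModel`; the adapter repo name and the prompt below are placeholders, not taken from the diff:

```python
import torch
import transformers
import peft

base_model_name = "tiiuae/falcon-7b"
peft_model_name = "..."  # placeholder: the LoRA adapter repo for this model

tokenizer = transformers.AutoTokenizer.from_pretrained(base_model_name, trust_remote_code=True)
base_model = transformers.AutoModelForCausalLM.from_pretrained(
    base_model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

# Attach the trained LoRA adapter to the frozen base model.
model = peft.PeftModel.from_pretrained(base_model, peft_model_name)

# Placeholder prompt; the README defines the actual prompt template earlier in the file.
prompt = "..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```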
 