HachiML committed
Commit c03573d
Parent: c4761a6

Update README.md

Files changed (1): README.md (+30 -1)
README.md CHANGED
@@ -2,6 +2,9 @@
library_name: peft
datasets:
- HachiML/databricks-dolly-15k-ja-for-peft
+ language:
+ - en
+ - ja
---
## JGLUE Score
We evaluated our model using the following JGLUE tasks. Here are the scores:
@@ -16,8 +19,34 @@ We evaluated our model using the following JGLUE tasks. Here are the scores:
- The JGLUE scores were measured using the following script:
[Stability-AI/lm-evaluation-harness](https://github.com/Stability-AI/lm-evaluation-harness/tree/jp-stable)

- ## Training procedure
+ ## How to use
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+ from peft import PeftModel
+
+ # Load the Llama-2 base model with 4-bit NF4 quantization
+ model_name = "meta-llama/Llama-2-13b-hf"
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_use_double_quant=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.float16,
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ pt_model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     quantization_config=bnb_config,
+ )
+
+ # Attach the QLoRA adapter on top of the quantized base model
+ peft_name = "HachiML/Llama-2-13b-hf-qlora-dolly-ja-2ep"
+ model = PeftModel.from_pretrained(
+     pt_model,
+     peft_name,
+ )
+ ```
+
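The block above stops once the adapter is attached. A minimal generation sketch, assuming a bare prompt with no special instruction template (the dolly-ja prompt format is not shown in this excerpt):

```python
# Minimal generation sketch; the bare prompt below is an assumption,
# since the training prompt template is not shown in this excerpt.
prompt = "日本の首都はどこですか？"  # "What is the capital of Japan?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```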
+ ## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
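The config listing is truncated in this excerpt; only `load_in_8bit: False` is visible. A hypothetical reconstruction in Python, assuming the QLoRA training run reused the same 4-bit NF4 settings shown in the usage example above:

```python
# Hypothetical reconstruction of the training-time quantization config.
# Only load_in_8bit=False appears in this excerpt; the remaining 4-bit
# NF4 settings are assumed to mirror the inference example above.
import torch
from transformers import BitsAndBytesConfig

train_bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
```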