HachiML committed on
Commit 08fb93e
1 Parent(s): 5560c1a

Update README.md

Files changed (1)
  1. README.md +45 -0
README.md CHANGED
@@ -1,6 +1,51 @@
---
library_name: peft
+ datasets:
+ - HachiML/databricks-dolly-15k-ja-for-peft
+ language:
+ - en
+ - ja
---
+ ## JGLUE Score
+ We evaluated the model on the following JGLUE tasks; the scores are:
+ | Task | Score |
+ |---------------------|----------:|
+ | JCOMMONSENSEQA(acc) | 75.78 |
+ | JNLI(acc) | 50.69 |
+ | MARC_JA(acc) | 79.64 |
+ | JSQUAD(exact_match) | 62.83 |
+ | **Average** | **67.23** |
+ - Note: the v0.3 prompt template was used.
+ - The JGLUE scores were measured with the following evaluation harness: [Stability-AI/lm-evaluation-harness](https://github.com/Stability-AI/lm-evaluation-harness/tree/jp-stable)
+
+ ## How to use
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+ from peft import PeftModel
+
+ # Base model and 4-bit NF4 quantization settings (QLoRA-style)
+ model_name = "meta-llama/Llama-2-13b-hf"
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_use_double_quant=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.float16,
+ )
+
+ # Load the tokenizer and the 4-bit quantized base model
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ pt_model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     quantization_config=bnb_config,
+ )
+
+ # Attach the QLoRA adapter to the base model
+ peft_name = "HachiML/Llama-2-13b-hf-qlora-dolly-ja-2ep"
+ model = PeftModel.from_pretrained(
+     pt_model,
+     peft_name,
+ )
+ ```
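+
+ Once the adapter is loaded, the model can be queried as usual. Below is a minimal generation sketch, assuming an Alpaca-style Japanese instruction prompt; the exact v0.3 template used for evaluation is defined in the harness linked above and should be checked there.
+
+ ```python
+ # Hypothetical prompt format -- verify against the v0.3 template before use
+ prompt = (
+     "以下はタスクを説明する指示です。要求を適切に満たす応答を書きなさい。\n\n"
+     "### 指示:\n日本の首都はどこですか?\n\n"
+     "### 応答:\n"
+ )
+
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ with torch.no_grad():
+     output_ids = model.generate(
+         **inputs,
+         max_new_tokens=128,          # example settings, tune as needed
+         do_sample=True,
+         temperature=0.7,
+         pad_token_id=tokenizer.eos_token_id,  # Llama-2 defines no pad token
+     )
+ # Decode only the newly generated tokens
+ print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
+ ```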
+
## Training procedure
